Anamnesis (philosophy)
In Plato's theory of epistemology, anamnesis refers to the recollection of innate knowledge acquired before birth. The concept posits that learning consists in rediscovering knowledge from within oneself. It stands in contrast to the opposing doctrine of empiricism, which holds that all knowledge is derived from experience and sensory perception. Plato develops the theory of anamnesis in his Socratic dialogues: Meno, Phaedo, and Phaedrus.

Meno

In Meno, Plato's character (and former teacher) Socrates is challenged by Meno to explain how someone could find out what the nature of virtue is without already knowing anything about it. In other words, one who knows none of the attributes, properties, or other descriptive markers that signify what something is (physical or otherwise) will not recognize it even after coming across it. Conversely, if one already knows these attributes, properties, and markers, one should not need to seek the thing out at all. The conclusion is that in either case there is no point in trying to gain that "something"; in the case of Plato's dialogue, there is no point in seeking knowledge. Socrates' response is to develop his theory of anamnesis and to suggest that the soul is immortal and repeatedly incarnated: knowledge is in the soul from eternity (86b), but each time the soul is incarnated its knowledge is forgotten in the trauma of birth. What one perceives to be learning, then, is the recovery of what one has forgotten. (Once it has been brought back, it is true belief, to be turned into genuine knowledge by understanding.) Socrates (and Plato) thus sees himself not as a teacher but as a midwife, aiding the birth of knowledge that was already in the student. The theory is illustrated by Socrates asking a slave boy questions about geometry.
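The geometry lesson in question (Meno 82b–85b) concerns doubling the area of a square, a detail the passage above leaves implicit. The boy's first guess and the construction Socrates leads him to can be checked directly:

```latex
% A square of side s has area s^2; the task is a square of twice that area.
% The boy's first guess -- doubling the side -- overshoots:
(2s)^2 = 4s^2 \neq 2s^2.
% The correct square is built on the diagonal d of the original square,
% since by the Pythagorean theorem:
d^2 = s^2 + s^2 = 2s^2.
```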
At first, the boy gives the wrong answer; when that is pointed out to him, he is puzzled, but by asking questions Socrates helps him reach the correct answer. This is intended to show that, since the boy was not told the answer, he reached the truth by recollecting what he had once known but had forgotten.

Phaedo

In Phaedo, Plato develops his theory of anamnesis, in part by combining it with his theory of forms. First, he elaborates how anamnesis can be achieved: whereas in Meno nothing more than Socrates' method of questioning is offered, in Phaedo Plato presents a way of living that would enable one to overcome the misleading nature of the body through katharsis (purification, i.e., from guilt or defilement). The body and its senses are the source of error; knowledge can be regained only through the use of reason, by contemplating things with the soul (noesis). While the body's perceptual faculties are deceptive, Plato also argues that the falsehoods they communicate to the soul can be used to trigger or prompt recollection. Second, Plato clarifies that genuine knowledge, as opposed to mere true belief (doxa), is distinguished by its content. One can know only eternal truths, since they are the only truths that can have been in the soul from eternity. It may be very useful to have a true belief about, say, the best way to get from London to Oxford, but such a belief does not qualify as knowledge; it is not possible for the soul to possess such factually contingent propositions for all eternity.

Neoplatonism

For later interpreters of Plato, the concept of anamnesis became less epistemic and more ontological. Plotinus himself did not posit recollection in the strict sense of the term, because all knowledge of universally important ideas (logos) came from a source outside of time (the Dyad or the divine nous) and was accessible, through contemplation, to the soul as part of noesis.
They were objects of experience, of inner knowledge or insight, rather than of recollection. In Neoplatonism, however, the theory of anamnesis became part of the mythology of the descent of the soul. Porphyry's short work On the Cave of the Nymphs in the Odyssey (ostensibly a commentary on the brief passage in Odyssey XIII) elucidated that notion, as did Macrobius's much longer Commentary on the Dream of Scipio. The idea of psychic memory was used by Neoplatonists to demonstrate the celestial and immaterial origins of the soul and to explain how memories of the world-soul could be recalled by everyday human beings. As such, psychic recollection was intrinsically connected to the Platonic conception of the soul. Since the contents of individual "material" or physical memories were trivial, only the universal recollection of Forms, or divine objects, drew one closer to the immortal source of being. Anamnesis is the closest that human minds can come to experiencing the freedom of the soul before it is encumbered by matter. The incarnation process is described in Neoplatonism as a trauma that causes the soul to forget its experiences (and often its divine origins). The storyteller's voice is concealed by John and Plato in order to pursue their anamnetic efforts and to encourage following generations to be not merely readers but partakers in their original discussions on the soul. Gratitude, as an example of divine salvation, was expressed by offering to God the first fruits of the harvest, maintaining an identity with those who performed these actions in the past and thereby actualising them in the present.

References

Bibliography
Plato, Phaedo, 1911: edited with introduction and notes by John Burnet (Oxford: Clarendon Press)
Jane M. Day, 1994, Plato's Meno in Focus (London: Routledge) – contains an introduction and full translation by Day, together with papers on Meno by various philosophers
Don S. Armentrout and Robert Boak Slocum (eds), An Episcopal Dictionary of the Church: A User Friendly Reference for Episcopalians (New York: Church Publishing Incorporated)
Jacob Klein, A Commentary on Plato's Meno (Chicago, 1989), pp. 103–173
Norman Gulley, Plato's Theory of Knowledge (London, 1962), pp. 1–47
Epicurean paradox
The Epicurean paradox is a logical dilemma about the problem of evil attributed to the Greek philosopher Epicurus, who argued against the existence of a god who is simultaneously omniscient, omnipotent, and omnibenevolent.

The paradox

The logic of the paradox attributed to Epicurus takes three possible characteristics of a god (omnipotence, omniscience, and omnibenevolence: complete power, knowledge, and benevolence) and pairs the concepts together. It posits that in each pair, if both members are true, the remaining characteristic cannot also be true, making the paradox a trilemma. Since each of the three characteristics is ruled out by some pairing of the other two, it cannot be the case that a god with all three exists. The pairs of characteristics and the contradictions they would create are as follows:

If a god knows everything and has unlimited power, then it has knowledge of all evil and the power to put an end to it. But if it does not end it, it is not completely benevolent.
If a god has unlimited power and is completely good, then it has the power to extinguish evil and wants to extinguish it. But if it does not, its knowledge of evil must be limited, so it is not all-knowing.
If a god is all-knowing and totally good, then it knows of all the evil that exists and wants to change it. But if it does not, it must be incapable of changing it, so it is not omnipotent.

God in Epicureanism

Epicurus was not an atheist, although he rejected the idea of a god concerned with human affairs; followers of Epicureanism did not deny that gods exist. While the conception of a supreme, happy and blessed god who governs the world was popular in his time, Epicurus rejected such governance, as he considered it too heavy a burden for a god to have to worry about all the problems in the world.
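The trilemma described above can be put into a compact propositional sketch (a modern schematic rendering, not anything found in the ancient sources; the letters are arbitrary labels): let $K$, $P$, and $B$ stand for "the god is omniscient", "omnipotent", and "omnibenevolent", and $E$ for "evil exists".

```latex
% Each pairing, together with the observed existence of evil,
% rules out the remaining attribute:
(K \land P \land E) \rightarrow \lnot B \qquad
(P \land B \land E) \rightarrow \lnot K \qquad
(K \land B \land E) \rightarrow \lnot P
% Hence, granting E, the three attributes cannot hold jointly:
E \rightarrow \lnot (K \land P \land B)
```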
For this reason, Epicureanism holds that the gods have no special affection for human beings and do not know of their existence, serving only as moral ideals that humanity can try to approach. Epicurus reached the conclusion that the gods could not be concerned with the well-being of humanity by observing the problem of evil, that is, the presence of suffering in the world.

Attribution and variations

No surviving text by Epicurus confirms his authorship of the argument. Although the argument was popular with the skeptical school of Greek philosophy, it is therefore possible that the paradox was wrongly attributed to Epicurus by Lactantius, who, attacking the problem from his Christian perspective, regarded Epicurus as an atheist. It has been suggested that the argument was in fact the work of a skeptical philosopher, possibly Carneades. The German scholar Reinhold F. Glei believes that the theodicy argument comes from a non-Epicurean or even anti-Epicurean academic source. The oldest preserved version of the trilemma appears in the writings of the skeptic Sextus Empiricus. Charles Bray, in his book The Philosophy of Necessity (1863), quotes the argument and names Epicurus as its author without citing a source. N. A. Nicholson, in his Philosophical Papers (1864), attributes "the famous inquiries" to Epicurus, using words previously phrased by Hume. Hume's phrasing occurs in the tenth part of his Dialogues Concerning Natural Religion, published posthumously in 1779, where the character Philo observes that Epicurus' old questions remain unanswered. Hume's quotation derives from Pierre Bayle's influential Dictionnaire Historique et Critique, which quotes Lactantius attributing the questions to Epicurus. That attribution occurs in chapter 13 of Lactantius's De Ira Dei, which provides no sources.
Hume postulates: "Is he willing to prevent evil, but not able? then is he impotent. Is he able, but not willing? then is he malevolent. Is he both able and willing? whence then is evil?"

See also
Theodicy
Problem of evil
Epicurus
Epicureanism
Carneades
Atheism

References

General references
Mark Joseph Larrimore (2001), The Problem of Evil, pp. xix–xxi. Wiley-Blackwell
Mark Joseph Larrimore, The Problem of Evil: A Reader, Blackwell (2001), p. xx
Reinhold F. Glei, "Et invidus et inbecillus. Das angebliche Epikurfragment bei Laktanz, De ira dei 13,20-21", in: Vigiliae Christianae 42 (1988), pp. 47–58
Sextus Empiricus, Outlines of Pyrrhonism, 175: "those who firmly maintain that god exists will be forced into impiety; for if they say that he [god] takes care of everything, they will be saying that god is the cause of evils, while if they say that he takes care of some things only or even nothing, they will be forced to say that he is either malevolent or weak"
Lucius Caecilius Firmianus Lactantius (1532). Divinae institutiones. VII. [S.l.: s.n.]
Post-truth
Post-truth is a term that refers to the widespread documentation of, and concern about, disputes over public truth claims in the 21st century. The term's academic development refers to the theories and research that explain the historically specific causes and effects of the phenomenon. Oxford Dictionaries defines it as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief." While the term was used in phrases like "post-truth politics" academically and publicly before 2016, Oxford Dictionaries named it Word of the Year in 2016 after its proliferation during the election of President Donald Trump in the United States and the Brexit referendum in the United Kingdom; Trump has been characterized as engaging in a "war on truth." Oxford Dictionaries further notes that post-truth was often used as an adjective to signal a distinctive kind of politics. Some scholars argue that post-truth has similarities with past moral, epistemic, and political debates about relativism, postmodernity, and dishonesty in politics. Others insist that post-truth is specifically concerned with 21st-century communication technologies and cultural practices.

Historical precedents in philosophy

Post-truth concerns a historical problem regarding truth in everyday life, especially politics. But truth has long been one of the major preoccupations of philosophy, and it is one of the most complicated concepts in its history. Much of the research and public debate about post-truth assumes a particular theory of truth, what philosophers call a correspondence theory of truth. The most prominent theory of truth, though it has its share of critics, is correspondence theory: roughly, words correspond to an accepted or mutually available reality that can be examined and confirmed.
Another major theory of truth is coherence theory, where truth concerns not a single claim but a series of statements that cohere about the world. Several academic experts note that these philosophical debates about truth have little to do with the concept of post-truth as it has historically emerged in popular politics (see post-truth politics), not in philosophy. As the philosopher Julian Baggini explains:

The merits of these competing theories are of mainly academic concern. When people debate whether there were weapons of mass destruction in Saddam Hussein's Iraq, whether global warming is real and anthropogenic, or whether austerity is necessary, their disagreements are not the consequence of competing theories of truth. No witness need ask a judge which theory she has in mind when asked to promise to tell the truth, the whole truth and nothing but the truth. Why then has truth become so problematic in the world outside academic philosophy? One reason is that there is major disagreement and uncertainty concerning what counts as a reliable source of truth. For most of human history, there was some stable combination of trust in religious texts and leaders, learned experts and the enduring folk wisdom called common sense. Now, it seems, virtually nothing is universally taken as an authority. This leaves us having to pick our own experts or simply to trust our guts.

It follows that, according to experts who approach the concept of post-truth as something historically specific, as a contemporary sociological phenomenon, post-truth theory is only remotely related to traditional debates in philosophy about the nature of truth. In other words, post-truth as a contemporary phenomenon is not about asking "what is truth?" or "is X true?" but "why don't we agree that this or that is true?"
A broad range of scholarship increasingly argues that a breakdown in institutional authority for truth-telling (government and news media especially), ushered in by new media and communication technologies of user-generated content, new media editing technologies (visual and audio-visual), and a saturating promotional culture, has resulted in confusion, games of truth-telling, and even truth markets.

Friedrich Nietzsche

Not all commentators, however, treat post-truth as a historically specific phenomenon discussed through implicit correspondence, coherence, or pragmatist theories of truth. Some discuss it within a philosophical tradition that asks what truth is. Friedrich Nietzsche, the 19th-century German philosopher, is sometimes invoked in this camp as a predecessor to theories of post-truth. He argues that humans create the concepts through which they define the good and the just, thereby replacing the concept of truth with the concept of value, and grounding reality in the human will and will to power. In his 1873 essay Truth and Lying in an Extra-Moral Sense, Nietzsche holds that humans create truth about the world through their use of metaphor, myth, and poetry. He writes:

If someone hides an object behind a bush, then seeks and finds it there, that seeking and finding is not very laudable: but that is the way it is with the seeking and finding of "truth" within the rational sphere. If I define the mammal and then after examining a camel declare, "See, a mammal", a truth is brought to light, but it is of limited value. I mean, it is anthropomorphic through and through and contains not a single point that would be "true in itself" or really and universally valid, apart from man. The investigator into such truths is basically seeking just the metamorphosis of the world into man; he is struggling to understand the world as a human-like thing and acquires at best a feeling of assimilation.
According to Nietzsche, all insights and ideas arise from a particular perspective, which means there are many possible perspectives from which a truth or value judgment can be made. This amounts to declaring that there is no single "true" way of seeing the world, but it does not necessarily mean that all perspectives are equally valid. Nietzschean perspectivism denies that metaphysical objectivism is possible and asserts that there are no objective evaluations capable of transcending cultural formations or subjective designations. On this view there are no objective facts, and knowledge of a thing in itself is not possible. Truth (and above all the belief in it) is therefore an error, but an error necessary for life: "Truth is a kind of error without which a certain kind of living creature would not be able to live" (The Will to Power, KGW VII, 34 [253]).

Max Weber

Critical theory and continental philosophy

Some influential philosophers are skeptical of the division between facts and values. They argue that scientific facts are socially produced through relations of power.

Bruno Latour

The French philosopher Bruno Latour has been criticized for contributing to the intellectual foundations of post-truth. In 2018, the New York Times ran a profile on Latour and post-truth politics. The article claims that it is a misinterpretation to say that Latour does not believe in reality or that truth is relative:

Had they been among our circus that day, Latour's critics might have felt that there was something odd about the scene – the old adversary of science worshipers kneeling before the altar of science. But what they would have missed – what they have always missed – was that Latour never sought to deny the existence of gravity. He has been doing something much more unusual: trying to redescribe the conditions by which this knowledge comes to be known.
Latour's disputable reputation as a "fact-denier" stemmed from his article in La Recherche (1998), a French monthly magazine, in which he discusses the 1976 discovery, by French scientists working on the mummy of the pharaoh Ramses II, that the pharaoh's death was due to tuberculosis. In the 1990s, Jean Bricmont and Alan Sokal wrote critically of Latour on this point. In this sense, Latour (like Michel Foucault) draws attention to the institutional and practical contingencies of producing knowledge (which in science is always changing, at slower and faster rates).

Hannah Arendt

Hannah Arendt has been cited as an important conceptual resource for post-truth theory in that she attempted to theorize something historically shifting, instead of meditating on the nature of truth itself. In her essay Lying in Politics (1972), Arendt describes what she terms defactualization, the inability to discern fact from fiction, a concept very close to what we now understand by post-truth. The essay's central theme is the thoroughgoing political deception that was unveiled with the leaking of the Pentagon Papers in 1971. Her main target of critique is the professional "problem-solvers" tasked with solving American foreign policy "problems" during the Vietnam War, who comprised the group that authored the McNamara report. Arendt distinguishes defactualization from deliberate falsehood and from lying. She writes:

The deliberate falsehood deals with contingent facts; that is, with matters that carry no inherent truth within themselves, no necessity to be as they are. Factual truths are never compellingly true. The historian knows how vulnerable is the whole texture of facts in which we spend our daily life; it is always in danger of being perforated by single lies or torn to shreds by the organized lying of groups, nations, or classes, or denied and distorted, often carefully covered up by reams of falsehoods or simply allowed to fall into oblivion.
She goes on:

There always comes the point beyond which lying becomes counterproductive. This point is reached when the audience to which the lies are addressed is forced to disregard altogether the distinguishing line between truth and falsehood in order to be able to survive. Truth or falsehood – it does not matter which anymore, if your life depends on your acting as though you trusted; truth that can be relied on disappears entirely from public life, and with it the chief stabilizing factor in the ever-changing affairs of men.

Arendt faults the Vietnam-era problem-solvers with being overly rational, "trained in translating all factual contents into the language of numbers and percentages", and out of touch with the facts of "given reality." Contrary to contemporary definitions of post-truth that emphasize a reliance on emotion over facts and evidence, Arendt's notion of defactualization identifies hyper-rationality as the mechanism that blurs the line between "fact and fantasy": the problem-solvers "were indeed to a rather frightening degree above 'sentimentality' and in love with 'theory', the world of sheer mental effort. They were eager to find formulas, preferably expressed in a pseudo-mathematical language..." Arendt writes: "What these problem-solvers have in common with down-to-earth liars is the attempt to get rid of facts and the confidence that this should be possible because of the inherent contingency of facts." She explains that deception and even self-deception are rendered meaningless in a defactualized world, for both rely on preserving the distinction between truth and falsehood. In a defactualized environment, by contrast, the individual "loses all contact with not only his audience, but also the real world, which will still catch up with him because he can remove his mind from it but not his body."
Arendt specifically pointed to advertising (and what has recently been described as an all-encompassing "promotional culture") as having played a primary role in creating the "ready to buy" knowledge conditions that contemporary thinkers describe as characteristic of post-truth. Referring to Arendt's concept of defactualization, but applying it to the information society of the twenty-first century, Byung-Chul Han argues that a "new nihilism" is emerging, in which the lie is no longer passed off as truth and the truth is no longer disavowed as a lie; rather, the very distinction between truth and falsehood is undermined. Anyone who knowingly lies and resists the truth paradoxically recognizes it. Lying is possible only where the distinction between truth and falsehood is intact. The liar does not lose touch with the truth; his faith in reality does not waver. The liar is not a nihilist: he does not question truth itself, and the more determinedly he lies, the more the truth is confirmed. "Fake news", on this account, is not a lie: it attacks "facticity" itself; it "de-facticizes" reality. When Donald Trump offhandedly says whatever suits him, he is not a classic liar knowingly distorting reality, for to do that one would need to know it. He is simply indifferent to the truth of facts.

Contemporary evaluation

Carl Sagan, the famous astronomer and science communicator, addressed these issues in his work The Demon-Haunted World: Science as a Candle in the Dark; his words there are thought to be a prediction of a "post-truth" or "alternative facts" world. Following this, some scholars use the term "post-truth" to refer to "a situation in society and politics, in which the boundary between truth and untruth is erased, facts and related narratives are purposefully produced, emotions are more important than knowledge and the actors of social or political life do not care for truth, proof and evidence".
In the context of politics, post-truth has recently been applied to the 2016 and 2020 U.S. presidential elections, Brexit, the COVID-19 "infodemic", and the conditions that led to the storming of the United States Capitol on January 6, 2021; the historian Timothy Snyder has written of post-truth in connection with that event. The writer George Gillett has suggested that the term "post-truth" mistakenly conflates empirical and ethical judgements, writing that the supposedly "post-truth" movement is in fact a rebellion against "expert economic opinion becoming a surrogate for values-based political judgements".

See also
Consensus reality
Consensus theory of truth
Fake news
Filter bubble
Nihilism
Philosophical skepticism
Postmodern philosophy
Social bot
Social constructionism
Truthiness

References

Further reading
Abraham, Praveen & Mathew, Raisun (2021). The Post-Truth Era: Literature and Media, Authorspress
Kakutani, Michiko. The Death of Truth, Penguin Random House (August 2019)
McIntyre, Lee. Post-Truth, MIT Press (February 2018)
Equivocation
In logic, equivocation ("calling two different things by the same name") is an informal fallacy resulting from the use of a particular word or expression in multiple senses within an argument. It is a type of ambiguity that stems from a phrase having two or more distinct meanings, not from the grammar or structure of the sentence.

Fallacy of four terms

Equivocation in a syllogism (a chain of reasoning) produces a fallacy of four terms. Below is an example:

Since only man [human] is rational.
And no woman is a man [male].
Therefore, no woman is rational.

The first instance of "man" implies the entire human species, while the second implies just those who are male.

Motte-and-bailey fallacy

Equivocation can also be used to conflate two positions which share similarities, one modest and easy to defend and one much more controversial. The arguer advances the controversial position, but when challenged, insists that they are only advancing the more modest position.

See also
Antanaclasis: a related purposeful rhetorical device
Circumlocution: phrasing to explain something without saying it
Equivocality: organizational information theory
Etymological fallacy: a kind of linguistic misconception
Evasion (ethics): telling the truth while deceiving
False equivalence: fallacy based on flawed reasoning
If-by-whiskey: an example
Mental reservation: a doctrine in moral theology
No true Scotsman: changing a definition to exclude a counter-example
Persuasive definition: skewed definition of a term
Plausible deniability: a blame-shifting technique
Polysemy: the property of a word or phrase having multiple meanings of a certain type
Principle of explosion: one of the fundamental laws in logic
Syntactic ambiguity (amphiboly, amphibology): ambiguity of a sentence due to its grammatical structure
When a white horse is not a horse: an example
Map-territory relation: the concept that words used to describe an underlying reality are arbitrary abstractions not to be confused with the reality itself
References
Quantitative research
Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is formed from a deductive approach in which emphasis is placed on the testing of theory, shaped by empiricist and positivist philosophies. Associated with the natural, applied, formal, and social sciences, this research strategy promotes the objective empirical investigation of observable phenomena to test and understand relationships, through a range of quantifying methods and techniques, reflecting its broad use as a research strategy across differing academic disciplines.

There are several situations where quantitative research may not be the most appropriate or effective method to use:
1. When exploring in-depth or complex topics.
2. When studying subjective experiences and personal opinions.
3. When conducting exploratory research.
4. When studying sensitive or controversial topics.

The objective of quantitative research is to develop and employ mathematical models, theories, and hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and the mathematical expression of quantitative relationships. Quantitative data is any data in numerical form, such as statistics and percentages. The researcher analyses the data with the help of statistics, in the hope that the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, inquires deeply into specific experiences, with the intention of describing and exploring meaning through text, narrative, or visual-based data, by developing themes exclusive to that set of participants.
Quantitative research is widely used in psychology, economics, demography, sociology, marketing, community health, health and human development, gender studies, and political science; and less frequently in anthropology and history. Research in the mathematical sciences, such as physics, is also "quantitative" by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods originating in both philosophical positivism and the history of statistics, in contrast with qualitative research methods. Qualitative research produces information only on the particular cases studied, and any more general conclusions are only hypotheses; quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1,274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two-thirds of these articles used quantitative methods.

Overview

Quantitative research is generally closely affiliated with ideas from "the scientific method", which can include:
The generation of models, theories and hypotheses
The development of instruments and methods for measurement
Experimental control and manipulation of variables
Collection of empirical data
Modeling and analysis of data

Quantitative research is often contrasted with qualitative research, which purports to focus more on discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modeled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand.
For example, based on analysis of the history of science, Kuhn concludes that "large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences". Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?) (Kieron Yeoman).

Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework. Positivism emphasized the use of the scientific method through observation to empirically test hypotheses explaining and predicting what, where, why, how, and when phenomena occurred. Positivist scholars like Comte believed that only scientific methods, rather than earlier spiritual explanations of human behavior, could advance knowledge. Quantitative methods are an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative methods, reviews of the literature (including scholarly literature), interviews with experts, and computer simulation, and which forms an extension of data triangulation.

Quantitative methods have limitations: such studies do not provide the reasoning behind participants' responses, they often fail to reach underrepresented populations, and they may require long periods for data collection.

Use of statistics

Statistics is the branch of mathematics most widely used in quantitative research outside of the physical sciences, and it also finds applications within the physical sciences, such as in statistical mechanics.
Statistical methods are used extensively within fields such as economics, social sciences and biology. Quantitative research using statistical methods starts with the collection of data, based on the hypothesis or theory. Usually a big sample of data is collected – this would require verification, validation and recording before the analysis can take place. Software packages such as SPSS and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide. Empirical relationships and associations are also frequently studied by using some form of general linear model, non-linear model, or by using factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation, although some such as Clive Granger suggest that a series of correlations can imply a degree of causality. This principle follows from the fact that it is always possible a spurious relationship exists for variables between which covariance is found in some degree. Associations may be examined between any combination of continuous and categorical variables using methods of statistics. 
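The spurious relationships mentioned above can be illustrated with a small simulation (a hypothetical sketch in Python; the variable names and the confounding setup are illustrative, not drawn from any real study): two variables driven by a common hidden cause correlate strongly even though neither causes the other, and "controlling" for the confounder makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confounder z drives both x and y; neither causes the other.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = -1.5 * z + rng.normal(size=n)

# Raw correlation is strong despite no causal link between x and y.
raw_r = np.corrcoef(x, y)[0, 1]

# "Controlling" for z: correlate the residuals of x and y after
# regressing each on z (a simple partial correlation).
bx = np.polyfit(z, x, 1)
by = np.polyfit(z, y, 1)
partial_r = np.corrcoef(x - np.polyval(bx, z),
                        y - np.polyval(by, z))[0, 1]

print(f"raw r = {raw_r:.2f}, partial r = {partial_r:.2f}")
```

Here the raw correlation is strongly negative while the partial correlation is near zero, which is exactly the pattern a spurious relationship produces.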
Other data analytical approaches for studying causal relations can be performed with Necessary Condition Analysis (NCA), which outlines must-have conditions for the studied outcome variable.

Measurement

Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Kuhn argued that quantitative results can turn out to be anomalous: measurements can depart from what accepted theory predicts. He argued that such anomalies, registered in the process of obtaining data, are what make measurement interesting, as seen below: When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180). In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models such as the Rasch model and item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences. Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured.
Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.

Relationship with qualitative methods

In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or the other type of method can be a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on the other. The majority tendency throughout the history of social science, however, has been to use eclectic approaches, combining both methods. Qualitative methods might be used to understand the meaning of the conclusions produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.
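The proxy calibration described above can be sketched as an ordinary least-squares fit of the instrumental record against the proxy over their overlap period, with the R² statistic reporting how much of the variance the proxy captures. This is a minimal illustration on synthetic data; the numbers and variable names are invented for the sketch, not a real reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "instrumental" temperatures and a noisy proxy (e.g. ring
# width) over a 150-observation overlap period used for calibration.
temperature = rng.normal(loc=14.0, scale=0.5, size=150)
ring_width = 0.8 * temperature + rng.normal(scale=0.3, size=150)

# Calibrate: fit proxy -> temperature by ordinary least squares.
slope, intercept = np.polyfit(ring_width, temperature, 1)
reconstructed = slope * ring_width + intercept

# R^2: the fraction of temperature variance the proxy reconstructs.
ss_res = np.sum((temperature - reconstructed) ** 2)
ss_tot = np.sum((temperature - temperature.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"captured variance: {r_squared:.0%}")
```

An R² well below 1 is the quantitative expression of the point made above: the proxy reconstructs only a certain amount of the variance of the original record.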
Examples

Research that reports the percentage amounts of all the elements that make up Earth's atmosphere.
A survey that concludes that the average patient has to wait two hours in the waiting room of a certain doctor before being seen.
An experiment in which group x was given two tablets of aspirin a day and group y was given two tablets of a placebo a day, where each participant is randomly assigned to one or the other of the groups.

The numerical factors, such as two tablets, percent of elements and the time of waiting, make the situations and results quantitative. In economics, quantitative research is used to analyze business enterprises and the factors contributing to the diversity of organizational structures and the relationships of firms with labour, capital and product markets.

See also

Antipositivism
Case study research
Econometrics
Falsifiability
Market research
Positivism
Qualitative research
Quantitative marketing research
Quantitative psychology
Quantification (science)
Observational study
Sociological positivism
Statistical survey
Statistics
Thomism
Thomism is the philosophical and theological school which arose as a legacy of the work and thought of Thomas Aquinas (1225–1274), the Dominican philosopher, theologian, and Doctor of the Church. In philosophy, Thomas's disputed questions and commentaries on Aristotle are perhaps his best-known works. In theology, his Summa Theologica is amongst the most influential documents in medieval theology and continues to be the central point of reference for the philosophy and theology of the Catholic Church. In the 1914 motu proprio Doctoris Angelici, Pope Pius X cautioned that the teachings of the Church cannot be understood without the basic philosophical underpinnings of Thomas's major theses:

Overview

Thomas Aquinas held and practiced the principle that truth is to be accepted no matter where it is found. His doctrines drew from Greek, Roman, Islamic and Jewish philosophers. Specifically, he was a realist (i.e. unlike skeptics, he believed that the world can be known as it is). He often affirmed Aristotle's views with independent arguments, and largely followed Aristotelian terminology and metaphysics. He wrote comprehensive commentaries on Aristotle, and respectfully referred to him simply as "the Philosopher". He also adhered to some neoplatonic principles, for example that "it is absolutely true that there is first something which is essentially being and essentially good, which we call God, [...] [and that] everything can be called good and a being, inasmuch as it participates in it by way of a certain assimilation".

Metaphysics

Aquinas says that the fundamental axioms of ontology are the principle of non-contradiction and the principle of causality. Therefore, any being that does not contradict these two laws could theoretically exist, even if said being were incorporeal.

Predication

Aquinas noted three forms of descriptive language when predicating: univocal, analogical, and equivocal.
Univocality is the use of a descriptor in the same sense when applied to two objects or groups of objects: for instance, the word "milk" applied both to milk produced by cows and to milk produced by any other female mammal. Analogy occurs when a descriptor changes some but not all of its meaning. For example, the word "healthy" is analogical in that it applies both to a person or animal which enjoys good health and to some food or drink which promotes health. Equivocation is the complete change in meaning of the descriptor and is an informal fallacy, for example when the word "bank" is applied to river banks and financial banks. Modern philosophers call it ambiguity. Further, the usage of "definition" that Aquinas gives is the genus of the being, plus a difference that sets it apart from the genus itself. For instance, the Aristotelian definition of "man" is "rational animal": its genus is animal, and what sets man apart from other animals is his rationality.

Being

In Thomist philosophy, the definition of a being is "that which is", a principle with two parts: "that which" refers to its quiddity (literally "whatness"), and "is" refers to its esse (Latin "to be"). Quiddity means an essence, form, or nature which may or may not exist; whereas esse refers to existence or reality. That is, a being is "an essence that exists." Being is divided in two ways: that which is in itself (substances), and that which is in another (accidents). Substances are things which exist per se or in their own right. Accidents are qualities that apply to other things, such as shape or color: "[A]ccidents must include in their definition a subject which is outside their genus." Because they only exist in other things, Aquinas holds that metaphysics is primarily the study of substances, as they are the primary mode of being. The Catholic Encyclopedia pinpoints Aquinas' definition of quiddity as "that which is expressed by its definition."
The quiddity or form of a thing is what makes the object what it is: "[T]hrough the form, which is the actuality of matter, matter becomes something actual and something individual", and also, "the form causes matter to be." Thus, it consists of two parts: "prime matter" (matter without form), and substantial form, which is what causes a substance to have its characteristics. For instance, an animal can be said to be a being whose matter is its body, and whose soul is its substantial form. Together, these constitute its quiddity/essence. All real things have the transcendental properties of being: oneness, truth, goodness (that is, all things have a final cause and therefore a purpose), etc.

Causality

Aristotle categorized causality into four subsets in the Metaphysics, a categorization which is an integral part of Thomism:

(a) the material cause, what a being's matter consists of (if applicable);
(b) the formal cause, what a being's essence is;
(c) the efficient cause, what brings about the beginning of, or change to, a being;
(d) the final cause, what a being's purpose is.

Unlike many ancient Greeks, who thought that an infinite regress of causality is possible (and thus held that the universe is uncaused), Aquinas argues that an infinite chain never accomplishes its objective and is thus impossible. Hence, a first cause is necessary for the existence of anything to be possible. Further, the First Cause must continuously be in action (much as there must always be a first link in a chain), otherwise the series collapses: Thus, both Aristotle and Aquinas conclude that there must be an uncaused Primary Mover, because an infinite regress is impossible. However, the First Cause does not necessarily have to be temporally the first. Thus, the question of whether or not the universe can be imagined as eternal was fiercely debated in the Middle Ages.
The University of Paris's condemnation of 1270 denounced the belief that the world is eternal. Aquinas' intellectual rival, Bonaventure, held that the temporality of the universe is demonstrable by reason. Aquinas' position was that the temporality of the world is an article of faith, and not demonstrable by reason; one could reasonably conclude either that the universe is temporal or that it is eternal.

Goodness

As per the Nicomachean Ethics of Aristotle, Aquinas defines "the good" as what all things strive for. E.g., a cutting knife is said to be good if it is effective at its function, cutting. As all things have a function/final cause, all real things are good. Consequently, evil is nothing but privatio boni, or "lack of good", as Augustine of Hippo defined it. Commenting on the aforementioned, Aquinas says that "there is no problem from the fact that some men desire evil. For they desire evil only under the aspect of good, that is, insofar as they think it good. Hence their intention primarily aims at the good and only incidentally touches on the evil." As God is the ultimate end of all things, God is by essence goodness itself. Furthermore, since love is "to wish the good of another", true love in Thomism is to lead another to God. Hence John the Evangelist says, "Whoever is without love does not know God, for God is love."

Existence of God

Thomas Aquinas holds that the existence of God can be demonstrated by reason, a view that is taught by the Catholic Church. The quinque viae (Latin: five ways) found in the Summa Theologica (I, Q.2, art.3) are five possible ways of demonstrating the existence of God, which today are categorized as:

1. Argumentum ex motu, or the argument of the unmoved mover;
2. Argumentum ex ratione causae efficientis, or the argument of the first cause;
3. Argumentum ex contingentia, or the argument from contingency;
4. Argumentum ex gradu, or the argument from degree; and
5. Argumentum ex fine, or the teleological argument.
Despite this, Aquinas also thought that sacred mysteries such as the Trinity could only be obtained through revelation, though these truths cannot contradict reason: Aquinas responds to the problem of evil by saying that God allows evil to exist so that good may come of it (for goodness done out of free will is superior to goodness done from biological imperative), but does not personally cause evil Himself.

View of God

Aquinas articulated and defended, both as a philosopher and a theologian, the orthodox Christian view of God. God is the sole being whose existence is the same as His essence: "what subsists in God is His existence." (Hence God names himself "I Am that I Am" in Exodus 3:14.) Consequently, God cannot be a body (that is, He cannot be composed of matter), He cannot have any accidents, and He must be simple (that is, not separated into parts; the Trinity is one substance in three persons). Further, He is goodness itself, perfect, infinite, omnipotent, omniscient, happiness itself, knowledge itself, love itself, omnipresent, immutable, and eternal. Summing up these properties, Aquinas offers the term actus purus (Latin: "pure actuality"). Aquinas held that not only does God have knowledge of everything, but that God has "the most perfect knowledge", and that it is also true to say that God "is" His understanding. Aquinas also understands God as the transcendent cause of the universe, the "first Cause of all things, exceeding all things caused by Him", the source of all creaturely being and the cause of every other cause. Consequently, God's causality is not like the causality of any other causes (all other causes are "secondary causes"), because He is the transcendent source of all being, causing and sustaining every other existing thing at every instant. God's causality is therefore never in competition with the causality of creatures; rather, God even causes some things through the causality of creatures.
Aquinas was an advocate of the "analogical way", which says that because God is infinite, people can only speak of God by analogy, for some of the aspects of the divine nature are hidden (Deus absconditus) and others revealed (Deus revelatus) to finite human minds. Thomist philosophy holds that we can know about God through his creation (general revelation), but only in an analogous manner. For instance, we can speak of God's goodness only by understanding that goodness as applied to humans is similar to, but not identical with, the goodness of God. Further, he argues that sacred scripture employs figurative language: "Now it is natural to man to attain to intellectual truths through sensible objects, because all our knowledge originates from sense. Hence in Holy Writ, spiritual truths are fittingly taught under the likeness of material things." In order to demonstrate God's creative power, Aquinas says: "If a being participates, to a certain degree, in an 'accident,' this accidental property must have been communicated to it by a cause which possesses it essentially. Thus iron becomes incandescent by the action of fire. Now, God is His own power which subsists by itself. The being which subsists by itself is necessarily one."

Anthropology

In addition to agreeing with the Aristotelian definition of man as "the rational animal", Aquinas also held various other beliefs about the substance of man. For instance, as the essence (nature) of all men is the same, and the definition of being is "an essence that exists", individual humans therefore differ only by their specific qualities. More generally speaking, all beings of the same genus have the same essence, and so long as they exist, only differ by accidents and substantial form.

Soul

Thomists define the soul as the substantial form of living beings. Thus, plants have "vegetative souls", animals have "sensitive souls", while human beings alone have "intellectual" – rational and immortal – souls.
The appetite of man has two parts, rational and irrational. The rational part is called the will, and the irrational part is called passion.

Ethics

Aquinas affirms Aristotle's definition of happiness as "an operation according to perfect virtue", and that "happiness is called man's supreme good, because it is the attainment or enjoyment of the supreme good." Aquinas defines virtue as a good habit, which is a good quality of a person demonstrated by his actions and reactions over a substantial period of time. He writes: Aquinas ascertained the cardinal virtues to be prudence, temperance, justice, and fortitude. The cardinal virtues are natural and revealed in nature, and they are binding on everyone. There are, however, three theological virtues: faith, hope, and charity (which is used interchangeably with love in the sense of agape). These are supernatural and are distinct from other virtues in their object, namely, God. In accordance with Roman Catholic theology, Aquinas argues that humans can neither wish nor do good without divine grace. However, "doing good" here refers to doing good per se: even without grace, man can do things that happen to be good in some respect and are not sinful (moved by God only in the sense in which even his nature depends on God's moving), though such acts are without merit, and he will not succeed in them all the time. Therefore, happiness is attained through the perseverance of virtue given by the Grace of God, which is not fully attained on earth but only in the beatific vision. Notably, man cannot attain true happiness without God. Regarding emotion (used synonymously with the word "passion" in this context), which, following John Damascene, Aquinas defines as "a movement of the sensitive appetite when we imagine good or evil", Thomism repudiates both the Epicurean view that happiness consists in pleasure (sensual experiences that invoke positive emotion), and the Stoic view that emotions are vices by nature.
Aquinas takes a moderate view of emotion, quoting Augustine: "They are evil if our love is evil; good if our love is good." While most emotions are morally neutral, some are inherently virtuous (e.g. pity) and some are inherently vicious (e.g. envy). Thomist ethics hold that it is necessary to observe both circumstances and intention to determine an action's moral value, and therefore Aquinas cannot be said to be strictly either a deontologist or a consequentialist. Rather, he would say that an action is morally good if it fulfills God's antecedent will. Of note is the principle of double effect, formulated in the Summa, II-II, Q.64, art.7, which is a justification of homicide in self-defense. The doctrine of just war, which had previously experienced difficulties in the world of Christian philosophy, was expounded by Aquinas with this principle. He says:

Law

Aquinas recognizes four different species of law, which he defines as "an ordinance of reason for the common good, made by him who has care of the community, and promulgated":

Eternal law, which is "the type of Divine Wisdom, as directing all actions and movements";
Natural law, "whereby each one knows, and is conscious of, what is good and what is evil", which is the rational being's participation in the eternal law;
Human or temporal law, laws made by humans by necessity; and
Divine law, the moral imperatives specifically given through revelation.

The development of natural law is one of the most influential parts of Thomist philosophy. Aquinas says that "[the law of nature] is nothing other than the light of the intellect planted in us by God, by which we know what should be done and what should be avoided. God gave this light and this law in creation... For no one is ignorant that what he would not like to be done to himself he should not do to others, and similar norms."
Aquinas argues that the Mosaic covenant was divine, though rightfully only given to the Jews before Christ; whereas the New Covenant replaces the Old Covenant and is meant for all humans.

Free will

Aquinas argues that there is no contradiction between God's providence and human free will: Aquinas argues that God offers man both a prevenient grace to enable him to perform supernaturally good works, and cooperative grace within the same. The relation of prevenient grace to voluntariness has been the subject of further debate; the position known here as "Thomist" was originated by Domingo Báñez and says that God gives an additional grace (the "efficient grace") to the predestined which makes them accept, while Luis de Molina held that God distributes grace according to a middle knowledge, and man can accept it without a different grace. Molinism is a school that is part of Thomism in the general sense (it originated in commentaries to Aquinas), yet it must be borne in mind that, here, Thomism and Molinism oppose each other. (The question has been declared undecided by the Holy See.)

Epistemology

Aquinas predates epistemology as a distinct discipline, which arose among modern thinkers whose positions, following in the wake of Descartes, are fundamentally opposed to Aquinas'. Nonetheless, a Thomistic theory of knowledge can be derived from a mixture of Aquinas' logical, psychological, metaphysical, and even theological doctrines. Aquinas' thought is an instance of the correspondence theory of truth, which says that something is true "when it conforms to the external reality." Therefore, any being that exists can be said to be true insofar as it participates in the world. Aristotle's De anima (On the Soul) divides the mind into three parts: sensation, imagination and intellection. When one perceives an object, his mind composites a sense-image.
When he remembers the object he previously sensed, he is imagining its form (the image of the imagination is often translated as "phantasm"). When he extracts information from this phantasm, he is using his intellect. Consequently, all human knowledge concerning universals (such as species and properties) is derived from the phantasm ("the received is in the receiver according to the mode of the receiver"), which itself is a recollection of an experience. Concerning the question of "Whether the intellect can actually understand through the intelligible species of which it is possessed, without turning to the phantasms?" in the Summa Theologica, Aquinas quotes Aristotle in the sed contra: "the soul understands nothing without a phantasm." Hence the peripatetic axiom. (Another theorem to be drawn from this is that error is a result of drawing false conclusions based on our sensations.) Aquinas' epistemological theory would later be classified as empiricism, for holding that sensations are a necessary step in acquiring knowledge, and that deductions cannot be made from pure reason.

Impact

Aquinas shifted Scholasticism away from neoplatonism and towards Aristotle. The ensuing school of thought, through its influence on Catholicism and the ethics of the Catholic school, is one of the most influential philosophies of all time, also significant due to the number of people living by its teachings. Before Aquinas' death, Stephen Tempier, Bishop of Paris, forbade certain positions associated with Aquinas (especially his denial of both universal hylomorphism and a plurality of substantial forms in a single substance) to be taught in the Faculty of Arts at Paris. Through the influence of traditional Augustinian theologians, some theses of Aquinas were condemned in 1277 by the ecclesiastical authorities of Paris and Oxford (the most important theological schools in the Middle Ages).
The Franciscan Order opposed the ideas of the Dominican Aquinas, while the Dominicans institutionally took up the defense of his work (1286), and thereafter adopted it as an official philosophy of the order to be taught in their studia. Early opponents of Aquinas include William de la Mare, Henry of Ghent, Giles of Rome, and John Duns Scotus. Early and noteworthy defenders of Aquinas were his former teacher Albertus Magnus, the ill-fated Richard Knapwell, William Macclesfeld, Giles of Lessines, John of Quidort, Bernard of Auvergne and Thomas of Sutton. The canonization of Aquinas in 1323 led to a revocation of the condemnation of 1277. Later, Aquinas and his school would find a formidable opponent in the via moderna, particularly in William of Ockham and his adherents. Thomism remained a doctrine held principally by Dominican theologians, such as Giovanni Capreolo (1380–1444) or Tommaso de Vio (1468–1534). Eventually, in the 16th century, Thomism found a stronghold on the Iberian Peninsula, through for example the Dominicans Francisco de Vitoria (particularly noteworthy for his work in natural law theory), Domingo de Soto (notable for his work on economic theory), John of St. Thomas, and Domingo Báñez; the Carmelites of Salamanca (i.e., the Salmanticenses); and even, in a way, the newly formed Jesuits, particularly Francisco Suárez, and Luis de Molina. The modern period brought considerable difficulty for Thomism. Pope Leo XIII attempted a Thomistic revival, particularly with his 1879 encyclical Aeterni Patris and his establishment of the Leonine Commission, established to produce critical editions of Aquinas' opera omnia. This encyclical served as the impetus for the rise of Neothomism, which brought a renewed emphasis on the ethical parts of Thomism; a large part of Thomism's views on life, humans, and theology is likewise found in the various schools of Neothomism.
Neothomism held sway as the dominant philosophy of the Roman Catholic Church until the Second Vatican Council, which seemed, in the eyes of Homiletic and Pastoral Review writer Fr. Brian Van Hove, SJ, to confirm the significance of Ressourcement theology. Thomism remains a school of philosophy today, and influential in Catholicism, though "The Church has no philosophy of her own nor does she canonize any one particular philosophy in preference to others." In recent years, the cognitive neuroscientist Walter Freeman has proposed that Thomism is the philosophical system explaining cognition that is most compatible with neurodynamics, in a 2008 article in the journal Mind and Matter entitled "Nonlinear Brain Dynamics and Intention According to Aquinas."

Connection with Jewish thought

Aquinas did not disdain to draw upon Jewish philosophical sources. His main work, the Summa Theologica, shows a profound knowledge not only of the writings of Avicebron (Ibn Gabirol), whose name he mentions, but also of most Jewish philosophical works then existing. Aquinas pronounces himself energetically against the hypothesis of the eternity of the world, in agreement with both Christian and Jewish theology. But as this theory is attributed to Aristotle, he seeks to demonstrate that the latter did not express himself categorically on this subject. "The argument", said he, "which Aristotle presents to support this thesis is not properly called a demonstration, but is only a reply to the theories of those ancients who supposed that this world had a beginning and who gave only impossible proofs. There are three reasons for believing that Aristotle himself attached only a relative value to this reasoning..." In this, Aquinas paraphrases Maimonides' Guide for the Perplexed, where those reasons are given.
Scholarly perspectives

René Descartes

Thomism began to decline in popularity in the modern period, which was inaugurated by René Descartes' works Discourse on the Method in 1637 and Meditations on First Philosophy in 1641. The Cartesian doctrines of mind–body dualism and the fallibility of the senses implicitly contradicted Aristotle and Aquinas:

G. K. Chesterton

In describing Thomism as a philosophy of common sense, G. K. Chesterton wrote:

History

J. A. Weisheipl emphasizes that within the Dominican Order the history of Thomism has been continuous since the time of Aquinas: An idea of the longstanding historic continuity of Dominican Thomism may be derived from the list of people associated with the Pontifical University of St. Thomas Aquinas. Outside the Dominican Order, Thomism has had varying fortunes, leading some to periodize it historically or thematically. Weisheipl distinguishes "wide" Thomism, which includes those who claim to follow the spirit and basic insights of Aquinas and manifest an evident dependence on his texts, from "eclectic" Thomism, which includes those with a willingness to allow the influence of other philosophical and theological systems in order to relativize the principles and conclusions of traditional Thomism. John Haldane gives a historic division of Thomism including 1) the period of Aquinas and his first followers from the 13th to 15th centuries, 2) a second Thomism from the 16th to 18th centuries, and 3) a Neo-Thomism from the 19th to 20th centuries. One might justifiably articulate other historical divisions on the basis of shifts in perspective on Aquinas' work, including the period immediately following Aquinas' canonization in 1323, the period following the Council of Trent, and the period after the Second Vatican Council.
Romanus Cessario thinks it better not to identify intervals of time or periods within the larger history of Thomism, because Thomists have addressed too broad a variety of issues and in too many geographical areas to permit such divisions.

First Thomistic School

The first period of Thomism stretches from Aquinas' teaching activity, beginning in 1256 at Paris and continuing at Cologne, Orvieto, Viterbo, Rome, and Naples, until his canonization in 1323. In this period his doctrines "were both attacked and defended" as, for example, after his death (1274) the condemnations of 1277, 1284 and 1286 were counteracted by the General Chapters of the Dominican Order and other disciples who came to Aquinas' defense.

1323 to the Council of Trent

After Aquinas' canonisation, commentaries on Aquinas increased, especially at Cologne, which had previously been a stronghold of Albert the Great's thought. Henry of Gorkum (1386–1431) wrote what may well be the earliest commentary on the Summa Theologiae, followed in due course by his student Denis the Carthusian.

Council of Trent to Aeterni Patris

Responding to prevailing philosophical rationalism during the Enlightenment, Salvatore Roselli, professor of theology at the College of St. Thomas, the future Pontifical University of Saint Thomas Aquinas, Angelicum, in Rome, published a six-volume Summa philosophica (1777) giving an Aristotelian interpretation of Aquinas validating the senses as a source of knowledge. While teaching at the College, Roselli is considered to have laid the foundation for Neothomism in the nineteenth century. According to historian J.A. Weisheipl, in the late 18th and early 19th centuries "everyone who had anything to do with the revival of Thomism in Italy, Spain and France was directly influenced by Roselli's monumental work."
Aeterni Patris to Vatican II

The Thomist revival that began in the mid-19th century, sometimes called "neo-scholasticism" or "neo-Thomism", can be traced to figures such as the Angelicum professor Tommaso Maria Zigliara, the Jesuits Josef Kleutgen and Giovanni Maria Cornoldi, and the secular priest Gaetano Sanseverino. The movement received impetus from Pope Leo XIII's encyclical Aeterni Patris of 1879. Generally the revival accepts the interpretative tradition of Aquinas' great commentators, such as Capréolus, Cajetan, and John of St. Thomas. Its focus, however, is less exegetical and more concerned with deploying a rigorously worked-out system of Thomistic metaphysics in a wholesale critique of modern philosophy. Other seminal figures in the early part of the century include Martin Grabmann (1875–1949) and Amato Masnovo (1880–1955). The movement's core philosophical commitments are summarized in the "Twenty-Four Thomistic Theses" approved by Pope Pius X. In the first half of the twentieth century the Angelicum professors Edouard Hugon and Réginald Garrigou-Lagrange, among others, carried on Leo's call for a Thomist revival. Their approach is reflected in many of the manuals and textbooks widely used in Roman Catholic colleges and seminaries before Vatican II. Although the Second Vatican Council did not take place until 1962–1965, Cornelio Fabro could already write in 1949 that the century of revival, with its urgency to provide a synthetic systematization and defense of Aquinas' thought, was coming to an end. Fabro looked forward to a more constructive period in which the original context of Aquinas' thought would be explored.

Recent schools and interpretations

A summary of some recent and current schools and interpretations of Thomism can be found, among other places, in La Metafisica di san Tommaso d'Aquino e i suoi interpreti (2002) by Battista Mondin, in Being and Some 20th Century Thomists (2003) by John F. X.
Knasas, and in the writing of Edward Feser.

Neo-Scholastic Thomism

Neo-Scholastic Thomism identifies with the philosophical and theological tradition stretching back to the time of St. Thomas. In the nineteenth century, authors such as Tommaso Maria Zigliara focused not only on exegesis of the historical Aquinas but also on the articulation of a rigorous system of orthodox Thomism to be used as an instrument of critique of contemporary thought. Because of its suspicion of attempts to harmonize Aquinas with non-Thomistic categories and assumptions, Neo-Scholastic Thomism has sometimes been called "strict observance Thomism". A discussion of recent and current Neo-Scholastic Thomism can be found in La Metafisica di san Tommaso d'Aquino e i suoi interpreti (2002) by Battista Mondin, which covers such figures as Martin Grabmann, Reginald Garrigou-Lagrange, Sofia Vanni Rovighi (1908–1990), Cornelio Fabro (1911–1995), Carlo Giacon (1900–1984), Tomáš Týn (1950–1990), Abelardo Lobato (1925–2012), Leo Elders (1926–2019), and Giovanni Ventimiglia (b. 1964), among others. Fabro in particular emphasizes Aquinas' originality, especially with respect to the actus essendi, the act of existence of finite beings, which participate in being itself. Other scholars, such as those involved with the "Progetto Tommaso", seek to establish an objective and universal reading of Aquinas' texts.

Cracow Circle Thomism

Cracow Circle Thomism (named after Kraków) has been called "the most significant expression of Catholic thought between the two World Wars." The Circle was founded by a group of philosophers and theologians who, in distinction to more traditional Neo-Scholastic Thomism, embraced modern formal logic as an analytical tool for traditional Thomist philosophy and theology.
Inspired by the logical clarity of Aquinas, members of the Circle held both philosophy and theology to contain "propositions with truth-values…a structured body of propositions connected in meaning and subject matter, and linked by logical relations of compatibility and incompatibility, entailment etc." "The Cracow Circle set about investigating and where possible improving this logical structure with the most advanced logical tools available at the time, namely those of modern mathematical logic, then called 'logistic'."

Existential Thomism

Étienne Gilson (1884–1978), the key proponent of existential Thomism, tended to emphasize the importance of historical exegesis but also to deemphasize Aquinas's continuity with the Aristotelian tradition and, like Cornelio Fabro of the Neo-Scholastic school, to highlight the originality of Aquinas's doctrine of being as existence. He was also critical of the Neo-Scholastics' focus on the tradition of the commentators and, given what he regarded as their insufficient emphasis on being or existence, accused them of "essentialism" (alluding to the other half of Aquinas's distinction between being and essence). Gilson's reading of Aquinas as putting forward a distinctively "Christian philosophy" tended, at least in the view of his critics, to blur Aquinas's distinction between philosophy and theology. Jacques Maritain (1882–1973) introduced into Thomistic metaphysics the notion that philosophical reflection begins with an "intuition of being", and in ethics and social philosophy sought to harmonize Thomism with personalism and pluralistic democracy. Though "existential Thomism" was sometimes presented as a counterpoint to modern existentialism, the main reason for the label is the emphasis this approach puts on Aquinas's doctrine of existence. Other proponents include Joseph Owens, Eugene Fairweather, and John F. X. Knasas.
River Forest Thomism

According to River Forest Thomism (named after River Forest, Illinois), the natural sciences are epistemologically prior to metaphysics, which is preferably called metascience. This approach emphasizes the Aristotelian foundations of Aquinas's philosophy, and in particular the idea that the construction of a sound metaphysics must be preceded by a sound understanding of natural science, as interpreted in light of an Aristotelian philosophy of nature. Accordingly, it is keen to show that modern physical science can and should be given such an interpretation. Charles De Koninck, Raymond Jude Nogar, James A. Weisheipl, William A. Wallace, and Benedict Ashley are among its representatives. It is sometimes called "Laval Thomism" after Université Laval in Quebec, where De Koninck was a professor. The alternative label "River Forest Thomism" derives from a suburb of Chicago, the location of the Albertus Magnus Lyceum for Natural Science, whose members have been associated with this approach. It is also sometimes called "Aristotelian Thomism" (to highlight its contrast with Gilson's brand of existential Thomism), though since Neo-Scholastic Thomism also emphasizes Aquinas's continuity with Aristotle, this label seems a bit too proprietary. (There are writers, like the contemporary Thomist Ralph McInerny, who have exhibited both Neo-Scholastic and Laval/River Forest influences, and the approaches are not necessarily incompatible.)

Transcendental Thomism

Unlike the schools mentioned above, transcendental Thomism, associated with Joseph Maréchal (1878–1944), Karl Rahner (1904–84), and Bernard Lonergan (1904–84), does not oppose modern philosophy wholesale, but seeks to reconcile Thomism with a Cartesian subject-centered approach to knowledge in general, and with Kantian transcendental philosophy in particular.
To Feser, "It seems fair to say that most Thomists otherwise tolerant of diverse approaches to Aquinas's thought tend to regard transcendental Thomism as having conceded too much to modern philosophy genuinely to count as a variety of Thomism, strictly speaking, and this school of thought has in any event been far more influential among theologians than among philosophers."

Lublin Thomism

Lublin Thomism, which derives its name from the Catholic University of Lublin in Poland where it is centered, is also sometimes called "phenomenological Thomism". Like transcendental Thomism, it seeks to combine Thomism with certain elements of modern philosophy. In particular, it seeks to make use of the phenomenological method of philosophical analysis associated with Edmund Husserl, and of the ethical personalism of writers like Max Scheler, in articulating the Thomist conception of the human person. Its best-known proponent is Karol Wojtyla (1920–2005), who went on to become Pope John Paul II. However, unlike transcendental Thomism, the metaphysics of Lublin Thomism places priority on existence (as opposed to essence), making it an existential Thomism consonant with the Thomism of Étienne Gilson. The phenomenological concerns of the Lublin school are not metaphysical in nature, as that would constitute idealism; rather, they are considerations brought into relation with central positions of the school, as when dealing with modern science, its epistemological value, and its relation to metaphysics.

Analytical Thomism

Analytical Thomism is described by John Haldane, its key proponent, as "a broad philosophical approach that brings into mutual relationship the styles and preoccupations of recent English-speaking philosophy and the concepts and concerns shared by Aquinas and his followers" (from the article on "analytical Thomism" in The Oxford Companion to Philosophy, edited by Ted Honderich).
By "recent English-speaking philosophy" Haldane means the analytical tradition founded by thinkers like Gottlob Frege, Bertrand Russell, G. E. Moore, and Ludwig Wittgenstein, which tends to dominate academic philosophy in the English-speaking world. Elizabeth Anscombe (1919–2001) and her husband Peter Geach are sometimes considered the first "analytical Thomists", though (like most writers to whom this label has been applied) they did not describe themselves in these terms; and, as Haldane's somewhat vague expression "mutual relationship" indicates, there does not seem to be any set of doctrines held in common by all analytical Thomists. What they do have in common seems to be that they are philosophers trained in the analytic tradition who happen to be interested in Aquinas in some way; and the character of their "analytical Thomism" is determined by whether it tends to stress the "analytical" side, the "Thomism" side, or, alternatively, attempts to emphasize both sides equally.

24 Thomistic theses of Pius X

With the decree Postquam sanctissimus of 27 July 1914, Pope Pius X stated that 24 theses formulated by "teachers from various institutions [...] clearly contain the principles and more important thoughts" of Aquinas.

Ontology

Potency and act divide being in such a way that whatever is, is either pure act, or of necessity it is composed of potency and act as primary and intrinsic principles. Since act is perfection, it is not limited except through a potency which itself is a capacity for perfection. Hence in any order in which an act is pure act, it will only exist, in that order, as a unique and unlimited act. But whenever it is finite and manifold, it has entered into a true composition with potency. Consequently, the one God, unique and simple, alone subsists in absolute being.
All other things that participate in being have a nature whereby their being is restricted; they are constituted of essence and being, as really distinct principles. A thing is called a being because of "esse". God and creature are not called beings univocally, nor wholly equivocally, but analogically, by an analogy both of attribution and of proportionality. In every creature there is also a real composition of the subsisting subject and of added secondary forms, i.e. accidental forms. Such composition cannot be understood unless being is really received in an essence distinct from it. Besides the absolute accidents there is also the relative accident, relation. Although by reason of its own character relation does not signify anything inhering in another, it nevertheless often has a cause in things, and hence a real entity distinct from the subject. A spiritual creature is wholly simple in its essence. Yet there is still a twofold composition in the spiritual creature, namely, that of the essence with being, and that of the substance with accidents. However, the corporeal creature is composed of act and potency even in its very essence. These act and potency in the order of essence are designated by the names form and matter respectively.

Cosmology

Neither the matter nor the form have being of themselves, nor are they produced or corrupted of themselves, nor are they included in any category otherwise than reductively, as substantial principles. Although extension in quantitative parts follows upon a corporeal nature, nevertheless it is not the same for a body to be a substance and for it to be quantified. For of itself substance is indivisible, not indeed as a point is indivisible, but as that which falls outside the order of dimensions is indivisible. But quantity, which gives the substance extension, really differs from the substance and is truly an accident.
The principle of individuation, i.e., of numerical distinction of one individual from another with the same specific nature, is matter designated by quantity. Thus in pure spirits there cannot be more than one individual in the same specific nature. By virtue of a body's quantity itself, the body is circumscriptively in a place, and in one place alone circumscriptively, no matter what power might be brought to bear. Bodies are divided into two groups; for some are living and others are devoid of life. In the case of the living things, in order that there be in the same subject an essentially moving part and an essentially moved part, the substantial form, which is designated by the name soul, requires an organic disposition, i.e. heterogeneous parts.

Psychology

Souls in the vegetative and sensitive orders cannot subsist of themselves, nor are they produced of themselves. Rather, they are no more than principles whereby the living thing exists and lives; and since they are wholly dependent upon matter, they are incidentally corrupted through the corruption of the composite. On the other hand, the human soul subsists of itself. When it can be infused into a sufficiently disposed subject, it is created by God. By its very nature, it is incorruptible and immortal. This rational soul is united to the body in such a manner that it is the only substantial form of the body. By virtue of his soul a man is a man, an animal, a living thing, a body, a substance and a being. Therefore, the soul gives man every essential degree of perfection; moreover, it gives the body a share in the act of being whereby it itself exists. From the human soul there naturally issue forth powers pertaining to two orders, the organic and the non-organic. The organic powers, among which are the senses, have the composite as their subject. The non-organic powers have the soul alone as their subject. Hence, the intellect is a power intrinsically independent of any bodily organ.
Intellectuality necessarily follows upon immateriality, and furthermore, in such manner that the further the distance from matter, the higher the degree of intellectuality. Any being is the adequate object of understanding in general. But in the present state of union of soul and body, quiddities abstracted from the material conditions of individuality are the proper object of the human intellect. Therefore, we receive knowledge from sensible things. But since sensible things are not actually intelligible, in addition to the intellect, which formally understands, an active power must be acknowledged in the soul, which power abstracts intelligible likenesses or species from sense images in the imagination. Through these intelligible likenesses or species we directly know universals, i.e. the natures of things. We attain to singulars by our senses, and also by our intellect, when it beholds the sense images. But we ascend to knowledge of spiritual things by analogy. The will does not precede the intellect but follows upon it. The will necessarily desires that which is presented to it as a good in every respect satisfying the appetite. But it freely chooses among the many goods that are presented to it as desirable according to a changeable judgment or evaluation. Consequently, the choice follows the final practical judgment. But the will is the cause of its being the final one.

God

We do not perceive by an immediate intuition that God exists, nor do we prove it a priori.
But we do prove it a posteriori, i.e., from the things that have been created, following an argument from the effects to the cause: namely, from things which are moved and cannot be the adequate source of their motion, to a first unmoved mover; from the production of the things in this world by causes subordinated to one another, to a first uncaused cause; from corruptible things which equally might be or not be, to an absolutely necessary being; from things which more or less are, live, and understand, according to degrees of being, living and understanding, to that which is maximally understanding, maximally living and maximally a being; finally, from the order of all things, to a separated intellect which has ordered and organized things, and directs them to their end. The metaphysical notion of the Divine Essence is correctly expressed by saying that it is identified with the exercised actuality of its own being, or that it is subsistent being itself. And this is the reason for its infinite and unlimited perfection. By reason of the very purity of His being, God is distinguished from all finite beings. Hence it follows, in the first place, that the world could only have come from God by creation; secondly, that not even by way of a miracle can any finite nature be given creative power, which of itself directly attains the very being of any being; and finally, that no created agent can in any way influence the being of any effect unless it has itself been moved by the first Cause.

Criticism

In his Against Henry, King of the English, Luther criticized what he saw as the Thomist reliance on proof by assertion, and on style over substance, in disputation, which he caricatured as "It seems so to me. I think so. I believe so." Luther also argued that the Thomist method led to shallowness in the theological debates of England at the time. Thomism was criticized by Bertrand Russell in A History of Western Philosophy (1946).
Neo-Thomism has been criticized by Catholic modernists such as George Tyrrell and by supporters of the Nouvelle théologie.

See also

List of works by Thomas Aquinas
Theism
Catholic theology
Brian Davies
Peter Kreeft
Brian Leftow
List of Thomist writers (13th–18th centuries)
Alasdair MacIntyre
Rule according to higher law
Rule of law
School of Salamanca
The Thomist
Thomistic sacramental theology
Thomistic Institute

References

Further reading

Reality: A Synthesis of Thomistic Thought by Reginald Garrigou-Lagrange
Modern Thomistic Philosophy by Richard Percival Phillips, an introduction to the Thomistic philosophy of nature
Introductory chapter by Craig Paterson and Matthew Pugh on the development of Thomism
The XXIV Theses of Thomistic Philosophy, with commentary by P. Lumbreras, O.P.

External links

Corpus Thomisticum – Aquinas's complete works
Bibliographia Thomistica
Thomas Aquinas Emulator Project, research into the use of generative AI to emulate Thomas Aquinas for an interactive engagement with Thomism
Experimentalism
Experimentalism is the philosophical belief that the way to truth is through experiments and empiricism. It is also associated with instrumentalism, the belief that truth should be evaluated based upon its demonstrated usefulness. Experimentalism is considered a theory of knowledge that emphasizes direct action and scientific control, as well as methods and consequences.

Conceptualizations

Experimentalism refers to John Dewey's version of pragmatism. The theory, which he also called practicalism, holds that the pattern for knowledge should be modern science and modern scientific methods. Dewey explained that philosophy involves the critical evaluation of belief and that the concept's function is practical. This perspective has influenced modern American intellectual culture, leading to a correction of approaches to science that had concentrated excessively on theory. While experimentalism is empirical in approach, it is distinguished from empiricism: the latter involves the passive reception of sense data and observational reports, while the former focuses on the conditions under which hypotheses are tested. Experimentalists maintain that political and moral concepts arise out of conflict, and hence consider experience and history essential. It is also maintained that the experimental attitude is based on the principle of fallibilism, operating with the notion that the outcomes of prior inquiries are not absolutely certain or already known, and that prior findings could be wrong. Deborah Mayo suggests that we should focus on how experimental knowledge is actually arrived at and how it functions in science. Mayo also suggests that the reason the New Experimentalists have come up short is that the parts of experiments with the most to offer in building an account of inference and evidence are left untapped: the designing, generating, modelling, and analysing of experiments and data.
Applications

Artists often pursue their visions through trial and error; this form of experimentalism has been practiced in every field, including music, film, literature, and theatre. A more specific explanation holds that this experimentalism is inductive in nature, with artists (e.g. Michelangelo and Titian) proceeding by trial and error, as opposed to the conceptualists' approach, which favors making preparatory work, with step changes made as they progress. Artistic experimentalism as a rule is generally associated with an attendant avant-garde. In literature, the experimental approach may involve producing texts through new procedures of literary production, such as the inclusion of images in poetry. This is also seen in the work of computer artists and those who integrate technology into their art. For instance, Stan VanderBeek produced Poemfield through programming, using BEFLIX to animate the poem's words and embed a geometric background. In education, there is the position that learners continuously need new methods and that experimentalism is essential to the development process. Through the method of learning-by-doing, it is expected that learners develop capacities and interests that empower them to assume the role of constructive participants in the life of the wider society. The experimentalist's view emphasizes the importance of life experience as the basis of what is learned. Experiences are said to consist of the active interrelationship between the individual and the external world. Global security specialists employ experimentalism to develop and maintain multi-faceted projects, as well as to devise innovative tools of governance; such projects are operationalized in a trial-and-error, adaptive manner.

See also

Neo-experimentalism
Experimental political science
Experimental philosophy
Experimental physics
Positivism

References
State of nature
In ethics, political philosophy, social contract theory, religion, and international law, the term state of nature describes the hypothetical way of life that existed before humans organised themselves into societies or civilizations. Philosophers of state-of-nature theory propose that there was a historical period before societies existed, and seek answers to the questions: "What was life like before civil society?", "How did government emerge from such a primitive start?", and "What are the hypothetical reasons for entering a state of society by establishing a nation-state?". In some versions of social contract theory, there are freedoms but no rights in the state of nature, and, by way of the social contract, people create societal rights and obligations. In other versions, society imposes restrictions (law, custom, tradition, etc.) that limit the natural rights of a person. Societies that existed before the political state are investigated and studied as Mesolithic history and through archaeology, cultural anthropology, social anthropology, and ethnology, which determine the particulars of an indigenous society's social structures and power structures.

Noted philosophers

Mozi

The early Warring States philosopher Mozi was one of the first thinkers in recorded history to develop the idea of the state of nature. He developed the idea to defend the need for a single overall ruler. According to Mozi, in the state of nature each person has their own moral rules (yi, 義). As a result, people were unable to reach agreements, and resources were wasted. Since Mozi promoted ways of strengthening and unifying the state (li, 利), such natural disorganization was rejected: In the beginning of human life, when there was yet no law and government, the custom was "everybody according to his rule (yi, 義)."
Accordingly each man had his own rule, two men had two different rules and ten men had eleven different rules -- the more people the more different notions. And everybody approved his own moral views and disapproved the views of others, and so arose mutual disapproval among men. As a result, father and son and elder and younger brothers became enemies and were estranged from each other, since they were unable to reach any agreement. Everybody worked for the disadvantage of the others with water, fire, and poison. Surplus energy was not spent for mutual aid; surplus goods were allowed to rot without sharing; excellent teachings (dao, 道) were kept secret and not revealed. The disorder in the (human) world could be compared to that among birds and beasts. —Chapter 3 - 1

His proposal was to unify rules under a single moral system or standard (fa, 法) that could be used by anyone: calculating the benefit of each act. In that way, the ruler of the state and his subjects would share the same moral system; cooperation and joint effort would be the rule. His proposal was later strongly rejected by Confucians (especially Mencius) because of its preference for benefit over morals.

Thomas Hobbes

The pure state of nature, or "the natural condition of mankind", was described by the 17th-century English philosopher Thomas Hobbes in Leviathan and his earlier work De Cive. Hobbes argued that natural inequalities between humans are not so great as to give anyone clear superiority, and thus all must live in constant fear of loss or violence, so that "during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man". In this state, every person has a natural right to do anything one thinks necessary for preserving one's own life, and life is "solitary, poor, nasty, brutish, and short" (Leviathan, Chapters XIII–XIV).
Hobbes described this natural condition with the Latin phrase bellum omnium contra omnes ("war of all against all") in De Cive. Within the state of nature there is neither personal property nor injustice, since there is no law, except for certain natural precepts discovered by reason ("laws of nature"): the first of which is "that every man ought to endeavour peace, as far as he has hope of obtaining it" (Leviathan, Ch. XIV); and the second is "that a man be willing, when others are so too, as far forth as for peace and defence of himself he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men as he would allow other men against himself" (loc. cit.). From here, Hobbes developed the way out of the state of nature into political society and government by mutual contracts. According to Hobbes, the state of nature exists at all times among independent countries, over whom there is no law except for those same precepts or laws of nature (Leviathan, Chapters XIII, XXX end). His view of the state of nature helped to serve as a basis for theories of international law and relations, and even some theories about domestic relations.

John Locke

John Locke considers the state of nature in his Second Treatise on Civil Government, written around the time of the Exclusion Crisis in England during the 1680s. For Locke, in the state of nature all men are free "to order their actions, and dispose of their possessions and persons, as they think fit, within the bounds of the law of nature" (2nd Tr., §4). "The state of Nature has a law of Nature to govern it", and that law is reason. Locke believes that reason teaches that "no one ought to harm another in his life, liberty, or property" (2nd Tr., §6), and that transgressions of this may be punished.
Locke describes the state of nature and civil society as opposites of each other, and the need for civil society comes in part from the perpetual existence of the state of nature. This view of the state of nature is partly deduced from Christian belief (unlike Hobbes, whose philosophy is not dependent upon any prior theology). Although it may be natural to assume that Locke was responding to Hobbes, Locke never refers to Hobbes by name, and may instead have been responding to other writers of the day, like Robert Filmer. In fact, Locke's First Treatise is entirely a response to Filmer's Patriarcha, and takes a step-by-step approach to refuting the theory Filmer sets out there. The conservative party at the time had rallied behind Filmer's Patriarcha, whereas the Whigs, fearful of another persecution of Protestants, rallied behind the theory set out by Locke in his Two Treatises of Government, which gave a clear account of why the people would be justified in overthrowing a monarchy that abused the trust they had placed in it.

Montesquieu

Montesquieu makes use of the concept of the state of nature in The Spirit of the Laws, first printed in 1748. Montesquieu describes the thought process of early human beings before the formation of society. He says that human beings would have the faculty of knowing, and would first think to preserve their life in that state. Human beings would also at first feel themselves impotent and weak, and as a result would not be likely to attack each other in this state. Next, humans would seek nourishment and, out of fear and impulse, would eventually unite to create society. Once society was created, a state of war would ensue among societies, all of which had been created in the same way. The purpose of war is the preservation of the society and the self. For Montesquieu, the formation of law within society is the reflection and application of reason.
Jean-Jacques Rousseau

Hobbes' view was challenged in the eighteenth century by Jean-Jacques Rousseau, who claimed that Hobbes was taking socialized people and simply imagining them living outside of the society in which they were raised. He affirmed instead that people are born neither good nor bad, but as a blank slate, and that society and the environment later influence which way we lean. In Rousseau's state of nature, people did not know each other well enough to come into serious conflict, and they did have normal values. Modern society, and the ownership it entails, is blamed for the disruption of the state of nature, which Rousseau sees as true freedom.

David Hume

David Hume offers in A Treatise of Human Nature (1739) that human beings are naturally social: "'Tis utterly impossible for men to remain any considerable time in that savage condition, which precedes society; but that his very first state and situation may justly be esteem'd social. This, however, hinders not, but that philosophers may, if they please, extend their reasoning to the suppos'd state of nature; provided they allow it to be a mere philosophical fiction, which never had, and never could have any reality." Hume's ideas about human nature expressed in the Treatise suggest that he would be happy with neither Hobbes' nor his contemporary Rousseau's thought-experiments. He explicitly derides as incredible the hypothetical humanity described in Hobbes' Leviathan. Additionally, he argues in "Of the Origin of Justice and Property" that if mankind were universally benevolent, we would not hold justice to be a virtue: "'tis only from the selfishness and confin'd generosity of men, along with the scanty provision nature has made for his wants, that justice derives its origin."

John Calhoun

John C.
Calhoun, in his A Disquisition on Government (1851), wrote that a state of nature is merely hypothetical and argued that the concept is self-contradictory and that political states naturally always existed. It is, indeed, difficult to explain how an opinion so destitute of all sound reason, ever could have been so extensively entertained. . . . I refer to the assertion, that all men are equal in the state of nature; meaning, by a state of nature, a state of individuality, supposed to have existed prior to the social and political state; and in which men lived apart and independent of each other. . . . But such a state is purely hypothetical. It never did, nor can exist; as it is inconsistent with the preservation and perpetuation of the race. It is, therefore, a great misnomer to call it the state of nature. Instead of being the natural state of man, it is, of all conceivable states, the most opposed to his nature—most repugnant to his feelings, and most incompatible with his wants. His natural state is, the social and political—the one for which his Creator made him, and the only one in which he can preserve and perfect his race. As, then, there never was such a state as the, so-called, state of nature, and never can be, it follows, that men, instead of being born in it, are born in the social and political state; and of course, instead of being born free and equal, are born subject, not only to parental authority, but to the laws and institutions of the country where born, and under whose protection they draw their first breath. Karl Marx Karl Marx's concept of primitive communism—or the economic mode of production before the development of class systems—may be seen as analogous to the state of nature. John Rawls John Rawls used what amounted to an artificial state of nature. To develop his theory of justice, Rawls places everyone in the original position. The original position is a hypothetical state of nature used as a thought experiment. 
People in the original position have no society and are under a veil of ignorance that prevents them from knowing how they may benefit from society. They lack foreknowledge of their intelligence, wealth, or abilities. Rawls reasons that people in the original position would want a society where they had their basic liberties protected and where they had some economic guarantees as well. If society were to be constructed from scratch through a social agreement between individuals, these principles would be the expected basis of such an agreement. Thus, these principles should form the basis of real, modern societies since everyone should consent to them if society were organized from scratch in fair agreements. Robert Nozick Rawls' Harvard colleague Robert Nozick countered the liberal A Theory of Justice with the libertarian Anarchy, State, and Utopia, also grounded in the state of nature tradition. Nozick argued that a minimal state of property rights and basic law enforcement would develop out of a state of nature without violating anyone's rights or using force. Mutual agreements among individuals, rather than a social contract, would lead to this minimal state. Between nations In Hobbes' view, once a civil government is instituted, the state of nature has disappeared between individuals because of the civil power which exists to enforce contracts and the laws of nature generally. Between nations, however, no such power exists and therefore nations have the same rights to preserve themselves—including making war—as individuals possessed. Such a conclusion led some writers to the idea of an association of nations or worldwide civil society, an example being Immanuel Kant's work on perpetual peace. Rawls also examines the state of nature between nations. In his work the Law of Peoples, Rawls applies a modified version of his original position thought experiment to international relationships. 
Rawls says that peoples, not states, form the basic unit that should be examined. States should be encouraged to follow the principles from Rawls' earlier A Theory of Justice. Democracy seems like it would be the most logical means of accomplishing these goals, but benign non-democracies should be seen as acceptable on the international stage. Rawls develops eight principles for how a people should act on the international stage. Within international relations theory, anarchy is the state of affairs wherein nations exist without a higher power to govern them. The three principal schools of international relations theory hold different beliefs about anarchy and how to approach it. Realism approaches global politics as if the world's nations were each an individual under a state of nature: it tends to take anarchy for granted, and does not see a solution to it as possible or even necessarily desirable. Liberalism claims that anarchy may be mitigated through the spread of liberal democracy and the use of international organizations, thus creating a global civil society; this approach may be summed up by the words of George H. W. Bush, who sought to create "a world where the rule of law, not the law of the jungle, governs the conduct of nations". Constructivist theorists, like liberals, also do not see anarchy as a given in international affairs, but are open to other approaches besides those given by realists and liberals. See also Natural law Law of the jungle Antisocial personality disorder Noble savage Primitive communism Primitivism
Moral skepticism
Moral skepticism (or moral scepticism in British English) is a class of meta-ethical theories all members of which entail that no one has any moral knowledge. Many moral skeptics also make the stronger, modal claim that moral knowledge is impossible. Moral skepticism is particularly opposed to moral realism: the view that there are knowable and objective moral truths. Some defenders of moral skepticism include Pyrrho, Aenesidemus, Sextus Empiricus, David Hume, J. L. Mackie (1977), Friedrich Nietzsche, Richard Joyce (2001), Joshua Greene, Richard Garner, Walter Sinnott-Armstrong (2006b), and the philosopher James Flynn. Strictly speaking, Gilbert Harman (1975) argues in favor of a kind of moral relativism, not moral skepticism. However, he has influenced some contemporary moral skeptics. Forms of moral skepticism Moral skepticism is divided into three subclasses: moral error theory (or moral nihilism), epistemological moral skepticism, and noncognitivism. All three of these theories reach the same conclusions, which are: (a) we are never justified in believing that moral claims (claims of the form "state of affairs x is (morally) good," "action y is morally obligatory," etc.) are true, and, furthermore, (b) we never know that any moral claim is true. However, each theory arrives at (a) and (b) by a different route. Moral error theory holds that we do not know that any moral claim is true because (i) all moral claims are false, (ii) we have reason to believe that all moral claims are false, and (iii) since we are not justified in believing any claim we have reason to deny, we are not justified in believing any moral claims. Epistemological moral skepticism is a subclass of theories whose members include Pyrrhonian moral skepticism and dogmatic moral skepticism. All members of epistemological moral skepticism share two things: first, they hold that we are unjustified in believing any moral claim, and second, they are agnostic on whether (i) is true (i.e. 
on whether all moral claims are false). Pyrrhonian moral skepticism holds that the reason we are unjustified in believing any moral claim is that it is irrational for us to believe either that any moral claim is true or that any moral claim is false. Thus, in addition to being agnostic on whether (i) is true, Pyrrhonian moral skepticism denies (ii). Dogmatic moral skepticism, on the other hand, affirms (ii) and cites (ii)'s truth as the reason we are unjustified in believing any moral claim. Finally, Noncognitivism holds that we can never know that any moral claim is true because moral claims are incapable of being true or false (they are not truth-apt). Instead, moral claims are imperatives (e.g. "Don't steal babies!"), expressions of emotion (e.g. "stealing babies: Boo!"), or expressions of "pro-attitudes" ("I do not believe that babies should be stolen.") Moral error theory Moral error theory is a position characterized by its commitment to two propositions: (i) all moral claims are false and (ii) we have reason to believe that all moral claims are false. The most famous moral error theorist is J. L. Mackie, who defended the metaethical view in Ethics: Inventing Right and Wrong (1977). Mackie has been interpreted as giving two arguments for moral error theory. The first argument people attribute to Mackie, often called the argument from queerness, holds that moral claims imply motivation internalism (the doctrine that "It is necessary and a priori that any agent who judges that one of his available actions is morally obligatory will have some (defeasible) motivation to perform that action"). Because motivation internalism is false, however, so too are all moral claims. The other argument often attributed to Mackie, often called the argument from disagreement, maintains that any moral claim (e.g. "Killing babies is wrong") entails a correspondent "reasons claim" ("one has reason not to kill babies"). 
Put another way, if "killing babies is wrong" is true then everybody has a reason to not kill babies. This includes the psychopath who takes great pleasure from killing babies, and is utterly miserable when he does not have their blood on his hands. But, surely, (if we assume that he will suffer no reprisals) this psychopath has every reason to kill babies, and no reason not to do so. All moral claims are thus false. Epistemological moral skepticism All versions of epistemological moral skepticism hold that we are unjustified in believing any moral proposition. However, in contradistinction to moral error theory, epistemological moral skeptical arguments for this conclusion do not include the premise that "all moral claims are false." For example, Michael Ruse gives what Richard Joyce calls an "evolutionary argument" for the conclusion that we are unjustified in believing any moral proposition. He argues that we have evolved to believe moral propositions because our believing the same enhances our genetic fitness (makes it more likely that we will reproduce successfully). However, our believing these propositions would enhance our fitness even if they were all false (they would make us more cooperative, etc.). Thus, our moral beliefs are unresponsive to evidence; they are analogous to the beliefs of a paranoiac. As a paranoiac is plainly unjustified in believing his conspiracy theories, so too are we unjustified in believing moral propositions. We therefore have reason to jettison our moral beliefs. Nietzsche's moral skepticism centers on the profound and ongoing lack of consensus among philosophers regarding foundational moral propositions. He highlights the persistent debates on whether the basis of right action is rooted in reasons or consequences, and the diverse, conflicting theories within Western moral philosophy. Noncognitivism Criticisms Criticisms of moral skepticism come primarily from moral realists. 
The moral realist argues that there is in fact good reason to believe that there are objective moral truths and that we are justified in holding many moral beliefs. One moral realist response to moral error theory holds that it "proves too much"—if moral claims are false because they entail that we have reasons to do certain things regardless of our preferences, then so too are "hypothetical imperatives" (e.g. "if you want to get your hair-cut you ought to go to the barber"). This is because all hypothetical imperatives imply that "we have reason to do that which will enable us to accomplish our ends" and so, like moral claims, they imply that we have reason to do something regardless of our preferences. If moral claims are false because they have this implication, then so too are hypothetical imperatives. But hypothetical imperatives are true. Thus the argument from the non-instantiation of (what Mackie terms) "objective prescriptivity" for moral error theory fails. Russ Shafer-Landau and Daniel Callcut have each outlined anti-skeptical strategies. Callcut argues that moral skepticism should be scrutinized in introductory ethics classes in order to get across the point that "if all views about morality, including the skeptical ones, face difficulties, then adopting a skeptical position is not an escape from difficulty." See also Amoralism Perspectivism Psychological determinism Is–ought problem References Further reading Butchvarov, Panayot (1989). Skepticism in Ethics, Indiana University Press. Gibbard, Allan (1990). Wise Choices, Apt Feelings. Cambridge: Harvard University Press. Harman, Gilbert (1975). "Moral Relativism Defended," Philosophical Review, pp. 3–22. Harman, Gilbert (1977). The Nature of Morality. New York: Oxford University Press. Joyce, Richard (2001). The Myth of Morality, Cambridge University Press. Joyce, Richard (2006). The Evolution of Morality, MIT Press. (link) Lillehammer, Halvard (2007). 
Companions in Guilt: Arguments for Ethical Objectivity, Palgrave Macmillan. Mackie, J. L. (1977). Ethics: Inventing Right and Wrong, Penguin. Sinnott-Armstrong, Walter (2006a). "Moral Skepticism", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). (link) Sinnott-Armstrong, Walter (2006b). Moral Skepticisms, Oxford University Press. Olson, Jonas (2014). Moral Error Theory: History, Critique, Defence, Oxford University Press. Kalf, Wouter (2018). Moral Error Theory, Palgrave Macmillan.
Anthroposophy
Anthroposophy is a spiritual new religious movement, founded in the early 20th century by the esotericist Rudolf Steiner, which postulates the existence of an objective, intellectually comprehensible spiritual world accessible to human experience. Followers of anthroposophy aim to engage in spiritual discovery through a mode of thought independent of sensory experience. Though proponents claim to present their ideas in a manner that is verifiable by rational discourse and say that they seek precision and clarity comparable to that obtained by scientists investigating the physical world, many of these ideas have been termed pseudoscientific by experts in epistemology and debunkers of pseudoscience. Anthroposophy has its roots in German idealism, Western and Eastern esoteric ideas, various religious traditions, and modern Theosophy. Steiner chose the term anthroposophy (from Greek ἄνθρωπος anthropos, 'human', and σοφία sophia, 'wisdom') to emphasize his philosophy's humanistic orientation. He defined it as "a scientific exploration of the spiritual world"; others have variously called it a "philosophy and cultural movement", a "spiritual movement", a "spiritual science", "a system of thought", or "a spiritualist movement". Anthroposophical ideas have been applied in a range of fields including education (both in Waldorf schools and in the Camphill movement), environmental conservation, and banking, with additional applications in agriculture, organizational development, the arts, and more. The Anthroposophical Society is headquartered at the Goetheanum in Dornach, Switzerland. Anthroposophy's supporters include the writers Saul Bellow and Selma Lagerlöf, painters Piet Mondrian, Wassily Kandinsky and Hilma af Klint, filmmaker Andrei Tarkovsky, child psychiatrist Eva Frommer, music therapist Maria Schüppel, Romuva religious founder Vydūnas, and former president of Georgia Zviad Gamsakhurdia. 
Critics and proponents alike acknowledge Steiner's many anti-racist statements, yet "Steiner's collected works...contain pervasive internal contradictions and inconsistencies on racial and national questions." The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history". Many scientists, physicians, and philosophers, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh, have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Ideas of Steiner's that are unsupported or disproven by modern science include: racial evolution, clairvoyance (Steiner claimed he was clairvoyant), and the Atlantis myth. History The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his Philosophy of Freedom (also translated as The Philosophy of Spiritual Activity and Intuitive Thinking as a Spiritual Path). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought. "Steiner was a moral individualist". By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to draw the attention of others interested in spiritual ideas; among these was the Theosophical Society. From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges. By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. 
The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections. By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development. In 1912, Steiner broke away from the Theosophical Society to found an independent group, which he named the Anthroposophical Society. After World War I, members of the young society began applying Steiner's ideas to create cultural movements in areas such as traditional and special education, farming, and medicine. By 1923, a schism had formed between older members, focused on inner development, and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for Spiritual Science. 
As a spiritual basis for the reborn movement, Steiner wrote a Foundation Stone Meditation which remains a central touchstone of anthroposophical ideas. Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party. In reality, anthroposophy had both enemies and loyal supporters in the upper echelons of the Nazi regime. Staudenmaier speaks of the "polycratic party-state apparatus", so Nazism's approach to Anthroposophy was not characterized by monolithic ideological unity. When Hess flew to the UK and was imprisoned, the Anthroposophists' most powerful protector was gone, but they were still not left without supporters among higher-placed Nazis. The Third Reich had banned almost all esoteric organizations, claiming that these were controlled by Jews. The truth was that while Anthroposophists complained of bad press, they were to a surprising extent tolerated by the Nazi regime, "including outspokenly supportive pieces in the Völkischer Beobachter". Ideological purists from the Sicherheitsdienst argued largely in vain against Anthroposophy. According to Staudenmaier, "The prospect of unmitigated persecution was held at bay for years in a tenuous truce between pro-anthroposophical and anti-anthroposophical Nazi factions." The moral: Anthroposophy itself was not the real stake of that dispute; rather, powerful Nazis wanted to get rid of other powerful Nazis. Jehovah's Witnesses, for example, were treated much more aggressively than Anthroposophists. Kurlander stated that "the Nazis were hardly ideologically opposed to the supernatural sciences themselves"—rather they objected to the free (i.e. 
non-totalitarian) pursuit of supernatural sciences. According to Hans Büchenbacher, an anthroposophist, the Secretary General of the General Anthroposophical Society, Guenther Wachsmuth, as well as Steiner's widow, Marie Steiner, were “completely pro-Nazi.” Marie Steiner-von Sivers, Guenther Wachsmuth, and Albert Steffen had publicly expressed sympathy for the Nazi regime since its beginnings; led by such sympathies in their leadership, the Swiss and German Anthroposophical organizations chose a path combining accommodation with collaboration, which in the end ensured that while the Nazi regime hunted esoteric organizations generally, Gentile Anthroposophists in Nazi Germany and the countries it occupied were left alone to a surprising extent. They suffered some setbacks from the enemies of Anthroposophy in the upper echelons of the Nazi regime, but they also had loyal supporters there, so overall Gentile Anthroposophists were not badly hit by the regime. Staudenmaier's overall argument is that "there were often no clear-cut lines between theosophy, anthroposophy, ariosophy, astrology and the völkisch movement from which the Nazi Party arose." By 2007, national branches of the Anthroposophical Society had been established in fifty countries and about 10,000 institutions around the world were working on the basis of anthroposophical ideas. Etymology and earlier uses of the word Anthroposophy is an amalgam of the Greek terms anthropos ('human') and sophia ('wisdom'). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man." The first known use of the term anthroposophy occurs within Arbatel de magia veterum, summum sapientiae studium, a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. 
In 1648, the Welsh philosopher Thomas Vaughan published his Anthroposophia Theomagica, or a discourse of the nature of man and his state after death. The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term anthroposophy to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term anthroposophy to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work Anthropology: The Study of the Human Soul. In 1872, the philosopher of religion Gideon Spicker used the term anthroposophy to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy." In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication. In the early 1900s, Steiner began using the term anthroposophy (i.e. human wisdom) as an alternative to the term theosophy (i.e. divine wisdom). Central ideas Spiritual knowledge and freedom Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. 
The steps of this process of inner development he identified as consciously achieved imagination, inspiration, and intuition. Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science. Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself. Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses. Nature of the human being In Theosophy, Steiner suggested that human beings unite a physical body of substances gathered from and returning to the inorganic world; a life body (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings. Anthroposophy describes a broad evolution of human consciousness. Early stages of human evolution possess an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has progressively evolved an increasing reliance on intellectual faculties and a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. 
However, to go further requires new capacities that combine the clarity of intellectual thought with the imagination and with consciously achieved inspiration and intuitive insights. Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life. Steiner described some conditions that determine the interdependence of a person's lives, or karma. Evolution The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals devolve from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth. Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. 
In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture. Ethics The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold. Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the Representative of Humanity, also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes. Claimed applications Steiner/Waldorf education There is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. 
Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada. The schools have been founded in a variety of communities: from the favelas of São Paulo to wealthy suburbs of major cities; in India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions. Biodynamic agriculture Biodynamic agriculture is a form of alternative agriculture based on pseudo-scientific and esoteric concepts. It was also the first intentional form of organic farming, begun in 1924, when Rudolf Steiner gave a series of lectures published in English as The Agriculture Course. Steiner is considered one of the founders of the modern organic farming movement. "And Himmler, Hess, and Darré all promoted biodynamic (anthroposophic) approaches to farming as an alternative to industrial agriculture." "'[...] with the active cooperation of the Reich League for Biodynamic Agriculture' [...] Pancke, Pohl, and Hans Merkel established additional biodynamic plantations across the eastern territories as well as Dachau, Ravensbrück, and Auschwitz concentration camps. Many were staffed by anthroposophists." "Steiner’s 'biodynamic agriculture' based on 'restoring the quasi-mystical relationship between earth and the cosmos' was widely accepted in the Third Reich (28)." Anthroposophical medicine Anthroposophical medicine is a form of alternative medicine based on pseudoscientific and occult notions rather than science-based medicine. 
Most anthroposophic medical preparations are highly diluted, like homeopathic remedies; while harmless in and of themselves, using them in place of conventional medicine to treat illness is ineffective and risks adverse consequences. One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit.

Special needs education and services

In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely: there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Karl König, Thomas Weihs, and others have written extensively on the ideas underlying this approach to special education.

Architecture

Steiner designed around thirteen buildings in an organic-expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects. Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary; Hans Scharoun and Joachim Eble in Germany; Erik Asmussen in Sweden; Kenji Imai in Japan; Thomas Rau, Anton Alberts, and Max van Huut in the Netherlands; Christopher Day and Camphill Architects in the UK; Thompson and Rose in America; Denis Bowman in Canada; and Walter Burley Griffin and Gregory Burgess in Australia. ING House in Amsterdam, a contemporary building by an anthroposophical architect, has received awards for its ecological design and its approach to a self-sustaining ecology as an autonomous building and an example of sustainable architecture.
Eurythmy

Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music.

Social finance and entrepreneurship

There are today a number of banks, companies, charities, and schools around the world developing co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the Gemeinschaftsbank für Leihen und Schenken in Bochum, Germany, founded in 1974. Socially responsible banks founded out of anthroposophy include Triodos Bank, founded in the Netherlands in 1980 and also active in the UK, Germany, Belgium, Spain, and France. Other examples include Cultura Sparebank, which dates from 1982, when a group of Norwegian anthroposophists began an initiative for ethical banking, though it only began operating as a savings bank in Norway in the late 1990s; La Nef in France; and RSF Social Finance in San Francisco. Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture, and sustainable finance.

Organizational development, counselling and biography work

Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries.

Speech and drama

There are also anthroposophical movements to renew speech and drama, the most important of which are based on the work of Marie Steiner-von Sivers (speech formation, also known as Creative Speech) and the Chekhov Method originated by Michael Chekhov (nephew of Anton Chekhov).
Art

Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with color, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and these are often derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre-tall sculpture titled The Representative of Humanity, on display at the Goetheanum.

Other

Phenomenological approaches to science: pseudo-scientific ideas based on Goethe's philosophy of nature.
John Wilkes' fountain-like flowforms: sculptural forms that guide water into rhythmic movement for decorative purposes.
Antisemitic legislation in Italy (1938–1945).
The Fellowship Community in Chestnut Ridge, New York, United States, which includes a retirement community and other anthroposophic projects.
The Harduf kibbutz in Israel.

Social goals

For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Hermann Hesse, among others) was widely circulated. His main book on social reform is Toward Social Renewal. Anthroposophy continues to aim at reforming society by maintaining and strengthening the independence of the spheres of cultural life, human rights, and the economy.
It emphasizes a particular ideal in each of these three realms of society:

Liberty in cultural life
Equality of rights in the sphere of legislation
Fraternity in the economic sphere

According to Cees Leijenhorst, "Steiner outlined his vision of a new political and social philosophy that avoids the two extremes of capitalism and socialism." Steiner did influence Italian Fascism, which exploited "his racial and anti-democratic dogma." The fascist ministers Giovanni Antonio Colonna di Cesarò (nicknamed "the Anthroposophist duke"; he became antifascist after taking part in Benito Mussolini's government) and Ettore Martinoli openly expressed their sympathy for Rudolf Steiner. Most members of the occult pro-fascist UR Group were Anthroposophists. According to Egil Asprem, "Steiner's teachings had a clear authoritarian ring, and developed a rather crass polemic against 'materialism', 'liberalism', and cultural 'degeneration'. [...] For example, anthroposophical medicine was developed to contrast with the 'materialistic' (and hence 'degenerate') medicine of the establishment."

Esoteric path

Paths of spiritual development

According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book How to Know Higher Worlds. The aim of these exercises is to develop higher levels of consciousness through meditation and observation. Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science. Steiner regarded his research reports as important aids to others seeking to enter into spiritual experience.
He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings, and will combined with openness, tolerance, and flexibility), and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered true and false paths of spiritual investigation. In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality.

Prerequisites to and stages of inner development

Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities:

Control over one's own thinking
Control over one's will
Composure
Positivity
Impartiality

Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling, or intention, the meditant seeks to arrive at pure thinking, a state exemplified by, but not confined to, pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress. By focusing on symbolic patterns, images, and poetic mantras, the meditant can achieve consciously directed Imaginations that allow sensory phenomena to appear as the expression of underlying beings of a soul-spiritual nature.
By transcending such imaginative pictures, the meditant can become conscious of the meditative activity itself, which leads to experiences of expressions of soul-spiritual beings unmediated by sensory phenomena or qualities. Steiner calls this stage Inspiration. By intensifying the will-forces through exercises such as a chronologically reversed review of the day's events, the meditant can achieve a further stage of inner independence from sensory experience, leading to direct contact, and even union, with spiritual beings ("Intuition") without loss of individual awareness.

Spiritual exercises

Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality.

Place in Western philosophy

Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes.
"The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception." Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking. Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work, Philosophy of Freedom. In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis). Christian and Jewish mystical thought have also influenced the development of anthroposophy.

Union of science and spirit

Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world.
Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience. Anthroposophy regards mainstream science as Ahrimanic.

Relationship to religion

Christ as the center of earthly evolution

Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution, whereby:

Christianity has evolved out of previous religions;
The being which manifests in Christianity also manifests in all faiths and religions, and each religion is valid and true for the time and cultural context in which it was born;
All historical forms of Christianity need to be transformed considerably to meet the continuing evolution of humanity.

Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term.

Divergence from conventional Christian thought

Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements. One central point of divergence is Steiner's views on reincarnation and karma. Steiner differentiated three contemporary paths by which he believed it possible to arrive at Christ:

Through heart-felt experiences of the Gospels; Steiner described this as the historically dominant path, but becoming less important in the future.
Through inner experiences of a spiritual reality; this Steiner regarded as increasingly the path of spiritual or religious seekers today.
Through initiatory experiences whereby the reality of Christ's death and resurrection are experienced; Steiner believed this is the path people will increasingly take.

Steiner also believed that there were two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew; the other child from Nathan, as described in the Gospel of Luke. (The genealogies given in the two gospels diverge some thirty generations before Jesus' birth, and 'Jesus' was a common name in biblical times.) His view of the second coming of Christ is also unusual: he suggested that this would not be a physical reappearance, but that the Christ being would become manifest in non-physical form, visible to spiritual vision and apparent in community life, for increasing numbers of people beginning around the year 1933. He emphasized his belief that in the future humanity would need to be able to recognize the Spirit of Love in all its genuine forms, regardless of what name would be used to describe this being. He also warned that the traditional name of the Christ might be misused, and the true essence of this being of love ignored. According to Jane Gilmer, "Jung and Steiner were both versed in ancient gnosis and both envisioned a paradigmatic shift in the way it was delivered." As Gilles Quispel put it, "After all, Theosophy is a pagan, Anthroposophy a Christian form of modern Gnosis." Maria Carlson stated that "Theosophy and Anthroposophy are fundamentally Gnostic systems in that they posit the dualism of Spirit and Matter." R. McL. Wilson in The Oxford Companion to the Bible agrees that Steiner and Anthroposophy are under the influence of gnosticism. Robert A. McDermott says Anthroposophy belongs to Christian Rosicrucianism.
According to Nicholas Goodrick-Clarke, Rudolf Steiner "blended modern Theosophy with a Gnostic form of Christianity, Rosicrucianism, and German Naturphilosophie". Geoffrey Ahern states that Anthroposophy belongs to neo-gnosticism broadly conceived, which he identifies with Western esotericism and occultism. According to Catholic scholars, Anthroposophy belongs to the New Age.

Judaism

Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction. Steiner financed the publication of the book Die Entente-Freimaurerei und der Weltkrieg (1919) by ; Steiner also wrote the foreword for the book, partly based upon his own ideas. The publication comprised a conspiracy theory according to which World War I was a consequence of a collusion of Freemasons and Jews – still favorite scapegoats of conspiracy theorists – their purpose being the destruction of Germany. In fact, Steiner spent a large sum of money to publish "a now classic work of anti-Masonry and anti-Judaism". The writing was later enthusiastically received by the Nazi Party. In his later life, Steiner was accused by the Nazis of being Jewish, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists were sent to concentration camps.
Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy. There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, forty Waldorf kindergartens, and seventeen Waldorf schools (as of 2018). A number of these organizations strive to foster positive relationships between the Arab and Jewish populations: the Harduf Waldorf school includes both Jewish and Arab faculty and students and has extensive contact with the surrounding Arab communities, while the first joint Arab–Jewish kindergarten was a Waldorf program in Hilf near Haifa.

Christian Community

Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the Movement for Religious Renewal, now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination.
Reception

Anthroposophy's supporters include Saul Bellow, Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan and Ibrahim Abouleish, and child psychiatrist Eva Frommer. The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However, authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Others, including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern, have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific.

Scientific basis

Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries. Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy Geisteswissenschaft (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences. Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy.
As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world." Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and are neither reproducible nor testable. Carlo Willmann points out that, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, so no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that realm, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content. Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience and its subsuming of natural science under "spiritual science." Hammer also asserts that the development of what he calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture is justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis. Though Steiner saw that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them to apply his methods directly and achieve comparable results.
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation." According to Dan Dugan, Steiner was a champion of the following pseudoscientific claims, also championed by Waldorf schools:

wrong color theory;
obtuse criticism of the theory of relativity;
weird ideas about motions of the planets;
supporting vitalism;
doubting germ theory;
weird approach to physiological systems;
"the heart is not a pump".

Religious nature

Two German scholars have called Anthroposophy "the most successful form of 'alternative' religion in the [twentieth] century." Other scholars have stated that Anthroposophy is "aspiring to the status of religious dogma". According to Maria Carlson, anthroposophy is a "positivistic religion" "offering a seemingly logical theology based on pseudoscience." According to Swartz, Brandt, Hammer, and Hansson, Anthroposophy is a religion; they also call it a "settled new religious movement", while Martin Gardner called it a cult. Another scholar calls it a new religious movement or a new spiritual movement. As early as 1924, Anthroposophy was labeled a "new religious movement" and an "occultist movement". Other scholars agree it is a new religious movement. According to , both the theory and practice of Anthroposophy display characteristics of religion, and, according to Zander, Rudolf Steiner would plead no contest. According to Zander, Steiner's book Geheimwissenschaft [Occult Science] contains Steiner's mythology of cosmogenesis. Hammer notes that Anthroposophy is a synthesis which includes occultism.
Hammer also notes that Steiner's occult doctrines bear a strong resemblance to post-Blavatskyan Theosophy (e.g. Annie Besant and Charles Webster Leadbeater). According to Helmut Zander, Steiner's clairvoyant insights always developed according to the same pattern: he took revised texts from theosophical literature and then passed them off as his own higher insights. Because he did not want to be an occult storyteller, but a (spiritual) scientist, he adapted his readings, which he claimed to have seen supernaturally in the world's memory, to the current state of technology. When, for example, the Wright brothers began flying with gliders and eventually with motorized aircraft in 1903, Steiner transformed the ponderous gondola airships of his Atlantis story into airplanes with elevators and rudders in 1904. As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998, People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and that therefore several California school districts should not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show that anthroposophy is a religion. A 2012 legal-studies paper regards this verdict as provisional and disagrees with its result, arguing that anthroposophy was declared "not a religion" due to an outdated legal framework. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory. The French governmental anti-cults agency MIVILUDES reported that it remains vigilant about Anthroposophy, especially because of its deviant medical applications and its work with underage persons, and that the works of Grégoire Perra which lambast anthroposophical medicine do not constitute defamation. Anthroposophical MDs think diseases are caused primarily by karma and demons, rather than by material causes.
The Gospel of Luke is their main handbook of medical science; this leads them to believe they have magical powers, and that medicine is essentially a form of magic. The professional French organization of Anthroposophic MDs sued Mr. Perra for such claims; it was ordered to pay 25,000 euros in damages for suing him abusively. Scholars state that Anthroposophy is influenced by Christian Gnosticism. In 1919, the Catholic Church issued an edict classifying Anthroposophy as "a neognostic heresy", despite the fact that Steiner "very well respected the distinctions on which Catholic dogma insists". Some Baptist and mainstream academic heresiologists still appear inclined to agree with the narrower 1919 edict on dogma, and the Lutheran (Missouri Synod) apologist and heresiologist Eldon K. Winker quoted Ron Rhodes to the effect that Steiner's Christology is very similar to that of Cerinthus. Steiner did perceive "a distinction between the human person Jesus, and Christ as the divine Logos", which could be construed as Gnostic but not Docetic, since "they do not believe the Christ departed from Jesus prior to the crucifixion". "Steiner's Christology is discussed as a central element of his thought in Johannes Hemleben, Rudolf Steiner: A Documentary Biography, trans. Leo Twyman (East Grinstead, Sussex: Henry Goulden, 1975), pp. 96-100. From the perspective of orthodox Christianity, it may be said that Steiner combined a docetic understanding of Christ's nature with the Adoptionist heresy." Older scholarship says Steiner's Christology is Nestorian. According to Egil Asprem, "Steiner's Christology was, however, quite heterodox, and hardly compatible with official church doctrine."

Statements on race

Rudolf Steiner was an extreme pan-German nationalist and never disavowed that stance. Some anthroposophical ideas nevertheless challenged the National Socialist racialist and nationalistic agenda.
In contrast, some American educators have criticized Waldorf schools for failing to include the fables and myths of all cultures equally, instead favoring European stories over African ones. From the mid-1930s on, National Socialist ideologues attacked the anthroposophical worldview as being opposed to Nazi racist and nationalistic principles; anthroposophy considered "Blood, Race and Folk" primitive instincts that must be overcome. An academic analysis of the educational approach in public schools noted that "[A] naive version of the evolution of consciousness, a theory foundational to both Steiner's anthroposophy and Waldorf education, sometimes places one race below another in one or another dimension of development. It is easy to imagine why there are disputes [...] about Waldorf educators' insisting on teaching Norse tales and Greek myths to the exclusion of African modes of discourse." In response to such critiques, the Anthroposophical Society in America published a statement in 1998 clarifying its stance:

We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race.

Tommy Wieringa, a Dutch writer who grew up among Anthroposophists, wrote, commenting upon an essay by the Anthroposophist : "It was a meeting of old acquaintances: Nazi leaders such as Rudolf Hess and Heinrich Himmler already recognized a kindred spirit in Rudolf Steiner, with his theories about racial purity, esoteric medicine and biodynamic agriculture." The racism of Anthroposophy is spiritual and paternalistic (i.e. benevolent), while the racism of fascism is materialistic and often malign.
Olav Hammer, a university professor and expert in new religious movements and Western esotericism, confirms that the racist and anti-Semitic character of Steiner's teachings can no longer be denied, even if it is "spiritual racism". According to Munoz, from the materialist perspective (i.e. no reincarnations) Anthroposophy is racist, but from the spiritual perspective (i.e. reincarnations mandatory) it is not. Reception by the Nazi regime in Germany Though several prominent members of the Nazi Party were supporters of anthroposophy and its movements, including agriculturalist , SS colonel Hermann Schneider, and Gestapo chief Heinrich Müller, anti-Nazis such as Traute Lafrenz, a member of the White Rose resistance movement, were also followers. Rudolf Hess, the Deputy Führer, was a patron of Waldorf schools and a staunch defender of biodynamic agriculture. "Before 1933, Himmler, Walther Darré (the future Reich Agriculture Minister), and Rudolf Höss (the future commandant of Auschwitz) had studied ariosophy and anthroposophy, belonged to the occult-inspired Artamanen movement, [...]" "One of the most insightful contributions to this area is Peter Staudenmaier's case study of Anthroposophy, which has demonstrated the ambiguous role of Anthroposophists in fascist Italy and Nazi Germany." According to Staudenmaier, the fascist and Nazi authorities saw occultism not as deviant, but as deeply familiar. See also Esotericism in Germany and Austria Pneumatosophy Spiritual but not religious References Notes Citations External links Rudolf Steiner Archive (Steiner's works online) Steiner's complete works in German Rudolf Steiner Handbook (PDF; 56 MB) Goetheanum Societies General Anthroposophical Society Anthroposophical Society in America Anthroposophical Society in Great Britain Anthroposophical Initiatives in India Anthroposophical Society in Australia Anthroposophical Society in New Zealand Esoteric Christianity Rudolf Steiner Spirituality New religious movements
Decoloniality
Decoloniality is a school of thought that aims to delink from Eurocentric knowledge hierarchies and ways of being in the world in order to enable other forms of existence on Earth. It critiques the perceived universality of Western knowledge and the superiority of Western culture, including the systems and institutions that reinforce these perceptions. Decolonial perspectives understand colonialism as the basis for the everyday function of capitalist modernity and imperialism. Decoloniality emerged as part of a South American movement examining the role of the European colonization of the Americas in establishing Eurocentric modernity/coloniality, according to Aníbal Quijano, who defined the term and its reach. Decolonial theory and practice have recently been subject to increasing critique. For example, Olúfẹ́mi Táíwò argued that it is analytically unsound, that "coloniality" is often conflated with "modernity", and that "decolonisation" becomes an impossible project of total emancipation. Jonatan Kurzwelly and Malin Wilckens used the example of decolonisation of academic collections of human remains, which were collected during colonial times to support racist theories and give legitimacy to colonial oppression, and showed how both contemporary scholarly methods and political practice perpetuate reified and essentialist notions of identities. Foundational principles Coloniality of knowledge Coloniality of power Colonialism as the root The decolonial movement includes diverse forms of critical theory, articulated by pluriversal forms of liberatory thinking that arise out of distinct situations. In its academic forms, it analyzes class distinctions, ethnic studies, gender studies, and area studies. It has been described as consisting of analytic (in the sense of semiotics) and practical "options confronting and delinking from [...] the colonial matrix of power" or from a "matrix of modernity" rooted in colonialism. 
It considers colonialism "the underlying logic of the foundation and unfolding of Western civilization from the Renaissance to today," although this foundational interconnectedness is often downplayed. This logic is commonly referred to as the colonial matrix of power or coloniality of power. Some have built upon decolonial theory by proposing Critical Indigenous Methodologies for research. Imperialism as the successor Although formal and explicit colonization ended with the decolonization of the Americas during the eighteenth and nineteenth centuries and the decolonization of much of the Global South in the late twentieth century, its successors, Western imperialism and globalization, perpetuate those inequalities. The colonial matrix of power produced social discrimination that was eventually variously codified as racial, ethnic, anthropological, or national according to specific historic, social, and geographic contexts. Decoloniality emerged as the colonial matrix of power was put into place during the 16th century. It is, in effect, a continuing confrontation of, and delinking from, Eurocentrism. Coloniality of gender Disobedience and de-linking Decoloniality has been called a form of "epistemic disobedience", "epistemic de-linking", and "epistemic reconstruction". In this sense, decolonial thinking is the recognition and implementation of a border gnosis or subaltern, a means of eliminating the provincial tendency to pretend that Western European modes of thinking are universal. In less theoretical applications—such as movements for Indigenous autonomy—decoloniality is considered a program of de-linking from contemporary legacies of coloniality, a response to needs unmet by modern Rightist or Leftist governments, or, most broadly, social movements in search of a "new humanity" or the search for "social liberation from all power organized as inequality, discrimination, exploitation, and domination". 
Decoloniality Frantz Fanon and Aimé Césaire contributed to decolonial thinking, theory, and practice by identifying core principles of decoloniality. The first principle they identified is that colonialism must be confronted and treated as a discourse which fundamentally frames all aspects of thinking, organization, and existence. Framing colonialism as a "fundamental problem" empowers the colonized to center their experiences and thinking without seeking the recognition of the colonizer—a step towards the creation of decolonial thinking. The second core principle is that decolonization goes beyond ending colonization. Nelson Maldonado-Torres explains, "For decolonial thinking decolonization is less the end of colonialism wherever it has occurred and more the project of undoing and unlearning the coloniality of power, knowledge, and being and of creating a new sense of humanity and forms of interrelationality." This is the work of the decolonial project that has epistemic, political, and ethical dimensions. Aníbal Quijano summarized the goals of decoloniality as a need to recognize that the instrumentation of reason by the colonial matrix of power produced distorted paradigms of knowledge and spoiled the liberating promises of modernity, and by that recognition, realize the destruction of the global coloniality of power. Alanna Lockward explains that Europe has engaged in an intentional "politics of confusion" to conceal the relationship between modernity and coloniality. Decoloniality is synonymous with decolonial "thinking and doing", and it questions or problematizes the histories of power emerging from Europe. These histories underlie the logic of Western civilization. Thus, decoloniality refers to analytic approaches and socioeconomic and political practices opposed to pillars of Western civilization: coloniality and modernity. This makes decoloniality both a political and epistemic project. 
Examples Examples of contemporary decolonial programmatics and analytics exist throughout the Americas. Decolonial movements include the contemporary Zapatista governments of Southern Mexico, Indigenous movements for autonomy throughout South America, ALBA, CONFENIAE in Ecuador, ONIC in Colombia, the TIPNIS movement in Bolivia, and the Landless Workers' Movement in Brazil. These movements embody action oriented toward ever-increasing freedoms, challenging the reasoning behind modernity, since modernity is in fact a facet of the colonial matrix of power. Examples of contemporary decolonial analytics include ethnic studies programs at various educational levels designed primarily to appeal to certain ethnic groups, including those at the K-12 level recently banned in Arizona, as well as long-established university programs. Scholars engaged primarily with analytics who fail to recognize the connection between politics or decoloniality and the production of knowledge—between programmatics and analytics—are, decolonialists claim, those most likely to reflect "an underlying acceptance of capitalist modernity, liberal democracy, and individualism", values which decoloniality seeks to challenge. Decolonial critique Researchers, authors, creators, theorists, and others engage in decoloniality through essays, artwork, and media. Many of these creators engage in decolonial critique. In decolonial critique, thinkers employ the theoretical, political, epistemic, and social frameworks advanced by decoloniality to scrutinize, reformulate, and denaturalize often widely accepted and celebrated concepts. Many decolonial critiques focus on reformulating the concept of modernity as situated within colonial and racial frameworks. Decolonial critique may inspire a decolonial culture that delinks from reproducing Western hierarchies. 
Decolonial critique is a method of applying decolonial methods and practices to all facets of epistemic, social, and political thinking. Decolonial art Decolonial art critiques Western art for the way it is alienated from the surrounding world and for its focus on pursuing aesthetic beauty. Rather than evoking feelings of the sublime at the beauty of an art object, decolonial art seeks to evoke feelings of "sadness, indignation, repentance, hope, solidarity, resolution to change the world in the future, and, most importantly, with the restoration of human dignity." Decolonial aesthetics "seek to recognize and open options for liberating the senses" beyond just visual senses and challenge "the idea of art from Eurocentric forms of expression and philosophies of the beautiful." Decolonial art may "re-inscribe indigeneity on the land" that has been obscured by colonialism and reveal alternatives or an "always elsewhere of colonialism." Graffiti can function as an open or public challenge to colonial or imperialist structures and disrupt notions of a contented oppressed or colonized people. Notable artists include: Kwame Akoto-Bamfo (Ghana): Creates sculptures and installations that reflect on the history of the Transatlantic Slave Trade and its impact on African communities. Maria Thereza Alves (Brazil): Focuses on Indigenous and environmental issues, shedding light on the impact of colonization on Indigenous communities. Wangechi Mutu (Kenya/United States): Explores African identities and the interplay between tradition and modernity in a postcolonial context through painting, collage, and sculpture. Tracey Moffatt (Australia): Examines identities, stories, and representations of Indigenous populations in Australia, focusing on colonial and postcolonial themes. Yinka Shonibare (United Kingdom/Nigeria): Utilizes African batik-printed fabrics and examines cultural identity, colonialism, and postcolonial issues through sculptures and installations. 
Decolonial feminism Decolonial feminism reformulates the coloniality of gender by critiquing the very formation of gender and its subsequent formations of patriarchy and the gender binary, not as universal constants across cultures, but as structures that have been instituted by and for the benefit of European colonialism. María Lugones proposes that decolonial feminism speaks to how "the colonial imposition of gender cuts across questions of ecology, economics, government, relations with the spirit world, and knowledge, as well as across everyday practices that either habituate us to take care of the world or to destroy it." Decolonial feminists like Karla Jessen Williamson and Rauna Kuokkanen have examined colonialism as a force that has imposed gender hierarchies on Indigenous women, disempowering and fracturing Indigenous communities and ways of life. Decolonial love Decolonial love is a love grounded in relationality and directed toward the emancipation of community from the coloniality of power, including human and non-human beings. It was developed by Chicana feminist Chela Sandoval as a reformulation of love beyond individualist romantic notions of love. Decolonial love "demands a deep recognition of our humanity and mutual implacability in undoing colonial relations of power and oppression that lead to indifference, contempt, and dehumanization." It begins from within, as a love of one's humanity and for those who have resisted colonial violence in their pursuit of healing and liberation. Thinkers who speak to the concept state that it is rooted in Indigenous cosmologies, including In Lak'ech ("you are my other me"), where love is a relational and resisting act toward the coloniality of power. Critiquing Western liberal democracy Moving beyond the critiques of Enlightenment philosophy and modernity, decolonial critiques of democracy uncover how practices in democratic governance root themselves in colonial and racial rhetoric. 
Subhabrata Bobby Banerjee seeks to counter "hegemonic models of democracy that cannot address issues of inequality and colonial difference." Banerjee critiques western liberal democracy: "In liberal democracies colonial power becomes the epistemic basis of a privileged Eurocentric position that can explain culture and define the realities and identities of marginalized populations, while eliding power asymmetries inherent in the fixing of colonial difference." He also extends this analysis against deliberative democracy, arguing that this political theory fails to take into account colonized forms of deliberation often discounted and silenced—including oral history, music production, and more—as well as how asymmetries of power are reproduced within political arenas. Distinction from related ideas Decoloniality is often conflated with postcolonialism, decolonization, and postmodernism. However, decolonial theorists draw clear distinctions. Postcolonialism Postcolonialism is often mainstreamed into general oppositional practices by "people of color", "Third World intellectuals", or ethnic groups. Decoloniality—as both an analytic and a programmatic approach—is said to move "away and beyond the post-colonial" because "post-colonialism criticism and theory is a project of scholarly transformation within the academy". This final point is debatable, as some postcolonial scholars consider postcolonial criticism and theory to be both an analytic (a scholarly, theoretical, and epistemic) project and a programmatic (a practical, political) stance. This disagreement is an example of the ambiguity—"sometimes dangerous, sometimes confusing, and generally limited and unconsciously employed"—of the term "postcolonialism," which has been applied to analysis of colonial expansion and decolonization, in contexts such as Algeria, the 19th-century United States, and 19th-century Brazil. 
Decolonial scholars consider the colonization of the Americas a precondition for postcolonial analysis. The seminal text of postcolonial studies, Orientalism by Edward Said, describes the nineteenth-century European invention of the Orient as a geographic region considered racially and culturally distinct from, and inferior to, Europe. However, without the European invention of the Americas in the sixteenth century, sometimes referred to as Occidentalism, the later invention of the Orient would have been impossible. This means that postcolonialism becomes problematic when applied to post-nineteenth-century Latin America. Political decolonization Decolonization is largely political and historical: the end of the period of territorial domination of lands, primarily in the Global South, by European powers. Decolonial scholars contend that colonialism did not disappear with political decolonization. It is important to note the vast differences in the histories, socioeconomics, and geographies of colonization in its various global manifestations. However, coloniality—meaning racialized and gendered socioeconomic and political stratification according to an invented Eurocentric standard—was common to all forms of colonization. Similarly, decoloniality in the form of challenges to this Eurocentric stratification manifested prior to de jure decolonization. Gandhi and Jinnah in India, Fanon in Algeria, Mandela in South Africa, and the early 20th-century Zapatistas in Mexico are all examples of decolonial projects that existed before decolonization. Postmodernism "Modernity" as a concept is complementary to coloniality. Coloniality is called "the darker side of western modernity". The problematic aspects of coloniality are often overlooked when describing the totality of Western society, whose advent is instead often framed as the introduction of modernity and rationality, a concept critiqued by post-modern thinkers. 
However, this critique is largely "limited and internal to European history and the history of European ideas". Although postmodern thinkers recognize the problematic nature of the notions of modernity and rationality, these thinkers often overlook the fact that modernity as a concept emerged when Europe defined itself as the center of the world. In this sense, those seen as part of the periphery are themselves part of Europe's self-definition. To summarize, like modernity, postmodernity often reproduces the "Eurocentric fallacy" foundational to modernity. Therefore, rather than criticizing the terrors of modernity, decolonialism criticizes Eurocentric modernity and rationality because of the "irrational myth" that these conceal. Decolonial approaches thus seek to "politicise epistemology from the experiences of those on the 'border,' not to develop yet another epistemology of politics". See also Anti-imperialism References Works cited Further reading LeVine, Mark 2005a: Overthrowing Geography: Jaffa, Tel Aviv and the Struggle for Palestine. Berkeley: University of California Press. LeVine, Mark 2005b: Why They Don't Hate Us: Lifting the Veil on the Axis of Evil. Oxford, UK: Oneworld Publications. Quijano, Aníbal and Immanuel Wallerstein 1992: Americanity as Concept: Or the Americas in the Modern World-System. International Social Science Journal 131: 549–557. Vallega, Alejandro A. 2015: Latin American Philosophy: from Identity to Radical Exteriority. Indiana University Press. Walsh, Catherine & Mignolo Walter (2018) On Decoloniality Duke University Press Walsh, Catherine. (2012) ""Other" Knowledges,"Other" Critiques: Reflections on the Politics and Practices of Philosophy and Decoloniality in the "Other" America." Transmodernity: Journal of Peripheral Cultural Production of the Luso-Hispanic World 1.3. Wan-hua, Huang. (2011) "The Process of Decoloniality of Taiwan Literature in the Early Postwar Period." Taiwan Research Journal 1: 006. Bhambra, G. (2012). 
Postcolonialism and decoloniality: A dialogue. In The Second ISA Forum of Sociology (August 1–4). Isaconf. Drexler-Dreis, J. (2013). Decoloniality as Reconciliation. Concilium: International Review of Theology-English Edition, (1), 115–122. Wanzer, D. A. (2012). Delinking Rhetoric, or Revisiting McGee's Fragmentation Thesis through Decoloniality. Rhetoric & Public Affairs, 15(4), 647–657. Chalmers, Gordon (2013) Indigenous as 'not-Indigenous' as 'Us'?: A dissident insider's views on pushing the bounds for what constitutes 'our mob'. Australian Indigenous Law Review, 17(2), pp. 47–55. http://search.informit.com.au/documentSummary;dn=900634481905301;res=IELIND Smith, Linda Tuhiwai (2012) Decolonizing Methodologies: Research and Indigenous Peoples (2nd edition). London: Zed Books. Critical theory Decolonization International relations theory
Arbitrariness
Arbitrariness is the quality of being "determined by chance, whim, or impulse, and not by necessity, reason, or principle". It is also used to refer to a choice made without any specific criterion or restraint. Arbitrary decisions are not necessarily the same as random decisions. For example, during the 1973 oil crisis, Americans were allowed to purchase gasoline only on odd-numbered days if their license plate was odd, and on even-numbered days if their license plate was even. The system was well-defined and not random in its restrictions; however, since license plate numbers are completely unrelated to a person's fitness to purchase gasoline, it was still an arbitrary division of people. Similarly, schoolchildren are often organized by surname in alphabetical order, a non-random yet arbitrary method—at least in cases where surnames are irrelevant. Philosophy Arbitrary actions are closely related to teleology, the study of purpose. Actions lacking a telos, a goal, are necessarily arbitrary. With no end to measure against, there can be no standard applied to choices, so all decisions are alike. Note that arbitrary or random methods in the standard sense of arbitrary may not qualify as arbitrary choices philosophically if they were done in furtherance of a larger purpose (such as the examples above, done for the purposes of establishing discipline in school and avoiding overcrowding at gas stations). Nihilism is the philosophy which holds that there is no purpose in the universe and that every choice is arbitrary. According to nihilism, the universe contains no value and is essentially meaningless. Because the universe and all of its constituents contain no higher goal for us to make subgoals from, all aspects of human life and experiences are completely arbitrary. There is no right or wrong decision, thought, or practice, and whatever choice a human being makes is just as meaningless and empty as any other choice he or she could have made. 
Many brands of theism, the belief in a deity or deities, hold that everything has a purpose and that nothing is arbitrary. In these philosophies, God created the universe for a reason, and every event flows from that. Even seemingly random events cannot escape God's hand and purpose. This is somewhat related to the argument from design—the argument for God's existence because a purpose can be found in the universe. Arbitrariness is also related to ethics, the philosophy of decision-making. Even if a person has a goal, they may choose to attempt to achieve it in ways that may be considered arbitrary. Rationalism holds that knowledge comes about through intellectual calculation and deduction; many rationalists (though not all) apply this to ethics as well. All decisions should be made through reason and logic, not via whim or how one "feels" what is right. Randomness may occasionally be acceptable as part of a subtask in furtherance of a larger goal, but not in general. In semiotics, the general theory of signs, sign systems, and sign processes, Saussure introduced the notion of arbitrariness, according to which there is no necessary connection between the material sign (or signifier) and the entity it refers to or denotes as its meaning (or signified), whether a mental concept or a real object. Linguistics The principle of semiotic arbitrariness refers to the idea that social convention is what imbues meaning to a given semiosis (any activity, conduct, or process that involves signs, including the production of meaning) or sign. Mathematics A logical symbol is a fundamental concept in logic, tokens of which may be marks or a configuration of marks which form a particular pattern. In mathematics, arbitrary corresponds to the term "any" and the universal quantifier ∀, as in an arbitrary division of a set or an arbitrary permutation of a sequence. 
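The correspondence between "arbitrary" and the universal quantifier can be written out explicitly. As a standard illustration (not drawn from the source text), the claim that doubling an arbitrary integer yields an even number reads:

```latex
% For an arbitrary integer n, 2n is even:
\forall n \in \mathbb{Z} \;\; \exists k \in \mathbb{Z} : \; 2n = 2k
```

A proof of such a universally quantified statement must work for every choice of n; here one may take k = n, no matter which n is selected.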
Its use implies generality: a statement does not apply only to special cases, but holds no matter which available choice one selects. For example, one might say: "Given an arbitrary integer, multiplying it by two will result in an even number." The implication of "arbitrary" is that generality will hold even if an opponent were to choose the item in question, in which case arbitrary can be regarded as synonymous with worst-case. Law Arbitrary comes from the Latin arbitrarius, the source of arbiter, someone tasked to judge some matter. An arbitrary legal judgment is a decision made at the discretion of the judge, not one that is fixed by law. In some countries, a prohibition of arbitrariness is enshrined in the constitution. Article 9 of the Swiss Federal Constitution theoretically overrides even democratic decisions in prohibiting arbitrary government action. The US Supreme Court has overturned laws for having "no rational basis." A recent study of the U.S. asylum system suggests that arbitrariness in decision-making might be the cause of large disparities in outcomes between different adjudicators, a phenomenon described as refugee roulette. Article 330 of the Russian penal code defines arbitrariness as a specific crime, but with a very broad definition encompassing any "actions contrary to the order presented by a law". See also Randomness Existential nihilism Metaphysical nihilism References External links Free will Legal terminology Semiotics Teleology
Antipositivism
In social science, antipositivism (also interpretivism, negativism or antinaturalism) is a theoretical stance which proposes that the social realm cannot be studied with the methods of investigation utilized within the natural sciences, and that investigation of the social realm requires a different epistemology. Fundamental to that antipositivist epistemology is the belief that the concepts and language researchers use in their research shape their perceptions of the social world they are investigating and seeking to define. Interpretivism (anti-positivism) developed among researchers dissatisfied with post-positivism, the theories of which they considered too general and ill-suited to reflect the nuance and variability found in human interaction. Because the values and beliefs of researchers cannot fully be removed from their inquiry, interpretivists believe research on human beings by human beings cannot yield objective results. Thus, rather than seeking an objective perspective, interpretivists look for meaning in the subjective experiences of individuals engaging in social interaction. Many interpretivist researchers immerse themselves in the social context they are studying, seeking to understand and formulate theories about a community or group of individuals by observing them from the inside. Interpretivism is an inductive practice influenced by philosophical frameworks such as hermeneutics, phenomenology, and symbolic interactionism. Interpretive methods are used in many fields of the social sciences, including human geography, sociology, political science, cultural anthropology, among others. History Beginning with Giambattista Vico, in the early eighteenth century, and later with Montesquieu, the study of natural history and human history were separate fields of intellectual enquiry. Natural history is not under human control, whereas human history is a human creation. 
As such, antipositivism is informed by an epistemological distinction between the natural world and the social realm. The natural world can only be understood by its external characteristics, whereas the social realm can be understood externally and internally, and thus can be known. In the early nineteenth century, intellectuals, led by the Hegelians, questioned the prospect of empirical social analysis. Karl Marx died before the establishment of formal social science, but nonetheless rejected the sociological positivism of Auguste Comte—despite his attempt to establish a historical materialist science of society. The enhanced positivism of Émile Durkheim served as foundation of modern academic sociology and social research, yet retained many mechanical elements of its predecessor. Hermeneuticians such as Wilhelm Dilthey theorized in detail on the distinction between natural and social science ('Geisteswissenschaft'), whilst neo-Kantian philosophers such as Heinrich Rickert maintained that the social realm, with its abstract meanings and symbolisms, is inconsistent with scientific methods of analysis. Edmund Husserl, meanwhile, negated positivism through the rubric of phenomenology. At the turn of the twentieth century, the first wave of German sociologists formally introduced verstehende (interpretive) sociological antipositivism, proposing research should concentrate on human cultural norms, values, symbols, and social processes viewed from a resolutely subjective perspective. As an antipositivist, however, one seeks relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. The interaction between theory (or constructed concepts) and data is always fundamental in social science and this subjection distinguishes it from physical science. Durkheim himself noted the importance of constructing concepts in the abstract (e.g. 
"collective consciousness" and "social anomie") in order to form workable categories for experimentation. Both Weber and Georg Simmel pioneered the verstehen (or 'interpretative') approach toward social science; a systematic process in which an outside observer attempts to relate to a particular cultural group, or indigenous people, on their own terms and from their own point of view. Through the work of Simmel in particular, sociology acquired a possible character beyond positivist data-collection or grand, deterministic systems of structural law. Relatively isolated from the sociological academy throughout his lifetime, Simmel presented idiosyncratic analyses of modernity more reminiscent of the phenomenological and existential writers than of Comte or Durkheim, paying particular concern to the forms of, and possibilities for, social individuality. His sociology engaged in a neo-Kantian critique of the limits of human perception. Antipositivism thus holds there is no methodological unity of the sciences: the three goals of positivism – description, control, and prediction – are incomplete, since they lack any understanding. Science aims at understanding causality so control can be exerted. If this succeeded in sociology, those with knowledge would be able to control the ignorant and this could lead to social engineering. This perspective has led to controversy over how one can draw the line between subjective and objective research, much less draw an artificial line between environment and human organization (see environmental sociology), and has influenced the study of hermeneutics. The base concepts of antipositivism have expanded beyond the scope of social science; in fact, phenomenology has the same basic principles at its core. Simply put, positivists see sociology as a science, while anti-positivists do not. 
Frankfurt School The antipositivist tradition continued in the establishment of critical theory, particularly the work associated with the Frankfurt School of social research. Antipositivism would be further facilitated by rejections of 'scientism'; or science as ideology. Jürgen Habermas argues, in his On the Logic of the Social Sciences (1967), that "the positivist thesis of unified science, which assimilates all the sciences to a natural-scientific model, fails because of the intimate relationship between the social sciences and history, and the fact that they are based on a situation-specific understanding of meaning that can be explicated only hermeneutically ... access to a symbolically prestructured reality cannot be gained by observation alone." The sociologist Zygmunt Bauman argued that "our innate tendency to express moral concern and identify with the Other's wants is stifled in modernity by positivistic science and dogmatic bureaucracy. If the Other does not 'fit in' to modernity's approved classifications, it is liable to be extinguished." See also Critical theory Grounded theory Holism Humanistic sociology Methodological dualism Philosophy of social science Poststructuralism Social action Symbolic interactionism References Philosophy of science Sociological theories History of sociology Philosophy of social science Politics of science Symbolic interactionism
Social theory
Social theories are analytical frameworks, or paradigms, that are used to study and interpret social phenomena. A tool used by social scientists, social theories relate to historical debates over the validity and reliability of different methodologies (e.g. positivism and antipositivism), the primacy of either structure or agency, as well as the relationship between contingency and necessity. Social theory of an informal nature, or authored outside of academic social and political science, may be referred to as "social criticism" or "social commentary", or "cultural criticism", and may be associated both with formal cultural and literary scholarship and with other non-academic or journalistic forms of writing. Definitions Social theory by definition is used to make distinctions and generalizations among different types of societies, and to analyze modernity as it has emerged in the past few centuries. Social theory, as it is recognized today, emerged in the 20th century as a distinct discipline, and was largely equated with an attitude of critical thinking and the desire for knowledge through a posteriori methods of discovery, rather than a priori methods of tradition. Social thought provides general theories to explain actions and behavior of society as a whole, encompassing sociological, political, and philosophical ideas. Classical social theory has generally been presented from a perspective of Western philosophy, and is often regarded as Eurocentric. According to The Blackwell Encyclopedia of Sociology, theories are constructed instrumentally: "Their goal is to promote accurate communication, rigorous testing, high accuracy, and broad applicability. They include the following: absence of contradictions, absence of ambivalence, abstractness, generality, precision, parsimony, and conditionality." Therefore, a social theory consists of well-defined terms, statements, arguments and scope conditions. 
History Ancient Confucius (551–479 BCE) envisaged a just society that went beyond his contemporary society of the Warring States. Later on, also in China, Mozi (c. 470 – c. 390 BCE) recommended a more pragmatic sociology, but ethical at base. In the West, Saint Augustine (354–430) was concerned exclusively with the idea of the just society. Augustine describes late ancient Roman society through a lens of hatred and contempt for what he saw as false gods, and in reaction theorized the City of God. Ancient Greek philosophers, including Aristotle (384–322 BC) and Plato (428/427 or 424/423 – 348/347 BC), did not see a distinction between politics and society. The concept of society did not emerge until the Enlightenment period. The term société was probably first used as a key concept by Rousseau in discussion of social relations. Prior to the Enlightenment, social theory took a largely narrative and normative form. It was expressed as stories and fables, and it may be assumed that the pre-Socratic philosophers and religious teachers were the precursors to social theory proper. Medieval There is evidence of early Muslim sociology from the 14th century: Ibn Khaldun's Muqaddimah (later translated as Prolegomena in Latin), the introduction to a seven-volume analysis of universal history, was the first work to advance social philosophy and social science in formulating theories of social cohesion and social conflict. Ibn Khaldun is thus considered by many to be the forerunner of sociology. Khaldun's treatise, the Muqaddimah (Introduction to History), published in 1377, described two types of societies: (1) sedentary city or town dwellers and (2) mobile, nomadic societies. European social thought Modernity arose during the Enlightenment period, with the emergence of the world economy and exchange among diverse societies, bringing sweeping changes and new challenges for society. 
Many French and Scottish intellectuals and philosophers embraced the idea of progress and ideas of modernity. The Enlightenment period was marked by the idea that, with new discoveries challenging the traditional way of thinking, scientists were required to find new normativity. This process allowed scientific knowledge and society to progress. French thought during this period focused on moral critique and criticisms of the monarchy. These ideas did not draw on ideas of the past from classical thinkers, nor did they involve following the religious teachings and authority of the monarch. A common factor among the classical theories was the agreement that the history of humanity is pursuing a fixed path. They differed on where that path would lead: social progress, technological progress, decline or even fall. Social cycle theorists were skeptical of Western achievements and technological progress, arguing that progress is an illusion created by the ups and downs of historical cycles. The classical approach has been criticized by many modern sociologists and theorists, among them Karl Popper, Robert Nisbet, Charles Tilly and Immanuel Wallerstein. The 19th century brought questions involving social order. The French Revolution freed French society of control by the monarchy, with no effective means of maintaining social order until Napoleon came to power. Three great classical theories of social and historical change emerged: the social evolutionism theory (of which Social Darwinism forms a part), the social cycle theory, and the Marxist historical materialism theory. 19th-century classical social theory has been expanded upon to create newer, contemporary social theories such as multilineal theories of evolution (neoevolutionism, sociobiology, theory of modernization, theory of post-industrial society) and various strains of Neo-Marxism. 
In the late 19th and early 20th centuries, social theory became closely related to academic sociology, while other related studies such as anthropology, philosophy, and social work branched out into their own disciplines. Subjects like "philosophy of history" and other multi-disciplinary subject matter became part of social theory as taught under sociology. A revival of discussion free of disciplines began in the late 1920s and early 1930s. The Frankfurt Institute for Social Research is a historical example. The Committee on Social Thought at the University of Chicago followed in the 1940s. In the 1970s, programs in Social and Political Thought were established at Sussex and York. Others followed, with varying emphases and structures, such as Social Theory and History (University of California, Davis). Cultural Studies programs extended the concerns of social theory into the domain of culture and thus anthropology. A chair and undergraduate program in social theory was established at the University of Melbourne. Social theory at present seems to be gaining acceptance as a classical academic discipline. Classical social theory Adam Ferguson, Montesquieu, and John Millar, among others, were the first to study society as distinct from political institutions and processes. In the nineteenth century, the scientific method was introduced into the study of society, a significant advance leading to the development of sociology as a discipline. In the 18th century, the pre-classical period of social theories developed a new form that provided the basic ideas for social theory, such as evolution, the philosophy of history, social life and the social contract, public and general will, competition in social space, and the organismic pattern for social description. Montesquieu, whose The Spirit of the Laws established that social elements influence human nature, was possibly the first to suggest a universal explanation for history. 
Montesquieu included changes in mores and manners as part of his explanation of political and historic events. Philosophers, including Jean-Jacques Rousseau, Voltaire, and Denis Diderot, developed new social ideas during the Enlightenment period that were based on reason and methods of scientific inquiry. Rousseau played a significant role in social theory during this time. He revealed the origin of inequality, analyzed the social contract (and social compact) that forms social integration, and defined the social sphere or civil society. Rousseau also emphasized that man has the liberty to change his world, an assertion that made it possible to program and change society. Adam Smith addressed the question of whether vast inequalities of wealth represented progress. He explained that the wealthy often demand convenience, employing numerous others to carry out labor to meet their demands. Smith argued that this allows wealth to be redistributed among inhabitants, and for all to share in the progress of society. Smith explained that social forces could regulate the market economy with social objectivity and without need for government intervention. Smith regarded the division of labor as an important factor for economic progress. John Millar suggested that improved status of women was important for the progress of society. Millar also advocated for the abolition of slavery, suggesting that personal liberty makes people more industrious, ambitious, and productive. The first "modern" social theories (known as classical theories) that begin to resemble the analytic social theory of today developed simultaneously with the birth of the science of sociology. Auguste Comte (1798–1857), known as the "father of sociology" and regarded by some as the first philosopher of science, laid the groundwork for positivism – as well as structural functionalism and social evolutionism. 
Karl Marx rejected Comtean positivism but nevertheless aimed to establish a science of society based on historical materialism, becoming recognised as a founding figure of sociology posthumously. At the turn of the 20th century, the first generation of German sociologists, including Max Weber and Georg Simmel, developed sociological antipositivism. The field may be broadly recognized as an amalgam of three modes of social scientific thought in particular: Durkheimian sociological positivism and structural functionalism, Marxist historical materialism and conflict theory, and Weberian antipositivism and verstehen critique. Another early modern theorist, Herbert Spencer (1820–1903), coined the term "survival of the fittest". Vilfredo Pareto (1848–1923) and Pitirim A. Sorokin argued that "history goes in cycles," and presented the social cycle theory to illustrate their point. Ferdinand Tönnies (1855–1936) made community and society (Gemeinschaft and Gesellschaft, 1887) the special topics of the new science of "sociology", both of them based on different modes of will of social actors. The 19th-century pioneers of social theory and sociology, like Saint-Simon, Comte, Marx, John Stuart Mill or Spencer, never held university posts and were broadly regarded as philosophers. Émile Durkheim endeavoured to formally establish academic sociology, and did so at the University of Bordeaux in 1895; that year he published The Rules of Sociological Method. In 1896, he established the journal L'Année Sociologique. Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates amongst Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. Post-modern social theory The term "postmodernism" was brought into social theory in 1971 by the Arab American theorist Ihab Hassan in his book The Dismemberment of Orpheus: Toward a Postmodern Literature. 
In 1979 Jean-François Lyotard wrote a short but influential work, The Postmodern Condition: A Report on Knowledge. Jean Baudrillard, Michel Foucault, and Roland Barthes were influential in the 1970s in developing postmodern theory. Scholars most commonly hold postmodernism to be a movement of ideas arising from, but also critical of, elements of modernism. Because the term is used in such a wide range of ways, different elements of modernity are chosen as being continuous with the postmodern. Each of the different uses is rooted in some argument about the nature of knowledge, known in philosophy as epistemology. Individuals who use the term are arguing that either there is something fundamentally different about the transmission of meaning, or that modernism has fundamental flaws in its system of knowledge. The argument for the necessity of the term states that economic and technological conditions of our age have given rise to a decentralized, media-dominated society in which ideas are simulacra: inter-referential representations and copies of each other, with no real original, stable or objective source for communication and meaning. Globalization, brought on by innovations in communication, manufacturing and transportation, is cited as one force which has decentralized modern life, creating a culturally pluralistic and interconnected global society lacking any single dominant center of political power, communication, or intellectual production. The postmodern view is that inter-subjective knowledge, and not objective knowledge, is the dominant form of discourse. The ubiquity of copies and dissemination alters the relationship between reader and what is read, between observer and the observed, between those who consume and those who produce. Not all people who use the term postmodern or postmodernism see these developments as positive. 
Users of the term argue that their ideals have arisen as the result of particular economic and social conditions, including "late capitalism" and the growth of broadcast media, and that such conditions have pushed society into a new historical period. Today In the past few decades, in response to postmodern critiques, social theory has begun to stress free will, individual choice, subjective reasoning, and the importance of unpredictable events in place of deterministic necessity. Rational choice theory, symbolic interactionism, and false necessity are examples of more recent developments. A view among contemporary sociologists is that there are no great unifying 'laws of history', but rather smaller, more specific, and more complex laws that govern society. Philosopher and politician Roberto Mangabeira Unger recently attempted to revise classical social theory by exploring how things fit together, rather than providing an all-encompassing single explanation of a universal reality. He begins by recognizing the key insight of classical social theory of society as an artifact, and then by discarding the law-like characteristics forcibly attached to it. Unger argues that classical social theory was born proclaiming that society is made and imagined, and not the expression of an underlying natural order, but at the same time its capacity was checked by the equally prevalent ambition to create law-like explanations of history and social development. The human sciences that developed claimed to identify a small number of possible types of social organization that coexisted or succeeded one another through inescapable developmental tendencies or deep-seated economic organization or psychological constraints. Marxism is the star example. Unger, calling his efforts "super-theory", has thus sought to develop a comprehensive view of history and society. 
Unger does so without subsuming deep-structure analysis under an indivisible and repeatable type of social organization, and without recourse to law-like constraints and tendencies. His articulation of such a theory is in False Necessity: Anti-Necessitarian Social Theory in the Service of Radical Democracy, where he uses deep-logic practice to theorize human social activity through anti-necessitarian analysis. Unger begins by formulating the theory of false necessity, which claims that social worlds are the artifact of human endeavors. There is no pre-set institutional arrangement that societies must adhere to, and there is no necessary historical mold of development that they will follow. We are free to choose and to create the forms and the paths that our societies will take. However, this does not give license to absolute contingency. Unger finds that there are groups of institutional arrangements that work together to bring about certain institutional forms—liberal democracy, for example. These forms are the basis of a social structure, which Unger calls formative context. In order to explain how we move from one formative context to another without the conventional social theory constraints of historical necessity (e.g. feudalism to capitalism), and to do so while remaining true to the key insight of individual human empowerment and anti-necessitarian social thought, Unger recognized that there are an infinite number of ways of resisting social and institutional constraints, which can lead to an infinite number of outcomes. This variety of forms of resistance and empowerment makes change possible. Unger calls this empowerment negative capability. However, Unger adds that these outcomes are always reliant on the forms from which they spring. The new world is built upon the existing one. Schools of thought Chicago school The Chicago school developed in the 1920s, through the work of Albion Woodbury Small, W. I. Thomas, Ernest W. Burgess, Robert E. 
Park, Ellsworth Faris, George Herbert Mead, and other sociologists at the University of Chicago. The Chicago school focused on patterns and arrangement of social phenomena across time and place, and within the context of other social variables. Critical theory Critical theorists focus on reflective assessment and critique of society and culture in order to reveal and challenge power structures and their relations and influences on social groups. Marxism Karl Marx wrote and theorized about the importance of political economy on society, and focused on the "material conditions" of life. His theories centered around capitalism and its effect on the class struggle between the proletariat and bourgeoisie. Postmodernism Postmodernism was defined by Jean-François Lyotard as "incredulity towards metanarratives" and contrasted with the modern, which he described as "any science that legitimates itself with reference to a metadiscourse ... making an explicit appeal to some grand narrative, such as the dialectics of Spirit, the hermeneutics of meaning, the emancipation of the rational or working subject, or the creation of wealth." Other perspectives Other theories include: Social constructionist theory Rational choice theory Structural functionalism – influenced by Spencer and Durkheim Social action – influenced by Weber and Pareto Conflict theory – influenced by Marx, Simmel Symbolic interaction – influenced by George Herbert Mead False necessity Agential realism Key thinkers French social thought Some known French social thinkers are Claude Henri Saint-Simon, Auguste Comte, Émile Durkheim, and Michel Foucault. British social thought British social thought, with thinkers such as Herbert Spencer, addressed questions and ideas relating to political economy and social evolution. The political ideals of John Ruskin were a precursor of social economy (Unto This Last had a very important impact on Gandhi's philosophy). 
German social thought Important German philosophers and social thinkers included Immanuel Kant, Georg Wilhelm Friedrich Hegel, Karl Marx, Max Weber, Georg Simmel, Theodor W. Adorno, Max Horkheimer, Herbert Marcuse and Niklas Luhmann. Chinese social thought Important Chinese philosophers and social thinkers included Shang Yang, Lao Zi, Confucius, Mencius, Wang Chong, Wang Yangming, Li Zhi, Zhu Xi, Gu Yanwu, Gong Zizhen, Wei Yuan, Kang Youwei, Lu Xun, Mao Zedong, Zhu Ming. Italian sociology Important Italian social scientists include Antonio Gramsci, Gaetano Mosca, Vilfredo Pareto, Franco Ferrarotti. Thai social thought Important Thai social theorists include Jit Phumisak, Kukrit Pramoj, and Prawase Wasi. In academic practices Social theory seeks to question why humans inhabit the world the way they do, and how that came to be, by looking at power relations, social structures, and social norms, while also examining how humans relate to each other and the society they find themselves in, how this has changed over time and in different cultures, and the tools used to measure those things. Social theory looks to interdisciplinarity, combining knowledge from multiple academic disciplines in order to illuminate these complex issues, and can draw on ideas from fields as diverse as anthropology and media studies. Social theory guides scientific inquiry by prompting scientists to think about which topics are suitable for investigation and how they should measure them. Selecting or creating appropriate theory for use in examining an issue is an important skill for any researcher. An important distinction: a theoretical orientation (or paradigm) is a worldview, the lens through which one organizes experience (e.g. thinking of human interaction in terms of power or exchange); a theory is an attempt to explain and predict behavior in particular contexts. A theoretical orientation cannot be proven or disproven; a theory can. 
Having a theoretical orientation that sees the world in terms of power and control, one could create a theory about violent human behavior which includes specific causal statements (e.g. being the victim of physical abuse leads to psychological problems). This could lead to a hypothesis (prediction) about what one expects to see in a particular sample, e.g. "a battered child will grow up to be shy or violent". One can then test the hypothesis by looking to see if it is consistent with data. One might, for instance, review hospital records to find children who were abused, then track them down and administer a personality test to see if they show signs of being violent or shy. The selection of an appropriate (i.e. useful) theoretical orientation within which to develop a potentially helpful theory is the bedrock of social science. Examples of questions posed by social theorists Philosophical questions addressed by social thinkers often centered around modernity, including: Can human reason make sense of the social world and shape it for the better? Did the development of modern societies, with vast inequalities in wealth among citizens, constitute progress? How do particular government interventions and regulations impact natural social processes? Should the economy/market be regulated or not? Other issues relating to modernity that were addressed by social thinkers include social atomization, alienation, loneliness, social disorganization, and secularization. 
See also Continental philosophy Critical theory Culture theory Engaged theory Ethnomethodology Feminist theory History of sociology History of the social sciences Literary theory Political philosophy Political theory Post-colonial theory Post-structuralism Postmodernism Queer theory Social evolution Sociological theory References Further reading External links The International Social Theory Consortium Theoria: A Journal of Social and Political Theory (archived) Sociological Theorists Social Theory Research Network of the European Sociological Association David Harris, Why is Social Theory So "Difficult" Harriet Martineau 1802-1876, prolific writer on social theory, some at Project Gutenberg Teng Wang, Social Phenomena
Rights
Rights are legal, social, or ethical principles of freedom or entitlement; that is, rights are the fundamental normative rules about what is allowed of people or owed to people according to some legal system, social convention, or ethical theory. Rights are an important concept in law and ethics, especially theories of justice and deontology. The history of social conflicts has often involved attempts to define and redefine rights. According to the Stanford Encyclopedia of Philosophy, "rights structure the form of governments, the content of laws, and the shape of morality as it is currently perceived". Types of rights Natural versus legal Natural rights are rights which are "natural" in the sense of "not artificial, not man-made", as in rights deriving from human nature or from the edicts of a god. They are universal; that is, they apply to all people, and do not derive from the laws of any specific society. They exist necessarily, inhere in every individual, and cannot be taken away. For example, it has been argued that humans have a natural right to life. These are sometimes called moral rights or inalienable rights. Legal rights, in contrast, are based on a society's customs, laws, statutes or actions by legislatures. An example of a legal right is the right to vote of citizens. Citizenship, itself, is often considered as the basis for having legal rights, and has been defined as the "right to have rights". Legal rights are sometimes called civil rights or statutory rights and are culturally and politically relative since they depend on a specific societal context to have meaning. Some thinkers see rights in only one sense while others accept that both senses have a measure of validity. There has been considerable philosophical debate about these senses throughout history. 
For example, Jeremy Bentham believed that legal rights were the essence of rights, and he denied the existence of natural rights, whereas Thomas Aquinas held that rights purported by positive law but not grounded in natural law were not properly rights at all, but only a facade or pretense of rights. Claim versus liberty A claim right is a right which entails that another person has a duty to the right-holder. Somebody else must do or refrain from doing something to or for the claim holder, such as perform a service or supply a product for him or her; that is, he or she has a claim to that service or product (another term is thing in action). In logic, this idea can be expressed as: "Person A has a claim that person B do something if and only if B has a duty to A to do that something." Every claim-right entails that some other duty-bearer must do some duty for the claim to be satisfied. This duty can be to act or to refrain from acting. For example, many jurisdictions recognize broad claim rights to things like "life, liberty, and property"; these rights impose an obligation upon others not to assault or restrain a person, or use their property, without the claim-holder's permission. Likewise, in jurisdictions where social welfare services are guaranteed, citizens have legal claim rights to be provided with those services. A liberty right or privilege, in contrast, is simply a freedom or permission for the right-holder to do something, and there are no obligations on other parties to do or not do anything. This can be expressed in logic as: "Person A has a privilege to do something if and only if A has no duty not to do that something." 
For example, if a person has a legal liberty right to free speech, that merely means that it is not legally forbidden for them to speak freely: it does not mean that anyone has to help enable their speech, or to listen to their speech; or even, per se, refrain from stopping them from speaking, though other rights, such as the claim right to be free from assault, may severely limit what others can do to stop them. Liberty rights and claim rights are the inverse of one another: a person has a liberty right permitting him to do something only if there is no other person who has a claim right forbidding him from doing so. Likewise, if a person has a claim right against someone else, then that other person's liberty is limited. For example, a person has a liberty right to walk down a sidewalk and can decide freely whether or not to do so, since there is no obligation either to do so or to refrain from doing so. But pedestrians may have an obligation not to walk on certain lands, such as other people's private property, to which those other people have a claim right. So a person's liberty right of walking extends precisely to the point where another's claim right limits his or her freedom. Positive versus negative In one sense, a right is a permission to do something or an entitlement to a specific service or treatment from others, and these rights have been called positive rights. However, in another sense, rights may allow or require inaction, and these are called negative rights; they permit or require doing nothing. For example, in some countries, e.g. the United States, citizens have the positive right to vote and they have the negative right to not vote; people can choose not to vote in a given election without punishment. In other countries, e.g. Australia, however, citizens have a positive right to vote but they do not have a negative right to not vote, since voting is compulsory. 
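The two logical schemas quoted above, together with the inverse relationship between liberty and claim rights just described, can be summarized in a compact notation. This is an informal sketch in the spirit of the Hohfeldian analysis the text draws on; the predicate names are illustrative shorthand, not standard symbols:

```latex
% A holds a claim right against B to \varphi
% exactly when B owes A a duty to do \varphi.
\text{Claim}_{A \to B}(\varphi) \;\iff\; \text{Duty}_{B \to A}(\varphi)

% A holds a liberty right (privilege) to do \varphi
% exactly when A has no duty not to do \varphi.
\text{Priv}_{A}(\varphi) \;\iff\; \neg\,\text{Duty}_{A}(\neg\varphi)

% Inverse relation: A is at liberty to do \varphi exactly when
% no other person B holds a claim against A forbidding \varphi.
\text{Priv}_{A}(\varphi) \;\iff\; \neg\,\exists B\; \text{Claim}_{B \to A}(\neg\varphi)
```

The sidewalk example instantiates the third schema: the pedestrian's liberty to walk extends exactly up to the boundary where a landowner's claim right generates a duty not to enter.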
Accordingly: Positive rights are permissions to do things, or entitlements to be done unto. One example of a positive right is the purported "right to welfare". Negative rights are permissions not to do things, or entitlements to be left alone. Often the distinction is invoked by libertarians who think of a negative right as an entitlement to non-interference such as a right against being assaulted. Though similarly named, positive and negative rights should not be confused with active rights (which encompass "privileges" and "powers") and passive rights (which encompass "claims" and "immunities"). Individual versus group Individual rights are rights held by individual people regardless of their group membership or lack thereof. Group rights, including the rights of nations, have been argued to exist when a group is seen as more than a mere composite or assembly of separate individuals but an entity in its own right. In other words, it is possible to see a group as a distinct being in and of itself; it is akin to an enlarged individual, a corporate body, which has a distinct will and power of action and can be thought of as having rights. Rights of nations, including a national right to self-determination have been argued for, and a platoon of soldiers in combat can be thought of as a distinct group, since individual members are willing to risk their lives for the survival of the group, and therefore the group can be conceived as having a "right" which is superior to that of any individual member; for example, a soldier who disobeys an officer can be punished, perhaps even killed, for a breach of obedience. But there is another sense of group rights in which people who are members of a group can be thought of as having specific individual rights because of their membership in a group. In this sense, the set of rights which individuals-as-group-members have is expanded because of their membership in a group. 
For example, workers who are members of a group such as a labor union can be thought of as having expanded individual rights because of their membership in the labor union, such as the rights to specific working conditions or wages. There can be tension between individual and group rights. A classic instance in which group and individual rights clash is conflicts between unions and their members. For example, individual members of a union may wish for a wage higher than the union-negotiated wage, but are prevented from making further requests; in a so-called closed shop which has a union security agreement, only the union has a right to decide matters for the individual union members such as wage rates. So, do the supposed "individual rights" of the workers prevail regarding the proper wage? Or do the "group rights" of the union prevail? The Austrian School of Economics holds that only individuals think, feel, and act, whether or not they are members of any abstract group. Society should thus, according to economists of the school, be analyzed starting from the individual. This methodology is called methodological individualism and is used by the economists to justify individual rights. Similarly, the author Ayn Rand argued that only individuals have rights, according to her philosophy known as Objectivism. However, others have argued that there are situations in which a group of persons is thought to have rights, or group rights. Other senses Other distinctions between rights draw more on historical association or family resemblance than on precise philosophical distinctions. These include the distinction between civil and political rights and economic, social and cultural rights, between which the articles of the Universal Declaration of Human Rights are often divided. Another conception of rights groups them into three generations. 
These distinctions have much overlap with that between negative and positive rights, as well as between individual rights and group rights, but these groupings are not entirely coextensive. Politics Rights are often included in the foundational questions that governments and politics have been designed to deal with. Often the development of these socio-political institutions has formed a dialectical relationship with rights. Rights about particular issues, or the rights of particular groups, are often areas of special concern. Often these concerns arise when rights come into conflict with other legal or moral issues, sometimes even other rights. Issues of concern have historically included Indigenous rights, labor rights, LGBT rights, reproductive rights, disability rights, patient rights and prisoners' rights. With increasing monitoring and the information society, information rights, such as the right to privacy, are becoming more important. Some examples of groups whose rights are of particular concern include animals and, amongst humans, groups such as children and youth, parents (both mothers and fathers), and men and women. Accordingly, politics plays an important role in developing or recognizing the above rights, and the discussion about which behaviors are included as "rights" is an ongoing political topic of importance. The concept of rights varies with political orientation. Positive rights such as a "right to medical care" are emphasized more often by left-leaning thinkers, while right-leaning thinkers place more emphasis on negative rights such as the "right to a fair trial". Further, the term equality, which is often bound up with the meaning of "rights", often depends on one's political orientation. 
Conservatives and right-wing libertarians and advocates of free markets often identify equality with equality of opportunity, and want what they perceive as equal and fair rules in the process of making things, while agreeing that sometimes these fair rules lead to unequal outcomes. In contrast, socialists see the power imbalance of employer-employee relationships in capitalism as a cause of inequality and often see unequal outcomes as a hindrance to equality of opportunity. They tend to identify equality of outcome as a sign of equality and therefore think that people have a right to portions of necessities such as health care or economic assistance or housing that align with their needs. Philosophy In philosophy, meta-ethics is the branch of ethics that seeks to understand the nature of ethical properties, statements, attitudes, and judgments. Meta-ethics is one of the three branches of ethics generally recognized by philosophers, the others being normative ethics and applied ethics. While normative ethics addresses such questions as "What should one do?", thus endorsing some ethical evaluations and rejecting others, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the nature of ethical properties and evaluations. Rights ethics is an answer to the meta-ethical question of what normative ethics is concerned with (meta-ethics also includes a group of questions about how ethics comes to be known, true, etc. which is not directly addressed by rights ethics). Rights ethics holds that normative ethics is concerned with rights. Alternative meta-ethical theories are that ethics is concerned with one of the following: Duties (deontology) Value (axiology) Virtue (virtue ethics) Consequences (consequentialism, e.g. utilitarianism) Rights ethics has had considerable influence on political and social thinking. 
The Universal Declaration of Human Rights gives some concrete examples of widely accepted rights. Criticism Some philosophers have criticised certain rights as ontologically dubious entities. History The specific enumeration of rights has differed greatly in different periods of history. In many cases, the system of rights promulgated by one group has come into sharp and bitter conflict with that of other groups. In the political sphere, a place in which rights have historically been an important issue, constitutional provisions of various states sometimes address the question of who has what legal rights. Historically, many notions of rights were authoritarian and hierarchical, with different people granted different rights, and some having more rights than others. For instance, the right of a father to be respected by his son did not indicate a right of the son to receive something in return for that respect; and the divine right of kings, which permitted absolute power over subjects, did not leave much possibility for many rights for the subjects themselves. In contrast, modern conceptions of rights have often emphasized liberty and equality as among the most important aspects of rights, as was evident in the American and French revolutions. Important documents in the political history of rights include: The Persian Empire of ancient Iran established unprecedented principles of human rights in the 6th century BC under Cyrus the Great. After his conquest of Babylon in 539 BC, the king issued the Cyrus cylinder, discovered in 1879 and seen by some today as the first human rights document. The Constitution of Medina (622 AD; Arabia) instituted a number of rights for the Muslims, Jews, camp followers and "believers" of Medina. 
Magna Carta (1215; England) required the King of England to renounce certain rights and respect certain legal procedures, and to accept that the will of the king could be bound by law, after King John promised his barons he would follow the "law of the land". While Magna Carta was originally a set of rules that the king had to follow, and mainly protected the property of aristocratic landowners, today it is seen as the basis of certain rights for ordinary people, such as the right of due process. The Declaration of Arbroath (1320; Scotland) established the right of the people to choose a head of state (see popular sovereignty). The Henrician Articles (1573; Poland-Lithuania) or King Henry's Articles were a permanent contract that stated the fundamental principles of governance and constitutional law in the Polish-Lithuanian Commonwealth, including the rights of the nobility to elect the king, to meet in parliament whose approval was required to levy taxes and declare war or peace, to religious liberty and the right to rebel in case the king transgressed against the laws of the republic or the rights of the nobility. The Bill of Rights (1689; England) declared that Englishmen, as embodied by Parliament, possess certain civil and political rights; the Claim of Right (1689; Scotland) was similar but distinct. The Virginia Declaration of Rights (1776) by George Mason declared the inherent natural rights and separation of powers. The United States Declaration of Independence (1776) succinctly defined the rights of man as including, but not limited to, "Life, liberty, and the pursuit of happiness" which later influenced "" (liberty, equality, fraternity) in France. The phrase can also be found in Chapter III, Article 13 of the 1947 Constitution of Japan, and in President Ho Chi Minh's 1945 declaration of independence of the Democratic Republic of Vietnam. 
An alternative phrase, "life, liberty and property", is found in the Declaration of Colonial Rights, a resolution of the First Continental Congress. Also, Article 3 of the Universal Declaration of Human Rights reads, "Everyone has the right to life, liberty and security of person". The Declaration of the Rights of Man and of the Citizen (1789; France), one of the fundamental documents of the French Revolution, defined a set of individual rights and collective rights of the people. The Virginia Statute for Religious Freedom (1785; United States), written by Thomas Jefferson in 1779, was a document that asserted the right of man to form a personal relationship with God free from interference by the state. The United States Bill of Rights (1789–1791; United States), the first ten amendments of the United States Constitution, specified rights of individuals with which government could not interfere, including the rights of free assembly, freedom of religion, trial by jury, and the right to keep and bear arms. The Constitution of Poland-Lithuania (1791; Poland-Lithuania) was the first constitution in Europe, and the second in the world. It built upon previous Polish law documents such as the Henrician Articles, as well as the US constitution, and it, too, specified many rights. The Universal Declaration of Human Rights (1948) is an overarching set of standards by which governments, organisations and individuals would measure their behaviour towards each other. The preamble declares that the "...recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world..." The European Convention on Human Rights (1950; Europe) was adopted under the auspices of the Council of Europe to protect human rights and fundamental freedoms. 
The International Covenant on Civil and Political Rights (1966), a follow-up to the Universal Declaration of Human Rights, concerns civil and political rights. The International Covenant on Economic, Social and Cultural Rights (1966), another follow-up to the Universal Declaration of Human Rights, concerns economic, social and cultural rights. The Canadian Charter of Rights and Freedoms (1982; Canada) was created to protect the rights of Canadian citizens from actions and policies of all levels of government. The Charter of Fundamental Rights of the European Union (2000) is one of the most recent proposed legal instruments concerning human rights. See also Outline of rights Animal rights Contractual rights Constitutionalism Deed Droit Equal rights (disambiguation), various meanings Exclusive rights Freedom of religion Freedom of speech Freedom of the press History of citizenship Jurisprudence Prerogative Right to food Right to housing Right to property Right to water Right to an adequate standard of living Right to health Right to social security Rule according to higher law Social contract Organisations: Amnesty International Human Rights Watch United States Commission on Civil Rights References Concepts in ethics Theories of law Libertarian theory Social concepts Legal doctrines and principles Concepts in political philosophy
0.768012
0.99792
0.766415
Quietism (philosophy)
Quietism in philosophy sees the role of philosophy as broadly therapeutic or remedial. Quietist philosophers believe that philosophy has no positive thesis to contribute; rather, it defuses confusions in the linguistic and conceptual frameworks of other subjects, including non-quietist philosophy. For quietists, advancing knowledge or settling debates (particularly those between realists and non-realists) is not the job of philosophy; rather, philosophy should liberate the mind by diagnosing confusing concepts. Status within philosophy Crispin Wright said that "Quietism is the view that significant metaphysical debate is impossible." It has been described as "the view or stance that entails avoidance of substantive philosophical theorizing and is usually associated with certain forms of skepticism, pragmatism, and minimalism about truth. More particularly, it is opposed to putting forth positive theses and developing constructive arguments." Quietism by its nature is not a philosophical school as understood in the sense of a systematic body of truths. The objective of quietism is to show that philosophical positions or theories cannot solve problems, settle debates or advance knowledge. It is often raised in discussion as an opposite position to both philosophical realism and anti-realism. Specifically, quietists deny that there is any substantial debate between the positions of realism and non-realism. A range of justifications for quietism about the realism debate has been offered by Gideon Rosen and John McDowell. History and proponents Ancient Pyrrhonism represents perhaps the earliest example of an identifiably quietist position in the West. The Pyrrhonist philosopher Sextus Empiricus described Pyrrhonism as a form of philosophical therapy. Some have identified the Epicureans as early proponents of quietism. 
The goals of Epicurean philosophy are the decidedly quietist objectives of aponia (freedom from pain) and ataraxia (tranquility); the school even dismissed Stoic logic as useless. The neo-Confucian philosopher Cheng Hao is also associated with advocating quietism. He argued that the goal of existence should be calming one's natural biases and embracing impartial tranquility. This aversion to bias is nevertheless quite distinct from Wittgenstein's position. Contemporary Contemporary discussion of quietism can be traced back to Ludwig Wittgenstein, whose work greatly influenced the ordinary language philosophers. While Wittgenstein himself did not advocate quietism, he expressed sympathy with the viewpoint. One of the early 'ordinary language' works, Gilbert Ryle's The Concept of Mind, attempted to demonstrate that dualism arises from a failure to appreciate that mental vocabulary and physical vocabulary are simply different ways of describing one and the same thing, namely human behaviour. J. L. Austin's Sense and Sensibilia took a similar approach to the problems of skepticism and the reliability of sense perception, arguing that they arise only by misconstruing ordinary language, not because there is anything genuinely wrong with empirical evidence. Norman Malcolm, a friend of Wittgenstein's, took a quietist approach to skeptical problems in the philosophy of mind. More recently, the philosophers John McDowell, Irad Kimhi, Sabina Lovibond, Eric Marcus, Gideon Rosen, and to a certain degree Richard Rorty have taken explicitly quietist positions. Pete Mandik has argued for a position of qualia quietism on the hard problem of consciousness. Varieties Some philosophers have advanced quietism about specific subjects such as realism or truth. These positions can be held independently of one's view on quietism about the entire project of philosophy. On realism One may be a realist about a range of subjects within philosophy, from ethics and aesthetics to science and mathematics. 
Realists claim that a given concept exists, has particular properties and is in some way mind-independent, while non-realists deny this claim. Quietists take a third position, claiming that there is no real debate between realists and non-realists on a given subject. A version of this position espoused by John McDowell claims that the debate hinges on theses about the relationship between the mind and the world around us that are unsupported or unsupportable, and that without those claims there will be no debate. Others, such as Gideon Rosen, argue more specifically against individual cases of the realism debate. On truth Quietism about truth is a version of the identity theory of truth. Specifically, Jennifer Hornsby and John McDowell argue against any ontological gap between what we think is true and what is actually true. Quietists about truth resist the distinction between truth bearers and truthmakers as leading to a correspondence theory of truth. Rather, they claim that such a distinction should be eliminated; true statements are simply one's thinking truly about the world. The target of these thoughts is not a truthbearer, but rather the facts of the world themselves. See also Philosophical hermeneutics Critical philosophy Fictionalism for Wittgenstein's approach to philosophical problems References Sources Wittgenstein, Ludwig. Philosophical Investigations. 3rd Rev Edn, Blackwell, 2002. Ryle, Gilbert. The Concept of Mind. London: Hutchinson, 1949. Austin, J L. Sense and Sensibilia. OUP, 1962. Macarthur, David. "Pragmatism, Metaphysical Quietism and the Problem of Normativity", Philosophical Topics. Vol.36 No.1, 2009. Malcolm, Norman. Dreaming (Studies in Philosophical Psychology). Routledge & Kegan Paul, 1959. McDowell, John and Evans, Gareth. Truth and Meaning. Oxford: Clarendon Press, 1976. McDowell, John. Mind and World. New Ed, Harvard, 1996. 
Metaphysical theories Theories of language Philosophical methodology Psychological attitude Scientific method Analytic philosophy
0.77255
0.992034
0.766396
Logic and rationality
As the study of argument is of clear importance to the reasons that we hold things to be true, logic is of essential importance to rationality. Arguments may be logical if they are "conducted or assessed according to strict principles of validity", while they are rational according to the broader requirement that they are based on reason and knowledge. Logic and rationality have each been taken as fundamental concepts in philosophy. They are not the same thing. Philosophical rationalism in its most extreme form is the doctrine that knowledge can ultimately be founded on pure reason, while logicism is the doctrine that mathematical concepts, among others, are reducible to pure logic. Forms of reasoning Deductive reasoning concerns the logical consequence of given premises. On a narrow conception of logic, logic concerns just deductive reasoning, although such a narrow conception controversially excludes most of what is called informal logic from the discipline. Other forms of reasoning are sometimes also taken to be part of logic, such as inductive reasoning and abductive reasoning, which are forms of reasoning that are not purely deductive, but include material inference. Similarly, it is important to distinguish deductive validity and inductive validity (called "strength"). An inference is deductively valid if and only if there is no possible situation in which all the premises are true but the conclusion false. An inference is inductively strong if and only if its premises give some degree of probability to its conclusion. The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. Inductive validity, on the other hand, requires us to define a reliable generalization of some set of observations. 
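The truth-table characterization of deductive validity given above can be checked mechanically for simple propositional inferences. The following Python sketch (the `is_valid` helper and its example inferences are illustrative, not from the text) enumerates every truth assignment and searches for a counterexample, i.e. a situation where all premises are true but the conclusion is false:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Deductively valid iff no truth assignment makes every premise
    true while making the conclusion false. Premises and the conclusion
    are functions from an assignment dict to bool."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # counterexample found: premises true, conclusion false
    return True

# Modus ponens: from "p" and "p implies q", infer "q" -- valid.
mp_premises = [lambda a: a["p"], lambda a: (not a["p"]) or a["q"]]
print(is_valid(mp_premises, lambda a: a["q"], ["p", "q"]))   # True

# Affirming the consequent: from "q" and "p implies q", infer "p" -- invalid.
ac_premises = [lambda a: a["q"], lambda a: (not a["p"]) or a["q"]]
print(is_valid(ac_premises, lambda a: a["p"], ["p", "q"]))   # False
```

This brute-force approach works only for propositional logic with finitely many variables; inductive strength, by contrast, admits no such exhaustive test, which is the asymmetry the passage describes.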
The task of providing this definition may be approached in various ways, some less formal than others; some of these definitions may use logical association rule induction, while others may use mathematical models of probability such as decision trees. For the most part this discussion of logic deals only with deductive logic. Abductive reasoning is a form of inference which goes from an observation to a theory which accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as "inference to the best explanation". Critical thinking Critical thinking, also called critical analysis, is clear, rational thinking involving critique. Dialectic Dialectic is a discourse between two or more people holding different points of view about a subject but wishing to establish the truth through reasoned arguments. It has been the object of study since ancient times, but only recently has it been the subject of attempts at formalisation. Illogical thinking and irrational processes Illogical thinking processes are, as defined by researchers such as Aaron T. Beck, cognitive distortions that cause abnormal functioning. The state of depression often feeds off illogical thinking and results in victims being mired in self-defeating conclusions. Patients seeking psychological help may suffer from problems of over-generalization, becoming mired in general, negative conclusions on the basis of essentially insignificant life events. Cognitive behavioral therapy can assist individuals in recognizing their own habits of faulty logic and slanted interpretations of past experiences. On the other hand, depression in the sense of "Weltschmerz", in its non-aesthetically realistic and non-positivistic nature, is intrinsically logical and rational. 
Some philosophers assert that the question of the value of life has not been answered in a psychologically pleasing way without embracing the fallacy of circular reasoning. In the socio-political context, the ability to amalgamate disparate, conflicting interests and passions into an illogical synthesis has been labeled as a possible strength, albeit one with concurrent weaknesses, by literary publications such as Blackwood's Magazine. See also Alogia Dysrationalia References Bibliography Robert Hanna, 2009. Rationality and Logic. MIT Press. Logic Reasoning
0.788985
0.971329
0.766364
Modal verb
A modal verb is a type of verb that contextually indicates a modality such as a likelihood, ability, permission, request, capacity, suggestion, order, obligation, necessity, possibility or advice. Modal verbs generally accompany the base (infinitive) form of another verb having semantic content. In English, the modal verbs commonly used are can, could, may, might, shall, should, will, would, and ought. Function Modal verbs have a wide variety of communicative functions, but these functions can generally be related to a scale ranging from possibility ("may") to necessity ("must"), in terms of one of the following types of modality: epistemic modality, concerned with the theoretical possibility of propositions being true or not true (including likelihood and certainty) deontic modality, concerned with possibility and necessity in terms of freedom to act (including permission and duty) dynamic modality, which may be distinguished from deontic modality in that, with dynamic modality, the conditioning factors are internal – the subject's own ability or willingness to act The following sentences illustrate epistemic and deontic uses of the English modal verb must: epistemic: You must be starving. ("I think it is almost a certainty that you are starving.") deontic: You must leave now. ("You are required to leave now.") An ambiguous case is You must speak Spanish. The primary meaning would be the deontic meaning ("You are required to speak Spanish.") but this may be intended epistemically ("It is surely the case that you speak Spanish"). Epistemic modals can be analyzed as raising verbs, while deontic modals can be analyzed as control verbs. Epistemic usages of modals tend to develop from deontic usages. 
For example, the inferred certainty sense of English must developed after the strong obligation sense; the probabilistic sense of should developed after the weak obligation sense; and the possibility senses of may and can developed later than the permission or ability sense. Two typical sequences of evolution of modal meanings are: (1) internal mental ability → internal ability → root possibility (internal or external ability) → permission and epistemic possibility; and (2) obligation → probability. English The following table lists English modal verbs and various senses in which they are used:
{| class="wikitable"
|-
! Modal verb !! Epistemic sense !! Deontic sense !! Dynamic sense
|-
| can || That can indeed hinder. || You can, if you are allowed. || She can really sing.
|-
| could || That could happen soon. || – || He could swim when he was young.
|-
| may || That may be a problem. || May I stay? || –
|-
| might || The weather might improve. || Might I help you? || –
|-
| must || It must be hot outside. || Sam must go to school. || –
|-
| shall || This shall not be viewed kindly. || You shall not pass. || –
|-
| should || That should be surprising. || You should stop that. || –
|-
| will || She will try to lie. || – || –
|-
| would || Nothing would accomplish that. || – || –
|-
| ought || That ought to be correct. || You ought to be kind. || –
|}
In other languages Hawaiian Pidgin Hawaiian Pidgin is a creole language most of whose vocabulary, but not grammar, is drawn from English. As is generally the case with creole languages, it is an isolating language and modality is typically indicated by the use of invariant pre-verbal auxiliaries. The invariance of the modal auxiliaries to person, number, and tense makes them analogous to modal auxiliaries in English. However, as in most creoles, the main verbs are also invariant; the auxiliaries are distinguished by their use in combination with (followed by) a main verb. 
There are various preverbal modal auxiliaries: Kaen "can", laik "want to", gata "have got to", haeftu "have to", baeta "had better", sapostu "am/is/are supposed to". Unlike in Germanic languages, tense markers are used, albeit infrequently, before modals: Gon kaen kam "is going to be able to come". Waz "was" can indicate past tense before the future/volitional marker gon and the modal sapostu: Ai waz gon lift weits "I was gonna lift weights"; Ai waz sapostu go "I was supposed to go". Hawaiian Hawaiian, like the Polynesian languages generally, is an isolating language, so its verbal grammar exclusively relies on unconjugated verbs. Thus, as with creoles, there is no real distinction between modal auxiliaries and lexically modal main verbs that are followed by another main verb. Hawaiian has an imperative indicated by e + verb (or in the negative by mai + verb). Some examples of the treatment of modality are as follows: Pono conveys obligation/necessity as in He pono i nā kamali'i a pau e maka'ala, "It's right for children all to beware", "All children should/must beware"; ability is conveyed by hiki as in Ua hiki i keia kamali'i ke heluhelu "Has enabled to this child to read", "This child can read". French French, like some other Romance languages, does not have a grammatically distinct class of modal auxiliary verbs and expresses modality using lexical verbs followed by infinitives: for example, pouvoir "to be able" (Je peux aller, "I can go"), devoir "to have an obligation" (Je dois aller, "I must go"), and vouloir "to want" (Je veux aller "I want to go"). Italian Modal verbs in Italian form a distinct class (verbi modali or verbi servili). They can be easily recognized by the fact that they are the only group of verbs that does not have a fixed auxiliary verb for forming the perfect, but they can inherit it from the verb they accompany – Italian can have two different auxiliary verbs for forming the perfect, avere ("to have"), and essere ("to be"). 
There are in total four modal verbs in Italian: potere ("can"), volere ("want"), dovere ("must"), sapere ("to be able to"). Modal verbs in Italian are the only group of verbs that exhibit this particular behavior. When they do not accompany other verbs, they all use avere ("to have") as a helping verb for forming the perfect. For example, the helping verb for the perfect of potere ("can") is avere ("have"), as in ho potuto (lit. "I-have been-able", "I could"); nevertheless, when used together with a verb that has essere ("be") as its auxiliary, potere inherits the auxiliary of the second verb. For example: ho visitato il castello (lit. "I-have visited the castle") / ho potuto visitare il castello (lit. "I-have been-able to-visit the castle", "I could visit the castle"); but sono scappato (lit. "I-am escaped", "I have escaped") / sono potuto scappare (lit. "I-am been-able to-escape", "I could escape"). Note that, as in other Romance languages, there is no distinction between an infinitive and a bare infinitive in Italian; hence modal verbs are not the only group of verbs that accompanies an infinitive (where in English there would instead be the form with "to" – see for example Ho preferito scappare, "I have preferred to escape"). Thus, while in English a modal verb can be easily recognized by the sole presence of a bare infinitive, there is no easy way to distinguish the four traditional Italian modal verbs from other verbs, except for the fact that the former are the only verbs that do not have a fixed auxiliary verb for the perfect. For this reason some grammars also consider the verbs osare ("to dare to"), preferire ("to prefer to"), desiderare ("to desire to"), solere ("to use to") to be modal verbs, even though these always use avere as the auxiliary verb for the perfect. Mandarin Chinese Mandarin Chinese is an isolating language without inflections. 
As in English, modality can be indicated either lexically, with main verbs such as yào "want" followed by another main verb, or with auxiliary verbs. In Mandarin the auxiliary verbs have six properties that distinguish them from main verbs: They must co-occur with a verb (or an understood verb). They cannot be accompanied by aspect markers. They cannot be modified by intensifiers such as "very". They cannot be nominalized (used in phrases meaning, for example, "one who can") They cannot occur before the subject. They cannot take a direct object. The complete list of modal auxiliary verbs consists of three meaning "should", four meaning "be able to", two meaning "have permission to", one meaning "dare", one meaning "be willing to", four meaning "must" or "ought to", and one meaning "will" or "know how to". Spanish Spanish, like French, uses fully conjugated verbs followed by infinitives. For example, poder "to be able" (Puedo andar, "I can walk"), deber "to have an obligation" (Debo andar, "I must walk"), and querer "to want" (Quiero andar "I want to walk"). The correct use of andar in these examples would be reflexive. "Puedo andar" means "I can walk", "Puedo irme" means "I can leave" or "I can take myself off/away". The same applies to the other examples. See also English auxiliaries and contractions German modal particle Grammatical mood Modal logic Modal word References Modalverben Bibliography The Syntactic Evolution of Modal Verbs in the History of English Walter W. Skeat, The Concise Dictionary of English Etymology (1993), Wordsworth Editions Ltd. External links German Modal Verbs A grammar lesson covering the German modal verbs Modal Verbs Modal Verb Tutorial Wikiversity:Explication of modalities Verb Verb types Philosophy of language
0.767982
0.997865
0.766342
Instrumentalism
In philosophy of science and in epistemology, instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting natural phenomena. According to instrumentalists, a successful scientific theory reveals nothing known either true or false about nature's unobservable objects, properties or processes. Scientific theory is merely a tool whereby humans predict observations in a particular domain of nature by formulating laws, which state or summarize regularities, while theories themselves do not reveal supposedly hidden aspects of nature that somehow explain these laws. Instrumentalism is a perspective originally introduced by Pierre Duhem in 1906. Rejecting scientific realism's ambitions to uncover metaphysical truth about nature, instrumentalism is usually categorized as an antirealism, although its mere lack of commitment to scientific theory's realism can be termed nonrealism. Instrumentalism merely bypasses debate concerning whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, or is an excitation mode of a region of a field, or is something else altogether. Instrumentalism holds that theoretical terms need only be useful to predict the phenomena, the observed outcomes. There are multiple versions of instrumentalism. History British empiricism Newton's theory of motion, whereby any object instantly interacts with all other objects across the universe, motivated the founder of British empiricism, John Locke, to speculate that matter is capable of thought. The next leading British empiricist, George Berkeley, argued that an object's putative primary qualities as recognized by scientists, such as shape, extension, and impenetrability, are inconceivable without the putative secondary qualities of color, hardness, warmth, and so on. 
He also posed the question of how or why an object could properly be conceived to exist independently of any perception of it. Berkeley did not object to everyday talk about the reality of objects, but instead took issue with the talk of philosophers, who spoke as if they knew something beyond sensory impressions that ordinary folk did not. For Berkeley, a scientific theory does not state causes or explanations, but simply identifies perceived types of objects and traces their typical regularities. Berkeley thus anticipated the basis of what Auguste Comte in the 1830s called positivism, although Comtean positivism added other principles concerning the scope, method, and uses of science that Berkeley would have disavowed. Berkeley also noted the usefulness of a scientific theory's having terms that merely serve to aid calculations without referring to anything in particular, so long as they prove useful in practice. Berkeley thereby anticipated the insight that logical positivists—who originated in the late 1920s but who, by the 1950s, had softened into logical empiricists—would be compelled to accept: theoretical terms in science do not always translate into observational terms. The last great British empiricist, David Hume, posed a number of challenges to Francis Bacon's inductivism, which had been the prevailing, or at least the professed, view concerning the attainment of scientific knowledge. Regarding himself as having placed his own theory of knowledge on a par with Newton's theory of motion, Hume supposed that he had championed inductivism over scientific realism. Upon reading Hume's work, Immanuel Kant was "awakened from dogmatic slumber" and sought to neutralize any threat to science posed by Humean empiricism. Kant would go on to develop the first stark philosophy of physics.
Transcendental idealism
To save Newton's law of universal gravitation, Immanuel Kant reasoned that the mind is the precondition of experience, serving as the bridge from the noumena, which are how the world's things exist in themselves, to the phenomena, which are humans' recognized experiences. The mind itself thus contains the structure that determines space, time, and substance: the mind's own categorization of noumena renders space Euclidean, time constant, and objects' motions exhibiting the very determinism predicted by Newtonian physics. Kant apparently presumed that the human mind, rather than being a phenomenon that had itself evolved, had been predetermined and set forth upon the formation of humankind. In any event, the mind was also the veil of appearance that scientific methods could never lift. And yet the mind could ponder itself and discover such truths, although not on a theoretical level, but only by means of ethics. Kant's metaphysics, then, transcendental idealism, secured science from doubt—in that it was a case of "synthetic a priori" knowledge ("universal, necessary and informative")—and yet discarded hope of scientific realism.

Logical empiricism
Holding that the mind has virtually no power to know anything beyond direct sensory experience, Ernst Mach's early version of logical positivism (empirio-criticism) verged on idealism. It was even alleged to be a surreptitious solipsism, whereby all that exists is one's own mind. Mach's positivism also strongly asserted the ultimate unity of the empirical sciences. Mach's positivism asserted phenomenalism as the new basis of scientific theory: all scientific terms were to refer to either actual or potential sensations, thus eliminating hypotheses while permitting such seemingly disparate scientific theories as the physical and the psychological to share terms and forms.
Phenomenalism proved insuperably difficult to implement, yet it heavily influenced a new generation of philosophers of science, who emerged in the 1920s, terming themselves logical positivists and pursuing a program termed verificationism. Logical positivists aimed not to instruct or restrict scientists, but to enlighten and structure philosophical discourse in order to render a scientific philosophy that would verify philosophical statements as well as scientific theories, and align all human knowledge into a scientific worldview, freeing humankind from so many of its problems due to confused or unclear language. The verificationists expected a strict divide between theory and observation, mirrored by the divide between a theory's theoretical terms and its observable terms. Believing a theory's posited unobservables always to correspond to observations, the verificationists viewed a scientific theory's theoretical terms, such as electron, as metaphors for, or elliptical expressions of, observations, such as white streak in cloud chamber. They believed that scientific terms lacked meanings in themselves, but acquired meanings from the logical structure that was the entire theory, which in turn matched patterns of experience. So by translating theoretical terms into observational terms and then decoding the theory's mathematical and logical structure, one could check whether the statement indeed matched patterns of experience, and thereby verify the scientific theory as false or true. Such verification would be possible, as never before in science, since translating theoretical terms into observational terms would make the scientific theory purely empirical, not metaphysical. Yet the logical positivists ran into insuperable difficulties. Moritz Schlick debated with Otto Neurath over foundationalism—the traditional view traced to Descartes as founder of modern Western philosophy—whereupon only nonfoundationalism was found tenable. Science, then, could not find a secure foundation of indubitable truth.
And since science aims to reveal not private but public truths, verificationists switched from phenomenalism to physicalism, whereby scientific theory refers to objects observable in space and at least in principle already recognizable by physicists. Finding strict empiricism untenable, verificationism underwent a "liberalization of empiricism". Rudolf Carnap even suggested that empiricism's basis was pragmatic. Recognizing that verification—proving a theory false or true—was unattainable, they discarded that demand and focused on confirmation theory. Carnap sought simply to quantify a universal law's degree of confirmation—its probable truth—but, despite his great mathematical and logical skill, found that the equations could never be made to yield a degree of confirmation above zero. Carl Hempel identified the paradox of confirmation. By the 1950s, the verificationists had established philosophy of science as a subdiscipline within academia's philosophy departments. By 1962, verificationists had asked, and endeavored to answer, seemingly all the great questions about scientific theory. Their discoveries showed that the idealized scientific worldview was naively mistaken. By then Hempel, the leader of the legendary venture, raised the white flag that signaled verificationism's demise. Suddenly striking Western society, then, was Kuhn's landmark thesis, introduced by none other than Carnap, verificationism's greatest firebrand. Notably, the instrumentalism exhibited by scientists often does not even distinguish unobservable from observable entities.

Historical turn
From the 1930s until Thomas Kuhn's 1962 The Structure of Scientific Revolutions, there were roughly two prevailing views about the nature of science. The popular view was scientific realism, which usually involved a belief that science was progressively unveiling a truer view, and building a better understanding, of nature.
The professional approach was logical empiricism, wherein a scientific theory was held to be a logical structure whose terms all ultimately refer to some form of observation, while an objective process neutrally arbitrates theory choice, compelling scientists to decide which scientific theory is superior. Physicists knew better, but, busy developing the Standard Model, they were so steeped in quantum field theory that their talk, largely metaphorical, perhaps even metaphysical, was unintelligible to the public, while the steep mathematics warded off philosophers of physics. By the 1980s, physicists regarded fields, not particles, as the more fundamental, and no longer even hoped to discover what entities and processes might be truly fundamental to nature, perhaps not even the field. Kuhn had not claimed to have developed a novel thesis, but instead hoped to synthesize more usefully the recent developments in the philosophy and history of science.

Scientific realism
One scientific realist, Karl Popper, rejected all variants of positivism for their focus on sensations rather than realism, and developed critical rationalism instead. Popper alleged that instrumentalism reduces basic science to what is merely applied science. The British physicist David Deutsch, in his much later 1997 book The Fabric of Reality, followed Popper's critique of instrumentalism and argued that a scientific theory stripped of its explanatory content would be of strictly limited utility.

Constructive empiricism as a form of instrumentalism
Bas van Fraassen's (1980) project of constructive empiricism focuses on belief in the domain of the observable, and for this reason it is described as a form of instrumentalism.
In the philosophy of mind
In the philosophy of mind, instrumentalism is the view that propositional attitudes, such as beliefs, are not actually concepts on which we can base scientific investigations of mind and brain, but that acting as if other beings have beliefs is a successful strategy.

Relation to pragmatism
Instrumentalism is closely related to pragmatism, the position that practical consequences are an essential basis for determining meaning, truth, or value.

Notable proponents
John Dewey (American pragmatist)
Richard Rorty

See also
Instrumental and value rationality
Instrumental and intrinsic value
Natural kind
Fact–value distinction
Inductive reasoning

Sources
Torretti, Roberto, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), on Berkeley, pp. 98, 101–104.
Religious naturalism
Religious naturalism is a framework for religious orientation in which a naturalist worldview is used to respond to types of questions and aspirations that are parts of many religions. It has been described as "a perspective that finds religious meaning in the natural world." Religious naturalism can be considered intellectually, as a philosophy, and it can be embraced as a part of, or as the focus of, a personal religious orientation. Advocates have stated that it can be a significant option for people who are unable to embrace religious traditions in which supernatural presences or events play prominent roles, and that it provides "a deeply spiritual and inspiring religious vision" that is particularly relevant in a time of ecological crisis.

Overview

Naturalism
Naturalism is the view that the natural world is all that exists, and that its constituents, principles, and relationships are the sole reality. All that occurs is seen as being due to natural processes, with nothing supernatural involved. As Sean Carroll put it:
Naturalism comes down to three things:
There is only one world, the natural world.
The world evolves according to unbroken patterns, the laws of nature.
The only reliable way of learning about the world is by observing it.
Essentially, naturalism is the idea that the world revealed to us by scientific investigation is the one true world.
In religious naturalism, a naturalist view (as described above) defines the bounds of what can be believed as being possible or real. As this does not include a view of a personal god who may cause specific actions or miracles, or of a soul that may live on after death, religious naturalists draw from what can be learned about the workings of the natural world as they try to understand why things happen as they do, and for perspectives that can help to determine what is right or good (and why) and what we might aspire to and do.
Religious
When the term "religious" is used with respect to religious naturalism, it is understood in a general way—separate from the beliefs or practices of specific established religions, but including types of questions, aspirations, values, attitudes, feelings, and practices that are parts of many religious traditions. It can include:
interpretive, spiritual, and moral responses to questions about how things are and which things matter,
beliefs, practices, and ethics that orient people to "the big picture" (including our place in relation to a vast and ancient cosmos and to other people and forms of life), and
pursuit of "high-minded goals" (such as truth, wisdom, fulfillment, serenity, self-understanding, justice, and a meaningful life).
As Jerome Stone put it, "One way of getting at what we mean by religion is that it is our attempt to make sense of our lives and behave appropriately within the total scheme of things." When discussing distinctions between religious naturalists and non-religious (nonspiritual) naturalists, Loyal Rue said: "I regard a religious or spiritual person to be one who takes ultimate concerns to heart." He noted that, while "plain old" naturalists share similar views about what may occur in the world, those who describe themselves as religious naturalists take nature more "to heart", seeing it as vitally important and as something that they may respond to on a deeply personal level.

Shared principles
The main principle of religious naturalism is that a naturalist worldview can serve as a foundation for religious orientation.
Shared principles related to naturalism include views that:
the best way to understand natural processes is through methods of science, where scientists observe, test, and draw conclusions from what is seen, and non-scientists learn from what scientists have described;
for some topics, such as questions of purpose, meaning, morality, and emotional or spiritual responses, science may be of limited value, and perspectives from psychology, philosophy, literature, and related disciplines, plus art, myth, and the use of symbols, can contribute to understanding; and
due to limits in human knowledge, some things are currently not well understood, and some things may never be known.
Shared principles related to having nature as a focus of religious orientation include the view that nature is of ultimate importance—as the forces and ordered processes that enable our lives, and all of life, and that cause all things to be as they are. As such, nature can prompt religious responses, which can vary for each person and can include:
a sense of amazement or awe – at the wonder of our lives and our world, and the beauty, order, and power that can be seen in nature,
appreciation or gratitude – for the gift of life, and the opportunities for fulfillment that can come with this,
a sense of humility – in seeing ourselves as small and fleeting parts of a vast and ancient cosmos,
an attitude of acceptance (or appreciation) of mystery – where learning to become comfortable with the fact that some things cannot be known can contribute to peace of mind, and
reverence – in viewing the natural world as sacred (worthy of religious veneration).
Nature is not "worshipped", in the sense of reverent devotion to a deity. Instead, the natural world is respected as a primary source of truth, as it expresses and illustrates the varied principles of nature that enable life and may contribute to well-being.
With this, learning about nature, including human nature (via both academic and artistic resources and direct personal experiences), is seen as valuable, as it can provide an informed base of understanding of how things are and why things happen as they do, expand awareness and appreciation of the interdependence among all things, prompt an emotional or spiritual sense of connection with other people and forms of life in all of nature, and serve as a point of reference for considering and responding to moral and religious questions and life challenges.

Tenets
As in many religious orientations, religious naturalism includes a central story, with a description of how it is believed that our world and human beings came to be. In this story (based on what can be understood through methods of science), the cosmos began approximately 13.8 billion years ago with a massive expansion of energy, which has been described as "the Big Bang". Due to natural forces and processes, this expansion led over time to the emergence of light, nuclear particles, galaxies, stars, and planets. Life on Earth is thought to have emerged more than 3.5 billion years ago, beginning with molecules that combined in ways that enabled them to maintain themselves as stable entities and self-replicate. These evolved into single-cell organisms and then into varied multi-cell organisms that, over time, came to include millions of species, including mammals, primates, and humans, living in complex interdependent ecosystems. This story has been described as "The Epic of Evolution" and, for religious naturalists, it provides a foundation for considering how things are, which things matter, and how we should live. It is also seen as having the potential to unite all humans with a shared understanding of our world, including the conditions that are essential to all lives, as it is based on the best available scientific knowledge and is widely accepted among scientists and in many cultures worldwide.
From the perspective of religious naturalism, humans are seen as biological beings—composed of natural substances and products of evolution—who act in ways that are enabled and limited by natural processes. With this, all of what we think, feel, desire, decide, and do is due to natural processes, and, after death, each person ceases to be, with no potential for an eternal afterlife or reincarnation. Because we evolved from common ancient roots, many of the processes that enable our human lives (including aspects of body and mind) are shared by other types of living things. And, as we recognize what we share, we can feel a type of kinship or connection with all forms of life. Similarly, all forms of life are recognized as dependent on conditions on Earth (which provide the atmosphere, soil, temperature, water, and other requirements for life) and as interdependent with other forms of life (as sources of food, and in contributing to healthy ecosystems). In recognizing and appreciating Earth as a rare site in a vast cosmos where life exists, and as the environment that is essential for our lives and well-being, this planet and its life-enabling qualities are seen as being of ultimate concern, which can prompt or warrant a felt need to respect, preserve, and protect the varied ecosystems that sustain us. Values are seen as having accompanied the emergence of life—where, unlike rocks and other inanimate objects that perform no purposeful actions, living things have a type of will that prompts them to act in ways that enable them (or their group) to survive and reproduce. With this, life can be seen as a core, primary value, and things that can contribute to life and well-being are also valued. From a religious naturalist perspective, the ongoing reproduction and continuation of life (a "credo of continuation") has been described as a long-term goal or aspiration.
Morality, likewise, is seen as having emerged in social groups, as standards for behavior and the promotion of virtues that contribute to the well-being of groups. Evolutionary roots of this can be seen in groups of primates and some other types of mammals and other creatures, where empathy, helping others, a sense of fairness, and other elements of morality have often been observed. It includes the promotion of "virtues" (behaviors seen as pro-social or "good"). In the perspectives of religious naturalism, moral concern is seen as extending beyond the well-being of human groups to an "ecomorality" that also includes concern for the well-being of non-human species (in part because non-human life can contribute to the well-being of humans, and also out of respect for the value of all life). With the recognition that moral choices can be complex (choices that benefit one group may cause harm to others), an aspiration is that, beyond aspiring to virtues and adhering to social rules, religious naturalists can work to develop the mature judgment needed to consider the varied aspects of a challenge, judge options, and make choices that weigh impacts from several perspectives. Advocates of religious naturalism believe that, because its perspectives can help to show how things really are in the physical world and which things ultimately matter, can contribute to the development of religious attitudes (including humility, gratitude, compassion, and caring), and can enhance exposure to and appreciation of the many wonders of the natural world, these perspectives can contribute to personal wholeness, social cohesion, and awareness and activities that help to preserve global ecosystems.
Beyond supporting a credo of continuation that values varied forms of life and ecosystems, aspirations based on religious naturalism include living in harmony with nature, exploring and celebrating the mysteries of nature, and pursuing goals that enable the long-term viability of the biosphere. As suggested by Donald Crosby, since nature is regarded as a focus of religious commitment and concern, religious naturalists may "grant to nature the kind of reverence, awe, love and devotion we in the West have formerly reserved for God."

History
Core themes in religious naturalism have been present, in varied cultures, for centuries, but active discussion under this name is relatively recent. Such themes appear as early as Zeno (c. 334 – c. 262 BCE), a founder of Stoicism. Views consistent with religious naturalism can be seen in ancient Daoist texts (e.g., the Dao De Jing) and in some Hindu views (such as God as Nirguna Brahman, God without attributes). They may also be seen in Western images that do not focus on active, personal aspects of God, such as Thomas Aquinas' view of God as Pure Act, Augustine's God as Being Itself, and Paul Tillich's view of God as Ground of Being. As Wesley Wildman has described, views consistent with religious naturalism have long existed as part of the underside of major religious traditions, often quietly and sometimes in mystical strands or intellectual sub-traditions, held by practitioners who are not drawn to supernatural claims. The earliest uses of the term "religious naturalism" seem to have occurred in the 1800s. In 1846, the American Whig Review described "a seeming 'religious naturalism'". In 1869, American Unitarian Association literature adjudged: "Religious naturalism differs from this mainly in the fact that it extends the domain of nature farther outward into space and time. ...It never transcends nature".
Ludwig Feuerbach wrote that religious naturalism was "the acknowledgment of the Divine in Nature" and also "an element of the Christian religion", but by no means that religion's definitive "characteristic" or "tendency". In 1864, Pope Pius IX condemned religious naturalism in the first seven articles of the Syllabus of Errors. Mordecai Kaplan (1881–1983), founder of the Jewish Reconstructionist movement, was an early advocate of religious naturalism. He believed that a naturalistic approach to religion and ethics was possible in a desacralizing world, and he saw God as the sum of all natural processes. Other verified usages of the term came in 1940 from George Perrigo Conger and from Edgar S. Brightman. Shortly thereafter, H. H. Dubs wrote an article entitled "Religious Naturalism: An Evaluation", which begins "Religious naturalism is today one of the outstanding American philosophies of religion..." and discusses ideas developed by Henry Nelson Wieman in books that predate Dubs's article by 20 years. In 1991, Jerome A. Stone wrote The Minimalist Vision of Transcendence explicitly "to sketch a philosophy of religious naturalism". Use of the term was expanded in the 1990s by Loyal Rue, who was familiar with it from Brightman's book. Rue used the term in conversations with several people before 1994, and subsequent conversations between Rue and Ursula Goodenough, both of whom were active in the Institute on Religion in an Age of Science (IRAS), led to its use by Goodenough in her book The Sacred Depths of Nature and by Rue in Religion is Not About God and other writings. Since 1994, numerous authors have used the phrase or expressed similar thinking. Examples include Chet Raymo, Stuart Kauffman, and Karl E. Peters.
Mike Ignatowski states that "there were many religious naturalists in the first half of the 20th century and some even before that", but that "religious naturalism as a movement didn't come into its own until about 1990 [and] took a major leap forward in 1998 when Ursula Goodenough published The Sacred Depths of Nature, which is considered one of the founding texts of this movement." Donald Crosby's Living with Ambiguity, published in 2008, has as its first chapter "Religion of Nature as a Form of Religious Naturalism". Loyal Rue's Nature Is Enough, published in 2011, discusses "Religion Naturalized, Nature Sanctified" and "The Promise of Religious Naturalism". Religious Naturalism Today: The Rebirth of a Forgotten Alternative, a history by Jerome A. Stone (released Dec. 2008), presents this paradigm as a once-forgotten option in religious thinking that is making a rapid revival. It seeks to explore and encourage religious ways of responding to the world on a completely naturalistic basis, without a supreme being or ground of being. The book traces this history and analyzes some of the issues dividing religious naturalists. It covers the birth of religious naturalism, from George Santayana to Henry Nelson Wieman, and briefly explores religious naturalism in literature and art. Contested issues are discussed, including whether nature's power or its goodness is the focus of attention, and the appropriateness of using the term "God". The contributions of more than twenty living religious naturalists are presented. The last chapter ends the study by exploring what it is like, on the inside, to live as a religious naturalist. Chet Raymo writes that he had come to the same conclusion as Teilhard de Chardin: "Grace is everywhere", and that naturalistic emergence is in everything and far more magical than religion-based miracles.
He suggests that a future religion of humankind should be ecumenical and ecological, and should embrace the story provided by science as the "most reliable cosmology". Carol Wayne White is among a younger generation of scholars whose model of religious naturalism helps advance socially and ethically oriented models of practice. Using the best available insights from scientific studies, White conceives of the human as an emergent, interconnected life form amid spectacular biotic diversity, which has far-reaching ethical implications within the context of ecology, religion, and American life. Her religious naturalism contributes to an intellectual legacy that has attempted to overcome deficient conceptions of our myriad nature couched in problematic binary constructions. In doing so, her religious naturalism not only presents human beings as biotic forms emerging from evolutionary processes that share a deep homology with other sentient beings, but also emphasizes humans' valuing of such connection. In Black Lives and Sacred Humanity: Toward an African American Religious Naturalism (Fordham Press, 2016), White confronts both human–human forms of injustice and ecological forms of injustice that occur when we fail to recognize these basic truths.

Varieties
The literature related to religious naturalism includes many variations in conceptual framing. This reflects individual takes on various issues; to some extent various schools of thought, such as basic naturalism, religious humanism, pantheism, panentheism, and spiritual naturalism, that have had time on the conceptual stage; and to some extent differing ways of characterizing Nature. The current discussion often relates to the issue of whether belief in a God, or God-language and associated concepts, has any place in a framework that treats the physical universe as its essential frame of reference and the methods of science as the preeminent means for determining what Nature is.
There are at least three varieties of religious naturalism, and three similar but somewhat different ways to categorize them. They are:
An approach to naturalism that uses theological language but fundamentally treats God metaphorically.
An approach to naturalism that uses theological language either as a faith statement or as supported by philosophical arguments (or both), usually leaving open the question of whether that usage is metaphorical or refers to the ultimate answer that Nature can be.
Neo-theistic (process theology, progressive religions) – Gordon Kaufman, Karl E. Peters, Ralph Wendell Burhoe, Edmund Robinson
Non-theistic (agnostic, naturalistic concepts of god) – Robertson himself, Stanley Klein, Stuart Kauffman, Naturalistic Paganism
Atheistic (no God concept; some modern naturalism, Process Naturalism, C. Robert Mesle, non-militant atheism, antitheism) – Jerome A. Stone, Michael Cavanaugh, Donald A. Crosby, Ursula Goodenough, Daniel Dennett, and Carol Wayne White
A miscellany of individual perspectives – Philip Hefner
The first category has as many sub-groups as there are distinct definitions of god. Believers in a supernatural (transcendent) entity are by definition not religious naturalists; however, the matter of a naturalistic concept of God (immanence) is currently debated. Strong atheists are not considered religious naturalists in this differentiation. Some individuals call themselves religious naturalists but refuse to be categorized. The unique theories of religious naturalists Loyal Rue, Donald A. Crosby, Jerome A. Stone, and Ursula Goodenough are discussed by Michael Hogue in his 2010 book The Promise of Religious Naturalism.

God concepts
Those who conceive of God as the creative process within the universe—for example, Henry Nelson Wieman.
Those who think of God as the totality of the universe considered religiously—Bernard Loomer.
A third type of religious naturalism sees no need to use the concept or terminology of God—Stone himself and Ursula Goodenough. Stone emphasizes that some religious naturalists do not reject the concept of God, but that if they use the concept, it involves a radical alteration of the idea, as with Gordon Kaufman, who defines God as creativity. Ignatowski divides religious naturalism into only two types—theistic and non-theistic.

Notable proponents and critics

Proponents
Proponents of religious naturalism are seen from two perspectives. The first includes contemporary individuals who have discussed and supported religious naturalism per se. The other includes historic individuals who may not have used or been familiar with the term "religious naturalism", but who had views that are relevant to, and whose thoughts have contributed to, its development. Individuals who have openly discussed and supported religious naturalism include Chet Raymo, Loyal Rue, Donald A. Crosby, Jerome A. Stone, Michael Dowd, Ursula Goodenough, Terrence Deacon, Loren Eiseley, Philip Hefner, Ralph Wendell Burhoe, Mordecai Kaplan, Henry Nelson Wieman, George Santayana, Gordon D. Kaufman, Stuart Kauffman, Stanley A. Klein, C. Robert Mesle, Karl E. Peters, Varadaraja V. Raman, Ian Barbour, Robert S. Corrington, Wesley Wildman, and Carol Wayne White. Individuals who were precursors to religious naturalism, or who otherwise influenced its development, include Lao-Tzu, Albert Einstein, W.E.B. Du Bois, and Aldo Leopold.

Critics
Religious naturalism has been criticized from two perspectives. One is that of traditional Western religion, which disagrees with naturalist disbelief in a personal God. Another is that of naturalists who do not agree that a religious sense can or should be associated with naturalist views. Critics in the first group include supporters of traditional Jewish, Christian, and Islamic religions.
Critics in the second group include:
Richard Dawkins
John Haught
Prominent communities and leaders
Religious naturalists sometimes use the social practices of traditional religions, including communal gatherings and rituals, to foster a sense of community and to reinforce their participants' efforts to expand the scope of their understandings. Some other groups mainly communicate online. Some known examples of religious naturalist groupings and congregation leaders are:
Religious Naturalist Association
Spiritual Naturalist Society
Unitarian Universalist Religious Naturalists
Religious Naturalism Facebook Group
Universal Pantheist Society, founded 1975 – pantheism is a concept that intersects with religious naturalism
Congregation Beth Or, a Jewish congregation near Chicago led by Rabbi David Oler
Congregation of Beth Adam in Loveland, Ohio, led by Rabbi Robert Barr
Pastor Ian Lawton, minister at the Christ Community Church in Spring Lake, West Michigan, and the Center for Progressive Christianity
Religious naturalism is the focus of classes and conferences at some colleges and theology schools. Articles about religious naturalism have appeared frequently in journals, including Zygon, the American Journal of Theology and Philosophy, and the International Journal for Philosophy of Religion.
See also
List of new religious movements
References
Further reading
2015 – Donald A. Crosby – More Than Discourse: Symbolic Expressions of Naturalistic Faith, State University of New York Press
2015 – Nathan Martinez – Rise Like Lions: Language and The False Gods of Civilization
2008 – Donald A. Crosby – The Thou of Nature: Religious Naturalism and Reverence for Sentient Life, State University of New York Press
2011 – Loyal Rue – Nature Is Enough, State University of New York Press
2010 – Michael Hogue – The Promise of Religious Naturalism, Rowman & Littlefield Publishers, Inc., Sept. 16, 2010
2009 – Michael Ruse & Joseph Travis – Evolution: The First Four Billion Years, Belknap Press, 2009
2008 – Donald A. Crosby – Living with Ambiguity: Religious Naturalism and the Menace of Evil, State University of New York Press
2008 – Michael Dowd – Thank God for Evolution, Viking (June 2008)
2008 – Chet Raymo – When God Is Gone, Everything Is Holy: The Making of a Religious Naturalist, Sorin Books
2008 – Kenneth R. Miller – Only a Theory: Evolution and the Battle for America's Soul, Viking Adult, 2008
2008 – Eugenie C. Scott – Evolution vs. Creationism: An Introduction, Greenwood Press
2007 – Eric Chaisson – Epic of Evolution, Columbia University Press (March 2, 2007)
2006 – John Haught – Is Nature Enough?, Cambridge University Press (May 31, 2006)
2006 – Loyal Rue – Religion Is Not About God, Rutgers University Press, July 24, 2006
2004 – Gordon Kaufman – In the Beginning... Creativity, Augsburg Fortress Pub., 2004
2003 – James B. Miller – The Epic of Evolution: Science and Religion in Dialogue, Pearson/Prentice Hall, 2003
2002 – Donald A. Crosby – A Religion of Nature, State University of New York Press
2000 – Ursula Goodenough – Sacred Depths of Nature, Oxford University Press, USA; 1st edition (June 15, 2000)
2000 – John Stewart – Evolution's Arrow: The Direction of Evolution and the Future of Humanity, Chapman Press, 2000
1997 – Connie Barlow – Green Space Green Time: The Way of Science, Springer (September 1997)
1992 – Brian Swimme – The Universe Story: From the Primordial Flaring Forth to the Ecozoic Era, HarperCollins, 1992
Reading lists – Evolution Reading Resources, Books of the Epic of Evolution, Cosmic Evolution
External links
Religious Naturalist Association
Religious Naturalism Resources – Boston University
The Great Story – leading religious naturalist educational website
Naturalism.org
The New Cosmology
The Spiritual Naturalist Society
Philosophical analysis
Philosophical analysis is any of various techniques, typically used by philosophers in the analytic tradition, in order to "break down" (i.e. analyze) philosophical issues. Arguably the most prominent of these techniques is the analysis of concepts, known as conceptual analysis. Method of analysis While analysis is characteristic of the analytic tradition in philosophy, what is to be analyzed (the analysandum) often varies. In their papers, philosophers may focus on different areas. One might analyze linguistic phenomena such as sentences, or psychological phenomena such as sense data. However, arguably the most prominent analyses are written on concepts or propositions and are known as conceptual analysis (Foley 1996). A.C. Ewing distinguished between two forms of philosophical analysis. The first is "what the persons who make a certain statement usually intend to assert" and the second "the qualities, relations and species of continuants mentioned in the statement". As an illustration he takes the statement "I see a tree": this statement could be analysed in terms of what the everyday person intends when they say it, or it could be analysed metaphysically, by asserting representationalism. Conceptual analysis consists primarily in breaking down or analyzing concepts into their constituent parts in order to gain knowledge or a better understanding of a particular philosophical issue in which the concept is involved (Beaney 2003). For example, the problem of free will in philosophy involves various key concepts, including the concepts of freedom, moral responsibility, determinism, ability, etc. The method of conceptual analysis tends to approach such a problem by breaking down the key concepts pertaining to the problem and seeing how they interact. 
Thus, in the long-standing debate on whether free will is compatible with the doctrine of determinism, several philosophers have proposed analyses of the relevant concepts to argue for either compatibilism or incompatibilism. A famous example of conceptual analysis at its best is given by Bertrand Russell in his theory of descriptions. Russell attempted to analyze propositions that involved definite descriptions (such as "The tallest spy"), which pick out a unique individual, and indefinite descriptions (such as "a spy"), which pick out a set of individuals. Take Russell's analysis of definite descriptions as an example. Superficially, definite descriptions have the standard subject-predicate form of a proposition. For example, "The present king of France is bald" appears to be predicating "baldness" of the subject "the present king of France". However, Russell noted that this is problematic, because there is no present king of France (France is no longer a monarchy). Normally, to decide whether a proposition of the standard subject-predicate form is true or false, one checks whether the subject is in the extension of the predicate. The proposition is then true if and only if the subject is in the extension of the predicate. The problem is that there is no present king of France, so the present king of France cannot be found on the list of bald things or non-bald things. So, it would appear that the proposition expressed by "The present king of France is bald" is neither true nor false. However, analyzing the relevant concepts and propositions, Russell proposed that what definite descriptions really express are not propositions of the subject-predicate form, but rather they express existentially quantified propositions. Thus, "The present king of France is bald" is analyzed, according to Russell's theory of descriptions, as "There exists an individual who is currently the king of France, there is only one such individual, and that individual is bald." 
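Russell's quantified paraphrase can be checked mechanically over a finite domain. The following is only an illustrative sketch (the domain, names, and predicates are invented here, not drawn from Russell's text): the sentence comes out as a plain truth-functional condition, ∃x (King(x) ∧ ∀y (King(y) → y = x) ∧ Bald(x)).

```python
# Toy evaluation of a Russellian definite description over a finite domain:
# "The F is G" is true iff exactly one thing is F, and that thing is G.

def definite_description(domain, is_king, is_bald):
    """True iff exactly one individual satisfies is_king and it is bald."""
    kings = [x for x in domain if is_king(x)]
    return len(kings) == 1 and is_bald(kings[0])

# An invented present-day domain with no king of France: the sentence is
# simply false, rather than "neither true nor false".
people = ["Macron", "Charles III"]
print(definite_description(people, lambda x: False, lambda x: False))  # False

# By contrast, a domain containing exactly one (bald) king makes it true:
print(definite_description(["Louis"], lambda x: x == "Louis", lambda x: True))  # True
```

Note how the uniqueness clause does real work: with two kings in the domain, the description would fail to denote and the sentence would again be false.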
Now one can determine the truth value of the proposition. Indeed, it is false, because it is not the case that there exists a unique individual who is currently the king of France and is bald—since there is no present king of France (Bertolet 1999). Criticism While the method of analysis is characteristic of contemporary analytic philosophy, its status continues to be a source of great controversy even among analytic philosophers. Several current criticisms of the analytic method derive from W.V. Quine's famous rejection of the analytic–synthetic distinction. While Quine's critique is well-known, it is highly controversial. Further, the analytic method seems to rely on some sort of definitional structure of concepts, so that one can give necessary and sufficient conditions for the application of the concept. For example, the concept "bachelor" is often analyzed as having the concepts "unmarried" and "male" as its components. Thus, the definition or analysis of "bachelor" is thought to be an unmarried male. But one might worry that these so-called necessary and sufficient conditions do not apply in every case. Wittgenstein, for instance, argues that language (e.g., the word 'bachelor') is used for various purposes and in an indefinite number of ways. Wittgenstein's famous thesis states that meaning is determined by use. This means that, in each case, the meaning of 'bachelor' is determined by its use in a context. So if it can be shown that the word means different things across different contexts of use, then cases where its meaning cannot be essentially defined as 'unmarried man' seem to constitute counterexamples to this method of analysis. This is just one example of a critique of the analytic method derived from a critique of definitions. There are several other such critiques (Margolis & Laurence 2006). This criticism is often said to have originated primarily with Wittgenstein's Philosophical Investigations. 
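The definitional picture that this critique targets can be put in miniature. This is a toy sketch only (the feature sets and the BACHELOR definition are invented for illustration, not taken from any particular philosopher): a concept is modeled as a set of individually necessary, jointly sufficient conditions, and falling under the concept is set inclusion.

```python
# Naive model of conceptual analysis: a concept is a conjunction of
# necessary and sufficient conditions. "Bachelor" = "unmarried" + "male".

BACHELOR = {"unmarried", "male"}  # the classic proposed analysis

def falls_under(features, concept):
    """A thing falls under the concept iff it has every component condition."""
    return concept <= features  # set inclusion: all conditions are met

print(falls_under({"unmarried", "male", "adult"}, BACHELOR))  # True
print(falls_under({"unmarried", "female"}, BACHELOR))         # False

# The Wittgensteinian worry: the model mechanically counts any unmarried
# male (the Pope, say) as a bachelor, while ordinary usage hesitates,
# suggesting that use in context, not a fixed definition, settles meaning.
print(falls_under({"unmarried", "male", "pope"}, BACHELOR))   # True
```

The point of the sketch is that such a model has no room for context-sensitivity: either the conditions are met or they are not, which is exactly what the critics find implausible.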
A third critique of the method of analysis derives primarily from psychological critiques of intuition. A key part of the analytic method involves analyzing concepts via "intuition tests". Philosophers tend to motivate various conceptual analyses by appeal to their intuitions about thought experiments. (See DePaul and Ramsey (1998) for a collection of current essays on the controversy over analysis as it relates to intuition and reflective equilibrium.) In short, some philosophers feel strongly that the analytic method (especially conceptual analysis) is essential to and defines philosophy—e.g. Jackson (1998), Chalmers (1996), and Bealer (1998). Yet, some philosophers argue that the method of analysis is problematic—e.g. Stich (1998) and Ramsey (1998). Some, however, take the middle ground and argue that while analysis is largely a fruitful method of inquiry, philosophers should not limit themselves to only using the method of analysis. See also Analytic philosophy Definitions of philosophy Thesis, antithesis, synthesis Notes References Bealer, George. (1998). "Intuition and the Autonomy of Philosophy". In M. DePaul & W. Ramsey (eds.) (1998), pp. 201–239. Beaney, Michael. (2003). "Analysis". The Stanford Encyclopedia of Philosophy (link). Bertolet, Rod. (1999). "Theory of Descriptions". Entry in The Cambridge Dictionary of Philosophy, second edition. New York: Cambridge University Press. Chalmers, David. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press. DePaul, M. & Ramsey, W. (eds.). (1998). Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry. Lanham, MD: Rowman & Littlefield. Foley, Richard. (1999). "Analysis". Entry in The Cambridge Dictionary of Philosophy, second edition. New York: Cambridge University Press. Jackson, Frank. (1998). From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford: Oxford University Press. Margolis, E. & Laurence, S. (2006). "Concepts". 
The Stanford Encyclopedia of Philosophy (link). Ramsey, William. (1998). "Prototypes and Conceptual Analysis". In M. DePaul & W. Ramsey (eds.) (1998), pp. 161–177. Stich, Stephen. (1998). "Reflective Equilibrium, Analytic Epistemology, and the Problem of Cognitive Diversity". In DePaul and Ramsey (eds.) (1998), pp. 95–112. Wittgenstein, Ludwig (1953). Philosophical Investigations. External links "Concepts" - an article by Margolis & Laurence in the Stanford Encyclopedia of Philosophy (section 5 is a good, but short, presentation of the current issues surrounding conceptual analysis in philosophy). "Analytic Philosophy" - an article by Aaron Preston in the Internet Encyclopedia of Philosophy. "Water's water everywhere" by Jerry Fodor - a review of C. Hughes's book Kripke: Names, Necessity and Identity at the London Review of Books (Fodor goes into several issues regarding the philosophical method of analysis).
Metonymy
Metonymy is a figure of speech in which a concept is referred to by the name of something closely associated with that thing or concept. Etymology The words metonymy and metonym come from Ancient Greek metōnymía, 'a change of name', from metá, 'after, post, beyond', and -ōnymía, a suffix that names figures of speech, from ónyma or ónoma, 'name'. Background Metonymy and related figures of speech are common in everyday speech and writing. Synecdoche and metalepsis are considered specific types of metonymy. Polysemy, the capacity for a word or phrase to have multiple meanings, sometimes results from relations of metonymy. Both metonymy and metaphor involve the substitution of one term for another. In metaphor, this substitution is based on some specific analogy between two things, whereas in metonymy the substitution is based on some understood association or contiguity. American literary theorist Kenneth Burke considers metonymy as one of four "master tropes": metaphor, metonymy, synecdoche, and irony. He discusses them in particular ways in his book A Grammar of Motives. Whereas Roman Jakobson argued that the fundamental dichotomy in trope was between metaphor and metonymy, Burke argues that the fundamental dichotomy is between irony and synecdoche, which he also describes as the dichotomy between dialectic and representation, or again between reduction and perspective. In addition to its use in everyday speech, metonymy is a figure of speech in some poetry and in much rhetoric. Greek and Latin scholars of rhetoric made significant contributions to the study of metonymy. Meaning relationships Metonymy takes many different forms. Synecdoche uses a part to refer to the whole, or the whole to refer to the part. Metalepsis uses a familiar word or a phrase in a new context. For example, "lead foot" may describe a fast driver; lead is proverbially heavy, and a foot exerting more pressure on the accelerator causes a vehicle to go faster (in this context unduly so). The figure of speech is a "metonymy of a metonymy". 
Many cases of polysemy originate as metonyms: for example, "chicken" means the meat as well as the animal; "crown" for the object, as well as the institution. Versus metaphor Metonymy works by the contiguity (association) between two concepts, whereas the term "metaphor" is based upon their analogous similarity. When people use metonymy, they do not typically wish to transfer qualities from one referent to another as they do with metaphor. There is nothing press-like about reporters or crown-like about a monarch, but "the press" and "the crown" are both common metonyms. Some uses of figurative language may be understood as both metonymy and metaphor; for example, the relationship between "a crown" and a "king" could be interpreted metaphorically (i.e., the king, like his gold crown, could be seemingly stiff yet ultimately malleable, over-ornate, and consistently immobile). In the phrase "lands belonging to the crown", the word "crown" is a metonymy. The reason is that monarchs by and large indeed wear a crown, physically. In other words, there is a pre-existent link between "crown" and "monarchy". On the other hand, when Ghil'ad Zuckermann argues that the Israeli language is a "phoenicuckoo cross with some magpie characteristics", he is using metaphors. There is no physical link between a language and a bird. The reason the metaphors "phoenix" and "cuckoo" are used is that on the one hand hybridic "Israeli" is based on Hebrew, which, like a phoenix, rises from the ashes; and on the other hand, hybridic "Israeli" is based on Yiddish, which like a cuckoo, lays its egg in the nest of another bird, tricking it to believe that it is its own egg. Furthermore, the metaphor "magpie" is employed because, according to Zuckermann, hybridic "Israeli" displays the characteristics of a magpie, "stealing" from languages such as Arabic and English. Two examples using the term "fishing" help clarify the distinction. 
The phrase "to fish pearls" uses metonymy, drawing from "fishing" the idea of taking things from the ocean. What is carried across from "fishing fish" to "fishing pearls" is the domain of metonymy. In contrast, the metaphorical phrase "fishing for information" transfers the concept of fishing into a new domain. If someone is "fishing" for information, we do not imagine that the person is anywhere near the ocean; rather, we transpose elements of the action of fishing (waiting, hoping to catch something that cannot be seen, probing, and most importantly, trying) into a new domain (a conversation). Thus, metaphors work by presenting a target set of meanings and using them to suggest a similarity between items, actions, or events in two domains, whereas metonymy calls up or references a specific domain (here, removing items from the sea). Sometimes, metaphor and metonymy may both be at work in the same figure of speech, or one could interpret a phrase metaphorically or metonymically. For example, the phrase "lend me your ear" could be analyzed in a number of ways. One could imagine the following interpretations: Analyze "ear" metonymically first – "ear" means "attention" (because people use ears to pay attention to each other's speech). Now, when we hear the phrase "Talk to him; you have his ear", it symbolizes he will listen to you or that he will pay attention to you. Another phrase "lending an ear (attention)", we stretch the base meaning of "lend" (to let someone borrow an object) to include the "lending" of non-material things (attention), but, beyond this slight extension of the verb, no metaphor is at work. Imagine the whole phrase literally – imagine that the speaker literally borrows the listener's ear as a physical object (and the person's head with it). Then the speaker has temporary possession of the listener's ear, so the listener has granted the speaker temporary control over what the listener hears. 
The phrase "lend me your ear" is interpreted to metaphorically mean that the speaker wants the listener to grant the speaker temporary control over what the listener hears. First, analyze the verb phrase "lend me your ear" metaphorically to mean "turn your ear in my direction", since it is known that literally lending a body part is nonsensical. Then, analyze the motion of ears metonymically – we associate "turning ears" with "paying attention", which is what the speaker wants the listeners to do. It is difficult to say which analysis above most closely represents the way a listener interprets the expression, and it is possible that different listeners analyze the phrase in different ways, or even in different ways at different times. Regardless, all three analyses yield the same interpretation. Thus, metaphor and metonymy, though different in their mechanism, work together seamlessly. Examples Here are some broad kinds of relationships where metonymy is frequently used: Containment: When one thing contains another, it can frequently be used metonymically, as when "dish" is used to refer not to a plate but to the food it contains, or as when the name of a building is used to refer to the entity it contains, as when "the White House" or "the Pentagon" are used to refer to the Administration of the United States, or the U.S. Department of Defense, respectively. A physical item, place, or body part used to refer to a related concept, such as "the bench" for the judicial profession, "stomach" or "belly" for appetite or hunger, "mouth" for speech, being "in diapers" for infancy, "palate" for taste, "the altar" or "the aisle" for marriage, "hand" for someone's responsibility for something ("he had a hand in it"), "head" or "brain" for mind or intelligence, or "nose" for concern about someone else's affairs (as in "keep your nose out of my business"). A reference to Timbuktu, as in "from here to Timbuktu," usually means a place or idea is too far away or mysterious. 
Metonymy of objects or body parts for concepts is common in dreams. Tools/instruments: Often a tool is used to signify the job it does or the person who does the job, as in the phrase "his Rolodex is long and valuable" (referring to the Rolodex instrument, which keeps contact business cards, meaning he has a lot of contacts and knows many people). Also "the press" (referring to the printing press), or as in the proverb, "The pen is mightier than the sword." Product for process: This is a type of metonymy where the product of the activity stands for the activity itself. For example, in "The book is moving right along," the book refers to the process of writing or publishing. Punctuation marks often stand metonymically for a meaning expressed by the punctuation mark. For example, "He's a big question mark to me" indicates that something is unknown. In the same way, 'period' can be used to emphasise that a point is concluded or not to be challenged. Synecdoche: A part of something is often used for the whole, as when people refer to "head" of cattle or assistants are referred to as "hands." An example of this is the Canadian dollar, referred to as the loonie for the image of a bird on the one-dollar coin. United States one hundred-dollar bills are often referred to as "Bens", "Benjamins" or "Franklins" because they bear a portrait of Benjamin Franklin. Also, the whole of something is used for a part, as when people refer to a municipal employee as "the city" or police officers as "the law". Toponyms: A country's capital city or some location within the city is frequently used as a metonym for the country's government, such as Washington, D.C., in the United States; Ottawa in Canada; Rome in Italy; Paris in France; Tokyo in Japan; New Delhi in India; London in the United Kingdom; Moscow in Russia etc. Perhaps the oldest such example is "Pharaoh" which originally referred to the residence of the King of Egypt but by the New Kingdom had come to refer to the king himself. 
Similarly, other important places, such as Wall Street, K Street, Madison Avenue, Silicon Valley, Hollywood, Vegas, and Detroit are commonly used to refer to the industries that are located there (finance, lobbying, advertising, high technology, entertainment, gambling, and motor vehicles, respectively). Such usage may persist even when the industries in question have moved elsewhere, for example, Fleet Street continues to be used as a metonymy for the British national press, though many national publications are no longer headquartered on the street of that name. Places and institutions A place is often used as a metonym for a government or other official institutions, for example, Brussels for the institutions of the European Union, The Hague for the International Court of Justice or International Criminal Court, Nairobi for the government of Kenya, the Kremlin for the Russian presidency, Chausseestraße and Pullach for the German Federal Intelligence Service, Number 10, Downing Street or Whitehall for the prime minister of the United Kingdom and the UK civil service, the White House and Capitol Hill for the executive and legislative branches, respectively, of the United States federal government, Foggy Bottom for the U.S. State Department, Langley for the Central Intelligence Agency, Quantico for either the Federal Bureau of Investigation academy and forensic laboratory or the Marine Corps base of the same name, Malacañang for the President of the Philippines, their advisers and Office of the President, "La Moncloa" for the Prime Minister of Spain, and Vatican for the pope, Holy See and Roman Curia. Other names of addresses or locations can become convenient shorthand names in international diplomacy, allowing commentators and insiders to refer impersonally and succinctly to foreign ministries with impressive and imposing names as (for example) the Quai d'Orsay, the Wilhelmstrasse, the Kremlin, and the Porte. A place (or places) can represent an entire industry. 
For instance: Wall Street, used metonymically, can stand for the entire U.S. financial and corporate banking sector; K Street for Washington, D.C.'s lobbying industry or lobbying in the United States in general; Hollywood for the U.S. film industry, and the people associated with it; Broadway for the American commercial theatrical industry; Madison Avenue for the American advertising industry; and Silicon Valley for the American technology industry. The High Street (of which there are over 5,000 in Britain) is a term commonly used to refer to the entire British retail sector. Common nouns and phrases can also be metonyms: "red tape" can stand for bureaucracy, whether or not that bureaucracy uses actual red tape to bind documents. In Commonwealth realms, The Crown is a metonym for the state in all its aspects. In recent Israeli usage, the term "Balfour" came to refer to the Israeli Prime Minister's residence, located on Balfour Street in Jerusalem, to all the streets around it where demonstrations frequently take place, and also to the Prime Minister and his family who live in the residence. Rhetoric in ancient history Western culture studied poetic language and deemed it to be rhetoric. A. Al-Sharafi supports this concept in his book Textual Metonymy, "Greek rhetorical scholarship at one time became entirely poetic scholarship." Philosophers and rhetoricians thought that metaphors were the primary figurative language used in rhetoric. Metaphors served as a better means to attract the audience's attention because the audience had to read between the lines in order to get an understanding of what the speaker was trying to say. Others did not think of metonymy as a good rhetorical method because metonymy did not involve symbolism. Al-Sharafi explains, "This is why they undermined practical and purely referential discourse because it was seen as banal and not containing anything new, strange or shocking." Greek scholars contributed to the definition of metonymy. 
For example, Isocrates worked to define the difference between poetic language and non-poetic language by saying that "Prose writers are handicapped in this regard because their discourse has to conform to the forms and terms used by the citizens and to those arguments which are precise and relevant to the subject-matter." In other words, Isocrates proposes here that metaphor is a distinctive feature of poetic language because it conveys the experience of the world afresh and provides a kind of defamiliarisation in the way the citizens perceive the world. Democritus described metonymy by saying, "Metonymy, that is the fact that words and meaning change." Aristotle discussed different definitions of metaphor, regarding one type as what we know to be metonymy today. Latin scholars also had an influence on metonymy. The treatise Rhetorica ad Herennium defines metonymy as "the figure which draws from an object closely akin or associated an expression suggesting the object meant, but not called by its own name." The author describes the process of metonymy to us, saying that we first figure out what a word means. We then figure out that word's relationship with other words. We understand and then call the word by a name that it is associated with. "Perceived as such then metonymy will be a figure of speech in which there is a process of abstracting a relation of proximity between two words to the extent that one will be used in place of another." Cicero viewed metonymy as more of a stylish rhetorical method and described it as being based on words, but motivated by style. Jakobson, structuralism and realism Metonymy became important in French structuralism through the work of Roman Jakobson. In his 1956 essay "The Metaphoric and Metonymic Poles", Jakobson relates metonymy to the linguistic practice of [syntagmatic] combination and to the literary practice of realism. 
He explains: The primacy of the metaphoric process in the literary schools of Romanticism and symbolism has been repeatedly acknowledged, but it is still insufficiently realized that it is the predominance of metonymy which underlies and actually predetermines the so-called 'realistic' trend, which belongs to an intermediary stage between the decline of Romanticism and the rise of symbolism and is opposed to both. Following the path of contiguous relationships, the realistic author metonymically digresses from the plot to the atmosphere and from the characters to the setting in space and time. He is fond of synecdochic details. In the scene of Anna Karenina's suicide Tolstoy's artistic attention is focused on the heroine's handbag; and in War and Peace the synecdoches "hair on the upper lip" or "bare shoulders" are used by the same writer to stand for the female characters to whom these features belong. Jakobson's theories were important for Claude Lévi-Strauss, Roland Barthes, Jacques Lacan, and others. Dreams can use metonyms. Art Metonyms can also be wordless. For example, Roman Jakobson argued that cubist art relied heavily on nonlinguistic metonyms, while surrealist art relied more on metaphors. Lakoff and Turner argued that all words are metonyms: "Words stand for the concepts they express." Some artists have used actual words as metonyms in their paintings. For example, Miró's 1925 painting "Photo: This is the Color of My Dreams" has the word "photo" to represent the image of his dreams. This painting comes from a series of paintings called peintures-poésies (paintings-poems) which reflect Miró's interest in dreams and the subconscious and the relationship of words, images, and thoughts. 
Picasso, in his 1911 painting "Pipe Rack and Still Life on Table", inserts the word "Ocean" rather than painting an ocean. These paintings by Miró and Picasso are, in a sense, the reverse of a rebus: the word stands for the picture, instead of the picture standing for the word.
See also
-onym
Antonomasia
Deferred reference
Eggcorn
Eponym
Enthymeme
Euphemism by comparison
Generic trademark
Kenning
List of metonyms
Meronymy
Newspeak
Pars pro toto
Simile
Slang
Sobriquet
Social stereotype
Synecdoche
Totum pro parte
References
Citations
Sources
Further reading
Pragmatic theory of truth
A pragmatic theory of truth is a theory of truth within the philosophies of pragmatism and pragmaticism. Pragmatic theories of truth were first posited by Charles Sanders Peirce, William James, and John Dewey. The common features of these theories are a reliance on the pragmatic maxim as a means of clarifying the meanings of difficult concepts such as truth; and an emphasis on the fact that belief, certainty, knowledge, or truth is the result of an inquiry. Background Pragmatic theories of truth developed from earlier ideas in ancient philosophy and among the Scholastics. Pragmatic ideas about truth are often confused with the quite distinct notions of "logic and inquiry", "judging what is true", and "truth predicates". Logic and inquiry In one classical formulation, truth is defined as the good of logic, where logic is a normative science, that is, an inquiry into a good or a value that seeks knowledge of it and the means to achieve it. In this view, truth cannot be discussed to much effect outside the context of inquiry, knowledge, and logic, all very broadly considered. Most inquiries into the character of truth begin with a notion of an informative, meaningful, or significant element, the truth of whose information, meaning, or significance may be put into question and needs to be evaluated. Depending on the context, this element might be called an artefact, expression, image, impression, lyric, mark, performance, picture, sentence, sign, string, symbol, text, thought, token, utterance, word, work, and so on. Whatever the case, one has the task of judging whether the bearers of information, meaning, or significance are indeed truth-bearers. This judgment is typically expressed in the form of a specific truth predicate, whose positive application to a sign, or so on, asserts that the sign is true. Truth predicates Theories of truth may be described according to several dimensions of description that affect the character of the predicate "true". 
The truth predicates that are used in different theories may be classified by the number of things that have to be mentioned in order to assess the truth of a sign, counting the sign itself as the first thing. In formal logic, this number is called the arity of the predicate. The kinds of truth predicates may then be subdivided according to any number of more specific characters that various theorists recognize as important. A monadic truth predicate is one that applies to its main subject, typically a concrete representation or its abstract content, independently of reference to anything else. In this case one can say that a truth-bearer is true in and of itself. A dyadic truth predicate is one that applies to its main subject only in reference to something else, a second subject. Most commonly, the auxiliary subject is either an object, an interpreter, or a language to which the representation bears some relation. A triadic truth predicate is one that applies to its main subject only in reference to a second and a third subject. For example, in a pragmatic theory of truth, one has to specify both the object of the sign, and either its interpreter or another sign called the interpretant, before one can say that the sign is true of its object to its interpreting agent or sign. Several qualifications must be kept in mind with respect to any such radically simple scheme of classification, as real practice seldom presents any pure types, and there are settings in which it is useful to speak of a theory of truth that is "almost" k-adic, or that "would be" k-adic if certain details could be abstracted away and neglected in a particular context of discussion. That said, given the generic division of truth predicates according to their arity, further species can be differentiated within each genus according to a number of more refined features.
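The classification by arity can be written schematically as follows. The letters are illustrative placeholders, not notation drawn from any of the theorists discussed:

```latex
% Monadic: truth as a property of the sign s alone
T(s)
% Dyadic: truth as a relation between a sign s and a second subject o
% (an object, interpreter, or language), as in correspondence theories
T(s, o)
% Triadic: truth as a relation among a sign s, its object o, and an
% interpreter or interpretant i, as in pragmatic theories
T(s, o, i)
```

On this scheme, a correspondence theory typically works with the dyadic form, while a pragmatic theory requires the triadic form.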
The truth predicate of interest in a typical correspondence theory of truth tells of a relation between representations and objective states of affairs, and is therefore expressed, for the most part, by a dyadic predicate. In general terms, one says that a representation is true of an objective situation, more briefly, that a sign is true of an object. The nature of the correspondence may vary from theory to theory in this family. The correspondence can be fairly arbitrary or it can take on the character of an analogy, an icon, or a morphism, whereby a representation is rendered true of its object by the existence of corresponding elements and a similar structure.

Peirce

Very little in Peirce's thought can be understood in its proper light without understanding that he thinks all thoughts are signs, and thus, according to his theory of thought, no thought is understandable outside the context of a sign relation. Sign relations taken collectively are the subject matter of a theory of signs. So Peirce's semiotic, his theory of sign relations, is key to understanding his entire philosophy of pragmatic thinking and thought. In his contribution to the article "Truth and Falsity and Error" for Baldwin's Dictionary of Philosophy and Psychology (1901), Peirce defines truth in the following way: Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth. (Peirce 1901, see Collected Papers (CP) 5.565). This statement emphasizes Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of truth.
Although Peirce occasionally uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he follows long tradition in relegating to a lower status than real definitions. That truth is the correspondence of a representation with its object is, as Kant says, merely the nominal definition of it. Truth belongs exclusively to propositions. A proposition has a subject (or set of subjects) and a predicate. The subject is a sign; the predicate is a sign; and the proposition is a sign that the predicate is a sign of that of which the subject is a sign. If it be so, it is true. But what does this correspondence or reference of the sign, to its object, consist in? (Peirce 1906, CP 5.553). Here Peirce makes a statement that is decisive for understanding the relationship between his pragmatic definition of truth and any theory of truth that leaves it solely and simply a matter of representations corresponding with their objects. Peirce, like Kant before him, recognizes Aristotle's distinction between a nominal definition, a definition in name only, and a real definition, one that states the function of the concept, the reason for conceiving it, and so indicates the essence, the underlying substance of its object. This tells us the sense in which Peirce entertained a correspondence theory of truth, namely, a purely nominal sense. To get beneath the superficiality of the nominal definition it is necessary to analyze the notion of correspondence in greater depth. In preparing for this task, Peirce makes use of an allegorical story, omitted here, the moral of which is that there is no use seeking a conception of truth that we cannot conceive ourselves being able to capture in a humanly conceivable concept. 
So we might as well proceed on the assumption that we have a real hope of comprehending the answer, of being able to "handle the truth" when the time comes. Bearing that in mind, the problem of defining truth reduces to the following form: Now thought is of the nature of a sign. In that case, then, if we can find out the right method of thinking and can follow it out — the right method of transforming signs — then truth can be nothing more nor less than the last result to which the following out of this method would ultimately carry us. In that case, that to which the representation should conform, is itself something in the nature of a representation, or sign — something noumenal, intelligible, conceivable, and utterly unlike a thing-in-itself. (Peirce 1906, CP 5.553). Peirce's theory of truth depends on two other, intimately related subject matters, his theory of sign relations and his theory of inquiry. Inquiry is a special case of semiosis, a process that transforms signs into signs while maintaining a specific relationship to an object, which object may be located outside the trajectory of signs or else be found at the end of it. Inquiry includes all forms of belief revision and logical inference, including scientific method, what Peirce here means by "the right method of transforming signs". A sign-to-sign transaction relating to an object is a transaction that involves three parties, or a relation that involves three roles. This is called a ternary or triadic relation in logic. Consequently, pragmatic theories of truth are largely expressed in terms of triadic truth predicates. The statement above tells us one more thing: Peirce, having started out in accord with Kant, is here giving notice that he is parting ways with the Kantian idea that the ultimate object of a representation is an unknowable thing-in-itself. Peirce would say that the object is knowable, in fact, it is known in the form of its representation, however imperfectly or partially. 
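The triadic structure described above can be sketched set-theoretically. The symbols here are illustrative, not Peirce's own notation:

```latex
% A sign relation as a set of triples over signs S, objects O,
% and interpretant signs I (with I a subset of S on Peirce's account)
R \subseteq S \times O \times I
% A single transaction: sign s stands for object o to interpretant i
(s, o, i) \in R
% Semiosis: a chain of such triples sharing the same object o,
% in which each interpretant serves as the sign of the next triple
(s_1, o, s_2), \quad (s_2, o, s_3), \quad (s_3, o, s_4), \quad \ldots
```

On this reading, inquiry is a particular way of running such a chain, and the pragmatic truth predicate is triadic because saying that a sign is true requires specifying both its object and its interpretant.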
Reality and truth are coordinate concepts in pragmatic thinking, each being defined in relation to the other, and both together as they participate in the time evolution of inquiry. Inquiry is not a disembodied process, nor the occupation of a singular individual, but the common life of an unbounded community. The real, then, is that which, sooner or later, information and reasoning would finally result in, and which is therefore independent of the vagaries of me and you. Thus, the very origin of the conception of reality shows that this conception essentially involves the notion of a COMMUNITY, without definite limits, and capable of a definite increase of knowledge. (Peirce 1868, CP 5.311). Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion. This activity of thought by which we are carried, not where we wish, but to a foreordained goal, is like the operation of destiny. No modification of the point of view taken, no selection of other facts for study, no natural bent of mind even, can enable a man to escape the predestinate opinion. This great law is embodied in the conception of truth and reality. The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. That is the way I would explain reality. (Peirce 1878, CP 5.407).

James

William James's version of the pragmatic theory is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving." By this, James meant that truth is a quality the value of which is confirmed by its effectiveness when applying concepts to actual practice (thus, "pragmatic"). James's pragmatic theory is a synthesis of the correspondence theory of truth and the coherence theory of truth, with an added dimension.
Truth is verifiable to the extent that thoughts and statements correspond with actual things, and also "hang together", or cohere, as pieces of a puzzle might fit together; these in turn are verified by the observed results of applying an idea to actual practice. James said that "all true processes must lead to the face of directly verifying sensible experiences somewhere." He also extended his pragmatic theory well beyond the scope of scientific verifiability, and even into the realm of the mystical: "On pragmatic principles, if the hypothesis of God works satisfactorily in the widest sense of the word, then it is 'true.' " "Truth, as any dictionary will tell you, is a property of certain of our ideas. It means their 'agreement', as falsity means their disagreement, with 'reality'. Pragmatists and intellectualists both accept this definition as a matter of course. They begin to quarrel only after the question is raised as to what may precisely be meant by the term 'agreement', and what by the term 'reality', when reality is taken as something for our ideas to agree with." Pragmatism, James clarifies, is not a new philosophy. He states that it instead focuses on discerning truth between contrasting schools of thought. "To understand truth, he argues, we must consider the pragmatic 'cash-value' of having true beliefs and the practical difference of having true ideas." By using the term "cash-value", James refers to the practical consequences that come from discerning, through the pragmatic method, the truth behind arguments that would otherwise yield no decisive answer. In such cases, the pragmatic method must "try to interpret each notion by tracing its respective practical consequences." William James uses the analogy of a squirrel on a tree to further explain the pragmatic method.
Suppose the squirrel clings to one side of the tree while a person stands on the other, and as the person walks around the tree the squirrel also moves, so as never to be seen by the person. Does the person rightly go round the squirrel? "'Depends on what you practically mean by 'going round' the squirrel. If you mean passing from the north of him to the east, then to the south, then to the west, then to the north of him again, obviously the man does go round him… but on the contrary if you mean being first in front of him, then behind him, then on his left, then finally in front again, it is quite obvious that the man fails to go round him." In such arguments, where no practical consequences can be found after making a distinction, the argument should be dropped. If, however, the argument yields one result which clearly holds greater consequences, then that side should be agreed upon solely for its intrinsic value. Although James never actually clarifies what "practical consequences" are, he does mention that the best way to find division between possible consequences is by first practically defining what each side of the argument means. In terms of James's example, he says: "You are both right and both wrong according as you conceive the verb 'to go round' in one practical fashion or the other." Thus the pragmatic theory seeks to find truth through the division and practical consequences between contrasting sides, to establish which side is correct. William James (1907) begins his chapter on "Pragmatism's Conception of Truth" in much the same letter and spirit as the above selection from Peirce (1906), noting the nominal definition of truth as a plausible point of departure, but immediately observing that the pragmatist's quest for the meaning of truth can only begin, not end, there. "The popular notion is that a true idea must copy its reality. Like other popular views, this one follows the analogy of the most usual experience.
Our true ideas of sensible things do indeed copy them. Shut your eyes and think of yonder clock on the wall, and you get just such a true picture or copy of its dial. But your idea of its 'works' (unless you are a clockmaker) is much less of a copy, yet it passes muster, for it in no way clashes with reality. Even though it should shrink to the mere word 'works', that word still serves you truly; and when you speak of the 'time-keeping function' of the clock, or of its spring's 'elasticity', it is hard to see exactly what your ideas can copy." James exhibits a knack for popular expression that Peirce seldom sought, and here his analysis of correspondence by way of a simple thought experiment cuts right to the quick of the first major question to ask about it, namely: To what extent is the notion of correspondence involved in truth covered by the ideas of analogues, copies, or iconic images of the thing represented? The answer is that the iconic aspect of correspondence can be taken literally only in regard to sensory experiences of the more precisely eidetic sort. When it comes to the kind of correspondence that might be said to exist between a symbol, a word like "works", and its object, the springs and catches of the clock on the wall, then the pragmatist recognizes that a more than nominal account of the matter still has a lot more explaining to do.

Making truth

Instead of truth being ready-made for us, James asserts that we and reality jointly "make" truth. This idea has two senses: (1) truth is mutable (often attributed to William James and F.C.S. Schiller); and (2) truth is relative to a conceptual scheme (more widely accepted in pragmatism).

(1) Mutability of truth

"Truth" is not readily defined in pragmatism. Can beliefs pass from being true to being untrue and back? For James, beliefs are not true until they have been made true by verification.
James believed propositions become true over the long term through proving their utility in a person's specific situation. The opposite of this process is not falsification, but rather the belief ceasing to be a "live option". F.C.S. Schiller, on the other hand, clearly asserted that beliefs could pass into and out of truth on a situational basis. Schiller held that truth was relative to specific problems. If I want to know how to return home safely, the true answer will be whatever is useful to solving that problem. Later on, when faced with a different problem, what I came to believe with the earlier problem may now be false. As my problems change, and as the most useful way to solve a problem shifts, so does the property of truth. C.S. Peirce considered the idea that beliefs are true at one time but false at another (or true for one person but false for another) to be one of the "seeds of death" by which James allowed his pragmatism to become "infected". For Peirce the pragmatic view implies that theoretical claims should be tied to verification processes (i.e. they should be subject to test). They should not be tied to our specific problems or life needs. Truth is defined, for Peirce, as what would be the ultimate outcome (not any outcome in real time) of inquiry by a (usually scientific) community of investigators. William James, while agreeing with this definition, also characterized truthfulness as a species of the good: if something is true it is trustworthy and reliable and will remain so in every conceivable situation. Both Peirce and Dewey connect the definitions of truth and warranted assertibility. Hilary Putnam also developed his internal realism around the idea that a belief is true if it is ideally justified in epistemic terms. Putnam criticized James' and Schiller's view on this point, and Rorty has likewise weighed in against them.

(2) Conceptual relativity

With James and Schiller we make things true by verifying them, a view rejected by most pragmatists.
However, nearly all pragmatists do accept the idea that there can be no truths without a conceptual scheme to express those truths. F.C.S. Schiller used the analogy of a chair to make clear what he meant by saying that truth is made: just as a carpenter makes a chair out of existing materials and does not create it out of nothing, truth is a transformation of our experience, but this does not imply that reality is something we are free to construct or imagine as we please.

Dewey

John Dewey, less broadly than William James but much more broadly than Charles Peirce, held that inquiry, whether scientific, technical, sociological, philosophical or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine and/or refute proposed truths. In his Logic: The Theory of Inquiry (1938), Dewey gave the following definition of inquiry: Inquiry is the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole. (Dewey, p. 108). The index of the same book has exactly one entry under the heading truth, and it refers to the following footnote: The best definition of truth from the logical standpoint which is known to me is that by Peirce: "The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth, and the object represented in this opinion is the real" [CP 5.407]. (Dewey, 343 n). Dewey says more of what he understands by truth in terms of his preferred concept of warranted assertibility as the end-in-view and conclusion of inquiry (Dewey, 14–15).

Mead

Criticisms

Several objections are commonly made to pragmatist accounts of truth, of either sort. The first, due originally to Bertrand Russell (1907) in a discussion of James's theory, is that pragmatism mixes up the notion of truth with epistemology.
Pragmatism describes an indicator or a sign of truth. It really cannot be regarded as a theory of the meaning of the word "true". There is a difference between stating an indicator and giving the meaning. For example, when the streetlights turn on at the end of a day, that is an indicator, a sign, that evening is coming on. It would be an obvious mistake to say that the word "evening" just means "the time that the streetlights turn on". In the same way, while it might be an indicator of truth that a proposition is part of that perfect science at the ideal limit of inquiry, that just isn't what "true" means. Russell's objection is that pragmatism mixes up an indicator of truth with the meaning of the predicate 'true'. There is a difference between the two, and pragmatism confuses them. In this, pragmatism is akin to Berkeley's view that to be is to be perceived, which similarly confuses an indication or proof that something exists with the meaning of the word 'exists', or with what it is for something to exist. Other objections to pragmatism concern how we define what it means to say a belief "works", or that it is "useful to believe". The vague usage of these terms, first popularized by James, has led to much debate. A viable, more sophisticated consensus theory of truth, a mixture of Peircean theory with speech-act theory and social theory, is that presented and defended by Jürgen Habermas, which sets out the universal pragmatic conditions of ideal consensus and responds to many objections to earlier versions of a pragmatic, consensus theory of truth. Habermas distinguishes explicitly between factual consensus, i.e. the beliefs that happen to hold in a particular community, and rational consensus, i.e.
consensus attained in conditions approximating an "ideal speech situation", in which inquirers or members of a community suspend or bracket prevailing beliefs and engage in rational discourse aimed at truth and governed by the force of the better argument, under conditions in which all participants in discourse have equal opportunities to engage in constative (assertions of fact), normative, and expressive speech acts, and in which discourse is not distorted by the intervention of power or the internalization of systematic blocks to communication. Recent Peirceans Cheryl Misak and Robert B. Talisse have attempted to formulate Peirce's theory of truth in a way that improves on Habermas and provides an epistemological conception of deliberative democracy.

Notes and references

Further reading

Allen, James Sloan, ed. William James on Habit, Will, Truth, and the Meaning of Life. Frederic C. Beil, Publisher, Savannah, GA.
Awbrey, Jon, and Awbrey, Susan (1995), "Interpretation as Action: The Risk of Inquiry", Inquiry: Critical Thinking Across the Disciplines 15, 40–52. Eprint.
Baldwin, J.M. (1901–1905), Dictionary of Philosophy and Psychology, 3 volumes in 4, New York, NY.
Dewey, John (1929), The Quest for Certainty: A Study of the Relation of Knowledge and Action, Minton, Balch, and Company, New York, NY. Reprinted, pp. 1–254 in John Dewey, The Later Works, 1925–1953, Volume 4: 1929, Jo Ann Boydston (ed.), Harriet Furst Simon (text. ed.), Stephen Toulmin (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1984.
Dewey, John (1938), Logic: The Theory of Inquiry, Henry Holt and Company, New York, NY, 1938. Reprinted, pp. 1–527 in John Dewey, The Later Works, 1925–1953, Volume 12: 1938, Jo Ann Boydston (ed.), Kathleen Poulos (text. ed.), Ernest Nagel (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1986.
Ferm, Vergilius (1962), "Consensus Gentium", p. 64 in Runes (1962).
Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK.
Habermas, Jürgen (1976), "What Is Universal Pragmatics?", first published as "Was heißt Universalpragmatik?", Sprachpragmatik und Philosophie, Karl-Otto Apel (ed.), Suhrkamp Verlag, Frankfurt am Main. Reprinted, pp. 1–68 in Jürgen Habermas, Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, MA, 1979.
Habermas, Jürgen (1979), Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, MA.
Habermas, Jürgen (1990), Moral Consciousness and Communicative Action, Christian Lenhardt and Shierry Weber Nicholsen (trans.), Thomas McCarthy (intro.), MIT Press, Cambridge, MA.
Habermas, Jürgen (2003), Truth and Justification, Barbara Fultner (trans.), MIT Press, Cambridge, MA.
James, William (1907), Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy, Longmans, Green, and Company, New York, NY.
James, William (1909), The Meaning of Truth, A Sequel to 'Pragmatism', Longmans, Green, and Company, New York, NY.
Kant, Immanuel (1800), Introduction to Logic. Reprinted, Thomas Kingsmill Abbott (trans.), Dennis Sweet (intro.), Barnes and Noble, New York, NY, 2005.
Peirce, C.S., Writings of Charles S. Peirce, A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–. Volume 1 (1857–1866), 1981. Volume 2 (1867–1871), 1984. Volume 3 (1872–1878), 1986. Cited as W volume:page.
Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. Cited as CP vol.para.
Peirce, C.S., The Essential Peirce, Selected Philosophical Writings, Volume 1 (1867–1893), Nathan Houser and Christian Kloesel (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1992.
Cited as EP 1:page.
Peirce, C.S., The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1998. Cited as EP 2:page.
Peirce, C.S. (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy 2 (1868), 140–157. Reprinted (CP 5.264–317), (W 2:211–242), (EP 1:28–55). Eprint. NB: Misprints in CP and Eprint copy.
Peirce, C.S. (1877), "The Fixation of Belief", Popular Science Monthly 12 (1877), 1–15. Reprinted (CP 5.358–387), (W 3:242–257), (EP 1:109–123). Eprint.
Peirce, C.S. (1878), "How to Make Our Ideas Clear", Popular Science Monthly 12 (1878), 286–302. Reprinted (CP 5.388–410), (W 3:257–276), (EP 1:124–141).
Peirce, C.S. (1901), section entitled "Logical", pp. 718–720 in "Truth and Falsity and Error", pp. 716–720 in J.M. Baldwin (ed.), Dictionary of Philosophy and Psychology, vol. 2. Google Books Eprint. Reprinted (CP 5.565–573).
Peirce, C.S. (1905), "What Pragmatism Is", The Monist 15, 161–181. Reprinted (CP 5.411–437), (EP 2:331–345). Internet Archive Eprint.
Peirce, C.S. (1906), "Basis of Pragmaticism", first published in Collected Papers, CP 1.573–574 and 5.549–554.
Rescher, Nicholas (1995), Pluralism: Against the Demand for Consensus, Oxford University Press, Oxford, UK.
Rorty, R. (1979), Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ.
Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ. Cited as DOP.
Western philosophy
Western philosophy refers to the philosophical thought, traditions and works of the Western world. Historically, the term refers to the philosophical thinking of Western culture, beginning with the ancient Greek philosophy of the pre-Socratics. The word philosophy itself originated from the Ancient Greek philosophía (φιλοσοφία), literally "the love of wisdom", from phileîn, "to love", and sophía, "wisdom".

History

Ancient

The scope of ancient Western philosophy included the problems of philosophy as they are understood today; but it also included many other disciplines, such as pure mathematics and natural sciences such as physics, astronomy, and biology (Aristotle, for example, wrote on all of these topics).

Pre-Socratics

The pre-Socratic philosophers were interested in cosmology: the nature and origin of the universe. They rejected unargued fables in favor of argued theory; that is, reason superseded dogma, albeit in a rudimentary form. They were specifically interested in the arche (the cause or first principle) of the world. The first recognized philosopher, Thales of Miletus (born in Ionia), identified water as the arche, claiming "all is water". His use of observation and reason to derive this conclusion is the reason for distinguishing him as the first philosopher. Thales' student Anaximander claimed that the arche was the apeiron, the infinite. Following both Thales and Anaximander, Anaximenes of Miletus claimed that air was the most suitable candidate. Pythagoras, from the island of Samos off the coast of Ionia, later lived in Croton in southern Italy (Magna Graecia). The Pythagoreans held that "all is number", giving formal accounts in contrast to the material arche of the Ionians. The discovery of consonant intervals in music by the group enabled the concept of harmony to be established in philosophy, which suggested that opposites could together give rise to new things. They also believed in metempsychosis, the transmigration of souls, or reincarnation.
Parmenides argued that, unlike the other philosophers, who believed the arche was transformed into multiple things, the world must be singular, unchanging and eternal, while anything suggesting the contrary was an illusion. Zeno of Elea formulated his famous paradoxes in order to support Parmenides' views about the illusion of plurality and change (in terms of motion), by demonstrating them to be impossible. An alternative explanation was presented by Heraclitus, who claimed that everything was in flux all the time, famously pointing out that one could not step into the same river twice. Empedocles may have been an associate of both Parmenides and the Pythagoreans. He claimed the arche was in fact composed of multiple sources, giving rise to the model of the four classical elements. These in turn were acted upon by the forces of Love and Strife, creating the mixtures of elements which form the world. Another view of the arche being acted upon by an external force was presented by his older contemporary Anaxagoras, who claimed that nous, the mind, was responsible for that. Leucippus and Democritus proposed atomism as an explanation for the fundamental nature of the universe. Jonathan Barnes called atomism "the culmination of early Greek thought". In addition to these philosophers, the Sophists comprised teachers of rhetoric who taught students to debate on any side of an issue. While as a group they held no specific views, in general they promoted subjectivism and relativism. Protagoras, one of the most influential Sophist philosophers, claimed that "man is the measure of all things", suggesting there is no objective truth. This was also applied to issues of ethics, with Prodicus arguing that laws could not be taken seriously because they changed all the time, while Antiphon claimed that conventional morality should be followed only when in society.
Classical period

The Classical period of ancient Greek philosophy centers on Socrates and the two generations of students who followed. Socrates experienced a life-changing event when his friend Chaerephon visited the Oracle of Delphi, where the Pythia told him that no one in Athens was wiser than Socrates. Learning of this, Socrates subsequently spent much of his life questioning anyone in Athens who would engage him, in order to investigate the Pythia's claim. Socrates developed a critical approach, now called the Socratic method, to examine people's views. He focused on issues of human life: eudaimonia, justice, beauty, truth, and virtue. Although Socrates wrote nothing himself, two of his disciples, Plato and Xenophon, wrote about some of his conversations, although Plato also deployed Socrates as a fictional character in some of his dialogues. These Socratic dialogues display the Socratic method being applied to examine philosophical problems. Socrates's questioning earned him enemies who eventually accused him of impiety and corrupting the youth. For this, he was tried by the Athenian democracy, was found guilty, and was sentenced to death. Although his friends offered to help him escape from prison, Socrates chose to remain in Athens and abide by his principles. His execution consisted of drinking poison hemlock. He died in 399 BCE. After Socrates' death, Plato founded the Platonic Academy and Platonic philosophy. As Socrates had done, Plato identified virtue with knowledge. This led him to questions of epistemology on what knowledge is and how it is acquired. Socrates had several other students who also founded schools of philosophy. Two of these were short-lived: the Eretrian school, founded by Phaedo of Elis, and the Megarian school, founded by Euclid of Megara. Two others were long-lasting: Cynicism, founded by Antisthenes, and Cyrenaicism, founded by Aristippus.
The Cynics considered life's purpose to be living in virtue, in agreement with nature: rejecting all conventional desires for wealth, power, and fame, and leading a simple life free from possessions. The Cyrenaics promoted a philosophy nearly opposite that of the Cynics, endorsing hedonism and holding that pleasure, especially immediate gratification, was the supreme good, and that people could know only their own experiences; beyond that, truth was unknowable. The final school of philosophy to be established during the Classical period was the Peripatetic school, founded by Plato's student, Aristotle. Aristotle wrote widely about topics of philosophical concern, including physics, biology, zoology, metaphysics, aesthetics, poetry, theater, music, rhetoric, politics, and logic. Aristotelian logic was the first type of logic to attempt to categorize every valid syllogism. His epistemology comprised an early form of empiricism. Aristotle criticized Plato's metaphysics as being poetic metaphor, with its greatest failing being the lack of an explanation for change. Aristotle proposed the four causes model to explain change – material, efficient, formal, and final – all of which were grounded on what Aristotle termed the unmoved mover. His ethical views identified eudaimonia as the ultimate good, as it was good in itself. He thought that eudaimonia could be achieved by living according to human nature, which is to live with reason and virtue, defining virtue as the golden mean between extremes. Aristotle saw politics as the highest art, as all other pursuits are subservient to its goal of improving society. The state should aim to maximize the opportunities for the pursuit of reason and virtue through leisure, learning, and contemplation. Aristotle tutored Alexander the Great, who conquered much of the ancient Western world. Hellenization and Aristotelian philosophy have exercised considerable influence on almost all subsequent Western and Middle Eastern philosophers.
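Aristotle's categorization of valid syllogisms can be illustrated by its best-known figure, traditionally named "Barbara". In modern predicate-logic notation (an anachronistic but standard reconstruction, not Aristotle's own formalism), it reads:

```latex
% Barbara: two universal affirmative premises yield a
% universal affirmative conclusion.
% All M are P; all S are M; therefore all S are P.
\forall x\,\bigl(M(x) \rightarrow P(x)\bigr),\qquad
\forall x\,\bigl(S(x) \rightarrow M(x)\bigr)
\;\;\vdash\;\;
\forall x\,\bigl(S(x) \rightarrow P(x)\bigr)
```

Substituting M = human, P = mortal, S = Greek gives the stock instance: all humans are mortal, all Greeks are human, therefore all Greeks are mortal.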
Hellenistic and Roman philosophy

The Hellenistic and Roman Imperial periods saw the continuation of Aristotelianism and Cynicism, and the emergence of new philosophies, including Pyrrhonism, Epicureanism, Stoicism, and Neopythagoreanism. Platonism also continued but came under new interpretations, particularly Academic skepticism in the Hellenistic period and Neoplatonism in the Imperial period. The traditions of Greek philosophy heavily influenced Roman philosophy. In Imperial times, Epicureanism and Stoicism were particularly popular. The various schools of philosophy proposed conflicting methods for attaining eudaimonia. For some schools, it was through internal means, such as calmness, ataraxia (ἀταραξία), or indifference, apatheia (ἀπάθεια), a preoccupation possibly prompted by the increased insecurity of the era. The aim of the Cynics was to live according to nature and against convention, with courage and self-control. This directly inspired the founder of Stoicism, Zeno of Citium, who took up the Cynic ideals of steadfastness and self-discipline but applied the concept of apatheia to personal circumstances rather than social norms, exchanging the Cynics' shameless flouting of those norms for a resolute fulfillment of social duties. The ideal of 'living in accordance with nature' also continued, being seen as the way to eudaimonia, here identified as freedom from fears and desires; this required choosing how to respond to external circumstances, since the quality of life was seen as based on one's beliefs about it. An alternative view was presented by the Cyrenaics and the Epicureans. The Cyrenaics were hedonists and believed that pleasure was the supreme good in life, especially physical pleasure, which they thought more intense and more desirable than mental pleasures.
The followers of Epicurus also identified "the pursuit of pleasure and the avoidance of pain" as the ultimate goal of life, but noted that "We do not mean the pleasures of the prodigal or of sensuality . . . we mean the absence of pain in the body and trouble in the mind". This brought hedonism back to the search for ataraxia. Another important strand of thought in post-Classical Western thought was the question of skepticism. Pyrrho of Elis, a Democritean philosopher, traveled to India with Alexander the Great's army, where he was influenced by Buddhist teachings, most particularly the three marks of existence. After returning to Greece, Pyrrho started a new school of philosophy, Pyrrhonism, which taught that it is one's opinions about non-evident matters (i.e., dogma) that prevent one from attaining ataraxia. To bring the mind to ataraxia, Pyrrhonism uses epoché (suspension of judgment) regarding all non-evident propositions. After Arcesilaus became head of the academy, he adopted skepticism as a central tenet of Platonism, making Platonism nearly the same as Pyrrhonism. After Arcesilaus, Academic skepticism diverged from Pyrrhonism. The Academic skeptics did not doubt the existence of truth; they just doubted that humans had the capacity to obtain it. They based this position on Plato's Phaedo, sections 64–67, in which Socrates discusses how knowledge is not accessible to mortals. Following the end of the skeptical period of the academy with Antiochus of Ascalon, Platonic thought entered the period of Middle Platonism, which absorbed ideas from the Peripatetic and Stoic schools. More extreme syncretism was practiced by Numenius of Apamea, who combined Platonism with Neopythagoreanism. Also influenced by the Neopythagoreans, the Neoplatonists, the first of whom was Plotinus, argued that mind exists before matter, and that the universe has a singular cause which must therefore be a single mind.
As such, Neoplatonism became essentially a religion, and had much impact on later Christian thought.

Medieval

Medieval philosophy roughly extends from the Christianization of the Roman Empire until the Renaissance. It is defined partly by the rediscovery and further development of classical Greek and Hellenistic philosophy, and partly by the need to address theological problems and to integrate the then-widespread sacred doctrines of Abrahamic religion (Judaism, Christianity, and Islam) with secular learning. Some problems discussed throughout this period are the relation of faith to reason, the existence and unity of God, the object of theology and metaphysics, and the problems of knowledge, of universals, and of individuation. A prominent figure of this period was Augustine of Hippo, one of the most important Church Fathers in Western Christianity. Augustine adopted Plato's thought and Christianized it. His influence dominated medieval philosophy perhaps up to the end of the era and the rediscovery of Aristotle's texts. Augustinianism was the preferred starting point for most philosophers up until the 13th century. Among the issues his philosophy touched upon were the problem of evil, just war, and the nature of time. On the problem of evil, he argued that evil was a necessary product of human free will. When this raised the issue of the incompatibility of free will and divine foreknowledge, both he and Boethius solved the issue by arguing that God did not see the future, but rather stood outside of time entirely.

Scholasticism

An influential school of thought was that of scholasticism, which is not so much a philosophy or a theology as a methodology, as it places a strong emphasis on dialectical reasoning to extend knowledge by inference and to resolve contradictions. Scholastic thought is also known for rigorous conceptual analysis and the careful drawing of distinctions.
In the classroom and in writing, it often takes the form of explicit disputation: a topic drawn from the tradition is broached in the form of a question, oppositional responses are given, a counterproposal is argued, and oppositional arguments are rebutted. Because of its emphasis on rigorous dialectical method, scholasticism was eventually applied to many other fields of study. Anselm of Canterbury (called the 'father of scholasticism') argued that the existence of God could be irrefutably proved with the logical conclusion apparent in the ontological argument, according to which God is by definition the greatest thing conceivable; since an existing thing is greater than a non-existing one, either God exists or God is not the greatest thing conceivable (the latter being by definition impossible). A refutation of this was offered by Gaunilo of Marmoutiers, who applied the same logic to an imagined island, arguing that by the same steps of reasoning a perfect island must exist somewhere (an absurd outcome). Boethius also worked on the problem of universals, arguing that they did not exist independently as claimed by Plato, but still believed, in line with Aristotle, that they existed in the substance of particular things. Another important figure for scholasticism, Peter Abelard, extended this to nominalism, which states (in complete opposition to Plato) that universals were in fact just names given to characteristics shared by particulars. Thomas Aquinas, an academic philosopher and the father of Thomism, was immensely influential in medieval Christendom. He was influenced by the newly rediscovered Aristotle, and aimed to reconcile Aristotle's philosophy with Christian theology. Aiming to develop an understanding of the soul, he was led to consider metaphysical questions of substance, matter, form, and change.
He defined a material substance as the combination of an essence and accidental features, with the essence being a combination of matter and form, similar to the Aristotelian view. For humans, the soul is the essence. Also influenced by Plato, he saw the soul as unchangeable and independent of the body. Other Western philosophers from the Middle Ages include John Scotus Eriugena, Gilbert de la Porrée, Peter Lombard, Hildegard of Bingen, Albertus Magnus, Robert Grosseteste, Roger Bacon, Bonaventure, Peter John Olivi, Mechthild of Magdeburg, Robert Kilwardby, Henry of Ghent, Duns Scotus, Marguerite Porete, Dante Alighieri, Marsilius of Padua, William of Ockham, Jean Buridan, Nicholas of Autrecourt, Meister Eckhart, Catherine of Siena, Jean Gerson, and John Wycliffe. The medieval tradition of scholasticism continued to flourish as late as the 17th century, in figures such as Francisco Suárez and John of St. Thomas. During the Middle Ages, Western philosophy was also influenced by the Jewish philosophers Maimonides and Gersonides, and the Muslim philosophers Alkindus, Alfarabi, Alhazen, Avicenna, Algazel, Avempace, Abubacer, and Averroes.

Renaissance humanism

The Renaissance ("rebirth") was a period of transition between the Middle Ages and modern thought, in which the recovery of ancient Greek philosophical texts helped shift philosophical interests away from technical studies in logic, metaphysics, and theology towards eclectic inquiries into morality, philology, and mysticism. The study of the classics and the humane arts generally, such as history and literature, enjoyed a scholarly interest hitherto unknown in Christendom, a tendency referred to as humanism. Displacing the medieval interest in metaphysics and logic, the humanists followed Petrarch in making humanity and its virtues the focus of philosophy.
At the point of passage from the Renaissance into early (also called classical) modern philosophy, dialogue was used as a primary style of writing by Renaissance philosophers such as Giordano Bruno. The dividing line between what is classified as Renaissance versus modern philosophy is disputed.

Modern

The term "modern philosophy" has multiple usages. For example, Thomas Hobbes is sometimes considered the first modern philosopher because he applied a systematic method to political philosophy. By contrast, René Descartes is often considered the first modern philosopher because he grounded his philosophy in problems of knowledge, rather than problems of metaphysics. Modern philosophy, and especially Enlightenment philosophy, is distinguished by its increasing independence from traditional authorities such as the Church, academia, and Aristotelianism; a new focus on the foundations of knowledge and metaphysical system-building; and the emergence of modern physics out of natural philosophy.

Early modern (17th and 18th centuries)

Some central topics of Western philosophy in its early modern period include the nature of the mind and its relation to the body, the implications of the new natural sciences for traditional theological topics such as free will and God, and the emergence of a secular basis for moral and political philosophy. These trends first distinctively coalesced in Francis Bacon's call for a new, empirical program for expanding knowledge, and soon found massively influential form in the mechanical physics and rationalist metaphysics of René Descartes. Descartes's epistemology was based on a method called Cartesian doubt, whereby only the most certain belief could act as the foundation for further inquiry, with each step to further ideas being as cautious and clear as possible. This led him to his famous maxim cogito ergo sum ('I think, therefore I exist'), though similar arguments had been made by earlier philosophers.
This became foundational for much of further Western philosophy, as the need to find a route from the private world of consciousness to the externally existing reality was widely accepted until the 20th century. A major issue for his thought remained, however, in the mind–body problem. One solution to the problem was presented by Baruch Spinoza, who argued that the mind and the body are one substance. This was based on his view that God and the universe are one and the same, encompassing the totality of existence. At the other extreme, Gottfried Wilhelm Leibniz argued instead that the world was composed of numerous individual substances, called monads. Together, Descartes, Spinoza and Leibniz are considered influential early rationalists. In contrast to Descartes, Thomas Hobbes was a materialist who believed that everything was physical, and an empiricist who thought that all knowledge comes from sensation, which is triggered by objects existing in the external world, with thought being a kind of computation. John Locke was another classic empiricist, with his arguments helping empiricism overtake rationalism as the generally preferred approach. Together with David Hume, they form the core of 'British empiricism'. George Berkeley agreed with empiricism, but instead of believing in an ultimate reality which created perceptions, argued in favour of immaterialism and the world existing as a result of being perceived. In contrast, the Cambridge Platonists continued to represent rationalism in Britain. In political philosophy, arguments often started from the first principles of human nature, via the thought experiment of what the world would look like without society, a scenario referred to as the state of nature. Hobbes believed that this would be violent and anarchic, calling life under such a state of affairs "solitary, poor, nasty, brutish and short".
To prevent this, he believed that the sovereign of the state should have essentially unlimited power. In contrast, Locke believed the state of nature to be one where individuals enjoyed freedom, but that some of that freedom (excluding that covered by natural rights) had to be given up when forming a society, though not to the degree of absolute rule. Jean-Jacques Rousseau meanwhile argued that in nature people lived in a peaceful and comfortable state, and that the formation of society led to the rise of inequality. The approximate end of the early modern period is most often identified with Immanuel Kant's systematic attempt to limit metaphysics, justify scientific knowledge, and reconcile both of these with morality and freedom. Whereas the rationalists had believed that knowledge came from a priori reasoning, the empiricists had argued that it came from a posteriori sensory experience. Kant aimed to reconcile these views by arguing that the mind uses a priori understanding to interpret a posteriori experiences. He had been inspired to take this approach by the philosophy of Hume, who argued that the mechanisms of the mind gave people the perception of cause and effect. Many other contributors were philosophers, scientists, medical doctors, and politicians. A short list includes Galileo Galilei, Pierre Gassendi, Blaise Pascal, Nicolas Malebranche, Antonie van Leeuwenhoek, Christiaan Huygens, Isaac Newton, Christian Wolff, Montesquieu, Pierre Bayle, Thomas Reid, Jean le Rond d'Alembert and Adam Smith.

German idealism

German idealism emerged in Germany in the late 18th and early 19th centuries. It developed out of the work of Immanuel Kant in the 1780s and 1790s. Transcendental idealism, advocated by Kant, is the view that there are limits on what can be understood, since there is much that cannot be brought under the conditions of objective judgment.
Kant wrote his Critique of Pure Reason (1781) in an attempt to reconcile the conflicting approaches of rationalism and empiricism, and to establish a new groundwork for studying metaphysics. Although Kant held that objective knowledge of the world required the mind to impose a conceptual or categorical framework on the stream of pure sensory data—a framework including space and time themselves—he maintained that things-in-themselves existed independently of human perceptions and judgments; he was therefore not an idealist in any simple sense. Kant's account of things-in-themselves is both controversial and highly complex. Continuing his work, Johann Gottlieb Fichte and Friedrich Schelling dispensed with belief in the independent existence of the world, and created a thoroughgoing idealist philosophy. The most notable work of absolute idealism was G. W. F. Hegel's Phenomenology of Spirit, of 1807. Hegel admitted that his ideas were not new, but held that all the previous philosophies had been incomplete; his goal was to complete their work. Hegel asserts that the twin aims of philosophy are to account for the contradictions apparent in human experience (which arise, for instance, out of the supposed contradictions between "being" and "not being"), and also simultaneously to resolve and preserve these contradictions by showing their compatibility at a higher level of examination ("being" and "not being" are resolved with "becoming"). This program of acceptance and reconciliation of contradictions is known as the "Hegelian dialectic". Philosophers influenced by Hegel include Ludwig Feuerbach, who coined the term "projection" as pertaining to humans' inability to recognize anything in the external world without projecting qualities of ourselves upon those things; Karl Marx; Friedrich Engels; and the British idealists, notably T. H. Green, J. M. E. McTaggart, F. H. Bradley, and R. G. Collingwood.
Few 20th-century philosophers embraced the core tenets of German idealism after the demise of British idealism. However, quite a few have embraced Hegelian dialectic, most notably Frankfurt School critical theorists, Alexandre Kojève, Jean-Paul Sartre (in his Critique of Dialectical Reason), and Slavoj Žižek. A central theme of German idealism, the legitimacy of Kant's "Copernican revolution", remains an important point of contention in 21st-century post-continental philosophy.

Late modern (19th century)

Late modern philosophy is usually considered to begin around the pivotal year of 1781, when Gotthold Ephraim Lessing died and Immanuel Kant's Critique of Pure Reason appeared. The 19th century saw the beginnings of what would later grow into the divide between Continental and analytic traditions of philosophy, with the former more interested in general frameworks of metaphysics (more common in the German-speaking world), and the latter focusing on issues of epistemology, ethics, law and politics (more common in the English-speaking world). German philosophy exercised broad influence in this century, owing in part to the dominance of the German university system. German idealists, such as Johann Gottlieb Fichte, Friedrich Wilhelm Joseph Schelling, Georg Wilhelm Friedrich Hegel, and the members of Jena Romanticism (Friedrich Hölderlin, Novalis, and Karl Wilhelm Friedrich Schlegel), transformed the work of Kant by maintaining that the world is constituted by a rational or mind-like process, and as such is entirely knowable. Hegel argued that history was the dialectical journey of the Geist (universal mind) towards self-fulfilment and self-realization. The Geist's self-awareness is absolute knowledge, which itself brings complete freedom. His philosophy was based on absolute idealism, with reality itself being mental.
His legacy was divided between the conservative Right Hegelians and the radical Young Hegelians, the latter including David Strauss and Ludwig Feuerbach. Feuerbach argued for a materialist conception of Hegel's thought, inspiring Karl Marx. Arthur Schopenhauer was inspired by Kant and Indian philosophy. Accepting Kant's division of the world into the noumenal (the real) and phenomenal (the apparent) realities, he nevertheless disagreed with Kant on the inaccessibility of the former, arguing that it could in fact be accessed. This reality was accessible through the experience of will, which underlies the whole of nature, everything else being appearance. Whereas he believed the frustration of this will was the cause of suffering, Friedrich Nietzsche thought that the will to power was empowering, leading to growth and expansion, and therefore forming the basis of ethics. Jeremy Bentham established utilitarianism, a consequentialist ethic based on 'the greatest happiness for the greatest number', an idea taken from Cesare Beccaria. He believed that any act could be measured by its value in this regard through the application of the felicific calculus. John Stuart Mill, son of his associate James Mill, subsequently took up his thought. However, in contrast to the valuation of pure pleasure in Bentham's work, Mill divided pleasures into higher and lower kinds. Logic began a period of its most significant advances since the inception of the discipline, as increasing mathematical precision opened entire fields of inference to formalization in the work of George Boole and Gottlob Frege. Other philosophers who initiated lines of thought that would continue to shape philosophy into the 20th century include: Gottlob Frege and Henry Sidgwick, whose work in logic and ethics, respectively, provided the tools for early analytic philosophy. Charles Sanders Peirce and William James, who founded pragmatism.
Søren Kierkegaard and Friedrich Nietzsche, who laid the groundwork for existentialism and post-structuralism.

Pragmatism

Pragmatism is a philosophical tradition that began in the United States around 1870. It asserts that the truth of beliefs consists in their usefulness and efficacy rather than their correspondence with reality. Charles Sanders Peirce and William James were its co-founders, and it was later modified by John Dewey as instrumentalism. Since the usefulness of any belief at any time might be contingent on circumstance, Peirce and James conceptualized final truth as something established only by the future, final settlement of all opinion. Pragmatism attempted to find a scientific concept of truth that does not depend on personal insight (revelation) or reference to some metaphysical realm. It interpreted the meaning of a statement by the effect its acceptance would have on practice. Inquiry taken far enough is thus the only path to truth. For Peirce, commitment to inquiry was essential to truth-finding, implied by the idea and hope that inquiry is not fruitless. The interpretation of these principles has been subject to discussion ever since. Peirce's maxim of pragmatism is, "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object." Critics accused pragmatism of falling victim to a simple fallacy: that because something true proves useful, that usefulness is an appropriate basis for its truth. Pragmatist thinkers include Dewey, George Santayana, and C. I. Lewis. Pragmatism was later taken up by the neopragmatists Richard Rorty, who was the first to develop neopragmatist philosophy in his Philosophy and the Mirror of Nature (1979), Hilary Putnam, W. V. O. Quine, and Donald Davidson. Neopragmatism has been described as a bridge between analytic and continental philosophy.
Contemporary

The three major contemporary approaches to academic philosophy are analytic philosophy, continental philosophy and pragmatism. They are neither exhaustive nor mutually exclusive. The 20th century dealt with the upheavals produced by a series of conflicts within philosophical discourse over the basis of knowledge, with classical certainties overthrown and new social, economic, scientific and logical problems emerging. 20th-century philosophy was marked by a series of attempts to reform and preserve, or to alter and abolish, older knowledge systems. Seminal figures include Bertrand Russell, Ludwig Wittgenstein, Edmund Husserl, Martin Heidegger, and Jean-Paul Sartre. The publication of Husserl's Logical Investigations (1900–01) and Russell's The Principles of Mathematics (1903) is considered to mark the beginning of 20th-century philosophy. The 20th century also saw the increasing professionalization of the discipline and the beginning of the current (contemporary) era of philosophy. Since the Second World War, contemporary philosophy has been divided mostly into analytic and continental traditions, the former carried on in the English-speaking world and the latter on the continent of Europe. The perceived conflict between continental and analytic schools of philosophy remains prominent, despite increasing skepticism regarding the distinction's usefulness.

Analytic philosophy

In the English-speaking world, analytic philosophy became the dominant school for much of the 20th century. The term "analytic philosophy" roughly designates a group of philosophical methods that stress detailed argumentation, attention to semantics, use of classical and non-classical logic, and clarity of meaning above all other criteria. Though the movement has broadened, it was a cohesive school in the first half of the century.
Analytic philosophers were shaped strongly by logical positivism, united by the notion that philosophical problems could and should be solved by attention to logic and language.

Logic

Gottlob Frege's The Foundations of Arithmetic (1884) was the first analytic work, according to Michael Dummett (Origins of Analytical Philosophy, 1993). Frege was the first to take 'the linguistic turn', analyzing philosophical problems through language. He invented a formal notational system for logic. His stance was one of anti-psychologism, holding that logical truths are independent of the human minds discovering them. Bertrand Russell and G. E. Moore are also often counted as founders of analytic philosophy. They believed that philosophy should be based on analysing propositions. Russell wrote Principia Mathematica (with Alfred North Whitehead) to apply this to mathematics, while Moore did the same for ethics with Principia Ethica. Russell's attempts to find a foundation for mathematics led him to Russell's paradox, which caused Frege to abandon logicism. Russell espoused logical atomism, declaring that "logic is the essence of philosophy". In his Tractatus Logico-Philosophicus, Ludwig Wittgenstein put forward a refined version of this view. Wittgenstein, Russell's 'disciple', argued that the problems of philosophy were simply products of language and actually meaningless. This was based on the picture theory of meaning. Wittgenstein later changed his conception of how language works, arguing instead that it has many different uses, which he called different language games.

Philosophy of science

The logical positivists of the Vienna Circle started as a study group of Russell and Whitehead. They held that the claims of metaphysics, ethics and theology were meaningless, as they were neither logically nor empirically verifiable.
This was based on their division of meaningful statements into either the analytic (logical and mathematical statements) or the synthetic (scientific claims). Moritz Schlick and Rudolf Carnap argued that science rested at its roots on direct observation, but Otto Neurath noted that observation already requires theory in order to have meaning. Another participant in the Circle was Carnap's self-confessed disciple, Willard Van Orman Quine. In 'Two Dogmas of Empiricism', Quine criticized the distinction between analytic and synthetic statements. Instead, he advocated a 'web of belief' approach, whereby all beliefs come from contact with reality (including mathematical ones), but with some being further removed from this contact than others. Another former participant in the Circle was Karl Popper. He argued that verificationism was logically incoherent, promoting instead falsificationism as the basis for science. A further advance in the philosophy of science was made by Imre Lakatos, who argued that negative findings in individual tests did not falsify theories; rather, entire research programmes would eventually fail to explain phenomena. Thomas Kuhn further argued that science was composed of paradigms, which would eventually shift when evidence accumulated against them. Based on the idea that different paradigms gave different meanings to expressions, Paul Feyerabend went further in arguing for relativism in science.

Philosophy of language

Wittgenstein had first brought up the idea that ordinary language could solve philosophical problems. A loosely associated group of philosophers later became known as practitioners of ordinary language philosophy, including Gilbert Ryle, J. L. Austin, R. M. Hare, and P. F. Strawson. They believed that as philosophy was not a science, it could only be advanced through careful conceptual clarification and connection, rather than observation and experimentation.
However, they gave up the earlier analytic pursuit of using formal logic to express an ideal language, while still sharing its scepticism of grand metaphysical theories. Unlike Wittgenstein, they believed only some problems of philosophy to be artifacts of language. This approach has been described as the linguistic turn of analytic philosophy. Ryle introduced the concept of the category mistake, which described the misapplication of a concept in the wrong context (a mistake he accused Descartes of making with the ghost in the machine). One of Austin's key insights was that some language performs a perlocutionary function (by itself creating an effect on the world), making such utterances speech acts. This idea was later taken up by John Searle. In the final third of the 20th century, philosophy of language emerged as its own programme, with the theory of meaning becoming central to it. Donald Davidson argued that meaning could be understood through a theory of truth, based on the work of Alfred Tarski. Empirically, Davidson would find the meaning of words in different languages by linking them with the objective conditions of their utterance, which established their truth conditions; meaning therefore emerges from the consensus of interpretations of speaker behaviour. Michael Dummett argued against this view on the basis of its realism, since realism would place the truth of many sentences beyond verification. Instead, he argued for a verificationist account, based on the idea that one could recognise a proof of truth when offered it. As an alternative to these, Paul Grice put forward a theory that meaning was based on the intention of the speaker, which over time becomes established through repeated use. Theories of reference were another major strand of thought on language. Frege had argued that proper names were linked to their referents through a description of what the name refers to.
Russell agreed with this, adding that "this" can replace a description in cases of familiarity. Later, Searle and Strawson expanded these ideas by noting that a cluster of descriptions, each of them usable, may be used by linguistic communities. Keith Donnellan further argued that sometimes a description could be wrong but still make the correct reference, this referential use being different from the attributive use of a description. He, as well as Saul Kripke and Hilary Putnam independently, argued that often the referents of proper names are not based on description, but rather on a history of usage passing through users. Towards the end of the century, philosophy of language began to diverge in two directions: the philosophy of mind, and the more specific study of particular aspects of language, the latter supported by linguistics. Philosophy of mind Early identity theories of mind in the 1950s and '60s were based on the work of Ullin Place, Herbert Feigl, and J. J. C. Smart. While earlier philosophers such as the logical positivists, Quine, Wittgenstein, and Ryle had all used some form of behaviorism to dispense with the mental, they believed that behaviorism was insufficient to explain many aspects of mental phenomena. Feigl argued that intentional states could not be thus explained. Instead, he espoused externalism. Place meanwhile argued that the mind could be reduced to physical events, while Feigl and Smart held that they were identical. Functionalism, in contrast, argued that the mind was defined by what it does rather than by what it is made of. To argue against this, John Searle developed the Chinese room thought experiment. Davidson argued for anomalous monism, which claims that while mental events cause physical ones, and all causal relations are governed by natural laws, there are nevertheless no strict laws governing the causality between mental and physical events. This anomaly, from which the position takes its name, was explained by supervenience.
In 1970, Keith Campbell proposed a "new epiphenomenalism", according to which the body produces a mind that does not act on the body, a process he claims is destined to remain mysterious. Paul Churchland and Patricia Churchland argued for eliminative materialism, which claims that understanding the brain will lead to a complete understanding of the mind. This was based on developments in neuroscience. However, physicalist theories of mind have had to grapple with the issue of subjective experience raised by Thomas Nagel in What Is It Like to Be a Bat? and by Frank Cameron Jackson's so-called knowledge argument. David Chalmers also argued against physicalism with the philosophical zombie argument. He further noted that subjective experience posed the hard problem of consciousness. The inability of physicalist theories to explain conscious feeling has been termed the explanatory gap. In contrast, Daniel Dennett has claimed that no such gap exists, as subjective experiences are a 'philosophical fiction'. Ethics Ethics in 20th-century analytic philosophy has been argued to have begun with Moore's Principia Ethica. Moore argued that what is good cannot be defined. Instead, he saw ethical knowledge as a result of intuition, a view which led to non-cognitivism. W. D. Ross in contrast argued that duty formed the basis for ethics. Russell's meta-ethical thought anticipated emotivism and error theory. This was supported by the logical positivists, and later popularised by A. J. Ayer. Charles Stevenson also argued that ethical terms were expressions of emotive meanings by speakers. R. M. Hare aimed to expand their meaning from mere expressions to prescriptions which are universalizable. J. L. Mackie supported error theory on the basis that objective values do not exist, as they are culturally relative and would be metaphysically strange. Another strand of ethical thinking began with G. E. M.
Anscombe arguing in 1958 that both consequentialism and deontology were based on obligation, which could not function without divine authority, instead promoting virtue ethics. Other notable virtue ethicists included Philippa Foot and Alasdair MacIntyre. The latter combined it with communitarianism. Other branches Notable students of Quine include Donald Davidson and Daniel Dennett. The later work of Russell and the philosophy of Willard Van Orman Quine are influential exemplars of the naturalist approach dominant in analytic philosophy in the second half of the 20th century. But the diversity of analytic philosophy from the 1970s onward defies easy generalization: the naturalism of Quine and his epigoni was in some precincts superseded by a "new metaphysics" of possible worlds, as in the influential work of David Lewis. Recently, the experimental philosophy movement has sought to reappraise philosophical problems through social science research techniques. Some influential figures in contemporary analytic philosophy are: Timothy Williamson, David Lewis, John Searle, Thomas Nagel, Hilary Putnam, Michael Dummett, John McDowell, Saul Kripke, Peter van Inwagen, and Patricia Churchland. Analytic philosophy has sometimes been accused of not contributing to the political debate or to traditional questions in aesthetics. However, with the appearance of A Theory of Justice by John Rawls and Anarchy, State, and Utopia by Robert Nozick, analytic political philosophy acquired respectability. Analytic philosophers have also shown depth in their investigations of aesthetics, with Roger Scruton, Nelson Goodman, Arthur Danto and others developing the subject to its current shape. Continental philosophy Continental philosophy is a set of 19th- and 20th-century philosophical traditions from mainland Europe. 
Movements such as German idealism, phenomenology, existentialism, modern hermeneutics (the theory and methodology of interpretation), critical theory, structuralism, post-structuralism and others are included within this loose category. While identifying any non-trivial common factor in all these schools of thought is bound to be controversial, Michael E. Rosen has hypothesized a few common continental themes: that the natural sciences cannot replace the human sciences; that the thinker is affected by the conditions of experience (one's place and time in history); that philosophy is both theoretical and practical; that metaphilosophy or reflection upon the methods and nature of philosophy itself is an important part of philosophy proper. The founder of phenomenology, Edmund Husserl, sought to study consciousness as experienced from a first-person perspective, while Martin Heidegger drew on the ideas of Kierkegaard, Nietzsche, and Husserl to propose an unconventional existential approach to ontology. Phenomenologically oriented metaphysics undergirded existentialism—Martin Heidegger, Jean-Paul Sartre, Maurice Merleau-Ponty, Albert Camus—and finally post-structuralism—Gilles Deleuze, Jean-François Lyotard (best known for his articulation of postmodernism), Michel Foucault, Jacques Derrida (best known for developing a form of semiotic analysis known as deconstruction). The psychoanalytic work of Sigmund Freud, Carl Jung, Jacques Lacan, Julia Kristeva, and others has also been influential in contemporary continental thought. Conversely, some philosophers have attempted to define and rehabilitate older traditions of philosophy. Most notably, Hans-Georg Gadamer and Alasdair MacIntyre have both, albeit in different ways, revived the tradition of Aristotelianism.
Existentialism Existentialism is a term applied to the work of a number of late 19th- and 20th-century philosophers who, despite profound doctrinal differences, shared the belief that philosophical thinking begins with the human subject—not merely the thinking subject, but the acting, feeling, living human individual. In existentialism, the individual's starting point is characterized by what has been called "the existential attitude", or a sense of disorientation and confusion in the face of an apparently meaningless or absurd world. Many existentialists have also regarded traditional systematic or academic philosophy, in both style and content, as too abstract and remote from concrete human experience. Although they did not use the term, the 19th-century philosophers Søren Kierkegaard and Friedrich Nietzsche are widely regarded as the fathers of existentialism. Their influence, however, has extended beyond existentialist thought. Marxism and critical theory Marxism is a method of socioeconomic analysis, originating from Karl Marx and Friedrich Engels. It analyzes class relations and societal conflict using a materialist interpretation of historical development and a dialectical view of social transformation. Marxist analyses and methodologies influenced political ideologies and social movements. Marxist understandings of history and society were adopted by academics in archeology, anthropology, media studies, political science, theater, history, sociology, art history and theory, cultural studies, education, economics, geography, literary criticism, aesthetics, critical psychology and philosophy. In contemporary philosophy, the term "critical theory" describes the Western Marxist philosophy of the Frankfurt School, which was developed in Germany in the 1930s. Critical theory maintains that ideology is the principal obstacle to human emancipation. 
Phenomenology and hermeneutics Edmund Husserl's phenomenology was an ambitious attempt to lay the foundations for an account of the structure of conscious experience in general. An important part of Husserl's phenomenological project was to show that all conscious acts are directed at or about objective content, a feature that Husserl called intentionality. Husserl published only a few works in his lifetime, which treat phenomenology mainly in abstract methodological terms; but he left an enormous quantity of unpublished concrete analyses. Husserl's work was immediately influential in Germany, with the foundation of phenomenological schools in Munich (Munich phenomenology) and Göttingen (Göttingen phenomenology). Phenomenology later achieved international fame through the work of such philosophers as Martin Heidegger (formerly Husserl's research assistant and a proponent of hermeneutic phenomenology, a theoretical synthesis of modern hermeneutics and phenomenology), Maurice Merleau-Ponty, and Jean-Paul Sartre. Through the work of Heidegger and Sartre, Husserl's focus on subjective experience influenced aspects of existentialism. Structuralism and post-structuralism Inaugurated by the linguist Ferdinand de Saussure, structuralism sought to clarify systems of signs through analyzing the discourses they both limit and make possible. Saussure conceived of the sign as being delimited by all the other signs in the system, and ideas as being incapable of existence prior to linguistic structure, which articulates thought. This led continental thought away from humanism, and toward what was termed the decentering of man: language is no longer spoken by man to express a true inner self, but language speaks man. Structuralism sought the province of a hard science, but its positivism soon came under fire from post-structuralism, a wide field of thinkers, some of whom were once themselves structuralists, but later came to criticize it.
Structuralists believed they could analyze systems from an external, objective standpoint, but poststructuralists argued that this is incorrect: one cannot transcend structures, and thus analysis is itself determined by what it examines. While the distinction between the signifier and signified was treated as crystalline by structuralists, poststructuralists asserted that every attempt to grasp the signified results in more signifiers, so meaning is always in a state of being deferred, making an ultimate interpretation impossible. Structuralism came to dominate continental philosophy throughout the 1960s and early 1970s, encompassing thinkers as diverse as Claude Lévi-Strauss, Roland Barthes and Jacques Lacan. Post-structuralism came to predominate from the 1970s onwards, including thinkers such as Michel Foucault, Jacques Derrida, Gilles Deleuze and even Roland Barthes; it incorporated a critique of structuralism's limitations. Process philosophy Process philosophy is a tradition beginning with Alfred North Whitehead, who began teaching and writing on process and metaphysics when he joined Harvard University in 1924. This tradition identifies metaphysical reality with change. Process philosophy is sometimes classified as closer to continental philosophy than to analytic philosophy, because it is usually taught only in continental departments. However, other sources state that process philosophy should be placed somewhere in the middle between the poles of analytic and continental methods in contemporary philosophy. Influences from Eastern philosophy The Ancient Greek philosopher Pyrrho accompanied Alexander the Great on his eastern campaigns, spending about 18 months in India. Pyrrho subsequently returned to Greece and founded Pyrrhonism, a philosophy with substantial similarities to Buddhism. The Greek biographer Diogenes Laërtius explained that Pyrrho's equanimity and detachment from the world were acquired in India.
Pyrrho was directly influenced by Buddhism in developing his philosophy, which is based on Pyrrho's interpretation of the Buddhist three marks of existence. According to Edward Conze, Pyrrhonism can be compared to Buddhist philosophy, especially the Indian Madhyamika school. The Pyrrhonists' goal of ataraxia (the state of being untroubled) is a soteriological goal similar to nirvana. The Pyrrhonists promoted suspending judgment (epoché) about dogma (beliefs about non-evident matters) as the way to reach ataraxia. This is similar to the Buddha's refusal to answer certain metaphysical questions which he saw as non-conducive to the path of Buddhist practice, and to Nagarjuna's "relinquishing of all views (drsti)". In Pyrrhonism: How the Ancient Greeks Reinvented Buddhism, Adrian Kuzminski argues for direct influence between these two systems of thought. According to Kuzminski, both philosophies argue against assenting to any dogmatic assertions about an ultimate metaphysical reality behind our sense impressions as a tactic to reach tranquility, and both also make use of logical arguments against other philosophies in order to expose their contradictions. The Cyrenaic philosopher Hegesias of Cyrene is thought by some to have been influenced by the teachings of Ashoka's Buddhist missionaries. Empiricist philosophers, such as Hume and Berkeley, favoured the bundle theory of personal identity. In this theory, the mind is simply 'a bundle of perceptions' without unity. One interpretation of Hume's view of the self, argued for by philosopher and psychologist James Giles, is that Hume is not arguing for a bundle theory, which is a form of reductionism, but rather for an eliminative view of the self. Rather than reducing the self to a bundle of perceptions, Hume rejects the idea of the self altogether. On this interpretation, Hume is proposing a "no-self theory" and thus has much in common with Buddhist thought (see anattā).
Psychologist Alison Gopnik has argued that Hume was in a position to learn about Buddhist thought during his time in France in the 1730s. See also Glossary of philosophy History of philosophy Index of philosophy List of philosophers List of philosophical theories List of philosophies Pseudophilosophy Other traditions African philosophy Eastern philosophy Christian philosophy Islamic philosophy Jewish philosophy National traditions American philosophy British philosophy French philosophy German philosophy Polish philosophy Non-mainstream movements New realism Objectivism Personalism Post-analytic philosophy Post-Continental philosophy Notes References Further reading Copleston, Frederick (1946–1975). A History of Philosophy, 11 vols. Continuum. Hegel, Georg Wilhelm Friedrich (1996) [1892 Kegan Paul]. Haldane, Elizabeth Sanderson, ed. Vorlesungen über die Geschichte der Philosophie [Hegel's Lectures on the History of Philosophy, 3 vols.]. Humanities Press International. Kenny, Anthony (2010). A New History of Western Philosophy. Oxford University Press. Russell, Bertrand (1945). A History of Western Philosophy. Simon & Schuster. External links The Stanford Encyclopedia of Philosophy Internet Encyclopedia of Philosophy The Routledge Encyclopedia of Philosophy
Ataraxia
In Ancient Greek philosophy, ataraxia (Greek: ἀταραξία, from the privative prefix a- indicating negation or absence, tarachē ("disturbance, trouble"), and the abstract noun suffix -ia), generally translated as "imperturbability", "equanimity", or "tranquility", is a lucid state of robust equanimity characterized by ongoing freedom from distress and worry. In non-philosophical usage, ataraxia was the ideal mental state for soldiers entering battle. Achieving ataraxia is a common goal for Pyrrhonism, Epicureanism, and Stoicism, but the role and value of ataraxia within each philosophy varies in accordance with their philosophical theories. The mental disturbances that prevent one from achieving ataraxia also vary among the philosophies, and each philosophy has a different understanding as to how to achieve it. Pyrrhonism In Pyrrhonism, ataraxia is the intended result of epoché (i.e., suspension of judgment) regarding all matters of dogma (i.e., non-evident belief); epoché represents the central aim of Pyrrhonist practice and is necessary to bring about ataraxia. Epicureanism Ataraxia is a key component of the Epicurean conception of pleasure, which they consider the highest good. Epicureans break pleasure down into two categories: the physical and the mental. They consider mental, not physical, pleasures to be of high importance because physical pleasures exist only in the present, while mental pleasures exist in the past, the present, and the future. Epicureans further separate pleasure into what they call kinetic pleasures, those that come about through action or change, and katastematic pleasures, those that come about through an absence of distress. Those who achieved freedom from physical disturbance were said to be in a state of aponia, while those who achieved freedom from mental disturbances were said to be in a state of ataraxia. Ataraxia, as both a mental and a katastematic pleasure, is key to a person's happiness. Stoicism In Stoicism, unlike Pyrrhonism or Epicureanism, ataraxia, or tranquillity of the mind, is not the ultimate goal of life. Instead, the goal is a life of virtue according to nature, which is intended to bring about apatheia, the absence of unhealthy passions.
However, since Stoics in a state of apatheia do not care about matters outside of themselves and are not susceptible to emotion, they would be unable to be disturbed by anything at all, meaning that they were also in a state of mental tranquillity and thus in a state of ataraxia. See also Upekṣā References Concepts in ancient Greek ethics Concepts in ancient Greek philosophy of mind Epicureanism Happiness Pyrrhonism Stoicism Theories in ancient Greek philosophy
Hobbes's moral and political philosophy
Thomas Hobbes’s moral and political philosophy is constructed around the basic premise of social and political order, explaining how humans should live in peace under a sovereign power so as to avoid conflict within the ‘state of nature’. Hobbes’s moral philosophy and political philosophy are intertwined; his moral thought is based around ideas of human nature, which determine the interactions that make up his political philosophy.  Hobbes’s moral philosophy therefore provides justification for, and informs, the theories of sovereignty and the state of nature that underpin his political philosophy. In utilising methods of deductive reasoning and motion science, Hobbes examines human emotion, reason and knowledge to construct his ideas of human nature (moral philosophy). This methodology critically influences his politics, determining the interactions of conflict (in the state of nature) which necessitate the creation of a politically authoritative state to ensure the maintenance of peace and cooperation. This method is used and developed in works such as The Elements of Law (1640), De Cive (1642), Leviathan (1651) and Behemoth (1681). Methodology In developing his moral and political philosophy, Hobbes assumes the methodological approach of deductive reasoning, combining mathematics and the mechanics of science to formulate his ideas on human nature. Hobbes was critical of the assumptions of scholastic philosophers, whose evidence for human nature was based upon Aristotelian metaphysics and Cartesian observation, as opposed to reasoning and definition. Though Hobbes did not fully reject the value of observational or ‘prudential’ knowledge, he dismissed the view that this was at all scientific or philosophical in nature. To Hobbes, this type of knowledge was based on subjective and diverse experience, and was therefore capable of producing only speculative assumptions. 
This view predetermined Hobbes’s method of deductive reasoning, which involved the application of geometry, Galilean scientific concepts and definition. This scientific method stresses the importance of first establishing well-defined principles of human nature (moral philosophy) and ‘deducing’ aspects of political life from this. Hobbes first used the mechanics of motion to define principles of human perception, behaviour and reasoning, which were then used to draw the conclusions of his political philosophy (sovereignty, state of nature). In rejecting what he believed were ‘conjectures’ relating to intangible or supernatural objects or realities, Hobbes’s philosophy is drawn from material and physical reality and experience. Höffe explains how Hobbes applied this method to construct his political theory of sovereignty: “…the combination of mathematics and mechanics, is not sufficient on its own… the combination of mathematics and mechanics leads to the metaphor of the state as an “artificial” human being, which is comparable to a machine constructed out of natural human beings; (3) the resoluto-compositive [the recourse to absolutely first principles or elements] method defines and clarifies the nature of this construction: the artificial human being is decomposed into its smallest constituent parts and then recomposed, i.e., constructed, out of these parts". Hobbes’s moral principles thus provide the ultimate basis for his political philosophy, defining and clarifying how an “artificial” sovereign authority may come into existence. Moral philosophy Hobbes’s moral philosophy is the fundamental starting point from which his political philosophy is developed. This moral philosophy outlines a general conceptual framework on human nature which is rigorously developed in The Elements of Law, De Cive and Leviathan. These works examine how the laws of motion influence human perception, behaviour and action, which then determine how individuals interact. 
The Elements of Law provides insight into Hobbes’s moral philosophy through ideas of sensation, pleasure, passion, pain, memory and reason. This is expanded upon in De Cive: “… human nature… comprising the faculties of body and mind; . . . Physical force, Experience Reason and Passion". Hobbes believes that as sensory organs process the movements of external stimuli, a range of different mental experiences take place, which in turn dictate human behaviour. What emerged from this idea of motion was the view that humans are naturally drawn towards, or desire, things that benefit their overall wellbeing; things that are “good” for them. These are called “appetites”, and what differentiates the human ‘appetite’ from that of animals is reason. Reason, or “ratiocination”, as used by Hobbes, was not defined in the traditional sense as an innate capability tied to notions of natural law, but as an activity that involved coming to a judgement via the process of logic. Humans, as noted in Leviathan, have “…knowledge of the consequences of one affirmation to another”. Individuals will desire and select whatever ‘thing’ brings them the most “good”. This process of thinking is a consequence of motion and mechanics more than a conscious exercise of choice. Ratiocination leads individuals to uncover the Laws of Nature, which Hobbes deems “the true moral philosophy”. Hobbes’s understanding of human nature establishes the foundations for his political philosophy by explaining the essence of conflict (in the state of nature) and cooperation (in a commonwealth). Because human beings will always pursue what is ‘good’ for them, this philosophy asserts that individuals share overarching desires or goals, such as security and safety (especially from death). This is the point in which Hobbes’s moral and political philosophy intersect: in “our shared conception of ourselves as rational agents”. 
It is rational to “pursue the necessary means to our dominant shared ends”, in which case the “necessary means” is submission to a sovereign authority. By establishing morality as a force which directs individuals towards their shared desires and goals of, for example, peace and security, and by identifying the means of achieving these goals as the creation of a state, Hobbes grounds his political philosophy in his moral thought. This approach to moral philosophy is executed by Hobbes through discussion of a range of interrelated moral concepts: “good, evil, rights, obligation, justice, contract, covenant and natural law”. Moral concepts Obligation Hobbes’s concept of moral obligation stems from the assumption that humans have a fundamental obligation to follow the laws of nature, from which all obligations stem. His reasoning for this is premised upon the beliefs of natural law: that the moral standards or reasoning that govern behaviour can be drawn from eternal truths regarding human nature and the world. Hobbes believes, however, that the morals derived from natural law do not permit individuals to challenge the laws of the sovereign; the law of the commonwealth supersedes natural law, and obeying the laws of nature does not exempt one from obeying those of the government. Hobbes’s concept of moral obligation thus intertwines with the concept of political obligation. This underpins much of Hobbes’s political philosophy, stating that humans have a political obligation or ‘duty’ to prevent the creation of a state of nature. Humans have a political obligation to obey a sovereign power, and once they have renounced part of their natural rights to this power (theory of sovereignty), they have a duty to uphold the ‘social contract’ they have entered into. Political philosophy The main aspects of Hobbes’s political philosophy revolve around the contrasting relationship between the state of nature (a state of war) and the State itself as one of peace and cooperation.
This philosophy is determined by, and implied in, his method of deduction. The trajectory of individual desire and will outlined in his moral philosophy is a decisive factor contributing to the formulation of his idea of the State. Hobbes outlined four key principles of purpose in his philosophical literature: Welfare of the general public. State of well-being and satisfaction with life. The pursuit of justice. The pursuit of peace (to avoid the ‘state of war’). These concepts are mutually reinforcing and feature across his most prominent works. For example, in The Elements of Law, Hobbes claims that the benefits given to the general public under a commonwealth are “incomparable”. This overlaps with his discussion of justice in the same text, which is used in a political context. Leviathan details all four principles but focuses on the pursuit of peace, which Hobbes aligns with the first principle of welfare and public good. Where a state of peace (4) and justice (3), and the overall welfare of the general public (1), manifest under a commonwealth (stemming from ‘commonweal’: the general good of the public), a state of well-being and overall satisfaction (2) may be secured. Only under the commonwealth (as opposed to a state of nature and war) can peace, and “the notions of right and wrong, justice and injustice”, exist indefinitely. This is expanded upon again in The Elements of Law, which posits that humans by nature are inclined towards conflict, and therefore need a State to institute peace and protect individuals against the threats to self-preservation which flourish in a state of nature. De Cive also builds on the relationship between these principles, where Hobbes’s claim to show individuals the “highway to peace” affirms his notion that humans should pursue peace, and therefore justice, in the form of a commonwealth. It is in the interest of humans, who have a fundamental obligation to follow the Laws of Nature, to pursue peace.
A sovereign power or authority figure - a Leviathan - is needed to translate these Laws of Nature in a “binding and authoritative fashion”. The notion that individuals require a “visible power to keep them in awe” - to maintain peace and safety through enforcement of law - underpins Hobbes’s theory of sovereignty, which proposes that a sovereign ruler (with authority to govern the people) is fundamental to any type of commonwealth. Therefore, the overarching concern of Hobbes’s political philosophy remains the capacity of the government to maintain peace, protection, justice and wellbeing in a manner that ensures the continuation of society and civil life. See also Natural and legal rights Psychological egoism Natural law References Thomas Hobbes Eponymous political ideologies
Phenomenology (philosophy)
Phenomenology is a philosophical study and movement largely associated with the early 20th century that seeks to objectively investigate the nature of subjective, conscious experience. It attempts to describe the universal features of consciousness while avoiding assumptions about the external world, aiming to describe phenomena as they appear to the subject, and to explore the meaning and significance of the lived experiences. This approach, while philosophical, has found many applications in qualitative research across different scientific disciplines, especially in the social sciences, humanities, psychology, and cognitive science, but also in fields as diverse as health sciences, architecture, and human-computer interaction, among many others. The application of phenomenology in these fields aims to gain a deeper understanding of subjective experience, rather than focusing on behavior. Phenomenology is contrasted with phenomenalism, which reduces mental states and physical objects to complexes of sensations, and with psychologism, which treats logical truths or epistemological principles as the products of human psychology. In particular, transcendental phenomenology, as outlined by Edmund Husserl, aims to arrive at an objective understanding of the world via the discovery of universal logical structures in human subjective experience. There are important differences in the ways that different branches of phenomenology approach subjectivity. For example, according to Martin Heidegger, truths are contextually situated and dependent on the historical, cultural, and social context in which they emerge. Other types include hermeneutic, genetic, and embodied phenomenology. All these different branches of phenomenology may be seen as representing different philosophies despite sharing the common foundational approach of phenomenological inquiry; that is, investigating things just as they appear, independent of any particular theoretical framework. 
Etymology The term phenomenology derives from the Greek φαινόμενον, phainómenon ("that which appears") and λόγος, lógos ("study"). It entered the English language around the turn of the 18th century and first appeared in direct connection to Husserl's philosophy in a 1907 article in The Philosophical Review. In philosophy, "phenomenology" refers to the tradition inaugurated by Edmund Husserl at the beginning of the 20th century. The term, however, had been used in different senses in other philosophy texts since the 18th century. These include those by Johann Heinrich Lambert (1728–1777), Immanuel Kant (1724–1804), G. W. F. Hegel (1770–1831), and Carl Stumpf (1848–1936), among others. It was, however, the usage of Franz Brentano (and, as he later acknowledged, Ernst Mach) that would prove definitive for Husserl. From Brentano, Husserl took the conviction that philosophy must commit itself to description of what is "given in direct 'self-evidence'." Central to Brentano's phenomenological project was his theory of intentionality, which he developed from his reading of Aristotle's On the Soul. According to the phenomenological tradition, "the central structure of an experience is its intentionality, it being directed towards something, as it is an experience of or about some object." Also, on this theory, every intentional act is implicitly accompanied by a secondary, pre-reflective awareness of the act as one's own. Overview Phenomenology proceeds systematically, but it does not attempt to study consciousness from the perspective of clinical psychology or neurology. Instead, it seeks to determine the essential properties and structures of experience. Phenomenology is not a matter of individual introspection: a subjective account of experience, which is the topic of psychology, must be distinguished from an account of subjective experience, which is the topic of phenomenology. Its topic is not "mental states", but "worldly things considered in a certain way". 
Phenomenology is a direct reaction to the psychologism and physicalism of Husserl's time. It takes as its point of departure the question of how objectivity is possible at all when the experience of the world and its objects is thoroughly subjective. Far from being a form of subjectivism, phenomenologists argue that the scientific ideal of a purely objective third-person perspective is a fantasy and a falsity. The perspective and presuppositions of the scientist must be articulated and taken into account in the design of the experiment and the interpretation of its results. Inasmuch as phenomenology is able to accomplish this, it can help to improve the quality of empirical scientific research. In spite of the field's internal diversity, Shaun Gallagher and Dan Zahavi argue that the phenomenological method is composed of four basic steps: the epoché, the phenomenological reduction, the eidetic variation, and intersubjective corroboration. The epoché is Husserl's term for the procedure by which the phenomenologist endeavors to suspend commonsense and theoretical assumptions about reality (what he terms the natural attitude) in order to attend only to what is directly given in experience. This is not a skeptical move; reality is never in doubt. The purpose is to see it more closely as it truly is. The underlying insight is that objects are "experienced and disclosed in the ways they are, thanks to the way consciousness is structured." The phenomenological reduction is closely linked to the epoché. The aim of the reduction is to analyze the correlations between what is given in experience and the specific structures of subjectivity that shape and enable this givenness; it thus "leads back" (Latin: re-ducere) from the world as simply given to the consciousness that discloses it.
Eidetic variation is the process of imaginatively stripping away the properties of things to determine what is essential to them, that is, the characteristics without which a thing would not be the thing that it is (eidos is Plato's Greek word for the essence of a thing). Significantly for the phenomenological researcher, eidetic variation can be practiced on acts of consciousness themselves to help clarify, for instance, the structure of perception or memory. Husserl openly acknowledges that the essences uncovered by this method include various degrees of vagueness and also that such analyses are defeasible. He contends, however, that this does not undermine the value of the method. Intersubjective corroboration is simply the sharing of one's results with the larger research community. This allows for comparisons that help to sort out what is idiosyncratic to the individual from what might be essential to the structure of experience as such. According to Maurice Natanson, "The radicality of the phenomenological method is both continuous and discontinuous with philosophy's general effort to subject experience to fundamental, critical scrutiny: to take nothing for granted and to show the warranty for what we claim to know." According to Husserl, the suspension of belief in what is ordinarily taken for granted or inferred by conjecture diminishes the power of what is customarily embraced as objective reality. In the words of Rüdiger Safranski, "[Husserl's and his followers'] great ambition was to disregard anything that had until then been thought or said about consciousness or the world [while] on the lookout for a new way of letting the things [they investigated] approach them, without covering them up with what they already knew."

History
Edmund Husserl "set the phenomenological agenda" for even those who did not strictly adhere to his teachings, such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty, to name just the foremost.
Each thinker has "different conceptions of phenomenology, different methods, and different results."

Husserl's conceptions
Husserl derived many important concepts central to phenomenology from the works and lectures of his teachers, the philosophers and psychologists Franz Brentano and Carl Stumpf. An important element of phenomenology that Husserl borrowed from Brentano is intentionality (often described as "aboutness" or "directedness"), the notion that consciousness is always consciousness of something. The object of consciousness is called the intentional object, and this object is constituted for consciousness in many different ways, through, for instance, perception, memory, signification, and so forth. Throughout these different intentionalities, though they have different structures and different ways of being "about" the object, an object is still constituted as the identical object; consciousness is directed at the same intentional object in direct perception as it is in the immediately-following retention of this object and the eventual remembering of it. As envisioned by Husserl, phenomenology is a method of philosophical inquiry that rejects the rationalist bias that has dominated Western thought since Plato in favor of a method of reflective attentiveness that discloses the individual's "lived experience." Loosely rooted in an epistemological device called epoché, Husserl's method entails the suspension of judgment while relying on the intuitive grasp of knowledge, free of presuppositions and intellectualizing. Sometimes depicted as the "science of experience," the phenomenological method, rooted in intentionality, represents an alternative to the representational theory of consciousness. That theory holds that reality cannot be grasped directly because it is available only through perceptions of reality that are representations in the mind.
In Husserl's own words:

experience is not an opening through which a world, existing prior to all experience, shines into a room of consciousness; it is not a mere taking of something alien to consciousness into consciousness... Experience is the performance in which for me, the experiencer, experienced being "is there", and is there as what it is, with the whole content and the mode of being that experience itself, by the performance going on in its intentionality, attributes to it.

In effect, he counters that consciousness is not "in" the mind; rather, consciousness is conscious of something other than itself (the intentional object), regardless of whether the object is a physical thing or just a figment of the imagination.

Logical Investigations (1900/1901)
In the first edition of the Logical Investigations, under the influence of Brentano, Husserl describes his position as "descriptive psychology." Husserl analyzes the intentional structures of mental acts and how they are directed at both real and ideal objects. The first volume of the Logical Investigations, the Prolegomena to Pure Logic, begins with a critique of psychologism, that is, the attempt to subsume the a priori validity of the laws of logic under psychology. Husserl establishes a separate field for research in logic, philosophy, and phenomenology, independently from the empirical sciences. "Pre-reflective self-consciousness" is Shaun Gallagher and Dan Zahavi's term for Husserl's (1900/1901) idea that self-consciousness always involves a self-appearance or self-manifestation prior to self-reflection. This is one point of nearly unanimous agreement among phenomenologists: "a minimal form of self-consciousness is a constant structural feature of conscious experience. Experience happens for the experiencing subject in an immediate way and as part of this immediacy, it is implicitly marked as my experience."

Ideas (1913)
In 1913, Husserl published Ideas: General Introduction to Pure Phenomenology.
In this work, he presents phenomenology as a form of "transcendental idealism". Although Husserl claimed to have always been a transcendental idealist, this was not how many of his admirers had interpreted the Logical Investigations, and some were alienated as a result. This work introduced distinctions between the act of consciousness (noesis) and the phenomena at which it is directed (the noemata). Noetic refers to the intentional act of consciousness (believing, willing, etc.). Noematic refers to the object or content (noema), which appears in the noetic acts (the believed, wanted, hated, loved, etc.). What is observed is not the object as it is in itself, but how, and insofar as, it is given in the intentional acts. Knowledge of essences would only be possible by "bracketing" all assumptions about the existence of an external world and the inessential (subjective) aspects of how the object is concretely given to us. This phenomenological reduction is the second stage of Husserl's procedure of epoché. That which is essential is then determined by the imaginative work of eidetic variation, which is a method for clarifying the features of a thing without which it would not be what it is. Husserl concentrated more on the ideal, essential structures of consciousness. As he wanted to exclude any hypothesis on the existence of external objects, he introduced the method of phenomenological reduction to eliminate them. What was left over was the pure transcendental ego, as opposed to the concrete empirical ego. Transcendental phenomenology is the study of the essential structures that are left in pure consciousness: this amounts in practice to the study of the noemata and the relations among them.

Munich phenomenology
Some phenomenologists were critical of the new theories espoused in Ideas. Members of the Munich group, such as Max Scheler and Roman Ingarden, distanced themselves from Husserl's new transcendental phenomenology.
Their theoretical allegiance was to the earlier, realist phenomenology of the first edition of Logical Investigations.

Heidegger's conception
Martin Heidegger modified Husserl's conception of phenomenology because of what Heidegger perceived as Husserl's subjectivist tendencies. Whereas Husserl conceived humans as having been constituted by states of consciousness, Heidegger countered that consciousness is peripheral to the primacy of one's existence, for which he introduces Dasein as a technical term, which cannot be reduced to a mode of consciousness. From this angle, one's state of mind is an "effect" rather than a determinant of existence, including those aspects of existence of which one is not conscious. By shifting the center of gravity to existence in what he calls fundamental ontology, Heidegger altered the subsequent direction of phenomenology. According to Heidegger, philosophy was more fundamental than science itself. According to him, science is only one way of knowing the world with no special access to truth. Furthermore, the scientific mindset itself is built on a much more "primordial" foundation of practical, everyday knowledge. This emphasis on the fundamental status of a person's pre-cognitive, practical orientation in the world, sometimes called "know-how", would be adopted by both Sartre and Merleau-Ponty. While for Husserl, in the epoché, being appeared only as a correlate of consciousness, for Heidegger the pre-conscious grasp of being is the starting point. For this reason, he replaces Husserl's concept of intentionality with the notion of comportment, which is presented as "more primitive" than the "conceptually structured" acts analyzed by Husserl. Paradigmatic examples of comportment can be found in the unreflective dealing with equipment that presents itself as simply "ready-to-hand" in what Heidegger calls the normally circumspect mode of engagement within the world.
For Husserl, all concrete determinations of the empirical ego would have to be abstracted in order to attain pure consciousness. By contrast, Heidegger claims that "the possibilities and destinies of philosophy are bound up with man's existence, and thus with temporality and with historicality." For this reason, all experience must be seen as shaped by social context, which for Heidegger joins phenomenology with philosophical hermeneutics. Husserl charged Heidegger with raising the question of ontology but failing to answer it, instead switching the topic to Dasein. That is neither ontology nor phenomenology, according to Husserl, but merely abstract anthropology. While Being and Time and other early works are clearly engaged with Husserlian issues, Heidegger's later philosophy has little relation to the problems and methods of classical phenomenology.

Merleau-Ponty's conception
Maurice Merleau-Ponty develops his distinctive mode of phenomenology by drawing, in particular, upon Husserl's unpublished writings, Heidegger's analysis of being-in-the-world, Gestalt theory, and other contemporary psychology research. In his most famous work, The Phenomenology of Perception, Merleau-Ponty critiques empiricist and intellectualist accounts to chart a "third way" that avoids their metaphysical assumptions about an objective, pre-given world. The central contentions of this work are that the body is the locus of engagement with the world, and that the body's modes of engagement are more fundamental than what phenomenology describes as consequent acts of objectification. Merleau-Ponty reinterprets concepts like intentionality, the phenomenological reduction, and the eidetic method to capture our inherence in the perceived world, that is, our embodied coexistence with things through a kind of reciprocal exchange. According to Merleau-Ponty, perception discloses a meaningful world that can never be completely determined, but which nevertheless aims at truth.
Varieties
Some scholars have differentiated phenomenology into these seven types:
- Transcendental constitutive phenomenology studies how objects are constituted in transcendental consciousness, setting aside questions of any relation to the natural world.
- Naturalistic constitutive phenomenology studies how consciousness constitutes things in the world of nature, assuming with the natural attitude that consciousness is part of nature.
- Existential phenomenology studies concrete human existence, including human experience of free choice and/or action in concrete situations.
- Generative historicist phenomenology studies how meaning—as found in human experience—is generated in historical processes of collective experience over time.
- Genetic phenomenology studies the emergence (or genesis) of meanings of things within the stream of experience.
- Hermeneutical phenomenology (sometimes hermeneutic phenomenology) studies interpretive structures of experience. This approach was introduced in Martin Heidegger's early work.
- Realistic phenomenology (sometimes realist phenomenology) studies the structure of consciousness and intentionality as "it occurs in a real world that is largely external to consciousness and not somehow brought into being by consciousness."
The contrast between "constitutive phenomenology" (sometimes static phenomenology or descriptive phenomenology) and "genetic phenomenology" (sometimes phenomenology of genesis) is due to Husserl. Modern scholarship also recognizes the existence of the following varieties: late Heidegger's transcendental hermeneutic phenomenology, Maurice Merleau-Ponty's embodied phenomenology, Michel Henry's material phenomenology, Alva Noë's analytic phenomenology, and J. L. Austin's linguistic phenomenology.

Concepts
Intentionality
Intentionality refers to the notion that consciousness is always the consciousness of something.
The word itself should not be confused with the "ordinary" use of the word intentional, but should rather be taken as playing on the etymological roots of the word. Originally, intention referred to a "stretching out" ("in tension," from Latin intendere), and in this context it refers to consciousness "stretching out" towards its object. However, one should be careful with this image: there is not some consciousness first that, subsequently, stretches out to its object; rather, consciousness occurs as the simultaneity of a conscious act and its object. Intentionality is often summed up as "aboutness." Whether this something that consciousness is about is in direct perception or in fantasy is inconsequential to the concept of intentionality itself; whatever consciousness is directed at, that is what consciousness is conscious of. This means that the object of consciousness does not have to be a physical object apprehended in perception: it can just as well be a fantasy or a memory. Consequently, these "structures" of consciousness, such as perception, memory, fantasy, and so forth, are called intentionalities. The term "intentionality" originated with the Scholastics in the medieval period and was revived by Brentano, whose work in turn shaped Husserl's conception of phenomenology; Husserl refined the term and made it the cornerstone of his theory of consciousness. The meaning of the term is complex and depends entirely on how it is conceived by a given philosopher. The term should not be confused with "intention" or the psychoanalytic conception of unconscious "motive" or "gain". Significantly, "intentionality is not a relation, but rather an intrinsic feature of intentional acts." This is because there are no independent relata. It is (at least in the first place) a matter of indifference to the phenomenologist whether the intentional object has any existence independent of the act.
Intuition
Intuition in phenomenology refers to cases where the intentional object is directly present to the intentionality at play; if the intention is "filled" by the direct apprehension of the object, one has an intuited object. Having a cup of coffee in front of oneself, for instance, seeing it, feeling it, or even imagining it – these are all filled intentions, and the object is then intuited. The same goes for the apprehension of mathematical formulae or a number. If one does not have the object as referred to directly, the object is not intuited, but still intended, but then emptily. Examples of empty intentions can be signitive intentions – intentions that only imply or refer to their objects.

Evidence
In everyday language, the word evidence is used to signify a special sort of relation between a state of affairs and a proposition: State A is evidence for the proposition "A is true." In phenomenology, however, the concept of evidence is meant to signify the "subjective achievement of truth." This is not an attempt to reduce the objective sort of evidence to subjective "opinion," but rather an attempt to describe the structure of having something present in intuition with the addition of having it present as intelligible: "Evidence is the successful presentation of an intelligible object, the successful presentation of something whose truth becomes manifest in the evidencing itself." In Ideas, Husserl presents as the "Principle of All Principles" that, "every originary presentive intuition is a legitimizing source of cognition, that everything originally (so to speak, in its 'personal' actuality) offered to us in 'intuition' is to be accepted simply as what it is presented as being, but also only within the limits in which it is presented there." It is in this realm of phenomenological givenness, Husserl claims, that the search begins for "indubitable evidence that will ultimately serve as the foundation for every scientific discipline."
Noesis and noema
Franz Brentano introduced a distinction between sensory and noetic consciousness: the former describes presentations of sensory objects or intuitions, while the latter describes the thinking of concepts. In Husserl's phenomenology, this pair of terms, derived from the Greek nous (mind), designates respectively the real content, noesis, and the ideal content, noema, of an intentional act (an act of consciousness). The noesis is the part of the act that gives it a particular sense or character (as in judging or perceiving something, loving or hating it, accepting or rejecting it, etc.). This is real in the sense that it is actually part of what takes place in the consciousness of the subject of the act. The noesis is always correlated with a noema. For Husserl, the full noema is a complex ideal structure comprising at least a noematic sense and a noematic core. The correct interpretation of what Husserl meant by the noema has long been controversial, but the noematic sense is generally understood as the ideal meaning of the act. For instance, if A loves B, loving is a real part of A's conscious activity – noesis – but gets its sense from the general concept of loving, which has an abstract or ideal meaning, as "loving" has a meaning in the English language independently of what an individual means by the word when they use it. The noematic core is the act's referent or object as it is meant in the act. One element of controversy is whether this noematic object is the same as the actual object of the act (assuming it exists) or is some kind of ideal object.

Empathy and intersubjectivity
In phenomenology, empathy refers to the experience of one's own body as another. While people often identify others with their physical bodies, this type of phenomenology requires that they focus on the subjectivity of the other, as well as the intersubjective engagement with them.
In Husserl's original account, this was done by a sort of apperception built on the experiences of one's own lived body. The lived body is one's own body as experienced by oneself, as oneself. One's own body manifests itself mainly as one's possibilities of acting in the world. It is what lets oneself reach out and grab something, for instance, but it also, and more importantly, allows for the possibility of changing one's point of view. This helps to differentiate one thing from another by the experience of moving around it, seeing new aspects of it (often referred to as making the absent present and the present absent), and still retaining the notion that this is the same thing that one saw other aspects of just a moment ago (it is identical). One's body is also experienced as a duality, both as object (one's ability to touch one's own hand) and as one's own subjectivity (one's experience of being touched). The experience of one's own body as one's own subjectivity is then applied to the experience of another's body, which, through apperception, is constituted as another subjectivity. One can thus recognise the Other's intentions, emotions, etc. This experience of empathy is important in the phenomenological account of intersubjectivity. In phenomenology, intersubjectivity constitutes objectivity (i.e., what one experiences as objective is experienced as being intersubjectively available – available to all other subjects. This does not imply that objectivity is reduced to subjectivity nor does it imply a relativist position, cf. for instance intersubjective verifiability). In the experience of intersubjectivity, one also experiences oneself as being a subject among other subjects, and one experiences oneself as existing objectively for these Others; one experiences oneself as the noema of Others' noeses, or as a subject in another's empathic experience. As such, one experiences oneself as objectively existing subjectivity. 
Intersubjectivity also plays a part in the constitution of one's lifeworld, especially as "homeworld."

Lifeworld
The lifeworld (German: Lebenswelt) is the "world" each one of us lives in. One could call it the "background" or "horizon" of all experience, and it is that against which each object stands out as itself (as different) and with the meaning it can only hold for us. According to Husserl, the lifeworld is both personal and intersubjective (it is then called a "homeworld"), and, as such, it avoids the threat of solipsism.

Phenomenology and empirical science
The phenomenological analysis of objects is notably different from traditional science. However, several frameworks do phenomenology with an empirical orientation or aim to unite it with the natural sciences or with cognitive science. For a classical critical point of view, Daniel Dennett argues that phenomenology is wholesale useless, treating phenomena as qualia, which either cannot be the object of scientific research or do not exist in the first place. Liliana Albertazzi counters such arguments by pointing out that empirical research on phenomena has been successfully carried out employing modern methodology. Human experience can be investigated through surveys and brain-scanning techniques. For example, ample research on color perception suggests that people with normal color vision see colors similarly and not each in their own way. Thus, it is possible to universalize phenomena of subjective experience on an empirical scientific basis. In the early twenty-first century, phenomenology has increasingly engaged with cognitive science and philosophy of mind. Some approaches to the naturalization of phenomenology reduce consciousness to the physical-neuronal level and are therefore not widely acknowledged as representing phenomenology. These include the frameworks of neurophenomenology, embodied constructivism, and the cognitive neuroscience of phenomenology.
Other likewise controversial approaches aim to explain life-world experience on a sociological or anthropological basis despite phenomenology being mostly considered descriptive rather than explanatory.
Interdisciplinarity
Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several fields such as sociology, anthropology, psychology, and economics. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station, mobile phone, or other project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings. The term interdisciplinary is applied within education and training pedagogies to describe studies that use methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming, for example, requires an understanding of diverse disciplines to solve complex problems. Interdisciplinarity may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.
The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. Interdisciplinary education fosters cognitive flexibility and prepares students to tackle complex, real-world problems by integrating knowledge from multiple fields. This approach emphasizes active learning, critical thinking, and problem-solving skills, equipping students with the adaptability needed in an increasingly interconnected world. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics.

Development
Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth-century terms, the concept has historical antecedents, most notably in Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, material science, logistics, and several other disciplines. Any broadminded humanist project involves interdisciplinarity, and history shows many cases, such as Leibniz's seventeenth-century project to create a system of universal justice, which required linguistics, economics, management, ethics, legal philosophy, politics, and even sinology.
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, combining molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres; often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies. At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study—that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. 
When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen as standing in a complementary relation to one another.

Barriers
Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differences of perspective and method. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty in grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from those disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; as a result, interdisciplinary researchers may experience difficulty getting funding for their research. In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure. Interdisciplinary programs may also fail if they are not given sufficient autonomy.
For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as a competition for diminishing funds. Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". 
On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as organizational and social entities concerned with education, in practice they face complex barriers, serious challenges, and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles. Interdisciplinary studies and studies of interdisciplinarity An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), and two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009). The US's research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but has been closed since 1 September 2014 as a result of administrative decisions at the University of North Texas. An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies).
More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge. In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, with the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'. Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis—that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions. 
While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works—and does not—in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account. Politics of interdisciplinary studies Since 1998, there has been an ascendancy in the value of interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005, according to data from the National Center for Education Statistics (NCES). In addition, educational leaders from the Boyer Commission to Carnegie's President Vartan Gregorian to Alan I. Leshner, CEO of the American Association for the Advancement of Science, have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than single-researcher, single-discipline ones. At the same time, many thriving, longstanding bachelor's programs in interdisciplinary studies, some in existence for 30 or more years, have been closed down in spite of healthy enrollment.
Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others, such as the Department of Interdisciplinary Studies at Appalachian State University and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry. This seems to be due to the perception that ascendant interdisciplinary studies threaten traditional academia. Examples Communication science: Communication studies takes up theories, models, and concepts from other, independent disciplines such as sociology, political science, and economics, and thus decisively develops them. Environmental science: Environmental science is an interdisciplinary earth science aimed at addressing environmental issues such as global warming and pollution, and involves the use of a wide range of scientific disciplines including geology, chemistry, physics, ecology, and oceanography. Faculty members of environmental programs often collaborate in interdisciplinary teams to solve complex global environmental problems. Those who study areas of environmental policy such as environmental law, sustainability, and environmental justice may also seek knowledge in the environmental sciences to better develop their expertise and understanding in their fields. Knowledge management: The knowledge management discipline exists as a cluster of divergent schools of thought under an overarching knowledge management umbrella, building on work in computer science, economics, human resource management, information systems, organizational behavior, philosophy, psychology, and strategic management.
Liberal arts education: A select realm of disciplines that cut across the humanities, social sciences, and hard sciences, initially intended to provide a well-rounded education. Several graduate programs exist in some form of Master of Arts in Liberal Studies to continue to offer this interdisciplinary course of study. Materials science: Field that combines the scientific and engineering aspects of materials, particularly solids. It covers the design, discovery, and application of new materials by incorporating elements of physics, chemistry, and engineering. Permaculture: A holistic design science that provides a framework for making design decisions in any sphere of human endeavor, but especially in land use and resource security. Provenance research: Interdisciplinary research comes into play when clarifying the path of artworks into public and private art collections, and also in relation to human remains in natural history collections. Sports science: Sport science is an interdisciplinary science that researches the problems and manifestations in the field of sport and movement in cooperation with a number of other sciences, such as sociology, ethics, biology, medicine, biomechanics, or pedagogy. Transport sciences: Transport sciences deal with the relevant problems and events of the world of transport, cooperating with specialised legal, ecological, technical, psychological, or pedagogical disciplines to study the movements of people, goods, and messages that characterise it.<ref>Hendrik Ammoser, Mirko Hoppe: Glossary of Transport and Transport Sciences (PDF; 1.3 MB), published in the series Discussion Papers from the Institute of Economics and Transport, Technische Universität Dresden. Dresden 2006.</ref> Venture research: Venture research is an interdisciplinary research area located in the human sciences that deals with the conscious entering into and experiencing of borderline situations.
For this purpose, the findings of evolutionary theory, cultural anthropology, social sciences, behavioral research, differential psychology, ethics, and pedagogy are cooperatively processed and evaluated. (Siegbert A. Warwitz: Vom Sinn des Wagens [On the Meaning of Venturing]: Why People Take on Dangerous Challenges. In: German Alpine Association (ed.): Berg 2006. Tyrolia Publishing House, Munich-Innsbruck-Bolzano, pp. 96–111.) Historical examples There are many examples of a particular idea arising, at almost the same time, in different disciplines. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective) to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in painting (with cubism), physics, poetry, communication, and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to the era shaped by the instant speed of electricity, which brought simultaneity. Efforts to simplify and defend the concept An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity: In turn, the interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: the number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration. Interdisciplinary knowledge and research are important because: "Creativity often requires interdisciplinary knowledge. Immigrants often make important contributions to their new field.
Disciplinarians often commit errors which can be best detected by people familiar with two or more disciplines. Some worthwhile topics of research fall in the interstices among the traditional disciplines. Many intellectual, social, and practical problems require interdisciplinary approaches. Interdisciplinary knowledge and research serve to remind us of the unity-of-knowledge ideal. Interdisciplinarians enjoy greater flexibility in their research. More so than narrow disciplinarians, interdisciplinarians often treat themselves to the intellectual equivalent of traveling in new lands. Interdisciplinarians may help breach communication gaps in the modern academy, thereby helping to mobilize its enormous intellectual resources in the cause of greater social rationality and justice. By bridging fragmented disciplines, interdisciplinarians might play a role in the defense of academic freedom." Quotations See also Commensurability (philosophy of science) Double degree Encyclopedism Holism Holism in science Integrative learning Interdiscipline Interdisciplinary arts Interdisciplinary teaching Interprofessional education Meta-functional expertise Methodology Polymath Science of team science Social ecological model Science and technology studies (STS) Synoptic philosophy Systems theory Thematic learning Periodic table of human sciences in Tinbergen's four questions Transdisciplinarity References Further reading Association for Interdisciplinary Studies Center for the Study of Interdisciplinarity Centre for Interdisciplinary Research in the Arts (University of Manchester) College for Interdisciplinary Studies, University of British Columbia, Vancouver, British Columbia, Canada Frank, Roberta: "'Interdisciplinarity': The First Half Century", Issues in Integrative Studies 6 (1988): 139–151. Frodeman, R., Klein, J.T., and Mitcham, C. Oxford Handbook of Interdisciplinarity. Oxford University Press, 2010.
The Evergreen State College, Olympia, Washington Gram Vikas (2007) Annual Report, p. 19. Hang Seng Centre for Cognitive Studies Indiresan, P.V. (1990) Managing Development: Decentralisation, Geographical Socialism And Urban Replication. India: Sage Interdisciplinary Arts Department, Columbia College Chicago Interdisciplinarity and tenure Interdisciplinary Studies Project, Harvard University School of Education, Project Zero Klein, Julie Thompson (1996) Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities (University Press of Virginia) Klein, Julie Thompson (2006) "Resources for interdisciplinary studies." Change, (March/April). 52–58 Klein, Julie Thompson and Thorsten Philipp (2023), "Interdisciplinarity" in Handbook Transdisciplinary Learning. Eds. Thorsten Philipp and Tobias Schmohl, 195–204. Bielefeld: transcript. doi: 10.14361/9783839463475-021. Kockelmans, Joseph J., editor (1979) Interdisciplinarity and Higher Education, The Pennsylvania State University Press. Yifang Ma, Roberta Sinatra, Michael Szell, Interdisciplinarity: A Nobel Opportunity, November 2018 Gerhard Medicus: Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB, 2017 Moran, Joe. (2002). Interdisciplinarity. Morson, Gary Saul and Morton O. Schapiro (2017). Cents and Sensibility: What Economics Can Learn from the Humanities. (Princeton University Press) NYU Gallatin School of Individualized Study, New York, NY Poverty Action Lab Rhoten, D. (2003). A multi-method analysis of the social and technical conditions for interdisciplinary collaboration. School of Social Ecology at the University of California, Irvine Siskin, L.S. & Little, J.W. (1995). The Subjects in Question. Teachers College Press. About the departmental organization of high schools and efforts to change that. Stiglitz, Joseph (2002) Globalisation and its Discontents, United States of America, W.W. Norton and Company Sumner, A and M.
Tribe (2008) International Development Studies: Theories and Methods in Research and Practice, London: Sage Thorbecke, Eric. (2006) "The Evolution of the Development Doctrine, 1950–2005". UNU-WIDER Research Paper No. 2006/155. United Nations University, World Institute for Development Economics Research Trans- & inter-disciplinary science approaches – A guide to on-line resources on integration and trans- and inter-disciplinary approaches. Truman State University's Interdisciplinary Studies Program Peter Weingart and Nico Stehr, eds. 2000. Practicing Interdisciplinarity (University of Toronto Press) External links Association for Interdisciplinary Studies National Science Foundation Workshop Report: Interdisciplinary Collaboration in Innovative Science and Engineering Fields Rethinking Interdisciplinarity online conference, organized by the Institut Nicod, CNRS, Paris Center for the Study of Interdisciplinarity at the University of North Texas Labyrinthe. Atelier interdisciplinaire, a journal (in French), with a special issue on La Fin des Disciplines? Rupkatha Journal on Interdisciplinary Studies in Humanities: An Online Open Access E-Journal, publishing articles on a number of areas Article about interdisciplinary modeling (in French with an English abstract) Wolf, Dieter. Unity of Knowledge, an interdisciplinary project Soka University of America has no disciplinary departments and emphasizes interdisciplinary concentrations in the Humanities, Social and Behavioral Sciences, International Studies, and Environmental Studies. SystemsX.ch – The Swiss Initiative in Systems Biology Tackling Your Inner 5-Year-Old: Saving the world requires an interdisciplinary perspective Academia Academic discipline interactions Knowledge Occupations Pedagogy Philosophy of education
Existence
Existence is the state of having being or reality in contrast to nonexistence and nonbeing. Existence is often contrasted with essence: the essence of an entity is its essential features or qualities, which can be understood even if one does not know whether the entity exists. Ontology is the philosophical discipline studying the nature and types of existence. Singular existence is the existence of individual entities while general existence refers to the existence of concepts or universals. Entities present in space and time have concrete existence in contrast to abstract entities, like numbers and sets. Other distinctions are between possible, contingent, and necessary existence and between physical and mental existence. The common view is that an entity either exists or does not, with nothing in between, but some philosophers say that there are degrees of existence, meaning that some entities exist to a higher degree than others. The orthodox position in ontology is that existence is a second-order property or a property of properties. For example, to say that lions exist means that the property of being a lion is possessed by an entity. A different view states that existence is a first-order property or a property of individuals. This means existence is similar to other properties of individuals, like color and shape. Alexius Meinong and his followers accept this idea and say that not all individuals have this property; they state that there are some individuals, such as Santa Claus, that do not exist. Universalists reject this view; they see existence as a universal property of every individual. The concept of existence has been discussed throughout the history of philosophy and already played a role in ancient philosophy, including Presocratic philosophy in Ancient Greece, Hindu and Buddhist philosophy in Ancient India, and Daoist philosophy in ancient China.
It is relevant to fields such as logic, mathematics, epistemology, philosophy of mind, philosophy of language, and existentialism. Definition and related terms Dictionaries define existence as the state of being real and to exist as having being or participating in reality. Existence sets real entities apart from imaginary ones, and can refer either to individual entities or to the totality of reality. The word "existence" entered the English language in the late 14th century from Old French and has its roots in the medieval Latin term existere, which means "to stand forth", "to appear", and "to arise". Existence is studied by the subdiscipline of metaphysics known as ontology. The terms "being", "reality", and "actuality" are often used as synonyms of "existence", but the exact definition of existence and its connection to these terms is disputed. According to metaphysician Alexius Meinong (1853–1920), all entities have being but not all entities have existence. He argues merely possible objects like Santa Claus have being but lack existence. Ontologist Takashi Yagisawa (20th century–present) contrasts existence with reality; he sees "reality" as the more-fundamental term because it equally characterizes all entities and defines existence as a relative term that connects an entity to the world it inhabits. According to philosopher Gottlob Frege (1848–1925), actuality is narrower than existence because only actual entities can produce and undergo changes, in contrast to non-actual existing entities like numbers and sets. According to some philosophers, like Edmund Husserl (1859–1938), existence is an elementary concept, meaning it cannot be defined in other terms without involving circularity. This would imply characterizing existence or talking about its nature in a non-trivial manner may be difficult or impossible. Disputes about the nature of existence are reflected in the distinction between thin and thick concepts of existence.
Thin concepts of existence understand existence as a logical property that every existing thing shares; they do not include any substantial content about the metaphysical implications of having existence. According to one view, existence is the same as the logical property of self-identity. This view articulates a thin concept of existence because it merely states what exists is identical to itself without discussing any substantial characteristics of the nature of existence. Thick concepts of existence encompass a metaphysical analysis of what it means that something exists and what essential features existence implies. According to one proposal, to exist is to be present in space and time, and to have effects on other things. This definition is controversial because it implies abstract objects such as numbers do not exist. Philosopher George Berkeley (1685–1753) gave a different thick concept of existence; he stated: "to be is to be perceived", meaning all existence is mental. Existence contrasts with nonexistence, a lack of reality. Whether objects can be divided into existent and nonexistent objects is a subject of controversy. This distinction is sometimes used to explain how it is possible to think of fictional objects like dragons and unicorns but the concept of nonexistent objects is not generally accepted; some philosophers say the concept is contradictory. Closely related contrasting terms are nothingness and nonbeing. Existence is commonly associated with mind-independent reality but this position is not universally accepted because there could also be forms of mind-dependent existence, such as the existence of an idea inside a person's mind. According to some idealists, this may apply to all of reality. Another contrast is made between existence and essence. Essence refers to the intrinsic nature or defining qualities of an entity. The essence of something determines what kind of entity it is and how it differs from other kinds of entities. 
Essence corresponds to what an entity is, while existence corresponds to the fact that it is. For instance, it is possible to understand what an object is and grasp its nature even if one does not know whether this object exists. According to some philosophers, there is a difference between entities and the fundamental characteristics that make them the entities they are. Martin Heidegger (1889–1976) introduced this concept; he calls it the ontological difference and contrasts individual beings with being. According to his response to the question of being, being is not an entity but the background context that makes all individual entities intelligible. Types of existing entities Many discussions of the types of existing entities revolve around the definitions of different types, the existence or nonexistence of entities of a specific type, the way entities of different types are related to each other, and whether some types are more fundamental than others. Examples are the existence or nonexistence of souls; whether there are abstract, fictional, and universal entities; and the existence or nonexistence of possible worlds and objects besides the actual world. These discussions cover the topics of the basic stuff or constituents underlying all reality and the most general features of entities. Singular and general There is a distinction between singular existence and general existence. Singular existence is the existence of individual entities. For example, the sentence "Angela Merkel exists" expresses the existence of one particular person. General existence pertains to general concepts, properties, or universals. For instance, the sentence "politicians exist" states the general term "politician" has instances without referring to a particular politician. Singular and general existence are closely related to each other, and some philosophers have tried to explain one as a special case of the other. 
For example, according to Frege, general existence is more basic than singular existence. One argument in favor of this position is that singular existence can be expressed in terms of general existence. For instance, the sentence "Angela Merkel exists" can be expressed as "entities that are identical to Angela Merkel exist", where the expression "being identical to Angela Merkel" is understood as a general term. Philosopher Willard Van Orman Quine (1908–2000) defends a different position by giving primacy to singular existence and arguing that general existence can be expressed in terms of singular existence. A related question is whether there can be general existence without singular existence. According to philosopher Henry S. Leonard (1905–1967), a property only has general existence if there is at least one actual object that instantiates it. Philosopher Nicholas Rescher (1928–2024), by contrast, states that properties can exist if they have no actual instances, like the property of "being a unicorn". This question has a long philosophical tradition in relation to the existence of universals. According to Platonists, universals have general existence as Platonic forms independently of the particulars that exemplify them. According to this view, the universal of redness exists independently of the existence or nonexistence of red objects. Aristotelianism also accepts the existence of universals but says their existence depends on particulars that instantiate them and that they are unable to exist by themselves. According to this view, a universal that is not present in the space and time does not exist. According to nominalists, only particulars have existence and universals do not exist. Concrete and abstract There is an influential distinction in ontology between concrete and abstract objects. Many concrete objects, like rocks, plants, and other people, are encountered in everyday life. They exist in space and time. 
They have effects on each other, like when a rock falls on a plant and damages it, or a plant grows through rock and breaks it. Abstract objects, like numbers, sets, and types, have no location in space and time, and lack causal powers. The distinction between concrete objects and abstract objects is sometimes treated as the most-general division of being. The existence of concrete objects is widely agreed upon but opinions about abstract objects are divided. Realists such as Plato accept the idea that abstract objects have independent existence. Some realists say abstract objects have the same mode of existence as concrete objects; according to others, they exist in a different way. Anti-realists state that abstract objects do not exist, a view that is often combined with the idea that existence requires a location in space and time or the ability to causally interact. Possible, contingent, and necessary A further distinction is between merely possible, contingent, and necessary existence. An entity has necessary existence if it must exist or could not fail to exist. This means that it is not possible to newly create or destroy necessary entities. Entities that exist but could fail to exist are contingent; merely possible entities do not exist but could exist. Most entities encountered in ordinary experience, like telephones, sticks, and flowers, have contingent existence. The contingent existence of telephones is reflected in the fact that they exist in the present but did not exist in the past, meaning that it is not necessary that they exist. It is an open question whether any entities have necessary existence. According to some nominalists, all concrete objects have contingent existence while all abstract objects have necessary existence. According to some theorists, one or several necessary beings are required as the explanatory foundation of the cosmos. 
For instance, the philosophers Avicenna (980–1037) and Thomas Aquinas (1225–1274) say that God has necessary existence. A few philosophers, like Baruch Spinoza (1632–1677), see God and the world as the same thing, and say that all entities have necessary existence to provide a unified and rational explanation of everything. There are many academic debates about the existence of merely possible objects. According to actualism, only actual entities have being; this includes both contingent and necessary entities but excludes merely possible entities. Possibilists reject this view and state there are also merely possible objects besides actual objects. For example, metaphysician David Lewis (1941–2001) states that possible objects exist in the same way as actual objects so as to provide a robust explanation of why statements about what is possible and necessary are true. According to him, possible objects exist in possible worlds while actual objects exist in the actual world. Lewis says the only difference between possible worlds and the actual world is the location of the speaker; the term "actual" refers to the world of the speaker, similar to the way the terms "here" and "now" refer to the spatial and temporal location of the speaker. The problem of contingent and necessary existence is closely related to the ontological question of why there is anything at all, or why there is something rather than nothing. According to one view, the existence of something is a contingent fact, meaning the world could have been totally empty. This is not possible if there are necessary entities, which could not have failed to exist. In this case, global nothingness is impossible because the world needs to contain at least all necessary entities. Physical and mental Entities that exist on a physical level include objects encountered in everyday life, like stones, trees, and human bodies, as well as entities discussed in modern physics, like electrons and protons.
Physical entities can be observed and measured; they possess mass and a location in space and time. Mental entities like perceptions, experiences of pleasure and pain as well as beliefs, desires, and emotions belong to the realm of the mind; they are primarily associated with conscious experiences but also include unconscious states like unconscious beliefs, desires, and memories. The mind–body problem concerns the ontological status of and relation between physical and mental entities and is a frequent topic in metaphysics and philosophy of mind. According to materialists, only physical entities exist on the most-fundamental level. Materialists usually explain mental entities in terms of physical processes; for example, as brain states or as patterns of neural activation. Idealism, a minority view in contemporary philosophy, rejects matter as ultimate and views the mind as the most basic reality. Dualists like René Descartes (1596–1650) believe both physical and mental entities exist on the most-fundamental level. They state they are connected to one another in several ways but that one cannot be reduced to the other. Other types Fictional entities are entities that exist as inventions inside works of fiction. For example, Sherlock Holmes is a fictional character in Arthur Conan Doyle's book A Study in Scarlet and flying carpets are fictional objects in the folktales One Thousand and One Nights. According to anti-realism, fictional entities do not form part of reality in any substantive sense. Possibilists, by contrast, see fictional entities as a subclass of possible objects; creationists say that they are artifacts that depend for their existence on the authors who first conceived them. Intentional inexistence is a similar phenomenon concerned with the existence of objects within mental states. This happens when a person perceives or thinks about an object. 
In some cases, the intentional object corresponds to a real object outside the mental state, like when accurately perceiving a tree in the garden. In other cases, the intentional object does not have a real counterpart, like when thinking about Bigfoot. The problem of intentional inexistence is the challenge of explaining how one can think about entities that do not exist since this seems to have the paradoxical implication that the thinker stands in a relation to a non-existing object. Modes and degrees of existence Closely related to the problem of different types of entities is the question of whether their modes of existence also vary. This is the case according to ontological pluralism, which states entities belonging to different types differ in both their essential features and in the ways they exist. This position is sometimes found in theology; it states God is radically different from his creation and emphasizes his uniqueness by saying the difference affects both God's features and God's mode of existence. Another form of ontological pluralism distinguishes the existence of material objects from the existence of space-time. According to this view, material objects have relative existence because they exist in space-time; the existence of space-time itself is not relative in this sense because it just exists without existing within another space-time. The topic of degrees of existence is closely related to the problem of modes of existence. This topic is based on the idea that some entities exist to a higher degree or have more being than other entities, similar to the way some properties, such as heat and mass, have degrees. According to philosopher Plato (428/427–348/347 BCE), for example, unchangeable Platonic forms have a higher degree of existence than physical objects. 
The view that there are different types of entities is common in metaphysics but the idea that they differ from each other in their modes or degrees of existence is often rejected, implying that a thing either exists or does not exist without in-between alternatives. Metaphysician Peter van Inwagen (born 1942) uses the idea that there is an intimate relationship between existence and quantification to argue against different modes of existence. Quantification is related to the counting of objects; according to van Inwagen, if entities had different modes of existence, people would need different types of numbers to count them. Because the same numbers can be used to count different types of entities, he concludes all entities have the same mode of existence. Theories of the nature of existence Theories of the nature of existence aim to explain what it means for something to exist. A central dispute in the academic discourse about the nature of existence is whether existence is a property of individuals. An individual is a unique entity, like Socrates or a particular apple. A property is something that is attributed to an entity, like "being human" or "being red", and usually expresses a quality or feature of that entity. The two main theories of existence are first-order and second-order theories. First-order theories understand existence as a property of individuals while second-order theories say existence is a second-order property, that is, a property of properties. A central challenge for theories of the nature of existence is an understanding of the possibility of coherently denying the existence of something, like the statement: "Santa Claus does not exist". One difficulty is explaining how the name "Santa Claus" can be meaningful even though there is no Santa Claus. Second-order theories Second-order theories understand existence as a second-order property rather than a first-order property. They are often seen as the orthodox position in ontology. 
For instance, the Empire State Building is an individual object and "being 443.2 meters tall" is a first-order property of it. "Being instantiated" is a property of the property "being 443.2 meters tall" and therefore a second-order property. According to second-order theories, to talk about existence is to talk about which properties have instances. For example, this view says that the sentence "God exists" means "Godhood is instantiated" rather than "God has the property of existing". A key reason against characterizing existence as a property of individuals is that existence differs from regular properties. Regular properties, such as being a building and being 443.2 meters tall, express what an object is like but do not directly describe whether or not that object exists. According to this view, existence is more fundamental than regular properties because an object cannot have any properties if it does not exist. According to second-order theorists, quantifiers rather than predicates express existence. Predicates are expressions that apply to and classify objects, usually by attributing features to them, such as "is a butterfly" and "is happy". Quantifiers are terms that talk about the quantity of objects that have certain properties. Existential quantifiers express that there is at least one object, like the expressions "some" and "there exists", as in "some cows eat grass" and "there exists an even prime number". In this regard, existence is closely related to counting because to assert that something exists is to assert that the corresponding concept has one or more instances. Second-order views imply a sentence like "egg-laying mammals exist" is misleading because the word "exist" is used as a predicate in it. These views say the true logical form is better expressed in reformulations like "there exist entities that are egg-laying mammals". This way, "existence" has the role of a quantifier and "egg-laying mammals" is the predicate. 
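The quantifier reading of existence described above can be sketched with a toy finite model. The domain and predicate below are invented for illustration: asserting that egg-laying mammals exist amounts to checking that the predicate has at least one instance in the domain, which also brings out the stated link between existence and counting.

```python
# Toy model of quantification over a finite domain (all names here are
# hypothetical choices for the example, not part of any formal theory).
domain = ["platypus", "echidna", "cow", "sparrow"]

def is_egg_laying_mammal(x):
    # The predicate classifies objects in the domain.
    return x in {"platypus", "echidna"}

# "Egg-laying mammals exist" as a quantified claim: the predicate
# has at least one instance, analogous to an existential quantifier.
exists = any(is_egg_laying_mammal(x) for x in domain)

# The link to counting: the concept is instantiated exactly when
# its number of instances is one or more.
count = sum(1 for x in domain if is_egg_laying_mammal(x))

print(exists, count)  # → True 2
```

On this sketch, "exist" never appears as a predicate of individual objects; it is expressed entirely by the quantifier-like check over the domain, which is the reformulation the second-order view recommends.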
Quantifier constructions can also be used to express negative existential statements; for instance, the sentence "talking tigers do not exist" can be expressed as "it is not the case that there exist talking tigers". Many ontologists accept that second-order theories provide a correct analysis of many types of existential sentences. It is, however, controversial whether this analysis is correct for all cases. Some problems relate to assumptions associated with everyday language about sentences like "Ronald McDonald does not exist". This type of statement is called a negative singular existential, and the expression "Ronald McDonald" is a singular term that seems to refer to an individual. It is not clear how the expression can refer to an individual if, as the sentence asserts, this individual does not exist. According to a solution philosopher Bertrand Russell (1872–1970) proposed, singular terms do not refer to individuals but are descriptions of individuals. This theory states negative singular existentials deny an object matching the descriptions exists without referring to a non-existent individual. Following this approach, the sentence "Ronald McDonald does not exist" expresses the idea: "it is not the case there is a unique happy hamburger clown". First-order theories According to first-order theories, existence is a property of individuals. These theories are less-widely accepted than second-order theories but also have some influential proponents. There are two types of first-order theories: Meinongianism and universalism. Meinongianism Meinongianism, which describes existence as a property of some but not all entities, was first formulated by Alexius Meinong. Its main assertion is that there are some entities that do not exist, meaning objecthood is independent of existence. Proposed examples of nonexistent objects are merely possible objects such as flying pigs, as well as fictional and mythical objects like Sherlock Holmes and Zeus. 
According to this view, these objects are real and have being, even though they do not exist. Meinong states there is an object for any combination of properties. For example, there is an object that only has the single property of "being a singer" with no other properties. This means neither the attribute of "wearing a dress" nor the absence of it applies to this object. Meinong also includes impossible objects like round squares in this classification. According to Meinongians, sentences describing Sherlock Holmes and Zeus refer to nonexisting objects. They are true or false depending on whether these objects have the properties ascribed to them. For instance, the sentence "Pegasus has wings" is true because having wings is a property of Pegasus, even though Pegasus lacks the property of existing. One key motivation of Meinongianism is to explain how negative singular existentials like "Ronald McDonald does not exist" can be true. Meinongians accept the idea that singular terms like "Ronald McDonald" refer to individuals. For them, a negative singular existential is true if the individual it refers to does not exist. Meinongianism has important implications for understandings of quantification. According to an influential view defended by Willard Van Orman Quine, the domain of quantification is restricted to existing objects. This view implies quantifiers carry ontological commitments about what exists and what does not exist. Meinongianism differs from this view by saying the widest domain of quantification includes both existing and nonexisting objects. Some aspects of Meinongianism are controversial and have received substantial criticism. According to one objection, one cannot distinguish between being an object and being an existing object. A closely related criticism states objects cannot have properties if they do not exist. 
A further objection is that Meinongianism leads to an "overpopulated universe" because there is an object corresponding to any combination of properties. A more specific criticism rejects the idea that there are incomplete and impossible objects. Universalism Universalists agree with Meinongians that existence is a property of individuals but deny there are nonexistent entities. Instead, universalists state existence is a universal property; all entities have it, meaning everything exists. One approach is to say existence is the same as self-identity. According to the law of identity, every object is identical to itself or has the property of self-identity. This can be expressed in predicate logic as ∀x (x = x). An influential argument in favor of universalism is that the denial of the existence of something is contradictory. This conclusion follows from the premises that one can only deny the existence of something by referring to that entity and that one can only refer to entities that exist. Universalists have proposed different ways of interpreting negative singular existentials. According to one view, names of fictional entities like "Ronald McDonald" refer to abstract objects, which exist even though they do not exist in space and time. This means, when understood in a strict sense, all negative singular existentials are false, including the assertion that "Ronald McDonald does not exist". Universalists can interpret such sentences slightly differently in relation to the context. In everyday life, for example, people use sentences like "Ronald McDonald does not exist" to express the idea that Ronald McDonald does not exist as a concrete object, which is true. Another approach is to understand negative singular existentials as neither true nor false but meaningless because their singular terms do not refer to anything. 
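The positions surveyed above can be contrasted in standard logical notation. The predicate letters and the existence predicate E! below are conventional illustrative choices, not drawn from any one author's formalism.

```latex
% Second-order reading: "Fs exist" asserts that the property F is
% instantiated, expressed by a quantifier rather than a predicate:
\exists x\, F(x)

% Meinongian reading: quantifiers range over existing and nonexisting
% objects alike; a primitive existence predicate E! applies only to
% some of them, so a true negative singular existential has the form:
\neg E!\,a

% Universalist reading: existence is a property every object has,
% on one approach identified with self-identity:
\forall x\, (x = x)
```

On the second-order view the singular term disappears under analysis; on the Meinongian view it refers to a nonexistent object; on the universalist view every object satisfies the existence predicate, so strict negative singular existentials come out false.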
History Western philosophy Western philosophy originated with the Presocratic philosophers, who aimed to replace earlier mythological accounts of the universe by providing rational explanations based on foundational principles of all existence. Some, like Thales (c. 624–545 BCE) and Heraclitus (c. 540–480 BCE), suggested concrete principles like water and fire are the root of existence. Anaximander (c. 610–545 BCE) opposed this position; he believed the source must lie in an abstract principle that is beyond the world of human perception. Plato (428/427–348/347 BCE) argued that different types of entities have different degrees of existence and that shadows and images exist in a weaker sense than regular material objects. He said unchangeable Platonic forms have the highest type of existence, and saw material objects as imperfect and impermanent copies of Platonic forms. Philosopher Aristotle (384–322 BCE) accepted Plato's idea that forms are different from matter, but he challenged the idea that forms have a higher type of existence. Instead, he believed forms cannot exist without matter. He stated: "being is said in many ways" and explored how different types of entities have different modes of existence. For example, he distinguished between substances and their accidents, and between potentiality and actuality. Neoplatonists like Plotinus (204–270 CE) suggested reality has a hierarchical structure. They believed a transcendent entity, called "the One" or "the Good", is responsible for all existence. From it emerges the intellect, which in turn gives rise to the soul and the material world. In medieval philosophy, Anselm of Canterbury (1033–1109 CE) formulated the influential ontological argument, which aims to deduce the existence of God from the concept of God. Anselm defined God as the greatest conceivable being. 
He reasoned that an entity that did not exist outside his mind would not be the greatest conceivable being, leading him to the conclusion God exists. Thomas Aquinas (1225–1274 CE) distinguished between the essence of a thing and its existence. According to him, the essence of a thing constitutes its fundamental nature. He argued it is possible to understand what an object is and grasp its essence, even if one does not know whether the object exists. He concluded from this observation that existence is not part of the qualities of an object and should be understood as a separate property. Aquinas also considered the problem of creation from nothing and said only God has the power to truly bring new entities into existence. These ideas later inspired metaphysician Gottfried Wilhelm Leibniz's (1646–1716) theory of creation; Leibniz said to create is to confer actual existence to possible objects. The philosophers David Hume (1711–1776) and Immanuel Kant (1724–1804) rejected the idea that existence is a property. According to Hume, objects are bundles of qualities. He said existence is not a property because there is no impression of existence besides the bundled qualities. Kant came to a similar conclusion in his criticism of the ontological argument; according to him, this proof fails because one cannot deduce from the definition of a concept whether entities described by this concept exist. Kant said existence does not add anything to the concept of the object; it only indicates this concept is exemplified. According to philosopher Georg Wilhelm Friedrich Hegel (1770–1831), there is no pure being or pure nothing, only becoming. Philosopher and psychologist Franz Brentano (1838–1917) agreed with Kant's criticism and his position that existence is not a real predicate. Brentano used this idea to develop his theory of judgments, which states all judgments are existential judgments; they either affirm or deny the existence of something. 
He stated judgments like "some zebras are striped" have the logical form "there is a striped zebra" while judgments like "all zebras are striped" have the logical form "there is not a non-striped zebra". Gottlob Frege (1848–1925) and Bertrand Russell (1872–1970) aimed to refine the idea of what it means that existence is not a regular property. They distinguished between regular first-order properties of individuals and second-order properties of other properties. According to their view, existence is the second-order property of "being instantiated". Russell further developed the idea that general sentences like "lions exist" are, in their most fundamental form, statements about individuals: they state that there is an individual that is a lion. Willard Van Orman Quine (1908–2000) followed Frege and Russell in accepting existence as a second-order property. He drew a close link between existence and the role of quantification in formal logic. He applied this idea to scientific theories and stated a scientific theory is committed to the existence of an entity if the theory quantifies over this entity. For example, if a theory in biology asserts that "there are populations with genetic diversity", this theory has an ontological commitment to the existence of populations with genetic diversity. Alexius Meinong (1853–1920) was an influential critic of second-order theories and developed the alternative view that existence is a property of individuals and that not all individuals have this property. Eastern philosophy Many schools of thought in Eastern philosophy discuss the problem of existence and its implications. For instance, the ancient Hindu school of Samkhya articulated a metaphysical dualism according to which the two types of existence are pure consciousness (Purusha) and matter (Prakriti). Samkhya explains the manifestation of the universe as the interaction between these two principles. The Vedic philosopher Adi Shankara (c. 700–750 CE) developed a different approach in his school of Advaita Vedanta. Shankara defended a metaphysical monism by defining the divine (Brahman) as the ultimate reality and the only existent. According to this view, the impression that there is a universe consisting of many distinct entities is an illusion (Maya). The essential features of ultimate reality are described as Sat Chit Ananda—meaning existence, consciousness, and bliss. A central doctrine in Buddhist philosophy is called the "three marks of existence", which are aniccā (impermanence), anattā (absence of a permanent self), and dukkha (suffering). Aniccā is the doctrine that all of existence is subject to change, meaning everything changes at some point and nothing lasts forever. Anattā expresses a similar idea in relation to persons by stating that people do not have a permanent identity or a separate self. Ignorance about aniccā and anattā is seen as the main cause of dukkha by leading people to form attachments that cause suffering. A central idea in many schools of Chinese philosophy, like Laozi's (6th century BCE) Daoism, is that a fundamental principle known as dao is the source of all existence. The term is often translated as "the way" and is understood as a cosmic force that governs the natural order of the world. Chinese metaphysicians debated whether dao is a form of being or whether, as the source of being, it belongs to non-being. The concept of existence played a central role in Arabic-Persian philosophy. The Islamic philosophers Avicenna (980–1037 CE) and Al-Ghazali (1058–1111 CE) discussed the relationship between existence and essence, and said the essence of an entity is prior to its existence. The additional step of instantiating the essence is required for the entity to come into existence. Philosopher Mulla Sadra (1571–1636 CE) rejected this priority of essence over existence, and said essence is only a concept that is used by the mind to grasp existence. 
Existence, by contrast, encompasses the whole of reality, according to his view. Other traditions Indigenous American philosophies tend to emphasize the interconnectedness of all existence and the importance of maintaining balance and harmony with nature. This is often combined with an animist outlook that ascribes a spiritual essence to some or all entities, including plants, rocks, and places. The interest in the relational aspect of existence is also found in African philosophy, which explores how all entities are causally linked to form an ordered world. African philosophy also examines the idea of an underlying and all-pervading life force responsible for animating entities and their influence on each other. In various disciplines Formal logic Formal logic studies deductively valid arguments. In first-order logic, which is the most-commonly used system of formal logic, existence is expressed using the existential quantifier. For example, the formula ∃x Horse(x) can be used to state horses exist. The variable x ranges over all elements in the domain of quantification and the existential quantifier ∃ expresses that at least one element in this domain is a horse. In first-order logic, all singular terms like names refer to objects in the domain and imply the object exists. Because of this, one can deduce ∃x Honest(x) (someone is honest) from Honest(b) (Bill is honest). If only one object matching the description exists, the unique existential quantifier ∃! can be used. Many logical systems that are based on first-order logic also follow this idea. Free logic is an exception because it allows the presence of empty names that do not refer to an object in the domain. With this modification, it is possible to apply logical reasoning to fictional objects instead of limiting it to regular objects. In free logic one can express that Pegasus is a flying horse using the formula FlyingHorse(p). As a consequence of this modification, one cannot infer from this type of statement that something exists. 
This means the inference from FlyingHorse(p) to ∃x FlyingHorse(x) is invalid in free logic, even though it is valid in first-order logic. Free logic uses an additional existence predicate, E!, to say a singular term refers to an existing object. For example, the formula E!h can be used to say Homer exists while the formula ¬E!p states Pegasus does not exist. Others The disciplines of epistemology, philosophy of mind, and philosophy of language deal with mental and linguistic representations in their attempt to understand the nature of knowledge, the mind, and language. This brings with it the problem of reference or how representations can refer to existing objects. Examples of such representations are beliefs, thoughts, perceptions, words, and sentences. For instance, in the sentence "Barack Obama is a Democrat", the name "Barack Obama" refers to a particular individual. The problem of reference also affects the epistemology of perception. In particular, this concerns the problem of whether perceptual impressions establish a direct contact with reality. Closely related to the problem of reference is the relationship between true representations and existence. According to truthmaker theory, true representations require a truthmaker, i.e., an entity whose existence is responsible for the representation being true. For example, the sentence "kangaroos live in Australia" is true because there are kangaroos in Australia; the existence of these kangaroos is the truthmaker of the sentence. Truthmaker theory states there is a close relationship between truth and existence; there exists a truthmaker for every true representation. Many of the individual sciences are concerned with the existence of particular types of entities and the laws governing them, such as physical things in physics and living entities in biology. The natural sciences employ a great variety of concepts to classify entities; these are known as natural kinds, and include categories like protons, gold, and elephants. 
According to scientific realists, these entities have mind-independent being; scientific anti-realists say the existence of these entities and categories is based on human perceptions, theories, and social constructs. A similar problem concerns the existence of social kinds, which are basic concepts used in the social sciences, such as race, gender, disability, money, and nation state. Social kinds are often understood as social constructions that, while useful for describing the complexities of human social life, do not form part of objective reality on the most fundamental level. According to the controversial Sapir–Whorf hypothesis, the social institution of language influences or fully determines how people perceive and understand the world. Existentialism is a school of thought that explores the nature of human existence. Among its key ideas is that existence precedes essence. This means that existence is more basic than essence. As a result, the nature and purpose of human beings are not pre-existing but develop in the process of living. According to this view, humans are thrown into a world that lacks pre-existing intrinsic meaning. They must determine for themselves their purpose and what meaning their life should have. Existentialists use this idea to explore the role of freedom and responsibility in actively shaping one's life. Feminist existentialists investigate the effects of gender on human existence, for example, on the experience of freedom. Influential existentialists include Søren Kierkegaard (1813–1855), Friedrich Nietzsche (1844–1900), Jean-Paul Sartre (1905–1980), and Simone de Beauvoir (1908–1986). Existentialism has influenced reflections on the role of human existence in sociology. Existentialist sociology examines the ways humans experience the social world and construct reality. Existence theory is a relatively recent approach that focuses on the temporal aspect of existence in society. 
It explores how the existential milestones to which people aspire influence their lives. Mathematicians are often interested in the existence of certain mathematical objects. For example, number theorists ask how many prime numbers exist within a certain interval. The statement that at least one mathematical object matching a certain description exists is called an existence theorem. Metaphysicians of mathematics investigate whether mathematical objects exist not only in relation to mathematical axioms but also as part of the fundamental structure of reality. This position is affirmed by Platonists, while nominalists believe mathematical objects lack a more-substantial form of existence, for instance, because they are merely useful fictions. Many debates in theology revolve around the existence of the divine, and arguments have been presented for and against God's existence. Cosmological arguments state that God must exist as the first cause to explain facts about the existence and aspects of the universe. According to teleological arguments, the only way to explain the order and complexity of the universe and human life is by reference to God as the intelligent designer. An influential argument against the existence of God relies on the problem of evil since it is not clear how evil could exist if there was an all-powerful, all-knowing, and benevolent God. Another argument points to a lack of concrete evidence for God's existence. See also: Cogito, ergo sum; Solipsism.
Scholasticism
Scholasticism was a medieval school of philosophy that employed a critical organic method of philosophical analysis predicated upon Aristotelianism and the Ten Categories. Christian scholasticism emerged within the monastic schools that translated scholastic Judeo-Islamic philosophies, and "rediscovered" the collected works of Aristotle. Endeavoring to harmonize his metaphysics and its account of a prime mover with the Latin Catholic dogmatic trinitarian theology, these monastic schools became the basis of the earliest European medieval universities, and thus became the bedrock for the development of modern science and philosophy in the Western world. Scholasticism dominated education in Europe from about 1100 to 1700. The rise of scholasticism was closely associated with these schools that flourished in Italy, France, Portugal, Spain and England. Scholasticism is a method of learning more than a philosophy or a theology, since it places a strong emphasis on dialectical reasoning to extend knowledge by inference and to resolve contradictions. Scholastic thought is also known for rigorous conceptual analysis and the careful drawing of distinctions. In the classroom and in writing, it often takes the form of explicit disputation; a topic drawn from the tradition is broached in the form of a question, oppositional responses are given, a counterproposal is argued and oppositional arguments rebutted. Because of its emphasis on rigorous dialectical method, scholasticism was eventually applied to many other fields of study. Scholasticism was initially a program conducted by medieval Christian thinkers attempting to harmonize the various authorities of their own tradition, and to reconcile Christian theology with classical and late antiquity philosophy, especially that of Aristotle but also of Neoplatonism. 
The Scholastics, also known as Schoolmen, included among their main figures Anselm of Canterbury ("the father of scholasticism"), Peter Abelard, Alexander of Hales, Albertus Magnus, Duns Scotus, William of Ockham, Bonaventure, and Thomas Aquinas. Aquinas's masterwork Summa Theologica (1265–1274) is considered to be the pinnacle of scholastic, medieval, and Christian philosophy; it began while Aquinas was regent master at the studium provinciale of Santa Sabina in Rome, the forerunner of the Pontifical University of Saint Thomas Aquinas, Angelicum. Important work in the scholastic tradition has been carried on well past Aquinas's time, for instance by the English scholastics Robert Grosseteste and his student Roger Bacon, by Francisco Suárez and Luis de Molina, and also among Lutheran and Reformed thinkers. Etymology The terms "scholastic" and "scholasticism" derive from the Latin word scholasticus, the Latinized form of the Greek σχολαστικός (scholastikos), an adjective derived from σχολή (scholē), "school". Scholasticus means "of or pertaining to schools". The "scholastics" were, roughly, "schoolmen". History The foundations of Christian scholasticism were laid by Boethius through his logical and theological essays, and later forerunners (and then companions) to scholasticism were Islamic Ilm al-Kalām, meaning "science of discourse", and Jewish philosophy, especially Jewish Kalam. Early Scholasticism The first significant renewal of learning in the West came with the Carolingian Renaissance of the Early Middle Ages. Charlemagne, advised by Peter of Pisa and Alcuin of York, attracted the scholars of England and Ireland, where some Greek works continued to survive in the original. By a 787 decree, he established schools at every abbey in his empire. These schools, from which the name scholasticism derived, became centers of medieval learning. During this period, knowledge of Ancient Greek had vanished in the West except in Ireland, where its teaching and use was fairly common in its monastic schools. 
Irish scholars had a considerable presence in the Frankish court, where they were renowned for their learning. Among them was Johannes Scotus Eriugena (815–877), one of the founders of scholasticism. Eriugena was the most significant Irish intellectual of the early monastic period and an outstanding philosopher in terms of originality. He had considerable familiarity with the Greek language and translated many works into Latin, affording access to the Cappadocian Fathers and the Greek theological tradition. Three other primary founders of scholasticism were the 11th-century archbishops Lanfranc and Anselm of Canterbury in England and Peter Abelard in France. This period saw the beginning of the "rediscovery" of many Greek works which had been lost to the Latin West. As early as the latter half of the 10th century, the Toledo School of Translators in Muslim Spain had begun translating Arabic texts into Latin. After a successful burst of Reconquista in the 12th century, Spain opened even further for Christian scholars and, as these Europeans encountered Judeo-Islamic philosophies, they gained access to a wealth of Arab and Judaic knowledge of mathematics and astronomy. The Latin translations of the 12th century were also carried out by figures like Constantine the African in Italy and James of Venice in Constantinople. Scholars such as Adelard of Bath traveled to Spain and Sicily, translating works on astronomy and mathematics, including the first complete translation of Euclid's Elements into Latin. At the same time, the School of Chartres produced Bernard of Chartres's commentaries on Plato's Timaeus and a range of works by William of Conches that attempted to reconcile the use of classical pagan and philosophical sources in a medieval Christian context using the kludge of the integumentum, treating the obviously heretical surface meanings as coverings disguising a deeper (and more orthodox) truth. 
Abelard himself was condemned by Bernard of Clairvaux at the 1141 Council of Sens, and William avoided a similar fate through systematic self-bowdlerization of his early work, but his commentaries and encyclopedic works were miscredited to earlier scholars like Bede and widely disseminated. Anselm of Laon systematized the production of the gloss on Scripture, followed by the rise to prominence of dialectic (the middle subject of the medieval trivium) in the work of Abelard. Peter Lombard produced a collection of Sentences, or opinions of the Church Fathers and other authorities. More recently, Leinsle, Novikoff, and others have argued against the idea that scholasticism primarily derived from philosophical contact, emphasizing its continuity with earlier Patristic Christianity. This remains, however, a minority viewpoint. High Scholasticism The 13th and early 14th centuries are generally seen as the high period of scholasticism. The early 13th century witnessed the culmination of the recovery of Greek philosophy. Schools of translation grew up in Italy and Sicily, and eventually in the rest of Europe. Powerful Norman kings gathered men of knowledge from Italy and other areas into their courts as a sign of their prestige. William of Moerbeke's translations and editions of Greek philosophical texts in the middle of the thirteenth century helped form a clearer picture of Greek philosophy, particularly of Aristotle, than was given by the Arabic versions on which scholars had previously relied. Edward Grant writes: "Not only was the structure of the Arabic language radically different from that of Latin, but some Arabic versions had been derived from earlier Syriac translations and were thus twice removed from the original Greek text. Word-for-word translations of such Arabic texts could produce tortured readings. By contrast, the structural closeness of Latin to Greek permitted literal, but intelligible, word-for-word translations." 
Universities developed in the large cities of Europe during this period, and rival clerical orders within the church began to battle for political and intellectual control over these centers of educational life. The two main orders founded in this period were the Franciscans and the Dominicans. The Franciscans were founded by Francis of Assisi in 1209. Their leader in the middle of the century was Bonaventure, a traditionalist who defended the theology of Augustine and the philosophy of Plato, incorporating only a little of Aristotle alongside the more Neoplatonist elements. Following Anselm, Bonaventure supposed that reason can only discover truth when philosophy is illuminated by religious faith. Other important Franciscan scholastics were Duns Scotus, Peter Auriol and William of Ockham. By contrast, the Dominican order, a teaching order founded by St Dominic in 1215 to propagate and defend Christian doctrine, placed more emphasis on the use of reason and made extensive use of the new Aristotelian sources derived from the East and Moorish Spain. The great representatives of Dominican thinking in this period were Albertus Magnus and (especially) Thomas Aquinas, whose artful synthesis of Greek rationalism and Christian doctrine eventually came to define Catholic philosophy. Aquinas placed more emphasis on reason and argumentation, and was one of the first to use the new translation of Aristotle's metaphysical and epistemological writing. This was a significant departure from the Neoplatonic and Augustinian thinking that had dominated much of early scholasticism. Aquinas showed how it was possible to incorporate much of the philosophy of Aristotle without falling into the "errors" of the Commentator, Averroes. 
Post-scholasticism Philosopher Johann Beukes has suggested that the period 1349–1464, between the deaths of William of Ockham and Nicholas of Cusa, was a distinct era characterized by "robust and independent philosophers" who departed from high scholasticism on issues such as institutional criticism and materialism, though not from its methods: among them Marsilius of Padua, Thomas Bradwardine, John Wycliffe, Catherine of Siena, Jean Gerson and Gabriel Biel, ending with Nicholas himself. Spanish Scholasticism Late Scholasticism Protestant Scholasticism Lutheran Scholasticism Reformed Scholasticism Following the Reformation, Calvinists largely adopted the scholastic method of theology, while differing regarding sources of authority and content of theology. Neo-Scholasticism The revival and development of medieval scholastic philosophy from the second half of the 19th century onward is sometimes called neo-Thomism. Thomistic Scholasticism As J. A. Weisheipl O.P. emphasizes, within the Dominican Order Thomistic scholasticism has been continuous since the time of Aquinas: "Thomism was always alive in the Dominican Order, small as it was after the ravages of the Reformation, the French Revolution, and the Napoleonic occupation. Repeated legislation of the General Chapters, beginning after the death of St. Thomas, as well as the Constitutions of the Order, required all Dominicans to teach the doctrine of St. Thomas both in philosophy and in theology." Thomistic scholasticism or scholastic Thomism identifies with the philosophical and theological tradition stretching back to the time of St. Thomas. It focuses not only on exegesis of the historical Aquinas but also on the articulation of a rigorous system of orthodox Thomism to be used as an instrument of critique of contemporary thought. 
Due to its suspicion of attempts to harmonize Aquinas with non-Thomistic categories and assumptions, Scholastic Thomism has sometimes been called, by philosophers like Edward Feser, "Strict Observance Thomism". A discussion of recent and current Thomistic scholasticism can be found in La Metafisica di san Tommaso d'Aquino e i suoi interpreti (2002), which includes such figures as Sofia Vanni Rovighi (1908–1990), Cornelio Fabro (1911–1995), Carlo Giacon (1900–1984), Tomas Tyn O.P. (1950–1990), Abelardo Lobato O.P. (1925–2012), Leo Elders (1926– ) and Giovanni Ventimiglia (1964– ) among others. Fabro in particular emphasizes Aquinas's originality, especially with respect to the actus essendi, or act of existence, of finite beings, which participate in being itself. Other scholars, such as those involved with the "Progetto Tommaso", seek to establish an objective and universal reading of Aquinas's texts. Thomistic scholasticism in the English-speaking world went into decline in the 1970s, when the Thomistic revival that had been spearheaded by Jacques Maritain, Étienne Gilson, and others diminished in influence. Partly, this was because this branch of Thomism had become a quest to understand the historical Aquinas after the Second Vatican Council. Analytical Scholasticism A renewed interest in the "scholastic" way of doing philosophy has recently awakened within analytic philosophy. Attempts have emerged to combine elements of scholastic and analytic methodology in pursuit of a contemporary philosophical synthesis. Proponents of various incarnations of this approach include Anthony Kenny, Peter King, Thomas Williams, and David Oderberg. Scholastic method Cornelius O'Boyle explained that Scholasticism focuses on how to acquire knowledge and how to communicate effectively so that it may be acquired by others. It was thought that the best way to achieve this was by replicating the discovery process (modus inveniendi). 
The scholastics would choose a book by a renowned scholar, the auctor (author), as a subject for investigation. By reading it thoroughly and critically, the disciples learned to appreciate the theories of the author. Other documents related to the book would be referenced, such as Church councils, papal letters and anything else written on the subject, be it ancient or contemporary. The points of disagreement and contention between multiple sources would be written down in individual sentences or snippets of text, known as sententiae. Once the sources and points of disagreement had been laid out through a series of dialectics, the two sides of an argument would be made whole so that they would be found to be in agreement and not contradictory. (Of course, sometimes opinions would be totally rejected, or new positions proposed.) This was done in two ways. The first was through philological analysis. Words were examined and argued to have multiple meanings. It was also considered that the auctor might have intended a certain word to mean something different. Ambiguity could be used to find common ground between two otherwise contradictory statements. The second was through logical analysis, which relied on the rules of formal logic – as they were known at the time – to show that the contradictions did not exist objectively but arose from the reader's interpretation. Scholastic instruction Scholastic instruction consisted of several elements. The first was the lectio: a teacher would read an authoritative text followed by a commentary, but no questions were permitted. This was followed by the meditatio (meditation or reflection) in which students reflected on and appropriated the text. Finally, in the quaestio students could ask questions (quaestiones) that might have occurred to them during meditatio. Eventually the discussion of quaestiones became a method of inquiry apart from the lectio and independent of authoritative texts. Disputationes were arranged to resolve controversial quaestiones. 
Questions to be disputed were ordinarily announced beforehand, but students could propose a question to the teacher unannounced – disputationes de quodlibet. In this case, the teacher responded and the students rebutted; on the following day the teacher, having used notes taken during the disputation, summarised all arguments and presented his final position, riposting all rebuttals. The quaestio method of reasoning was initially used especially when two authoritative texts seemed to contradict one another. Two contradictory propositions would be considered in the form of an either/or question, and each part of the question would have to be approved (sic) or denied (non). Arguments for the position taken would be presented in turn, followed by arguments against the position, and finally the arguments against would be refuted. This method forced scholars to consider opposing viewpoints and defend their own arguments against them. See also Actus primus Allegory in the Middle Ages Casuistry History of science in the Middle Ages Medieval philosophy Nominalism Pardes (Jewish exegesis) Renaissance of the 12th century Scotism References External links Scholasticon by Jacob Schmutz Medieval Philosophy Electronic Resources "Scholasticism". In Encyclopædia Britannica Online. Scholasticism by Joseph Rickaby (1908), 121 pp. (also at googlebooks) Scholasticism in The Catholic Encyclopedia Yahoo! 
directory category: Scholasticism The genius of the scholastics and the orbit of Aristotle, article by James Franklin on the influence of scholasticism on later thought Medieval Philosophy, Universities and the Church by James Hannam ALCUIN – Regensburger Infothek der Scholastik – Huge database with information on biography, text chronology, editions.
Intellect
In the study of the human mind, intellect is the ability of the human mind to reach correct conclusions about what is true and what is false in reality; it includes capacities such as reasoning, conceiving, judging, and relating. A translation of the Ancient Greek philosophical concept nous, intellect derives from the Latin intelligere ("to understand"), from which the term intelligence in the French and English languages is also derived. The discussion of intellect can be divided into two areas that concern the relation between intelligence and intellect. In classical philosophy and in medieval philosophy the intellect (nous) is the subject of the question: How do people know things? In Late Antiquity and in the Middle Ages, the intellect was the conceptual means of reconciling religious monotheism with philosophical or scientific study of Nature. This reconciliation made the intellect the conduit between the individual human soul and the divine intellect of the cosmos. Aristotle first developed this with his distinction between the passive intellect and active intellect. In psychology and in neuroscience, the controversial Theory of Multiple Intelligences applies the terms intelligence (emotion) and intellect (mind) to describe how people understand the world and reality. Intellect and intelligence As a branch of intelligence, intellect concerns the logical and the rational functions of the human mind, and usually is limited to facts and knowledge. In addition to the functions of linear logic and the patterns of formal logic, the intellect also processes the non-linear functions of fuzzy logic and dialectical logic. Intellect and intelligence are contrasted by etymology; derived from the Latin present active participle intelligens, the term intelligence denotes "to gather in between", whereas the term intellect, derived from the past participle intellectus, denotes "what has been gathered". 
Therefore, intelligence relates to the creation of new categories of understanding, based upon similarities and differences, while intellect relates to understanding existing categories. Development of intellect A person's intellectual understanding of reality derives from a conceptual model of reality based upon the perception and the cognition of the material world. The conceptual model of mind is composed of the mental and emotional processes by which a person seeks, finds, and applies logical solutions to the problems of life. The full potential of the intellect is achieved when a person acquires a factually accurate understanding of the real world, which is mirrored in the mind. The mature intellect is identified by the person's capability for emotional self-management, wherein they can encounter, face, and resolve problems of life without being overwhelmed by emotion. Real-world experience is necessary for the development of a person's intellect, because, in resolving the problems of life, a person can intellectually comprehend a social circumstance (a time and a place) and so adjust their social behavior in order to act appropriately in the society of other people. Intellect develops when a person seeks an emotionally satisfactory solution to a problem; mental development occurs from the person's search for satisfactory solutions to the problems of life. Only experience of the real world can provide understanding of reality, which contributes to the person's intellectual development. Structure of intellect In 1955, the psychologist Joy Paul Guilford (1897–1987) proposed a Structure of Intellect (SI) model in three dimensions: (i) Operations, (ii) Contents, and (iii) Products. Each dimension contains specific, discrete elements that are individually measured as autonomous units of the human mind. 
Intellectual operations are represented by cognition and memory, production (by divergent thinking and convergent thinking), and evaluation. Contents are figurative and symbolic, semantic and behavioral. Products are in units, classes, and relations, systems, transformations, and implications. Intellect in psychotherapy Intellectualization is a psychotherapeutic method based on intense intellectual focus, used in order to avoid dealing with a problem that occupies the attention of a person. In psychological praxis, intellectualization is a defense mechanism that blocks feelings in order to prevent anxiety and stress from acting upon and interfering with the psyche of the person, which otherwise would interfere with their normal functioning in real life. As psychotherapy, intellectualization is a rational, dispassionate, and scientific approach towards dealing with and resolving mental problems which psychologically disturb the person. The functions of intellectualization involve the id, ego, and super-ego. The ego is the conscious aspect of human personality; the id is the unconscious, animal-instinct aspect; and the super-ego is the control mechanism that mediates and adjusts a person's thoughts, actions, and behavior in accordance with the social norms of society. The purpose of intellectualization is to isolate the id from the real world, and so make the conscious aspects of a person's life the only object of reflection and consideration. Therefore, intellectualization defends and protects the ego from the id, the unconscious aspect of human personality that usually is impossible to control. Socially, intellectualization uses technical jargon and complex scientific terminology instead of plain language; e.g. a physician uses the word carcinoma instead of cancer to lessen the negative impact of a diagnosis of terminal disease, directing the patient's attention away from the bad news. 
The different registers of language, scientific (carcinoma) and plain (cancer), facilitate the patient's acceptance of medical fact and medical treatment by avoiding an outburst of negative emotions that would interfere with the successful treatment of the disease. Moreover, the defense mechanism of intellectualization is criticized because it separates and isolates the person from the painful emotions caused by the psychological problem. As such, the defense mechanism can lead to the denial of intuition, which sometimes contributes to the processes of decision-making; the absence of emotional stimuli can deprive the person of motivation and lead to a mood of dissatisfaction, such as melancholy; such "emotional constipation" threatens creativity, replacing it with merely factual solutions. See also Human intelligence Intellectualism References External links
Truism
A truism is a claim that is so obvious or self-evident as to be hardly worth mentioning, except as a reminder or as a rhetorical or literary device, and is the opposite of falsism. In philosophy, a sentence which asserts incomplete truth conditions for a proposition may be regarded as a truism. An example of such a sentence would be "Under appropriate conditions, the sun rises." Without contextual support (a statement of what those appropriate conditions are), the sentence is true but incontestable. Lapalissades, such as "If he were not dead, he would still be alive", are considered to be truisms. See also Aphorism Axiom Cliché Contradiction Dictum Dogma Figure of speech Maxim Moral Platitude Synthetic proposition Tautology References
Anapodoton
An anapodoton (from Ancient Greek anapódoton, "that which lacks an apodosis", i.e. the consequential clause in a conditional sentence; plural anapodota) is a rhetorical device related to the anacoluthon; both involve a thought being interrupted or discontinued before it is fully expressed. It is a figure of speech or discourse that is an incomplete sentence, consisting of a subject or complement without the requisite object. The stand-alone subordinate clause suggests or implies a subject (a main clause), but this is not actually provided. As an intentional rhetorical device, it is generally used for set phrases, where the full form is understood, and would thus be tedious to spell out, as in "When in Rome [do as the Romans]." Anapodota are common in Classical Chinese and languages that draw from it, such as Japanese, where a long literary phrase is commonly abbreviated to just its condition. For example, Zhuangzi's phrase "A frog in a well cannot conceive of the ocean", meaning "people of limited experience have a narrow world view", is rendered simply as "a frog in a well" in both Modern Chinese and Modern Japanese, abbreviating "a frog in a well does not know the great ocean". Other uses It is also said to occur when a main clause is left unsaid due to a speaker interrupting themselves to revise a thought, thus leaving the initial clause unresolved, but then making use of it nonetheless by recasting and absorbing it into a new, grammatically complete sentence. Though grammatically incorrect, anapodoton is a commonplace feature of everyday informal speech. It therefore appears frequently in dramatic writing and in fiction in the form of direct speech or the representation of stream of consciousness. Examples: "If you think I'm going to sit here and take your insults..." (implied: "then you are mistaken") "When life gives you lemons..." (implied: "you make lemonade") "If they came to hear me beg..." 
(implied: "then they will be disappointed") "The only easy day..." (implied: "was yesterday") "When the going gets tough..." (implied: "the tough get going") "If you can’t stand the heat..." (implied: "get out of the kitchen") "Birds of a feather..." (implied: "flock together") See also Aposiopesis Xiehouyu References
Human condition
The human condition can be defined as the characteristics and key events of human life, including birth, learning, emotion, aspiration, reason, morality, conflict, and death. This is a very broad topic that has been and continues to be pondered and analyzed from many perspectives, including those of art, biology, literature, philosophy, psychology, and religion. As a literary term, "human condition" is typically used in the context of ambiguous subjects, such as the meaning of life or moral concerns. Some perspectives Each major religion has definitive beliefs regarding the human condition. For example, Buddhism teaches that existence is a perpetual cycle of suffering, death, and rebirth from which humans can be liberated via the Noble Eightfold Path. Meanwhile, many Christians believe that humans are born in a sinful condition and are doomed in the afterlife unless they receive salvation through Jesus Christ. Philosophers have provided many perspectives. An influential ancient view was that of the Republic in which Plato explored the question "what is justice?" and postulated that it is not primarily a matter among individuals but of society as a whole, prompting him to devise a utopia. Two thousand years later René Descartes declared "I think, therefore I am" because he believed the human mind, particularly its faculty of reason, to be the primary determiner of truth; for this he is often credited as the father of modern philosophy. One such modern school, existentialism, attempts to reconcile an individual's sense of disorientation and confusion in a universe believed to be absurd. Many works of literature provide a perspective on the human condition. One famous example is Shakespeare's monologue "All the world's a stage" which pensively summarizes seven phases of human life. Psychology has many theories, including Maslow's hierarchy of needs and the notions of identity crisis and terror management. It also has various methods, e.g. 
the logotherapy developed by Holocaust survivor Viktor Frankl to discover and affirm a sense of meaning. Another method, cognitive behavioral therapy, has become a widespread treatment for clinical depression. Charles Darwin established the biological theory of evolution, which posits that the human species is related to all others, living and extinct, and that natural selection is the primary survival factor. This led to subsequent beliefs, such as social Darwinism, which eventually lost its connection to natural selection, and theistic evolution, the belief in a creator deity acting through laws of nature, including evolution. See also Human nature Know thyself References
Descriptive ethics
Descriptive ethics, also known as comparative ethics, is the study of people's beliefs about morality. It contrasts with prescriptive or normative ethics, which is the study of ethical theories that prescribe how people ought to act, and with meta-ethics, which is the study of what ethical terms and theories actually refer to. The following examples of questions that might be considered in each field illustrate the differences between the fields: Descriptive ethics: What do people think is right? Meta-ethics: What does "right" even mean? Normative (prescriptive) ethics: How should people act? Applied ethics: How do we take moral knowledge and put it into practice? Description Descriptive ethics is a form of empirical research into the attitudes of individuals or groups of people. In other words, this is the division of philosophical or general ethics that involves the observation of the moral decision-making process with the goal of describing the phenomenon. Those working on descriptive ethics aim to uncover people's beliefs about such things as values, which actions are right and wrong, and which characteristics of moral agents are virtuous. Research into descriptive ethics may also investigate people's ethical ideals or what actions societies reward or punish in law or politics. Notably, culture is generational rather than static: each new generation arrives with its own set of morals, which constitute its ethics, and descriptive ethics examines whether and how moral beliefs shift across generations. Because descriptive ethics involves empirical investigation, it is a field that is usually investigated by those working in the fields of evolutionary biology, psychology, sociology or anthropology. Information that comes from descriptive ethics is, however, also used in philosophical arguments. Value theory can be either normative or descriptive but is usually descriptive. 
Lawrence Kohlberg: An example of descriptive ethics Lawrence Kohlberg is one example of a psychologist working on descriptive ethics. In one study, for example, Kohlberg questioned a group of boys about what would be a right or wrong action for a man facing a moral dilemma (specifically, the Heinz dilemma): should he steal a drug to save his wife, or refrain from theft even though that would lead to his wife's death? Kohlberg's concern was not which choice the boys made, but the moral reasoning that lay behind their decisions. After carrying out a number of related studies, Kohlberg devised a theory about the development of human moral reasoning that was intended to reflect the moral reasoning actually carried out by the participants in his research. Kohlberg's research can be classed as descriptive ethics to the extent that he describes human beings' actual moral development. If, in contrast, he had aimed to describe how humans ought to develop morally, his theory would have involved prescriptive ethics. See also Experimental philosophy List of ethics topics Moral reasoning Moral psychology References Further reading Coleman, Stephen Edwin, "Digital Photo Manipulation: A Descriptive Analysis of Codes of Ethics and Ethical Decisions of Photo Editors" (2007). Dissertations. 1304. https://aquila.usm.edu/dissertations/1304
Nature conservation
Nature conservation is the moral philosophy and conservation movement focused on protecting species from extinction, maintaining and restoring habitats, enhancing ecosystem services, and protecting biological diversity. A range of values underlie conservation, which can be guided by biocentrism, anthropocentrism, ecocentrism, and sentientism, environmental ideologies that inform ecocultural practices and identities. There has recently been a movement towards evidence-based conservation, which calls for greater use of scientific evidence to improve the effectiveness of conservation efforts. As of 2018, 15% of land and 7.3% of the oceans were protected. Many environmentalists set a target of protecting 30% of land and marine territory by 2030. In 2021, 16.64% of land and 7.9% of the oceans were protected. The 2022 IPCC report on climate impacts and adaptation underlines the need to conserve 30% to 50% of the Earth's land, freshwater and ocean areas, echoing the 30% goal of the U.N.'s Convention on Biodiversity. Introduction Conservation goals include conserving habitat, preventing deforestation, maintaining soil organic matter, halting species extinction, reducing overfishing, and mitigating climate change. Different philosophical outlooks guide conservationists towards these different goals. The principal value underlying many expressions of the conservation ethic is that the natural world has intrinsic and intangible worth along with utilitarian value – a view carried forward by parts of the scientific conservation movement and some of the older Romantic schools of the ecology movement. Philosophers have attached intrinsic value to different aspects of nature, whether this is individual organisms (biocentrism) or ecological wholes such as species or ecosystems (ecoholism). 
More utilitarian schools of conservation have an anthropocentric outlook and seek a proper valuation of local and global impacts of human activity upon nature in their effect upon human wellbeing, now and to posterity. How such values are assessed and exchanged among people determines the social, political and personal restraints and imperatives by which conservation is practiced. This is a view common in the modern environmental movement. There is increasing interest in extending the responsibility for human wellbeing to include the welfare of sentient animals. In 2022 the United Kingdom introduced the Animal Welfare (Sentience) Act which lists all vertebrates, decapod crustaceans and cephalopods as sentient beings. Branches of conservation ethics focusing on sentient individuals include ecofeminism and compassionate conservation. In the United States of America, the year 1864 saw the publication of two books which laid the foundation for Romantic and Utilitarian conservation traditions in America. The posthumous publication of Henry David Thoreau's Walden established the grandeur of unspoiled nature as a citadel to nourish the spirit of man. A very different book from George Perkins Marsh, Man and Nature, later subtitled "The Earth as Modified by Human Action", catalogued his observations of man exhausting and altering the land from which his sustenance derives. The consumer conservation ethic has been defined as the attitudes and behaviors held and engaged in by individuals and families that ultimately serve to reduce overall societal consumption of energy. The conservation movement has emerged from the advancements of moral reasoning. Increasing numbers of philosophers and scientists have made its maturation possible by considering the relationships between human beings and organisms with the same rigor. 
This social ethic primarily relates to local purchasing, moral purchasing, the sustained and efficient use of renewable resources, the moderation of destructive use of finite resources, and the prevention of harm to common resources such as air and water quality, the natural functions of a living earth, and cultural values in a built environment. These practices are used to slow the accelerating rate at which extinction is occurring. The origins of this ethic can be traced back to many different philosophical and religious beliefs; that is, these practices have been advocated for centuries. In the past, conservationism has been categorized under a spectrum of views, including anthropocentric, utilitarian conservationism and radical eco-centric green eco-political views. More recently, the three major movements have been grouped into what we now know as the conservation ethic. The person credited with formulating the conservation ethic in the United States is former president Theodore Roosevelt.

Terminology

The term "conservation" was coined by Gifford Pinchot in 1907. He told his close friend United States President Theodore Roosevelt, who used it for a national conference of governors in 1908. In common usage, the term refers to the activity of systematically protecting natural resources such as forests, including biological diversity. Carl F. Jordan defines biological conservation as:

While this usage is not new, the idea of biological conservation has been applied to the principles of ecology, biogeography, anthropology, economy, and sociology to maintain biodiversity. The term "conservation" itself may cover concepts such as cultural diversity, genetic diversity, and movements such as environmental conservation, seedbank curation (preservation of seeds), and gene bank coordination (preservation of animals' genetic material). These are often summarized as the priority to respect diversity.
Much recent movement in conservation can be considered a resistance to commercialism and globalization. Slow Food is a consequence of rejecting these as moral priorities, and embracing a slower and more locally focused lifestyle. Sustainable living is a lifestyle that people are beginning to adopt, one that promotes decisions that help protect biodiversity. The small lifestyle changes that promote sustainability will eventually accumulate to support the proliferation of biological diversity. Regulating the ecolabeling of products from fisheries, controlling for sustainable food production, or keeping the lights off during the day are some examples of sustainable living. However, sustainable living is not a simple and uncomplicated approach. The 1987 Brundtland Report expounds on the notion of sustainability as a process of change that looks different for everyone: "It is not a fixed state of harmony, but rather a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are made consistent with future as well as present needs. We do not pretend that the process is easy or straightforward." Simply put, sustainable living makes a difference by compiling many individual actions that encourage the protection of biological diversity.

Practice

Distinct trends exist regarding conservation development. The need for conserving land has only recently intensified during what some scholars refer to as the Capitalocene epoch. This era marks the beginning of colonialism, globalization, and the Industrial Revolution, which have led to global land change as well as climate change.
While many countries' efforts to preserve species and their habitats have been government-led, those in north-western Europe tended to arise out of middle-class and aristocratic interest in natural history, expressed at the level of the individual and of the national, regional or local learned society. Thus countries like Britain, the Netherlands and Germany had what would now be called non-governmental organizations – in the shape of the Royal Society for the Protection of Birds, the National Trust and the County Naturalists' Trusts (dating back to 1889, 1895 and 1912 respectively), Natuurmonumenten, provincial conservation trusts for each Dutch province, Vogelbescherming, and others – long before there were national parks and national nature reserves. This in part reflects the absence of wilderness areas in heavily cultivated Europe, as well as a longstanding preference for laissez-faire government in some countries, like the UK. It is no coincidence that John Muir, the Scottish-born founder of the national park movement (and hence of government-sponsored conservation), did his sterling work in the US, where he was the motor force behind the establishment of such national parks as Yosemite and Yellowstone. Nowadays, officially, more than 10 percent of the world is legally protected in some way or another, and in practice private fundraising is insufficient to pay for the effective management of so much land with protective status. Protected areas in developing countries, where probably as many as 70–80 percent of the world's species live, still enjoy very little effective management and protection. Some countries, such as Mexico, have non-profit civil organizations and landowners dedicated to protecting vast private property; such is the case of Hacienda Chichen's Maya Jungle Reserve and Bird Refuge in Chichen Itza, Yucatán.
The Adopt A Ranger Foundation has calculated that worldwide about 140,000 rangers are needed for the protected areas in developing and transition countries. There are no data on how many rangers are employed at the moment, but probably less than half the protected areas in developing and transition countries have any rangers at all, and those that have them are at least 50% short. This means that there would be a worldwide deficit of 105,000 rangers in the developing and transition countries. The terms conservation and preservation are frequently conflated outside the academic, scientific, and professional kinds of literature. The United States' National Park Service offers the following explanation of the important ways in which these two terms represent very different conceptions of environmental protection ethics: "During the environmental movement of the early 20th century, two opposing factions emerged: conservationists and preservationists. Conservationists sought to regulate human use while preservationists sought to eliminate human impact altogether." C. Anne Claus presents a distinction between conservation practices, dividing conservation into conservation-far and conservation-near. Conservation-far protects nature by separating it and safeguarding it from humans, for example through the creation of preserves or national parks. These are meant to keep the flora and fauna away from human influence and have become a staple method in the West. Conservation-near, by contrast, is conservation via connection: reconnecting people to nature through traditions and beliefs in order to foster a desire to protect it. The basis is that instead of forcing people to comply with a separation from nature, conservationists work with locals and their traditions to find conservation efforts that work for all.
Evidence-based conservation

Evidence-based conservation is the application of evidence in conservation management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors. Evidence-based conservation was organized based on the observation that decision making in conservation was based on intuition and/or practitioner experience, often disregarding other forms of evidence of successes and failures (e.g. scientific information). This has led to costly and poor outcomes. Evidence-based conservation provides access to information that will support decision making through an evidence-based framework of "what works" in conservation. The evidence-based approach to conservation is based on evidence-based practice, which started in medicine and later spread to nursing, education, psychology, and other fields. It is part of the larger movement towards evidence-based practices.

See also

Conservation biology
Conservation community, a recent term for controlled-growth land use development
Cryoconservation of animal genetic resources
Dark green environmentalism
Environmental history of the United States
Environmental protection
Forest conservation
Geoconservation
Index of environmental articles
List of environmental issues
List of environmental organizations
Natural capital
Natural environment
Natural resource
Relationship between animal ethics and environmental ethics
Sustainable agriculture
Trail ethics
Water conservation
Wildlife conservation
30 by 30

Further reading

Glacken, C.J. (1967) Traces on the Rhodian Shore. Berkeley: University of California Press.
Grove, R.H. (1992) 'Origins of Western Environmentalism', Scientific American 267(1): 22–27.
Grove, R.H. (1995) Green Imperialism: Colonial Expansion, Tropical Island Edens, and the Origins of Environmentalism, 1600–1860. New York: Cambridge University Press.
Leopold, A. (1966) A Sand County Almanac. New York: Oxford University Press.
Pinchot, G. (1910) The Fight for Conservation. New York: Harcourt Brace.
"Why Care for Earth's Environment?" (in the series "The Bible's Viewpoint"), a two-page article in the December 2007 issue of the magazine Awake!.

External links

Protected Areas and Conservation at Our World in Data
Dictionary of the History of Ideas: Conservation of Natural Resources
For Future Generations, a Canadian documentary on how the conservation ethic influenced national parks
Reification (fallacy)
Reification (also known as concretism, hypostatization, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete real event or physical entity. In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory". Reification is part of normal usage of natural language, as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy. A potential consequence of reification is exemplified by Goodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.

Etymology

The term "reification" originates from the combination of the Latin terms res ("thing") and -fication, a suffix related to facere ("to make"). Thus reification can be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.

Theory

Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will". Reification may derive from an innate tendency to simplify experience by assuming constancy as much as possible.

Fallacy of misplaced concreteness

According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness.'"
Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions. [...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.

Vicious abstractionism

William James used the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticize Immanuel Kant's and Georg Wilhelm Friedrich Hegel's idealistic philosophies. In The Meaning of Truth, James wrote: Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ...
The viciously privative employment of abstract characters and class names is, I am persuaded, one of the great original sins of the rationalistic mind. In a chapter on "The Methods and Snares of Psychology" in The Principles of Psychology, James describes a related fallacy, the psychologist's fallacy, thus: "The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the 'psychologist's fallacy' par excellence" (volume 1, p. 196). John Dewey followed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".

Use of constructs in science

The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology, utility in economics, and gravitational field in physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena. The degree to which a construct is useful and accepted as part of the current paradigm in a scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity). Stephen Jay Gould draws heavily on the idea of the fallacy of reification in his book The Mismeasure of Man. He argues that the error in using intelligence quotient scores to judge people's intelligence is that, just because a quantity called "intelligence" or "intelligence quotient" is defined as a measurable thing, it does not follow that intelligence is real; he thus denies the validity of the construct "intelligence".

Relation to other fallacies

Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification.
Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings. Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive. The animistic fallacy involves attributing personal intention to an event or situation. Reification fallacy should not be confused with other fallacies of ambiguity:

Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
Composition, when one assumes that a whole has a property solely because its various parts have that property
Division, when one assumes that various parts have a property solely because the whole has that same property
Equivocation, the misleading use of a word with more than one meaning

As a rhetorical device

The rhetorical devices of metaphor and personification express a form of reification, but short of a fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as the pathetic fallacy, "the sea was angry", reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy inhabits faulty reasoning, and not the mere illustration or poetry of rhetoric.

Counterexamples

Reification, while usually fallacious, is sometimes considered a valid argument. Thomas Schelling, a game theorist during the Cold War, argued that for many purposes an abstraction shared between disparate people caused itself to become real.
Some examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial Average, national borders, preferred numbers, and many others. (Compare the theory of social constructionism.)

See also

All models are wrong
Counterfactual definiteness
Idolatry
Objectification
Philosophical realism
Problem of universals, a debate about the reality of categories
Surrogation
Hypostatic abstraction
Ad hoc
Ad hoc is a Latin phrase meaning literally for this. In English, it typically signifies a solution designed for a specific purpose, problem, or task rather than a generalized solution adaptable to collateral instances (compare with a priori). Common examples include ad hoc committees and commissions created at the national or international level for a specific task, and the term is often used to describe arbitration (ad hoc arbitration). In other fields, the term could refer to a military unit created under special circumstances (see task force), a handcrafted network protocol (e.g., ad hoc network), a temporary collaboration among geographically-linked franchise locations (of a given national brand) to issue advertising coupons, or a purpose-specific equation in mathematics or science. Ad hoc can also function as an adjective describing temporary, provisional, or improvised methods to deal with a particular problem, a tendency which has given rise to the noun adhocism. This concept highlights the flexibility and adaptability often required in problem-solving across various domains. In everyday language, "ad hoc" is sometimes used informally to describe improvised or makeshift solutions, emphasizing their temporary nature and specific applicability to immediate circumstances.

Styling

Style guides disagree on whether Latin phrases like ad hoc should be italicized. The trend is not to use italics. For example, The Chicago Manual of Style recommends that familiar Latin phrases listed in Webster's Dictionary, including "ad hoc", not be italicized.

Hypothesis

In science and philosophy, ad hoc means the addition of extraneous hypotheses to a theory to save it from being falsified. Ad hoc hypotheses compensate for anomalies not anticipated by the theory in its unmodified form. Scientists are often skeptical of scientific theories that rely on frequent, unsupported adjustments to sustain them.
Ad hoc hypotheses are often characteristic of pseudo-scientific subjects such as homeopathy.

In the military

In the military, ad hoc units are created during unpredictable situations, when cooperation between different units is suddenly needed for fast action, or from remnants of previous units which have been overrun or otherwise whittled down.

In governance

In national and sub-national governance, ad hoc bodies may be established to deal with specific problems not easily accommodated by the current structure of governance, or to address multi-faceted issues spanning several areas of governance. In the UK and other Commonwealth countries, ad hoc Royal Commissions may be set up to address specific questions as directed by parliament.

In diplomacy

In diplomacy, diplomats may be appointed by a government as special envoys, serving on an ad hoc basis due to the possibility that such envoys' offices may either not be retained by a future government or may exist only for the duration of a relevant cause.

Networking

The term ad hoc networking typically refers to a system of network elements that combine to form a network requiring little or no planning.

See also

Ad hoc testing
Ad infinitum
Ad libitum
Adhocracy
Democracy
Heuristic
House rule
Russell's teapot
Inductive reasoning
Confirmation bias
Cherry picking
Agrarianism
Agrarianism is a social and political philosophy that advocates for a return to subsistence agriculture, family farming, widespread property ownership, and political decentralization. Those who adhere to agrarianism tend to value traditional forms of local community over urban modernity. Agrarian political parties sometimes aim to support the rights and sustainability of small farmers and poor peasants against the wealthy in society.

Philosophy

Some scholars suggest that agrarianism espouses the superiority of rural society to urban society and of the independent farmer to the paid worker, and sees farming as a way of life that can shape the ideal social values. It stresses the superiority of a simpler rural life in comparison to the complexity of urban life. For example, M. Thomas Inge defines agrarianism by the following basic tenets:

Farming is the sole occupation that offers total independence and self-sufficiency.
Urban life, capitalism, and technology destroy independence and dignity and foster vice and weakness.
The agricultural community, with its fellowship of labor and co-operation, is the model society.
The farmer has a solid, stable position in the world order. They have "a sense of identity, a sense of historical and religious tradition, a feeling of belonging to a concrete family, place, and region, which are psychologically and culturally beneficial." The harmony of their life checks the encroachments of a fragmented, alienated modern society.
Cultivation of the soil "has within it a positive spiritual good" and from it the cultivator acquires the virtues of "honor, manliness, self-reliance, courage, moral integrity, and hospitality." These result from a direct contact with nature and, through nature, a closer relationship to God. The agrarian is blessed in that they follow the example of God in creating order out of chaos.

History

The philosophical roots of agrarianism include European and Chinese philosophers.
The Chinese school of Agriculturalism (农家/農家) was a philosophy that advocated peasant utopian communalism and egalitarianism. In societies influenced by Confucianism, which held as a foundation that humans are innately good, the farmer was considered an esteemed productive member of society, but merchants who made money were looked down upon. That influenced European intellectuals like François Quesnay, an avid Confucianist and advocate of China's agrarian policies, in forming the French agrarian philosophy of physiocracy. The physiocrats, along with the ideas of John Locke and the Romantic Era, formed the basis of modern European and American agrarianism.

Types of agrarianism

Physiocracy

Jeffersonian democracy

The United States president Thomas Jefferson was an agrarian who based his ideas about the budding American democracy around the notion that farmers are "the most valuable citizens" and the truest republicans. Jefferson and his support base were committed to American republicanism, which they saw as being in opposition to aristocracy and corruption, and which prioritized virtue, exemplified by the "yeoman farmer", "planters", and the "plain folk". In praising the rural farmfolk, the Jeffersonians felt that financiers, bankers and industrialists created "cesspools of corruption" in the cities and should thus be avoided. The Jeffersonians sought to align the American economy more with agriculture than industry. Part of their motive to do so was Jefferson's fear that the over-industrialization of America would create a class of wage slaves who relied on their employers for income and sustenance. In turn, these workers would cease to be independent voters, as their vote could be manipulated by said employers.
To counter this, Jefferson introduced, as scholar Clay Jenkinson noted, "a graduated income tax that would serve as a disincentive to vast accumulations of wealth and would make funds available for some sort of benign redistribution downward" and tariffs on imported articles, which were mainly purchased by the wealthy. In 1811, Jefferson, writing to a friend, explained: "these revenues will be levied entirely on the rich... . the rich alone use imported articles, and on these alone the whole taxes of the general government are levied. the poor man ... pays not a farthing of tax to the general government, but on his salt." There is general agreement that the United States' substantial federal policy of offering land grants (such as thousands of gifts of land to veterans) had a positive impact on economic development in the 19th century.

Agrarian socialism

Agrarian socialism is a form of agrarianism that is anti-capitalist in nature and seeks to introduce socialist economic systems in place of capitalist ones.

Zapatismo

Notable agrarian socialists include Emiliano Zapata, who was a leading figure in the Mexican Revolution. As part of the Liberation Army of the South, his group of revolutionaries fought on behalf of the Mexican peasants, whom they saw as exploited by the landowning classes. Zapata published the Plan of Ayala, which called for significant land reforms and land redistribution in Mexico as part of the revolution. Zapata was killed and his forces crushed over the course of the Revolution, but his political ideas lived on in the form of Zapatismo. Zapatismo would form the basis for neozapatismo, the ideology of the Zapatista Army of National Liberation. Known as Ejército Zapatista de Liberación Nacional or EZLN in Spanish, EZLN is a far-left libertarian socialist political and militant group that emerged in the state of Chiapas in southernmost Mexico in 1994.
EZLN and neozapatismo, as their name makes explicit, seek to revive the agrarian socialist movement of Zapata, but fuse it with new elements such as a commitment to indigenous rights and community-level decision making. Subcommander Marcos, a leading member of the movement, argues that the peoples' collective ownership of the land was and is the basis for all subsequent developments the movement sought to create: "...When the land became property of the peasants ... when the land passed into the hands of those who work it ... [This was] the starting point for advances in government, health, education, housing, nutrition, women's participation, trade, culture, communication, and information ... [it was] recovering the means of production, in this case, the land, animals, and machines that were in the hands of large property owners."

Maoism

Maoism, the far-left ideology of Mao Zedong and his followers, places a heavy emphasis on the role of peasants in its goals. In contrast to other Marxist schools of thought, which normally seek to acquire the support of urban workers, Maoism sees the peasantry as key. Believing that "political power grows out of the barrel of a gun", Maoism saw the Chinese peasantry as the prime source for a Marxist vanguard because it possessed two qualities: (i) they were poor, and (ii) they were a political blank slate; in Mao's words, "A clean sheet of paper has no blotches, and so the newest and most beautiful words can be written on it". During the Chinese Civil War and the Second Sino-Japanese War, Mao and the Chinese Communist Party made extensive use of peasants and rural bases in their military tactics, often eschewing the cities. Following the eventual victory of the Communist Party in both wars, the countryside and how it should be run remained a focus for Mao. In 1958, Mao launched the Great Leap Forward, a social and economic campaign which, amongst other things, altered many aspects of rural Chinese life.
It introduced mandatory collective farming and forced the peasantry to organize itself into communal living units known as people's communes. These communes, which consisted of 5,000 people on average, were expected to meet high production quotas while the peasants who lived on them adapted to a radically new way of life. The communes were run as co-operatives where wages and money were replaced by work points. Peasants who criticised this new system were persecuted as "rightists" and "counter-revolutionaries". Leaving the communes was forbidden and escaping from them was difficult or impossible, and those who attempted it were subjected to party-orchestrated "public struggle sessions", which further jeopardized their survival. These public criticism sessions were often used to intimidate the peasants into obeying local officials, and they often devolved into little more than public beatings. On the communes, experiments were conducted to find new methods of planting crops, efforts were made to construct new irrigation systems on a massive scale, and the communes were all encouraged to produce steel in backyard furnaces as part of an effort to increase steel production. However, following the Anti-Rightist Campaign, Mao had instilled a mass distrust of intellectuals in China, and thus engineers often were not consulted with regard to the new irrigation systems, and the wisdom of asking untrained peasants to produce good quality steel from scrap iron was not publicly questioned. Similarly, the experimentation with the crops did not produce results. In addition, the Four Pests Campaign was launched, in which the peasants were called upon to destroy sparrows and other wild birds that ate crop seeds, in order to protect fields. Pest birds were shot down or scared away from landing until they dropped from exhaustion.
This campaign resulted in an ecological disaster that saw an explosion of the vermin population, especially crop-eating insects, which was consequently no longer kept in check by predators. None of these new systems were working, but local leaders did not dare to say so; instead, they falsified reports so as not to be punished for failing to meet the quotas. In many cases they stated that they were greatly exceeding their quotas, and in turn the Chinese state developed a completely false sense of success with regard to the commune system. All of this culminated in the Great Chinese Famine, which began in 1959, lasted three years, and saw an estimated 15 to 30 million Chinese people die. A combination of bad weather and the new, failed farming techniques introduced by the state led to massive shortages of food. By 1962, the Great Leap Forward was declared to be at an end. In the late 1960s and early 1970s, Mao once again radically altered life in rural China with the launching of the Down to the Countryside Movement. As a response to the Great Chinese Famine, the Chinese President Liu Shaoqi began "sending down" urban youths to rural China in order to recover its population losses and alleviate overcrowding in the cities. However, Mao turned the practice into a political crusade, declaring that the sending down would strip the youth of any bourgeois tendencies by forcing them to learn from the unprivileged rural peasants. In reality, it was the Communist Party's attempt to rein in the Red Guards, who had become uncontrollable during the course of the Cultural Revolution. 10% of the 1970 urban population of China was sent out to remote rural villages, often in Inner Mongolia. The villages, which were still recovering poorly from the effects of the Great Chinese Famine, did not have the excess resources needed to support the newcomers.
Furthermore, the so-called "sent-down youth" had no agricultural experience; they were unaccustomed to the harsh lifestyle of the countryside, and their unskilled labor in the villages provided little benefit to the agricultural sector. As a result, many of the sent-down youth died in the countryside. The relocation of the youths was originally intended to be permanent, but by the end of the Cultural Revolution, the Communist Party relented and some of those who had the capacity to return to the cities were allowed to do so. In imitation of Mao's policies, the Khmer Rouge of Cambodia (who were heavily funded and supported by the People's Republic of China) created their own version of the Great Leap Forward, known as the "Maha Lout Ploh". With the Great Leap Forward as its model, it had similarly disastrous effects, contributing to what is now known as the Cambodian genocide. As part of the Maha Lout Ploh, the Khmer Rouge sought to create an entirely agrarian socialist society by forcibly relocating 100,000 people from Cambodia's cities into newly created communes. The Khmer Rouge leader, Pol Pot, sought to "purify" the country by setting it back to "Year Zero", freeing it from "corrupting influences". Besides trying to completely de-urbanize Cambodia, the Khmer Rouge slaughtered ethnic minorities along with anyone else suspected of being a "reactionary" or a member of the "bourgeoisie", to the point that wearing glasses was seen as grounds for execution. The killings were only brought to an end when Cambodia was invaded by the neighboring socialist nation of Vietnam, whose army toppled the Khmer Rouge. However, with Cambodia's entire society and economy in disarray, including its agricultural sector, the country still plunged into renewed famine due to vast food shortages. 
However, as international journalists began to report on the situation and send images of it out to the world, a massive international response was provoked, leading to one of the most concentrated relief efforts of its time. Notable agrarian parties Peasant parties first appeared across Eastern Europe between 1860 and 1910, when commercialized agriculture and world market forces disrupted traditional rural society, and the railway and growing literacy facilitated the work of roving organizers. Agrarian parties advocated land reforms to redistribute land on large estates among those who worked it. They also wanted village cooperatives to keep the profit from crop sales in local hands, and credit institutions to underwrite needed improvements. Many peasant parties were also nationalist parties, because peasants often worked their land for the benefit of landlords of a different ethnicity. Peasant parties rarely had any power before World War I, but some became influential in the interwar era, especially in Bulgaria and Czechoslovakia. For a while, in the 1920s and the 1930s, there was a Green International (International Agrarian Bureau) based on the peasant parties in Bulgaria, Czechoslovakia, Poland, and Serbia. It functioned primarily as an information center that spread the ideas of agrarianism, combated socialism on the left and landlords on the right, and never launched any significant activities. Europe Bulgaria In Bulgaria, the Bulgarian Agrarian National Union (BZNS) was organized in 1899 to resist taxes and build cooperatives. BZNS came to power in 1919 and introduced many economic, social, and legal reforms. However, conservative forces crushed BZNS in a 1923 coup and assassinated its leader, Aleksandar Stamboliyski (1879–1923). BZNS was reduced to a communist puppet group until 1989, when it reorganized as a genuine party. 
Czechoslovakia In Czechoslovakia, the Republican Party of Agricultural and Smallholder People often shared power in parliament as a partner in the five-party pětka coalition. The party's leader, Antonín Švehla (1873–1933), was prime minister several times. It was consistently the strongest party, forming and dominating coalitions, and it moved beyond its original agrarian base to reach middle-class voters. The party was banned by the National Front after the Second World War. France In France, the Hunting, Fishing, Nature, Tradition party is a moderate conservative, agrarian party that reached a peak of 4.23% in the 2002 French presidential election. It later became affiliated with France's main conservative party, the Union for a Popular Movement. More recently, the Resistons! movement of Jean Lassalle has espoused agrarianism. Hungary In Hungary, the first major agrarian party, the smallholders' party, was founded in 1908. The party became part of the government in the 1920s but subsequently lost influence. A new party, the Independent Smallholders, Agrarian Workers and Civic Party, was established in 1930 with a more radical program of larger-scale land redistribution. It implemented this program together with the other coalition parties after the Second World War. However, after 1949 the party was outlawed when a one-party system was introduced. It was part of the government again in 1990–1994 and 1998–2002, after which it lost political support. The ruling Fidesz party has an agrarian faction and has promoted agrarian interests since 2010, with the emphasis now placed on supporting larger family farms over small-holders. Ireland In the late 19th century, the Irish National Land League aimed to abolish landlordism in Ireland and enable tenant farmers to own the land they worked on. The "Land War" of 1878–1909 led to the Irish Land Acts, ending absentee landlords and ground rent and redistributing land among peasant farmers. 
Post-independence, the Farmers' Party operated in the Irish Free State from 1922, folding into the National Centre Party in 1932. It was mostly supported by wealthy farmers in the east of Ireland. Clann na Talmhan (Family of the Land; also called the National Agricultural Party) was founded in 1938. It focused more on the poor smallholders of the west, supporting land reclamation, afforestation, social democracy and rates reform. It formed part of the governing coalitions of the 13th Dáil and the 15th Dáil. Economic improvement in the 1960s saw farmers vote for other parties, and Clann na Talmhan disbanded in 1965. Kazakhstan In Kazakhstan, the Peasants' Union, originally a communist organization, was formed as one of the first agrarian parties in independent Kazakhstan and won four seats in the 1994 legislative election. The Agrarian Party of Kazakhstan, led by Romin Madinov, was founded in 1999; it favored the privatization of agricultural land, the development of rural infrastructure, and changes to the taxation of the agrarian economy. The party went on to win three Mäjilis seats in the 1999 legislative election and eventually united with the Civic Party of Kazakhstan to form the pro-government Agrarian-Industrial Union of Workers (AIST) bloc, chaired by Madinov, for the 2004 legislative election, in which the bloc won 11 seats in the Mäjilis. The bloc proved short-lived, however, merging with the ruling Nur Otan party in 2006. Several other parties in Kazakhstan have since embraced agrarian policies in their programs in an effort to appeal to the large rural Kazakh demographic base, including Amanat, ADAL, and Respublica. 
Since the late 2000s, the "Auyl" People's Democratic Patriotic Party has been the largest and most influential agrarian-oriented party in Kazakhstan; its presidential candidate Jiguli Dairabaev finished second in the 2022 presidential election with 3.4% of the vote. In the 2023 legislative election, the Auyl party was represented in the parliament for the first time after winning nine seats in the lower chamber, the Mäjilis. The party raises rural issues concerning decaying villages, rural development and the agro-industrial complex, and the social security of the rural population, and has consistently opposed the ongoing rural flight in Kazakhstan. Latvia In Latvia, the Union of Greens and Farmers is supportive of traditional small farms and perceives them as more environmentally friendly than large-scale farming: nature is threatened by development, while small farms are threatened by large industrial-scale farms. Lithuania In Lithuania, a government led by the Lithuanian Farmers and Greens Union was in power between 2016 and 2020. Nordic countries Poland In Poland, the Polish People's Party (Polskie Stronnictwo Ludowe, PSL) traces its tradition to an agrarian party in Austro-Hungarian-controlled Galician Poland. After the fall of the communist regime, PSL's biggest success came in the 1993 elections, when it won 132 out of 460 parliamentary seats. Since then, PSL's support has steadily declined; in 2019 it formed the Polish Coalition with the anti-establishment, direct-democracy Kukiz'15 party and managed to get 8.5% of the popular vote. PSL tends to get much better results in local elections, however; in the 2014 local elections it managed to win 23.88% of the vote. The right-wing Law and Justice party has also become supportive of agrarian policies in recent years, and polls show that most of its support comes from rural areas. AGROunia also exhibits features of agrarianism. 
Romania In Romania, older peasant parties from Transylvania, Moldavia, and Wallachia merged to become the National Peasants' Party (PNȚ) in 1926. Iuliu Maniu (1873–1953) was prime minister with an agrarian cabinet from 1928 to 1930 and briefly in 1932–1933, but the Great Depression made the proposed reforms impossible. The communist administration dissolved the party in 1947 (along with other historical parties such as the National Liberal Party), but it reformed in 1989 after the communists fell from power. The reformed party, which also incorporated elements of Christian democracy in its ideology, governed Romania as part of the Romanian Democratic Convention (CDR) between 1996 and 2000. Serbia In Serbia, Nikola Pašić (1845–1926) and his People's Radical Party dominated Serbian politics after 1903. The party also monopolized power in Yugoslavia from 1918 to 1929, and during the dictatorship of the 1930s the prime minister was drawn from that party. Ukraine In Ukraine, the Radical Party of Oleh Lyashko has promised to purify the country of oligarchs "with a pitchfork". The party advocates a number of traditional left-wing positions (a progressive tax structure, a ban on agricultural land sales and elimination of the illegal land market, a tenfold increase in budget spending on health, and primary health centres in every village) and mixes them with strong nationalist sentiments. United Kingdom In land law, the heyday of English, Irish (and thus Welsh) agrarianism lasted until 1603, driven by the Tudor royal advisors, who sought to maintain a broad pool of agricultural commoners from which to draw military men, against the interests of larger landowners who sought enclosure (meaning complete private control of common land, over which by custom and common law lords of the manor had always enjoyed minor rights). This settlement was eroded by hundreds of Acts of Parliament expressly permitting enclosure, chiefly from 1650 to the 1810s. 
Politicians who reacted strongly against this included the Levellers, the anti-industrialist Luddites (who went beyond opposing new weaving technology) and, later, radicals such as William Cobbett. A high level of net national or local self-sufficiency has remained a strong theme of campaigns and movements. In the 19th century its advocates included the Peelites and most Conservatives. The 20th century saw the growth or founding of influential non-governmental organisations, such as the National Farmers' Union of England and Wales, the Campaign for Rural England and Friends of the Earth (EWNI), and of the English and Welsh, Scottish and Northern Irish political parties prefixed by and focussed on Green politics. The 21st century has already seen decarbonisation in electricity markets. Following protests and charitable lobbying, local food has seen growing market share, sometimes backed by wording in public policy papers and manifestos. The UK has many sustainability-prioritising businesses, green charity campaigns, events and lobby groups, ranging from those espousing allotment gardens (hobby community farming) through to those with a clear policy of local food and/or self-sustainability models. Oceania Australia Historian F.K. Crowley finds that: The National Party of Australia (formerly called the Country Party), from the 1920s to the 1970s, promulgated its version of agrarianism, which it called "countrymindedness". The goal was to enhance the status of the graziers (operators of big sheep stations) and small farmers, and it justified subsidies for them. New Zealand The New Zealand Liberal Party aggressively promoted agrarianism in its heyday (1891–1912). The landed gentry and aristocracy ruled Britain at this time; New Zealand never had an aristocracy, but its wealthy landowners largely controlled politics before 1891. The Liberal Party set out to change that with a policy it called "populism." 
Richard Seddon had proclaimed the goal as early as 1884: "It is the rich and the poor; it is the wealthy and the landowners against the middle and labouring classes. That, Sir, shows the real political position of New Zealand." The Liberal strategy was to create a large class of small landowning farmers who supported Liberal ideals. The Liberal government also established the basis of the later welfare state, such as old age pensions, and developed a system for settling industrial disputes, which was accepted by both employers and trade unions. In 1893, it extended voting rights to women, making New Zealand the first country in the world to do so. To obtain land for farmers, the Liberal government from 1891 to 1911 purchased of Maori land. The government also purchased from large estate holders for subdivision and closer settlement by small farmers. The Advances to Settlers Act (1894) provided low-interest mortgages, and the agriculture department disseminated information on the best farming methods. The Liberals proclaimed success in forging an egalitarian, anti-monopoly land policy. The policy built up support for the Liberal Party in rural North Island electorates. By 1903, the Liberals were so dominant that there was no longer an organized opposition in Parliament. North America The United States and Canada both saw a rise of agrarian-oriented parties in the early twentieth century as economic troubles motivated farming communities to become politically active. It has been proposed that differing responses to agrarian protest largely determined the course of power generated by these newly energized rural factions. According to sociologist Barry Eidlin: "In the United States, Democrats adopted a co-optive response to farmer and labor protest, incorporating these constituencies into the New Deal coalition. 
In Canada, both mainstream parties adopted a coercive response, leaving these constituencies politically excluded and available for an independent left coalition." These reactions may have helped determine the outcome of agrarian power and political associations in the US and Canada. United States of America Kansas Economic desperation among farmers across the state of Kansas in the nineteenth century spurred the creation of the People's Party in 1890, which soon after gained control of the governor's office in 1892. This party, consisting of a mix of Democrats, Socialists, Populists, and Fusionists, buckled under internal conflict regarding the unlimited coinage of silver. The Populists permanently lost power in 1898. Oklahoma Oklahoma farmers turned to political activity during the early twentieth century due to the outbreak of war, depressed crop prices, and an inhibited sense of progress towards owning their own farms. Tenancy was reportedly as high as 55% in Oklahoma by 1910. These pressures saw agrarian counties in Oklahoma supporting Socialist policies and politics, with the Socialists proposing a deeply agrarian-radical platform: ...the platform proposed a "Renters and Farmer's Program" which was strongly agrarian radical in its insistence upon various measures to put land into "the hands of the actual tillers of the soil." Although it did not propose to nationalize privately owned land, it did offer numerous plans to enlarge the state's public domain, from which land would be rented at prevailing share rents to tenants until they had paid rent equal to the land's value. The tenant and his children would have the right of occupancy and use, but the 'title' would remain in the 'commonwealth', an arrangement that might be aptly termed 'Socialist fee simple'. They proposed to exempt from taxation all farm dwellings, animals, and improvements up to the value of $1,000. 
The State Board of Agriculture would encourage 'co-operative societies' of farmers to make plans for the purchase of land, seed, tools, and for preparing and selling produce. In order to give farmers essential services at cost, the Socialists called for the creation of state banks and mortgage agencies, crop insurance, elevators, and warehouses. This agrarian-backed Socialist party would win numerous offices, causing a panic within the local Democratic party. The agrarian-Socialist movement was inhibited by voter suppression laws aimed at reducing the participation of voters of color, as well as national wartime policies intended to disrupt political elements considered subversive. The party's power peaked in 1914. Back-to-the-land movement Agrarianism is similar to, but not identical with, the back-to-the-land movement. Agrarianism concentrates on the fundamental goods of the earth, on communities of more limited economic and political scale than in modern society, and on simple living, even when the shift involves questioning the "progressive" character of some recent social and economic developments. Thus, agrarianism is not industrial farming, with its specialization in products and industrial scale. See also Agrarian socialism Farmer–Labor Party, USA early 20th century Jeffersonian democracy Labour-Farmer Party, Japan 1920s Minnesota Farmer–Labor Party, USA early 20th century Nordic agrarian parties Yeoman, English farmers References
Moral realism
Moral realism (also ethical realism) is the position that ethical sentences express propositions that refer to objective features of the world (that is, features independent of subjective opinion), some of which may be true to the extent that they report those features accurately. This makes moral realism a non-nihilist form of ethical cognitivism (which accepts that ethical sentences express propositions and can therefore be true or false) with an ontological orientation, standing in opposition to all forms of moral anti-realism and moral skepticism, including ethical subjectivism (which denies that moral propositions refer to objective facts), error theory (which denies that any moral propositions are true), and non-cognitivism (which denies that moral sentences express propositions at all). Moral realism's two main subdivisions are ethical naturalism and ethical non-naturalism. Most philosophers claim that moral realism dates at least to Plato as a philosophical doctrine and that it is a fully defensible form of moral doctrine. A 2009 survey involving 3,226 respondents found that 56% of philosophers accept or lean toward moral realism (28%: anti-realism; 16%: other). A 2020 study found that 62.1% accept or lean toward realism. Some notable examples of robust moral realists include David Brink, John McDowell, Peter Railton, Geoffrey Sayre-McCord, Michael Smith, Terence Cuneo, Russ Shafer-Landau, G. E. Moore, John Finnis, Richard Boyd, Nicholas Sturgeon, Thomas Nagel, Derek Parfit, and Peter Singer. Norman Geras has argued that Karl Marx was a moral realist. Moral realism's various philosophical and practical applications have been studied. Robust versus minimal moral realism A delineation of moral realism into a minimal form, a moderate form, and a robust form has been put forward in the literature. 
The robust model of moral realism commits moral realists to three theses: The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on). The alethic thesis: Some moral propositions are in fact true. The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties. The minimal model leaves off the metaphysical thesis, treating it as matter of contention among moral realists (as opposed to between moral realists and moral anti-realists). This dispute is not insignificant, as acceptance or rejection of the metaphysical thesis is taken by those employing the robust model as the key difference between moral realism and moral anti-realism. Indeed, the question of how to classify certain logically possible (if eccentric) views—such as the rejection of the semantic and alethic theses in conjunction with the acceptance of the metaphysical thesis—turns on which model we accept. Someone employing the robust model might call such a view "realist non-cognitivism," while someone employing the minimal model might simply place such a view alongside other, more traditional, forms of non-cognitivism. The robust model and the minimal model also disagree over how to classify moral subjectivism (roughly, the view that moral facts are not mind-independent in the relevant sense, but that moral statements may still be true). 
The historical association of subjectivism with moral anti-realism in large part explains why the robust model of moral realism has been dominant—even if only implicitly—both in the traditional and contemporary philosophical literature on metaethics. In the minimal sense of realism, R. M. Hare could be considered a realist in his later works, as he is committed to the objectivity of value judgments, even though he denies that moral statements express propositions with truth-values per se. Moral constructivists like John Rawls and Christine Korsgaard may also be realists in this minimalist sense; the latter describes her own position as procedural realism. Some readings of evolutionary science, such as those of Charles Darwin and James Mark Baldwin, have suggested that, insofar as an ethics may be associated with survival strategies and natural selection, such behavior may be associated with a moderate position of moral realism equivalent to an ethics of survival. Ethical objectivism Moral objectivism is the view that what is right or wrong does not depend on what anyone thinks is right or wrong, but rather on how it affects people's well-being. Moral objectivism allows for moral codes to be compared to each other through a set of universal facts. Nicholas Rescher says that moral codes cannot derive from one's personal moral compass. An example is Immanuel Kant's categorical imperative: "Act only according to that maxim [i.e., rule] whereby you can at the same time will that it become a universal law." John Stuart Mill proposed utilitarianism, which asserts that in any situation, the right thing to do is whatever is likely to produce the most happiness overall. According to the ethical objectivist, the truth or falsehood of typical moral judgments does not depend upon any person's or group of persons' beliefs or feelings. 
This view holds that moral propositions are analogous to propositions about chemistry, biology, or history, insomuch as they are true despite what anyone believes, hopes, wishes, or feels. When they fail to describe this mind-independent moral reality, they are false—no matter what anyone believes, hopes, wishes, or feels. There are many versions of ethical objectivism, including various religious views of morality, Platonistic intuitionism, Kantianism, utilitarianism, and certain forms of ethical egoism and contractualism. Platonists define ethical objectivism even more narrowly, so that it requires the existence of intrinsic value. Consequently, they reject the idea that contractualists or egoists could be ethical objectivists. Objectivism, in turn, places primacy on the origin of the frame of reference and considers any arbitrary frame of reference a form of ethical subjectivism by a transitive property, even when the frame incidentally coincides with reality. Advantages Moral realism allows the ordinary rules of logic (modus ponens, etc.) to be applied straightforwardly to moral statements. We can say that a moral belief is false or unjustified or contradictory in the same way we would about a factual belief. This is a problem for expressivism, as shown by the Frege–Geach problem. Another advantage of moral realism is its capacity to resolve moral disagreements: if two moral beliefs contradict one another, realism says that they cannot both be right, and therefore everyone involved ought to be seeking out the right answer to resolve the disagreement. Contrary theories of meta-ethics have trouble even formulating the statement "this moral belief is wrong," and so they cannot resolve disagreements in this way. Proponents Peter Railton's moral realism is often associated with a naturalist approach. He argues that moral facts can be reduced to non-moral facts and that our moral claims aim to describe an objective reality. 
In his well-known paper "Moral Realism" (1986), Railton advocates for a form of moral realism that is naturalistic and scientifically accessible. He suggests that moral facts can be understood in terms of the naturalistic concept of an individual's good. He employs a hypothetical observer's standpoint to explain moral judgments: this standpoint considers what fully rational, well-informed, and sympathetic agents would agree upon under ideal conditions. Railton's naturalistic approach aims to bridge the is-ought gap by explaining moral facts in terms of natural facts, and his theory is generally considered to be a response to the challenge of moral skepticism and anti-realism. By doing so, he attempts to show that moral facts are not mysterious or disconnected from the rest of the world, but can be understood and studied much like other natural phenomena. Philippa Foot adopts a moral realist position, criticizing Stevenson's idea that when evaluation is superposed on fact there has been a "committal in a new dimension." She introduces, by analogy, the practical implications of using the word "injury." Not just anything counts as an injury; there must be some impairment. When we suppose a man wants the things the injury prevents him from obtaining, have we not fallen into the old naturalistic fallacy? Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness. W. D. Ross articulates his moral realism in analogy to mathematics by stating that the moral order is just as real as "the spatial or numerical structure expressed in the axioms of geometry or arithmetic". In his defense of Divine Command Theory, and thereby moral realism, C. Stephen Evans comments that the fact that there are significant moral disagreements does not undermine moral realism. 
Much of what may appear to be moral disagreement is actually disagreement over facts. In abortion debates, for example, the crux of the issue may really be whether a fetus is a human person. He goes on to comment that there is in fact a tremendous amount of moral agreement. Five common principles are recognized across different human cultures: (1) a general duty not to harm others and a general duty to benefit others; (2) special duties to those with whom one has special relations, such as friends and family members; (3) duties to be truthful; (4) duties to keep one's commitments and promises; and (5) duties to deal fairly and justly with others. Criticisms Several criticisms have been raised against moral realism. A prominent criticism, articulated by J. L. Mackie, is that moral realism postulates the existence of "entities or qualities or relations of a very strange sort, utterly different from anything else in the universe. Correspondingly, if we were aware of them it would have to be by some faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else." A number of theories have been developed for how we access objective moral truths, including ethical intuitionism and moral sense theory. Another criticism of moral realism put forth by Mackie is that it can offer no plausible explanation for cross-cultural moral differences (ethical relativism): "The actual variations in the moral codes are more readily explained by the hypothesis that they reflect ways of life than by the hypothesis that they express perceptions, most of them seriously inadequate and badly distorted, of objective values". The evolutionary debunking argument suggests that because human psychology is primarily produced by evolutionary processes which do not seem to have a reason to be sensitive to moral facts, taking a moral realist stance can only lead to moral skepticism. 
The aim of the argument is to undercut the motivations for taking a moral realist stance, namely to be able to assert there are reliable moral standards. Biologist Richard D. Alexander has argued that "Ethical questions, and the study of morality or concepts of justice and right and wrong, derive solely from the existence of conflicts of interest" and that such conflicts are a necessary consequence of genetic individuality. He also argues that "Because morality involves conflicts of interest, it cannot easily be generalized into a universal despite virtually continual efforts by utilitarian philosophers to do that; morality does not derive its meaning from sets of universals or undeniable facts." Alexander’s views are shared by many scientists and starkly contradict moral realism. See also Cognitivism Cornell realism Emotivism Evolution of morality Instrumental convergence Moral absolutism Moral relativism Morality Nominalism The Right and the Good References Further reading Moral Realism - article from Internet Encyclopedia of Philosophy Moral realism - article from Stanford Encyclopedia of Philosophy Hume, David (1739). Treatise Concerning Human Nature, edited by L.A. Selby-Bigge. Oxford: Oxford University Press, 1888. Philosophical realism
Greek words for love
Ancient Greek philosophy differentiates main conceptual forms and distinct words for the Modern English word love: agápē, érōs, philía, philautía, storgē, and xenía. List of concepts Though there are more Greek words for love, variants and possibly subcategories, a general summary considering these Ancient Greek concepts is: Agápe means "love: esp. unconditional love, charity; the love of God for person and of person for God". Agape is used in ancient texts to denote unconditional love, and it was also used to refer to a love feast. Agape is used by Christians to express the unconditional love of God for His children. This type of love was further explained by Thomas Aquinas as "to will the good of another". Éros means "love, mostly of the sexual passion". The Modern Greek word "erotas" means "intimate love". Plato refined his own definition: Although eros is initially felt for a person, with contemplation it becomes an appreciation of the beauty within that person, and may ultimately transcend particulars to become an appreciation of beauty itself, hence the concept of platonic love to mean "without physical attraction". In Plato's Symposium, Socrates argues that eros helps the soul recall its inherent knowledge of ideal beauty and spiritual truth. Thus, the ideal form of youthful beauty arouses erotic desire, but also points toward higher spiritual ideals. Philia means "affectionate regard, friendship", usually "between equals". It is a dispassionate virtuous love. In Aristotle's Nicomachean Ethics, philia is expressed variously as loyalty to friends ("brotherly love"), family, and community; it requires virtue, equality, and familiarity. Storge means "love, affection" and "especially of parents and children". It is the common or natural empathy, like that felt by parents for offspring. It is rarely used in ancient works, almost exclusively to describe family relationships. It may also express mere acceptance or tolerance, as in "loving" the tyrant. 
It may also describe love of country or enthusiasm for a favorite sports team. Philautia means "self-love". To love oneself or "regard for one's own happiness or advantage" has been conceptualized both as a basic human necessity and as a moral flaw, akin to vanity and selfishness, synonymous with amour-propre or egotism. The Greeks further divided this love into positive and negative: one, the unhealthy version, is the self-obsessed love, and the other is the concept of self-compassion. Aristotle also considers philautia to be the root of a general kind of love for family, friends, the enjoyment of an activity, as well as that between lovers. Xenia is an ancient Greek concept of hospitality, "guest-friendship", or "ritualized friendship". It was a social institution requiring generosity, gift exchange, and reciprocity. Hospitality towards foreigners and traveling Hellenes was understood as a moral obligation under the patronage of Zeus Xenios and Athene Xenia. Many understand the Odyssey as a story principally concerned with the concept. For instance, the failure of the Suitors of Penelope to appropriately welcome disguised Odysseus into his own home can be seen as justification for their subsequent demise. See also Color wheel theory of love Diotima of Mantinea The Four Loves by C. S. Lewis Greek love Intellectual virtue – Greek words for knowledge Love Restoration of Peter Sapphic love References Sources
Scholarly method
The scholarly method or scholarship is the body of principles and practices used by scholars and academics to make their claims about their subjects of expertise as valid and trustworthy as possible, and to make them known to the scholarly public. It comprises the methods that systematically advance the teaching, research, and practice of a scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer reviewed through various methods. The scholarly method includes the subcategories of the scientific method, with which scientists bolster their claims, and the historical method, with which historians verify their claims. Methods The historical method comprises the techniques and guidelines by which historians research primary sources and other evidence, and then write history. The question of the nature, and indeed the possibility, of sound historical method is raised in the philosophy of history, as a question of epistemology. History guidelines commonly used by historians in their work require external criticism, internal criticism, and synthesis. The empirical method is generally taken to mean the collection of data on which to base a hypothesis or derive a conclusion in science. It is part of the scientific method, but is often mistakenly assumed to be synonymous with other methods. The empirical method is not sharply defined and is often contrasted with the precision of experiments, where data emerges from the systematic manipulation of variables. The experimental method investigates causal relationships among variables. An experiment is a cornerstone of the empirical approach to acquiring data about the world and is used in both natural sciences and social sciences. An experiment can be used to help solve practical problems and to support or negate theoretical assumptions. 
The scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses. See also Academia Academic authorship Academic publishing Discipline (academia) Doctor (title) Ethics Historical revisionism History of scholarship Manual of style Professor Source criticism Urtext edition Wissenschaft References Academia Methodology
Justification (epistemology)
Justification (also called epistemic justification) is a property of beliefs that fulfill certain norms about what a person should believe. Epistemologists often identify justification as a component of knowledge distinguishing it from mere true opinion. They study the reasons why someone holds a belief. Epistemologists are concerned with various features of belief, which include the ideas of warrant (a proper justification for holding a belief), knowledge, rationality, and probability, among others. Debates surrounding epistemic justification often involve the structure of justification, including whether there are foundational justified beliefs or whether mere coherence is sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others. Justification and knowledge "Justification" involves the reasons why someone holds a belief that one should hold based on one's current evidence. Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold. Many philosophers from Plato onward have treated "justified true belief" (JTB) as constituting knowledge. It is particularly associated with a theory discussed in his dialogues Meno and Theaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end of Theaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of the Gettier problem. The subject of justification has played a major role in the value of knowledge as "justified true belief". Some contemporary epistemologists, such as Jonathan Kvanvig assert that justification isn't necessary in getting to the truth and avoiding errors. 
Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismisses the necessity of justification, since justification is not connected to the truth. Conceptions of justification William P. Alston identifies two conceptions of justification. One conception is "deontological" justification, which holds that justification evaluates the obligation and responsibility of a person having only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds to epistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entails that the belief is at least likely to be true. The truth-conducive conception of justification corresponds to epistemic externalism. Theories of justification There are several different views as to what entails justification, mostly focusing on the question "How sure do we need to be that our beliefs correspond to the actual world?" Different theories of justification require different conditions before a belief can be considered justified. Theories of justification generally include other aspects of epistemology, such as defining knowledge. Notable theories of justification include: Foundationalism: Basic beliefs justify other, non-basic beliefs. Epistemic coherentism: Beliefs are justified if they cohere with other beliefs a person holds; each belief is justified if it coheres with the overall system of beliefs. Infinitism: Beliefs are justified by infinite chains of reasons. Foundherentism: Both fallible foundations and coherence are components of justification; proposed by Susan Haack. 
Internalism and externalism: The believer must be able to justify a belief through internal knowledge (internalism) or outside sources of knowledge (externalism). Reformed epistemology: Beliefs are warranted by proper cognitive function; proposed by Alvin Plantinga. Evidentialism: Beliefs depend solely on the evidence for them. Reliabilism: A belief is justified if it is the result of a reliable process. Infallibilism: Knowledge is incompatible with the possibility of being wrong. Fallibilism: Claims can be accepted even though they cannot be conclusively proven or justified. Non-justificationism: Knowledge is produced by attacking claims and refuting them instead of justifying them. Skepticism: Knowledge is impossible or undecidable. Criticism of theories of justification Robert Fogelin claims to detect a suspicious resemblance between the theories of justification and Agrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes of Pyrrhonian skepticism. William P. Alston criticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition." See also Dream argument Regress argument (epistemology) Münchhausen trilemma References External links Stanford Encyclopedia of Philosophy entry on Foundationalist Theories of Epistemic Justification Stanford Encyclopedia of Philosophy entry on Epistemology, 2. What is Justification? Stanford Encyclopedia of Philosophy entry on Internalist vs. 
Externalist Conceptions of Epistemic Justification Stanford Encyclopedia of Philosophy entry on Coherentist Theories of Epistemic Justification Internet Encyclopedia of Philosophy Internet Encyclopedia of Philosophy entry on Epistemic Justification Internet Encyclopedia of Philosophy entry on Epistemic Entitlement Internet Encyclopedia of Philosophy entry on Internalism and Externalism in Epistemology Internet Encyclopedia of Philosophy entry on Epistemic Consequentialism Internet Encyclopedia of Philosophy entry on Coherentism in Epistemology Internet Encyclopedia of Philosophy entry on Contextualism in Epistemology Internet Encyclopedia of Philosophy entry on Knowledge-First Theories of Justification Metatheory Concepts in epistemology
Tinbergen's four questions
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include the ultimate (evolutionary) explanations, namely behavioural adaptive functions and phylogenetic history, and the proximate explanations, namely underlying physiological mechanisms and ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. 
On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause. Second question: Phylogeny (evolution) Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve. 
Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot. It corresponds to Aristotle's formal cause. Proximate explanations Third question: Mechanism (causation) Some prominent classes of proximate causal mechanisms include: The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability. Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species. Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates. In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect. However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. 
transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity." It corresponds to Aristotle's efficient cause. Fourth question: Ontogeny (development) Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form. In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture). An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism). Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact. A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87). See developmental biology and developmental psychology. It corresponds to Aristotle's material cause. Causal relationships The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. 
In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels. Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour. Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment—or more technically, the environment of evolutionary adaptedness (EEA) may result in evolution as measured by a change in its genes. In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods. Examples Vision Four ways of explaining visual perception: Function: To find food and avoid danger. Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot. Mechanism: The lens of the eye focuses light on the retina. Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99). Westermarck effect Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196): Function: To discourage inbreeding, which decreases the number of viable offspring. Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago. Mechanism: Little is known about the neuromechanism. Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzim. 
Romantic love Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021): Function: Mate choice, courtship, sex, pair-bonding. Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans. Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love. Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan. Sleep Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021): Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger. Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds. Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm. Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences. Use of the four-question schema as "periodic table" Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research, 1-4, and the levels of inquiry, a-g); the tabulation was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). 
One advantage of this organizational system, what might be called the "periodic table of life sciences," is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry. This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF). References Sources Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, Sinauer, 7th edition. . Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, Pearson Education, 2nd edition. . Cartwright, John (2000) Evolution and Human Behaviour, MIT Press, . Krebs, John R., Davies N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing, . Lorenz, Konrad (1937) Biologische Fragestellungen in der Tierpsychologie (i.e., Biological Questions in Animal Psychology). Zeitschrift für Tierpsychologie, 1: 24–32. Mayr, Ernst (2001) What Evolution Is, Basic Books. . Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB. Nesse, Randolph M (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681-682. Moore, David S. 
(2001) The Dependent Gene: The Fallacy of 'Nature vs. Nurture''', Henry Holt. . Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial. . Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20: 410–433. Wilson, Edward O. (1998) Consilience: The Unity of Knowledge'', Vintage Books. . External links Diagrams The Four Areas of Biology pdf The Four Areas and Levels of Inquiry pdf Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt Tinbergen's Four Questions, organized pdf Derivative works On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff. Behavioral ecology Ethology Evolutionary psychology Sociobiology
Social epistemology
Social epistemology refers to a broad set of approaches that can be taken in epistemology (the study of knowledge) that construes human knowledge as a collective achievement. Another way of characterizing social epistemology is as the evaluation of the social dimensions of knowledge or information. As a field of inquiry in analytic philosophy, social epistemology deals with questions about knowledge in social contexts, meaning those in which knowledge attributions cannot be explained by examining individuals in isolation from one another. The most common topics discussed in contemporary social epistemology are testimony (e.g. "When does a belief that x is true which resulted from being told 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?"). Social epistemology also examines the social justification of belief. One of the enduring difficulties in defining "social epistemology" is the attempt to determine what the word "knowledge" means in this context. There is also a challenge in arriving at a definition of "social" which satisfies academics from different disciplines. Social epistemologists work in many of the disciplines of the humanities and social sciences, most commonly in philosophy and sociology. In addition to marking a distinct movement in traditional and analytic epistemology, social epistemology is associated with the interdisciplinary field of science and technology studies (STS). History of the term The consideration of social dimensions of knowledge in relation to philosophy started around 380 BCE with Plato's dialogue Charmides. 
This dialogue includes Socrates' argument about whether anyone is capable of examining whether another man's claim to know something is true or not. In it he questions the degree of certainty a non-expert in a field can have towards a person's claim to be a specialist in that same field. Charmides also explored the tendency of the utopian vision of social relations to degenerate into dystopian fantasy. As the exploration of a dependence on authoritative figures constitutes a part of the study of social epistemology, this shows that the idea existed long before it was given its label. In 1936, Karl Mannheim turned Karl Marx's theory of ideology (which interpreted the "social" aspect in epistemology to be of a political or sociological nature) into an analysis of how human society develops and functions in this respect. Particularly, this Marxist analysis prompted Mannheim to write Ideology and Utopia, which investigated the classical sociology of knowledge and the construct of ideology. The term "social epistemology" was first coined by the library scientists Margaret Egan and Jesse Shera in a Library Quarterly paper at the University of Chicago Graduate Library School in the 1950s. The term was used by Robert K. Merton in a 1972 article in the American Journal of Sociology and then by Steven Shapin in 1979. However, it was not until the 1980s that the current sense of "social epistemology" began to emerge. The rise of social epistemology In the 1980s, there was a powerful growth of interest amongst philosophers in topics such as the epistemic value of testimony, the nature and function of expertise, the proper distribution of cognitive labor and resources among individuals in communities, and the status of group reasoning and knowledge. 
In 1987, the philosophical journal Synthese published a special issue on social epistemology which included two authors that have since taken the branch of epistemology in two divergent directions: Alvin Goldman and Steve Fuller. Fuller founded the journal Social Epistemology: A Journal of Knowledge, Culture, and Policy in 1987 and published his first book, Social Epistemology, in 1988. Goldman's Knowledge in a Social World came out in 1999. Goldman advocates for a type of epistemology which is sometimes called "veritistic epistemology" because of its large emphasis on truth. This type of epistemology is sometimes seen to side with "essentialism" as opposed to "multiculturalism". But Goldman has argued that this association between veritistic epistemology and essentialism is not necessary. He describes social epistemology as knowledge derived from one's interactions with another person, group or society. Goldman looks into one of the two strategies of the socialization of epistemology. This strategy includes the evaluation of social factors that impact knowledge formed on true belief. In contrast, Fuller prefers the second strategy, which defines knowledge influenced by social factors as collectively accepted belief. The difference between the two can be illustrated with examples: the first strategy means analyzing how your degree of wealth (a social factor) influences what information you determine to be valid, whilst the second strategy occurs when an evaluation is done on wealth's influence upon your knowledge acquired from the beliefs of the society in which you find yourself. Fuller's position supports the conceptualization that social epistemology is a critique of context, particularly in his approach to "knowledge society" and the "university" as integral contexts of modern learning. It is said that this articulated a reformulation of the Duhem-Quine thesis, which covers the underdetermination of theory by data. 
It explains that the problem of context will assume this form: "knowledge is determined by its context". In 2012, on the occasion of the 25th anniversary of Social Epistemology, Fuller reflected upon the history and the prospects of the field, including the need for social epistemology to re-connect with the larger issues of knowledge production first identified by Charles Sanders Peirce as "cognitive economy" and nowadays often pursued by library and information science. As for "analytic social epistemology", to which Goldman has been a significant contributor, Fuller concludes that it has "failed to make significant progress owing, in part, to a minimal understanding of actual knowledge practices, a minimised role for philosophers in ongoing inquiry, and a focus on maintaining the status quo of epistemology as a field." Kuhn, Foucault, and the sociology of scientific knowledge The basic view of knowledge that motivated the emergence of social epistemology as it is perceived today can be traced to the work of Thomas Kuhn and Michel Foucault, which gained acknowledgment at the end of the 1960s. Both brought historical concerns directly to bear on problems long associated with the philosophy of science. Perhaps the most notable issue here was the nature of truth, which both Kuhn and Foucault described as a relative and contingent notion. Against this background, ongoing work in the sociology of scientific knowledge (SSK) and the history and philosophy of science (HPS) was able to assert its epistemological consequences, leading most notably to the establishment of the strong programme at the University of Edinburgh. In terms of the two strands of social epistemology, Fuller is more sensitive and receptive to this historical trajectory (if not always in agreement) than Goldman, whose "veritistic" social epistemology can reasonably be read as a systematic rejection of the more extreme claims associated with Kuhn and Foucault.
Social epistemology as a field In the standard sense of the term today, social epistemology is a field within analytic philosophy. It focuses on the social aspects of how knowledge is created and disseminated. What precisely these social aspects are, and whether they have beneficial or detrimental effects on the possibility of creating, acquiring, and spreading knowledge, is a subject of continuous debate. The most common topics discussed in contemporary social epistemology are testimony (e.g. "When does a belief that 'x is true' which resulted from being told that 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?"). Within the field, "the social" is approached in two complementary and not mutually exclusive ways: the "social" character of knowledge can be approached either through inquiries into inter-individual epistemic relations or through inquiries focusing on epistemic communities. The inter-individual approach typically focuses on issues such as testimony, epistemic trust as a form of trust placed by one individual in another, epistemic dependence, and epistemic authority. The community approach typically focuses on issues such as community standards of justification, community procedures of critique, diversity, epistemic justice, and collective knowledge. Social epistemology as a field within analytic philosophy has close ties to, and often overlaps with, the philosophy of science.
While parts of the field engage in abstract, normative considerations of knowledge creation and dissemination, other parts of the field are "naturalized epistemology" in the sense that they draw on empirically gained insights, whether from natural-science research (e.g., cognitive psychology) or from qualitative or quantitative social-science research. (For the notion of "naturalized epistemology" see Willard Van Orman Quine.) And while parts of the field are concerned with analytic considerations of a rather general character, case-based and domain-specific inquiries into, e.g., knowledge creation in collaborative scientific practice, knowledge exchange on online platforms, or knowledge gained in learning institutions play an increasing role. Important academic journals for social epistemology as a field within analytic philosophy are, e.g., Episteme, Social Epistemology, and Synthese. However, major works within this field are also published in journals that predominantly address philosophers of science and psychology or in interdisciplinary journals which focus on particular domains of inquiry (such as, e.g., Ethics and Information Technology). Major philosophers who influenced social epistemology
Plato, in the Charmides dialogue
John Locke, on the problem of testimony
David Hume, on the problem of testimony
Thomas Reid, on the problem of testimony
Karl Marx, in interrelating ideology and knowledge; taken up by Karl Mannheim, who concentrated on the social conditioning of knowledge, reasoning that a knowledge claim's validity is restricted by the social conditions under which the claim was initially made
Miranda Fricker, on the problem of testimony
Present and future concerns Both varieties of social epistemology remain largely "academic" or "theoretical" projects. Yet both emphasize the social significance of knowledge and therefore the cultural value of social epistemology itself.
A range of journals publishing social epistemology welcome papers that include a policy dimension. More practical applications of social epistemology can be found in the areas of library science, academic publishing, guidelines for scientific authorship and collaboration, knowledge policy, and debates over the role of the Internet in knowledge transmission and creation. Social epistemology is still considered a relatively new addition to philosophy, with its problems and theories still fresh and in rapid movement. Of increasing importance are developments in social epistemology within transdisciplinarity, as manifested in media ecology. See also Bayesian epistemology Epistemology Feminist epistemology Knowledge falsification Sociology of knowledge Social constructionism Social philosophy Reflexivity (social theory) Media ecology Notes References Berlin, James A. Rhetorics, Poetics, and Cultures: Refiguring College English Studies. Indiana: Parlor Press, 2003. Egan, Margaret, and Jesse Shera. 1952. "Foundations of a Theory of Bibliography." Library Quarterly 44: 125–37. Goldman, Alvin, and Thomas Blanchard (2016). "Social Epistemology." In The Stanford Encyclopedia of Philosophy (Winter 2016 ed.), edited by Edward N. Zalta. Metaphysics Research Lab, Stanford University. Goldman, Alvin. "Social Epistemology." stanford.library.sydney.edu.au. Retrieved 2017-02-22. Longino, Helen. 1990. Science as Social Knowledge. Princeton: Princeton University Press. Longino, Helen. 2001. The Fate of Knowledge. Princeton: Princeton University Press. Remedios, Francis. 2003. Legitimizing Scientific Knowledge: An Introduction to Steve Fuller's Social Epistemology. Lexington Books. Rimkutė, Audronė (2014). "The Problem of Social Knowledge in Contemporary Social Epistemology: Two Approaches." Problemos (in Lithuanian) 65: 4–19. doi:10.15388/Problemos.2004.65.6645. ISSN 1392-1126. Schmitt, Frederick F. 1994. Socializing Epistemology. Rowman & Littlefield. Schmitt, Frederick F., and Oliver R. Scholz (2010).
"Introduction: The History of Social Epistemology". Episteme. 7 (1): 1–6. doi:10.3366/E174236000900077X. ISSN 1750-0117. Solomon, Miriam. 2001. Social Empricism. Cambridge: MIT Press. Further reading "What Is Social Epistemology? A Smorgasbord of projects", in Pathways to Knowledge: Private and Public, Oxford University Press, Pg:182-204, "Relativism, Rationalism and the Sociology of Knowledge", Barry Barnes and David Bloor, in Rationality and Relativism, Pg:22 Social Epistemology, Steve Fuller, Indiana University Press, p. 3. External links The journal Social Epistemology Interdisciplinary subfields of sociology Epistemology Philosophy of science Social philosophy
Conceptual model
The term conceptual model refers to any model that is formed after a conceptualization or generalization process. Conceptual models are often abstractions of things in the real world, whether physical or social. Semantic studies are relevant to various stages of concept formation. Semantics is fundamentally a study of concepts, the meaning that thinking beings give to various elements of their experience. Overview Concept models and conceptual models The value of a conceptual model is usually directly proportional to how well it corresponds to a past, present, future, actual or potential state of affairs. A concept model (a model of a concept) is quite different, because in order to be a good model it need not have this real-world correspondence. In artificial intelligence, conceptual models and conceptual graphs are used for building expert systems and knowledge-based systems; here the analysts are concerned to represent expert opinion on what is true, not their own ideas on what is true. Type and scope of conceptual models Conceptual models range in type from the more concrete, such as the mental image of a familiar physical object, to the formal generality and abstractness of mathematical models which do not appear to the mind as an image. Conceptual models also range in terms of the scope of the subject matter that they are taken to represent. A model may, for instance, represent a single thing (e.g. the Statue of Liberty), whole classes of things (e.g. the electron), or even very vast domains of subject matter such as the physical universe. The variety and scope of conceptual models is due to the variety of purposes of the people using them. Conceptual modeling is the activity of formally describing some aspects of the physical and social world around us for the purposes of understanding and communication.
Fundamental objectives A conceptual model's primary objective is to convey the fundamental principles and basic functionality of the system which it represents. Also, a conceptual model must be developed in such a way as to provide an easily understood system interpretation for the model's users. A conceptual model, when implemented properly, should satisfy four fundamental objectives:
Enhance an individual's understanding of the representative system
Facilitate efficient conveyance of system details between stakeholders
Provide a point of reference for system designers to extract system specifications
Document the system for future reference and provide a means for collaboration
The conceptual model plays an important role in the overall system development life cycle. Figure 1 below depicts the role of the conceptual model in a typical system development scheme. It is clear that if the conceptual model is not fully developed, the execution of fundamental system properties may not be implemented properly, giving way to future problems or system shortfalls. These failures do occur in the industry and have been linked to a lack of user input, incomplete or unclear requirements, and changing requirements. Those weak links in the system design and development process can be traced to improper execution of the fundamental objectives of conceptual modeling. The importance of conceptual modeling is evident when such systemic failures are mitigated by thorough system development and adherence to proven development objectives/techniques. Modelling techniques Numerous techniques can be applied across multiple disciplines to increase the user's understanding of the system to be modeled. A few techniques are briefly described in the following text; however, many more exist or are being developed.
Some commonly used conceptual modeling techniques and methods include: workflow modeling, workforce modeling, rapid application development, object-role modeling, and the Unified Modeling Language (UML). Data flow modeling Data flow modeling (DFM) is a basic conceptual modeling technique that graphically represents elements of a system. DFM is a fairly simple technique; however, like many conceptual modeling techniques, it is possible to construct higher and lower level representative diagrams. The data flow diagram usually does not convey complex system details such as parallel development considerations or timing information, but rather works to bring the major system functions into context. Data flow modeling is a central technique used in systems development that utilizes the structured systems analysis and design method (SSADM). Entity relationship modeling Entity–relationship modeling (ERM) is a conceptual modeling technique used primarily for software system representation. Entity–relationship diagrams, which are a product of executing the ERM technique, are normally used to represent database models and information systems. The main components of the diagram are the entities and relationships. The entities can represent independent functions, objects, or events. The relationships are responsible for relating the entities to one another. To form a system process, the relationships are combined with the entities and any attributes needed to further describe the process. Multiple diagramming conventions exist for this technique: IDEF1X, Bachman, and EXPRESS, to name a few. These conventions are just different ways of viewing and organizing the data to represent different system aspects. Event-driven process chain The event-driven process chain (EPC) is a conceptual modeling technique which is mainly used to systematically improve business process flows.
Like most conceptual modeling techniques, the event-driven process chain consists of entities/elements and functions that allow relationships to be developed and processed. More specifically, the EPC is made up of events which define what state a process is in or the rules by which it operates. In order to progress through events, a function (active event) must be executed. Depending on the process flow, the function has the ability to transform event states or link to other event-driven process chains. Other elements exist within an EPC, all of which work together to define how and by what rules the system operates. The EPC technique can be applied to business practices such as resource planning, process improvement, and logistics. Joint application development The dynamic systems development method uses a specific process called JEFFF to conceptually model a system's life cycle. JEFFF is intended to focus more on the higher-level development planning that precedes a project's initialization. The JAD process calls for a series of workshops in which the participants work to identify, define, and generally map a successful project from conception to completion. This method has been found not to work well for large-scale applications; however, smaller applications usually report some net gain in efficiency. Place/transition net Also known as Petri nets, this conceptual modeling technique allows a system to be constructed with elements that can be described by direct mathematical means. The Petri net, because of its nondeterministic execution properties and well-defined mathematical theory, is a useful technique for modeling concurrent system behavior, i.e. simultaneous process executions. State transition modeling State transition modeling makes use of state transition diagrams to describe system behavior. These state transition diagrams use distinct states to define system behavior and changes.
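The state transition diagrams just described can be sketched in code as a small finite-state machine. The door example, its states, and its events below are hypothetical, chosen only to illustrate how distinct states and a transition table define system behavior:

```python
# Minimal sketch of a state transition model as a finite-state machine.
# States, events, and the transition table are illustrative assumptions,
# not taken from any particular modeling tool.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> next_state
        self.transitions = transitions

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A door modeled with three states and four events.
door = StateMachine("closed", {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
})
door.fire("lock")    # closed -> locked
door.fire("unlock")  # locked -> closed
print(door.state)    # closed
```

The transition table is exactly the information a state transition diagram conveys: each arrow in the diagram becomes one entry in the dictionary, and undefined arrows are rejected at run time.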
Most current modeling tools contain some kind of ability to represent state transition modeling. The use of state transition models can be most easily recognized as logic state diagrams and directed graphs for finite-state machines. Technique evaluation and selection Because the conceptual modeling method can sometimes be purposefully vague to account for a broad area of use, the actual application of concept modeling can become difficult. To alleviate this issue, and to shed some light on what to consider when selecting an appropriate conceptual modeling technique, the framework proposed by Gemino and Wand will be discussed in the following text. However, before evaluating the effectiveness of a conceptual modeling technique for a particular application, an important point must be understood: comparing conceptual models by way of specifically focusing on their graphical or top-level representations is shortsighted. Gemino and Wand make a good point when arguing that the emphasis should be placed on a conceptual modeling language when choosing an appropriate technique. In general, a conceptual model is developed using some form of conceptual modeling technique. That technique will utilize a conceptual modeling language that determines the rules for how the model is arrived at. Understanding the capabilities of the specific language used is inherent to properly evaluating a conceptual modeling technique, as the language reflects the technique's descriptive ability. Also, the conceptual modeling language will directly influence the depth at which the system is capable of being represented, whether it be complex or simple.
Considering affecting factors Building on some of their earlier work, Gemino and Wand acknowledge some main points to consider when studying the affecting factors: the content that the conceptual model must represent, the method in which the model will be presented, the characteristics of the model's users, and the conceptual modeling language's specific task. The conceptual model's content should be considered in order to select a technique that would allow relevant information to be presented. The presentation method for selection purposes would focus on the technique's ability to represent the model at the intended level of depth and detail. The characteristics of the model's users or participants are an important aspect to consider. A participant's background and experience should coincide with the conceptual model's complexity; otherwise, misrepresentation of the system or misunderstanding of key system concepts could lead to problems in that system's realization. The conceptual modeling language's task will further allow an appropriate technique to be chosen. The difference between creating a system conceptual model to convey system functionality and creating a system conceptual model to interpret that functionality could involve two completely different types of conceptual modeling languages. Considering affected variables Gemino and Wand go on to expand the affected-variable content of their proposed framework by considering the focus of observation and the criterion for comparison. The focus of observation considers whether the conceptual modeling technique will create a "new product", or whether the technique will only bring about a more intimate understanding of the system being modeled. The criterion for comparison would weigh the ability of the conceptual modeling technique to be efficient or effective.
A conceptual modeling technique that allows for development of a system model which takes all system variables into account at a high level may make the process of understanding the system functionality more efficient, but the technique lacks the necessary information to explain the internal processes, rendering the model less effective. When deciding which conceptual technique to use, the recommendations of Gemino and Wand can be applied in order to properly evaluate the scope of the conceptual model in question. Understanding the conceptual model's scope will lead to a more informed selection of a technique that properly addresses that particular model. In summary, when deciding between modeling techniques, answering the following questions would allow one to address some important conceptual modeling considerations. What content will the conceptual model represent? How will the conceptual model be presented? Who will be using or participating in the conceptual model? How will the conceptual model describe the system? What is the conceptual model's focus of observation? Will the conceptual model be efficient or effective in describing the system? Another function of the simulation conceptual model is to provide a rational and factual basis for assessment of simulation application appropriateness. Models in philosophy and science Mental model In cognitive psychology and philosophy of mind, a mental model is a representation of something in the mind, but a mental model may also refer to a nonphysical external model of the mind itself. Metaphysical models A metaphysical model is a type of conceptual model which is distinguished from other conceptual models by its proposed scope; a metaphysical model intends to represent reality in the broadest possible way. This is to say that it explains the answers to fundamental questions, such as whether matter and mind are one or two substances, or whether or not humans have free will. Conceptual model vs.
semantic model Conceptual models and semantic models have many similarities; however, the way they are presented, the level of flexibility, and the use are different. Conceptual models have a certain purpose in mind, hence the core semantic concepts are predefined in a so-called meta model. This enables pragmatic modelling but reduces flexibility, as only the predefined semantic concepts can be used. Examples are flow charts for process behaviour or organisational charts for tree structures. Semantic models are more flexible and open, and therefore more difficult to model. Potentially any semantic concept can be defined, hence the modelling support is very generic. Examples are terminologies, taxonomies, or ontologies. In a concept model each concept has a unique and distinguishable graphical representation, whereas semantic concepts are by default the same. In a concept model each concept has predefined properties that can be populated, whereas semantic concepts are related to concepts that are interpreted as properties. In a concept model operational semantics can be built in, like the processing of a sequence, whereas a semantic model needs an explicit semantic definition of the sequence. The decision whether a concept model or a semantic model is used therefore depends on the "object under survey", the intended goal, the necessary flexibility, as well as how the model is interpreted. In the case of human interpretation there may be a focus on graphical concept models; in the case of machine interpretation there may be a focus on semantic models. Epistemological models An epistemological model is a type of conceptual model whose proposed scope is the known and the knowable, and the believed and the believable. Logical models In logic, a model is a type of interpretation under which a particular statement is true.
Logical models can be broadly divided into ones which only attempt to represent concepts, such as mathematical models; and ones which attempt to represent physical objects, and factual relationships, among which are scientific models. Model theory is the study of (classes of) mathematical structures such as groups, fields, graphs, or even universes of set theory, using tools from mathematical logic. A system that gives meaning to the sentences of a formal language is called a model for the language. If a model for a language moreover satisfies a particular sentence or theory (set of sentences), it is called a model of the sentence or theory. Model theory has close ties to algebra and universal algebra. Mathematical models Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. A more comprehensive type of mathematical model uses a linguistic version of category theory to model a given situation. Akin to entity-relationship models, custom categories or sketches can be directly translated into database schemas. The difference is that logic is replaced by category theory, which brings powerful theorems to bear on the subject of modeling, especially useful for translating between disparate models (as functors between categories). Scientific models A scientific model is a simplified abstract view of a complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. 
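As a concrete instance of the satisfaction relation just described, the structure consisting of the natural numbers with their usual ordering is a model of the sentence asserting that every element has a strictly greater one:

```latex
% The structure (N, <) satisfies the sentence "for all x there exists y
% with x < y", since every natural number is exceeded by its successor:
\[
  (\mathbb{N}, <) \models \forall x\,\exists y\,(x < y)
\]
% By contrast, any finite linear order falsifies this sentence: its
% greatest element provides no witness for y.
```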
Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true (Leo Apostel, "Formal study of models", in The Concept and the Role of the Model in Mathematics and Natural and Social Sciences, edited by Hans Freudenthal, Springer, 1961, pp. 8–9). Statistical models A statistical model is a probability distribution function proposed as generating data. In a parametric model, the probability distribution function has variable parameters, such as the mean and variance in a normal distribution, or the coefficients for the various exponents of the independent variable in linear regression. A nonparametric model has a distribution function without parameters, such as in bootstrapping, and is only loosely confined by assumptions. Model selection is a statistical method for selecting a distribution function within a class of them; e.g., in linear regression where the dependent variable is a polynomial of the independent variable with parametric coefficients, model selection is selecting the highest exponent, and may be done with nonparametric means, such as with cross validation. In statistics there can be models of mental events as well as models of physical events. For example, a statistical model of customer behavior is a model that is conceptual (because behavior is physical), but a statistical model of customer satisfaction is a model of a concept (because satisfaction is a mental, not a physical, event). Social and political models Economic models In economics, a model is a theoretical construct that represents economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models use structural parameters.
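The nonparametric bootstrap mentioned in the statistical-models passage above can be sketched in a few lines: rather than assuming a parametric distribution, we resample the observed data with replacement to estimate the sampling variability of a statistic. The data values and the choice of statistic (the sample mean) are made up purely for illustration:

```python
# Minimal sketch of the nonparametric bootstrap: estimate the standard
# error of the sample mean by resampling with replacement.
# The data values below are invented for illustration only.
import random
import statistics

random.seed(0)
data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]

def bootstrap_se(sample, statistic, n_resamples=2000):
    """Estimate the standard error of `statistic` via bootstrap resampling."""
    estimates = []
    for _ in range(n_resamples):
        resample = [random.choice(sample) for _ in sample]
        estimates.append(statistic(resample))
    return statistics.stdev(estimates)

se = bootstrap_se(data, statistics.mean)
print(f"mean = {statistics.mean(data):.3f}, bootstrap SE ~ {se:.3f}")
```

The procedure makes no distributional assumption beyond the observed sample itself, which is exactly what "a distribution function without parameters" amounts to in practice.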
Structural parameters are underlying parameters in a model or class of models. A model may have various parameters and those parameters may change to create various properties. Models in systems architecture A system model is the conceptual model that describes and represents the structure, behavior, and more views of a system. A system model can represent multiple views of a system by using two different approaches. The first one is the non-architectural approach and the second one is the architectural approach. The non-architectural approach respectively picks a model for each view. The architectural approach, also known as system architecture, instead of picking many heterogeneous and unrelated models, will use only one integrated architectural model. Business process modelling In business process modelling the enterprise process model is often referred to as the business process model. Process models are core concepts in the discipline of process engineering. Process models are: Processes of the same nature that are classified together into a model. A description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. Models in information system design Conceptual models of human activity systems Conceptual models of human activity systems are used in soft systems methodology (SSM), which is a method of systems analysis concerned with the structuring of problems in management. 
These models are models of concepts; the authors specifically state that they are not intended to represent a state of affairs in the physical world. They are also used in information requirements analysis (IRA), which is a variant of SSM developed for information system design and software engineering. Logico-linguistic models Logico-linguistic modeling is another variant of SSM that uses conceptual models. However, this method combines models of concepts with models of putative real-world objects and events. It is a graphical representation of modal logic in which modal operators are used to distinguish statements about concepts from statements about real-world objects and events. Data models Entity–relationship model In software engineering, an entity–relationship model (ERM) is an abstract and conceptual representation of data. Entity–relationship modeling is a database modeling method, used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity–relationship diagrams, ER diagrams, or ERDs. Entity–relationship models have had wide application in the building of information systems intended to support activities involving objects and events in the real world. In these cases they are models that are conceptual. However, this modeling method can also be used to build computer games or a family tree of the Greek gods; in these cases it would be used to model concepts. Domain model A domain model is a type of conceptual model used to depict the structural elements and their conceptual constraints within a domain of interest (sometimes called the problem domain). A domain model includes the various entities, their attributes and relationships, plus the constraints governing the conceptual integrity of the structural model elements comprising that problem domain.
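The entity–relationship ideas above can be illustrated in code: two entity types with attributes and a one-to-many relationship between them, roughly the information an ER diagram of this tiny domain would capture. The Author/Book entities and the "wrote" relationship are invented for the example:

```python
# Minimal sketch of an entity-relationship model expressed as code.
# Entities (Author, Book), their attributes, and the one-to-many
# "wrote" relationship are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Book:          # entity with attributes
    title: str
    year: int

@dataclass
class Author:        # entity with attributes
    name: str
    # one-to-many relationship: one Author relates to many Books
    books: list = field(default_factory=list)

def wrote(author: Author, book: Book) -> None:
    """The 'wrote' relationship relates an Author entity to a Book entity."""
    author.books.append(book)

homer = Author("Homer")
wrote(homer, Book("Iliad", -750))
wrote(homer, Book("Odyssey", -720))
print([b.title for b in homer.books])  # ['Iliad', 'Odyssey']
```

Each dataclass corresponds to an entity box in an ER diagram, each field to an attribute, and the `wrote` function to a labeled relationship line with cardinality one-to-many.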
A domain model may also include a number of conceptual views, where each view is pertinent to a particular subject area of the domain or to a particular subset of the domain model which is of interest to a stakeholder of the domain model. Like entity–relationship models, domain models can be used to model concepts or to model real-world objects and events. See also Concept Concept mapping Conceptual framework Conceptual model (computer science) Conceptual schema Conceptual system Digital twin Information model International Conference on Conceptual Modeling Interpretation (logic) Isolated system Ontology (computer science) Paradigm Physical model Process of concept formation Scientific modeling Simulation Theory References Further reading J. Parsons, L. Cole (2005), "What do the pictures mean? Guidelines for experimental evaluation of representation fidelity in diagrammatical conceptual modeling techniques", Data & Knowledge Engineering 55: 327–342. A. Gemino, Y. Wand (2005), "Complexity and clarity in conceptual modeling: Comparison of mandatory and optional properties", Data & Knowledge Engineering 55: 301–326. D. Batra (2005), "Conceptual Data Modeling Patterns", Journal of Database Management 16: 84–106. Papadimitriou, Fivos (2010), "Conceptual Modelling of Landscape Complexity", Landscape Research 35 (5): 563–570. External links Models article in the Internet Encyclopedia of Philosophy
Potentiality and actuality
In philosophy, potentiality and actuality are a pair of closely connected principles which Aristotle used to analyze motion, causality, ethics, and physiology in his Physics, Metaphysics, Nicomachean Ethics, and De Anima. The concept of potentiality, in this context, generally refers to any "possibility" that a thing can be said to have. Aristotle did not consider all possibilities the same, and emphasized the importance of those that become real of their own accord when conditions are right and nothing stops them. Actuality, in contrast to potentiality, is the motion, change or activity that represents an exercise or fulfillment of a possibility, when a possibility becomes real in the fullest sense. Both these concepts therefore reflect Aristotle's belief that events in nature are not all natural in a true sense. As he saw it, many things happen accidentally, and therefore not according to the natural purposes of things. These concepts, in modified forms, remained very important into the Middle Ages, influencing the development of medieval theology in several ways. In modern times the dichotomy has gradually lost importance, as understandings of nature and deity have changed. However, the terminology has also been adapted to new uses, as is most obvious in words like energy and dynamic. These were words first used in modern physics by the German scientist and philosopher Gottfried Wilhelm Leibniz. Aristotle's concept of entelechy also retains influence on recent concepts of biological "entelechy". Potentiality "Potentiality" and "potency" are translations of the Ancient Greek word dunamis (δύναμις). They refer especially to the way the word is used by Aristotle, as a concept contrasting with "actuality". The Latin translation of dunamis is potentia, which is the root of the English word "potential"; it is also sometimes used in English-language philosophical texts.
In early modern philosophy, English authors like Hobbes and Locke used the English word power as their translation of Latin potentia. Dunamis is an ordinary Greek word for possibility or capability. Depending on context, it could be translated 'potency', 'potential', 'capacity', 'ability', 'power', 'capability', 'strength', 'possibility', or 'force', and it is the root of the modern English words dynamic, dynamite, and dynamo. In his philosophy, Aristotle distinguished two meanings of the word dunamis. According to his understanding of nature there was both a weak sense of potential, meaning simply that something "might chance to happen or not to happen", and a stronger sense, to indicate how something could be done well. For example, "sometimes we say that those who can merely take a walk, or speak, without doing it as well as they intended, cannot speak or walk." This stronger sense is mainly said of the potentials of living things, although it is also sometimes used for things like musical instruments. Throughout his works, Aristotle clearly distinguishes things that are stable or persistent, with their own strong natural tendency to a specific type of change, from things that appear to occur by chance. He treats these as having a different and more real existence. "Natures which persist" are said by him to be one of the causes of all things, while natures that do not persist "might often be slandered as not being at all by one who fixes his thinking sternly upon it as upon a criminal." The potencies which persist in a particular material are one way of describing "the nature itself" of that material, an innate source of motion and rest within that material. In terms of Aristotle's theory of four causes, a material's non-accidental potential is the material cause of the things that can come to be from that material, and one part of how we can understand the substance (ousia, sometimes translated as "thinghood") of any separate thing.
(As emphasized by Aristotle, this requires his distinction between accidental causes and natural causes.) According to Aristotle, when we refer to the nature of a thing, we are referring to the form or shape of a thing, which was already present as a potential, an innate tendency to change, in that material before it achieved that form. When things are most "fully at work" we can see more fully what kind of thing they really are. Actuality Actuality is often used to translate both energeia (ἐνέργεια) and entelecheia (ἐντελέχεια) (sometimes rendered in English as entelechy). Actuality comes from Latin actualitas and is a traditional translation, but its normal meaning in Latin is 'anything which is currently happening.' The two words energeia and entelecheia were coined by Aristotle, and he stated that their meanings were intended to converge. In practice, most commentators and translators consider the two words to be interchangeable. They both refer to something being in its own type of action or at work, as all things are when they are real in the fullest sense, and not just potentially real. For example, "to be a rock is to strain to be at the center of the universe, and thus to be in motion unless constrained otherwise." Energeia is a word based upon ergon, meaning 'work'. It is the source of the modern word energy, but the term has evolved so much over the course of the history of science that reference to the modern term is not very helpful in understanding the original as used by Aristotle. It is difficult to translate his use of energeia into English with consistency. Joe Sachs renders it with the phrase "being-at-work" and says that "we might construct the word is-at-work-ness from Anglo-Saxon roots to translate energeia into English". Aristotle says the word can be made clear by looking at examples rather than trying to find a definition. Two examples of energeia in Aristotle's works are pleasure and happiness (eudaimonia). Pleasure is an energeia of the human body and mind whereas happiness is more simply the energeia of a human being a human.
Kinesis, translated as movement, motion, or in some contexts change, is also explained by Aristotle as a particular type of energeia. See below. Entelechy Entelechy, in Greek entelecheia, was coined by Aristotle and transliterated in Latin as entelechia. According to Sachs: Aristotle invents the word by combining enteles ('complete, full-grown') with echein (= hexis, to be a certain way by the continuing effort of holding on in that condition), while at the same time punning on endelecheia ('persistence') by inserting telos ('completion'). This is a three-ring circus of a word, at the heart of everything in Aristotle's thinking, including the definition of motion. Sachs therefore proposed a complex neologism of his own, "being-at-work-staying-the-same." Another translation in recent years is "being-at-an-end" (which Sachs has also used). Entelecheia, as can be seen by its derivation, is a kind of completeness, whereas "the end and completion of any genuine being is its being-at-work". The entelecheia is a continuous being-at-work when something is doing its complete "work". For this reason, the meanings of the two words converge, and they both depend upon the idea that every thing's "thinghood" is a kind of work, or in other words a specific way of being in motion. All things that exist now, and not just potentially, are beings-at-work, and all of them have a tendency towards being-at-work in a particular way that would be their proper and "complete" way. Sachs explains the convergence of energeia and entelecheia as follows, and uses the word actuality to describe the overlap between them: Just as energeia extends to entelecheia because it is the activity which makes a thing what it is, entelecheia extends to energeia because it is the end or perfection which has being only in, through, and during activity. Motion Aristotle discusses motion in his Physics quite differently from modern science. Aristotle's definition of motion is closely connected to his actuality-potentiality distinction. Taken literally, Aristotle defines motion as the actuality of a "potentiality as such".
What Aristotle meant, however, is the subject of several different interpretations. A major difficulty comes from the fact that the terms actuality and potentiality, linked in this definition, are normally understood within Aristotle as opposed to each other. On the other hand, the "as such" is important and is explained at length by Aristotle, giving examples of "potentiality as such". For example, the motion of building is the actuality of the potentiality of the building materials as building materials, as opposed to anything else they might become, and this potential in the unbuilt materials is referred to by Aristotle as "the buildable". So the motion of building is the actualization of "the buildable" and not the actualization of a house as such, nor the actualization of any other possibility which the building materials might have had. In an influential 1969 paper, Aryeh Kosman divided up previous attempts to explain Aristotle's definition into two types, criticised them, and then gave his own third interpretation. While this has not become a consensus, it has been described as having become "orthodox". This and similar more recent publications are the basis of the following summary. 1. The "process" interpretation. This approach is associated with W. D. Ross, and it was also the interpretation of Averroes and Maimonides. This interpretation is, to use the words of Ross, that "it is the passage to actuality that is kinesis", as opposed to any potentiality being an actuality. The argument of Ross for this interpretation requires him to assert that Aristotle actually used his own word wrongly, or inconsistently, only within his definition, making it mean "actualization", which is in conflict with Aristotle's normal use of words. This explanation also cannot account for the "as such" in Aristotle's definition. 2.
The "product" interpretation. This interpretation is associated with Thomas Aquinas; by this explanation "the apparent contradiction between potentiality and actuality in Aristotle's definition of motion" is resolved "by arguing that in every motion actuality and potentiality are mixed or blended." Motion is therefore "the actuality of any potentiality insofar as it is still a potentiality." Or in other words: The Thomistic blend of actuality and potentiality has the characteristic that, to the extent that it is actual it is not potential and to the extent that it is potential it is not actual; the hotter the water is, the less is it potentially hot, and the cooler it is, the less is it actually, the more potentially, hot. As with the first interpretation, however, there is an objection: One implication of this interpretation is that whatever happens to be the case right now is an entelecheia, as though something as intrinsically unstable as the instantaneous position of an arrow in flight deserved to be described by the word that everywhere else Aristotle reserves for complex organized states that persist, that hold out against internal and external causes that try to destroy them. In a more recent paper on this subject, Kosman associates the view of Aquinas with those of his own critics, David Charles, Jonathan Beere, and Robert Heineman. 3. The interpretation of Kosman, Coope, Sachs and others. Sachs, amongst other authors (such as Aryeh Kosman and Ursula Coope), proposes that the solution to problems interpreting Aristotle's definition must be found in the distinction Aristotle makes between two different types of potentiality, with only one of those corresponding to the "potentiality as such" appearing in the definition of motion. He writes: The man with sight, but with his eyes closed, differs from the blind man, although neither is seeing. The first man has the capacity to see, which the second man lacks.
There are then potentialities as well as actualities in the world. But when the first man opens his eyes, has he lost the capacity to see? Obviously not; while he is seeing, his capacity to see is no longer merely a potentiality, but is a potentiality which has been put to work. The potentiality to see exists sometimes as active or at-work, and sometimes as inactive or latent. Coming to motion, Sachs gives the example of a man walking across the room and explains as follows: "Once he has reached the other side of the room, his potentiality to be there has been actualized in Ross' sense of the term". This is a type of actuality. However, it is not a motion, and not relevant to the definition of motion. While a man is walking, his potentiality to be on the other side of the room is actual just as a potentiality, or in other words the potential as such is an actuality. "The actuality of the potentiality to be on the other side of the room, as just that potentiality, is neither more nor less than the walking across the room." Sachs, in his commentary on Aristotle's Physics Book III, gives the following results from his understanding of Aristotle's definition of motion: The genus of which motion is a species is being-at-work-staying-itself, of which the only other species is thinghood. The being-at-work-staying-itself of a potency, as material, is thinghood. The being-at-work-staying-the-same of a potency as a potency is motion. The importance of actuality in Aristotle's philosophy The actuality-potentiality distinction in Aristotle is a key element linked to everything in his physics and metaphysics. Aristotle describes potentiality and actuality, or potency and action, as one of several distinctions between things that exist or do not exist. In a sense, a thing that exists potentially does not exist; but the potential does exist. And this type of distinction is expressed for several different types of being within Aristotle's categories of being.
For example, from Aristotle's Metaphysics, 1017a: We speak of an entity being a "seeing" thing whether it is currently seeing or just able to see. We speak of someone having understanding, whether they are using that understanding or not. We speak of corn existing in a field even when it is not yet ripe. People sometimes speak of a figure being already present in a rock which could be sculpted to represent that figure. Within the works of Aristotle, the terms energeia and entelecheia, often translated as actuality, differ from what is merely actual because they specifically presuppose that all things have a proper kind of activity or work which, if achieved, would be their proper end. Greek for end in this sense is telos, a component word in entelecheia (a work that is the proper end of a thing) and also teleology. This is an aspect of Aristotle's theory of four causes and specifically of formal cause and final cause. In essence this means that Aristotle did not see things as matter in motion only, but also proposed that all things have their own aims or ends. In other words, for Aristotle (unlike modern science), there is a distinction between things with a natural cause in the strongest sense, and things that truly happen by accident. He also distinguishes non-rational from rational potentialities (e.g. the capacity to heat and the capacity to play the flute, respectively), pointing out that the latter require desire or deliberate choice for their actualization. Because of this style of reasoning, Aristotle is often referred to as having a teleology, and sometimes as having a theory of forms. While actuality is linked by Aristotle to his concept of a formal cause, potentiality (or potency) on the other hand is linked by Aristotle to his concepts of hylomorphic matter and material cause. Aristotle wrote for example that "matter exists potentially, because it may attain to the form; but when it exists actually, it is then in the form."
Teleology is a crucial concept throughout Aristotle's philosophy. This means that as well as its central role in his physics and metaphysics, the potentiality-actuality distinction has a significant influence on other areas of Aristotle's thought, such as his ethics, biology and psychology. The active intellect The active intellect was a concept Aristotle described that requires an understanding of the actuality-potentiality dichotomy. Aristotle described this in his De Anima (Book 3, Chapter 5, 430a10-25) and covered similar ground in his Metaphysics (Book 12, Chapters 7-10). The following is from the De Anima, translated by Joe Sachs, with some parenthetic notes about the Greek. The passage tries to explain "how the human intellect passes from its original state, in which it does not think, to a subsequent state, in which it does." He inferred that the potentiality/actuality distinction must also exist in the soul itself: ...since in nature one thing is the material [hulē] for each kind [genos] (this is what is in potency all the particular things of that kind) but it is something else that is the causal and productive thing by which all of them are formed, as is the case with an art in relation to its material, it is necessary in the soul [psuchē] too that these distinct aspects be present; the one sort is intellect [nous] by becoming all things, the other sort by forming all things, in the way an active condition [hexis] like light too makes the colors that are in potency be at work as colors. This sort of intellect is separate, as well as being without attributes and unmixed, since it is by its thinghood a being-at-work, for what acts is always distinguished in stature above what is acted upon, as a governing source is above the material it works on. Knowledge [epistēmē], in its being-at-work, is the same as the thing it knows, and while knowledge in potency comes first in time in any one knower, in the whole of things it does not take precedence even in time.
This does not mean that at one time it thinks but at another time it does not think, but when separated it is just exactly what it is, and this alone is deathless and everlasting (though we have no memory, because this sort of intellect is not acted upon, while the sort that is acted upon is destructible), and without this nothing thinks. This has been referred to as one of "the most intensely studied sentences in the history of philosophy." In the Metaphysics, Aristotle wrote at more length on a similar subject and is often understood to have equated the active intellect with being the "unmoved mover" and God. Nevertheless, as Davidson remarks: Just what Aristotle meant by potential intellect and active intellect – terms not even explicit in the De Anima and at best implied – and just how he understood the interaction between them remains moot to this day. Students of the history of philosophy continue to debate Aristotle's intent, particularly the question whether he considered the active intellect to be an aspect of the human soul or an entity existing independently of man. Post-Aristotelian usage New meanings of energeia or energy Already in Aristotle's own works, the concept of a distinction between energeia and dunamis was used in many ways, for example to describe the way striking metaphors work, or human happiness. Polybius, about 150 BC, in his work the Histories, uses Aristotle's word energeia in both an Aristotelian way and also to describe the "clarity and vividness" of things. Diodorus Siculus in 60-30 BC used the term in a very similar way to Polybius. However, Diodorus also uses the term to denote qualities unique to individuals: he used it in ways that could be translated as 'vigor' or 'energy' (in a more modern sense); for a society, 'practice' or 'custom'; for a thing, 'operation' or 'working', like vigor in action.
Platonism and neoplatonism The notion of potency and act is already implicitly found in Plato, in his cosmological presentation of becoming and forces, linked to the ordering intellect, mainly in the description of the Demiurge and the "Receptacle" in his Timaeus. It has also been associated with the dyad of Plato's unwritten doctrines, and is involved in the question of being and non-being since the pre-Socratics, as in Heraclitus's mobilism and Parmenides' immobilism. The mythological concept of primordial Chaos is also classically associated with a disordered prime matter (see also prima materia), which, being passive and full of potentialities, would be ordered into actual forms, as can be seen in Neoplatonism, especially in Plutarch, Plotinus, and among the Church Fathers, and in the subsequent medieval and Renaissance philosophy, as in Ramon Llull's Book of Chaos and John Milton's Paradise Lost. Plotinus was a late classical pagan philosopher and theologian whose monotheistic re-workings of Plato and Aristotle were influential amongst early Christian theologians. In his Enneads he sought to reconcile ideas of Aristotle and Plato together with a form of monotheism that used three fundamental metaphysical principles, which were conceived of in terms consistent with Aristotle's dunamis/energeia dichotomy, and one interpretation of his concept of the Active Intellect (discussed above): The Monad or "the One", sometimes also described as "the Good". This is the dunamis or possibility of existence. The Intellect, or Intelligence, or, to use the Greek term, Nous, which is described as God, or a Demiurge. It thinks its own contents, which are thoughts, equated to the Platonic ideas or forms. The thinking of this Intellect is the highest activity of life. The actualization of this thinking is the being of the forms. This Intellect is the first principle or foundation of existence.
The One is prior to it, but not in the sense that a cause is prior to an effect; instead Intellect is called an emanation of the One. The One is the possibility of this foundation of existence. Soul or, to use the Greek term, Psyche. The soul is also an energeia: it acts upon or actualizes its own thoughts and creates "a separate, material cosmos that is the living image of the spiritual or noetic Cosmos contained as a unified thought within the Intelligence." This was based largely upon Plotinus' reading of Plato, but also incorporated many Aristotelian concepts, including the unmoved mover as energeia. New Testament usage Other than the incorporation of Neoplatonic concepts into Christendom by early Christian theologians such as St. Augustine, the concepts of dunamis and energeia are frequently used in the original Greek New Testament. Dunamis is used 119 times and energeia is used 161 times, usually with the meanings 'power/ability' and 'act/work' respectively. Essence-energies debate in medieval Christian theology In Eastern Orthodox Christianity, St Gregory Palamas wrote about the "energies" (actualities; singular energeia in Greek) of God in contrast to God's "essence". These are two distinct types of existence, with God's energy being the type of existence which people can perceive, while the essence of God is outside of normal existence or non-existence or human understanding, i.e. transcendental, in that it is not caused or created by anything else. Palamas gave this explanation as part of his defense of the Eastern Orthodox ascetic practice of hesychasm. Palamism became a standard part of Orthodox dogma after 1351. In contrast, the position of Western Medieval (or Catholic) Christianity can be found for example in the philosophy of Thomas Aquinas, who relied on Aristotle's concept of entelechy when he defined God as actus purus, pure act, actuality unmixed with potentiality.
The existence of a truly distinct essence of God which is not actuality is not generally accepted in Catholic theology. Influence on modal logic The notion of possibility was greatly analyzed by medieval and modern philosophers. Aristotle's logical work in this area is considered by some to be an anticipation of modal logic and its treatment of potentiality and time. Indeed, many philosophical interpretations of possibility are related to a famous passage in Aristotle's On Interpretation concerning the truth of the statement: "There will be a sea battle tomorrow." Contemporary philosophy regards possibility, as studied by modal metaphysics, to be an aspect of modal logic. Modal logic as a named subject owes much to the writings of the Scholastics, in particular William of Ockham and John Duns Scotus, who reasoned informally in a modal manner, mainly to analyze statements about essence and accident. Influence on early modern physics Aristotle's metaphysics, his account of nature and causality, was for the most part rejected by the early modern philosophers. Francis Bacon, in his Novum Organum, in one explanation of the case for rejecting the concept of a formal cause or "nature" for each type of thing, argued for example that philosophers must still look for formal causes, but only in the sense of "simple natures" such as colour and weight, which exist in many gradations and modes in very different types of individual bodies. In the works of Thomas Hobbes, the traditional Aristotelian terms "power and act" are discussed, but he equates them simply to "cause and effect". There was an adaptation of at least one aspect of Aristotle's potentiality and actuality distinction which has become part of modern physics, although, as per Bacon's approach, it is a generalized form of energy, not one connected to specific forms for specific things.
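The modal treatment of possibility mentioned above (the sea-battle statement) can be sketched with a toy possible-worlds model. The world names and the proposition below are invented purely for illustration; this is the standard textbook reading of the diamond and box operators, not anything specific to Aristotle's text:

```python
# Each possible world assigns truth values to atomic propositions.
worlds = {
    "w1": {"sea_battle": True},
    "w2": {"sea_battle": False},
    "w3": {"sea_battle": True},
}

def possibly(prop):
    """Diamond: true if the proposition holds in at least one world."""
    return any(w[prop] for w in worlds.values())

def necessarily(prop):
    """Box: true only if the proposition holds in every world."""
    return all(w[prop] for w in worlds.values())

print(possibly("sea_battle"))     # True: a potentiality
print(necessarily("sea_battle"))  # False: contingent, not necessary
```

"There will be a sea battle tomorrow" comes out possible but not necessary here, which is the contingency that the famous passage turns on.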
The definition of energy in modern physics as the product of mass and the square of velocity was derived by Leibniz, as a correction of Descartes, based upon Galileo's investigation of falling bodies. He preferred to refer to it as a 'living force' (Latin vis viva), but what he defined is today called kinetic energy, and was seen by Leibniz as a modification of Aristotle's energeia and his concept of the potential for movement which is in things. Instead of each type of physical thing having its own specific tendency to a way of moving or changing, as in Aristotle, Leibniz said that instead force, power, or motion itself could be transferred between things of different types, in such a way that there is a general conservation of this energy. In other words, Leibniz's modern version of entelechy or energy obeys its own laws of nature, whereas different types of things do not have their own separate laws of nature. Leibniz wrote: ...the entelechy of Aristotle, which has made so much noise, is nothing else but force or activity; that is, a state from which action naturally flows if nothing hinders it. But matter, primary and pure, taken without the souls or lives which are united to it, is purely passive; properly speaking also it is not a substance, but something incomplete. Leibniz's study of the "entelechy" now known as energy was a part of what he called his new science of "dynamics", based on the Greek word dunamis and his understanding that he was making a modern version of Aristotle's old dichotomy. He also referred to it as the "new science of power and action". And it is from him that the modern distinction between statics and dynamics in physics stems. The emphasis on dunamis in the name of this new science comes from the importance of his discovery of potential energy, which is not active but which conserves energy nevertheless.
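Leibniz's derivation from Galileo's falling bodies can be checked numerically: a body falling freely from height h reaches a speed with v² = 2gh, so his quantity mv² (vis viva) is exactly twice the work mgh done by gravity, while the modern ½mv² matches it exactly. The figures below are illustrative only:

```python
# A body of mass m falling freely from height h (Galileo: v^2 = 2*g*h).
m, h, g = 2.0, 10.0, 9.81

v_squared = 2 * g * h
vis_viva = m * v_squared              # Leibniz's mv^2 ("living force")
kinetic_energy = 0.5 * m * v_squared  # the modern 1/2 mv^2
work_done = m * g * h                 # potential energy released in the fall

# Leibniz's conserved quantity differs from the modern one
# only by the constant factor 1/2:
assert abs(kinetic_energy - work_done) < 1e-9
assert abs(vis_viva - 2 * work_done) < 1e-9
```

The constant factor does not affect conservation, which is why the text can say that what Leibniz defined "is today called kinetic energy".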
"As 'a science of power and action', dynamics arises when Leibniz proposes an adequate architectonic of laws for constrained, as well as unconstrained, motions." For Leibniz, like Aristotle, this law of nature concerning entelechies was also understood as a metaphysical law, important not only for physics, but also for understanding life and the soul. A soul, or spirit, according to Leibniz, can be understood as a type of entelechy (or living monad) which has distinct perceptions and memory. Influence on modern physics Ideas about potentiality have been related to quantum mechanics, where a wave function in a superposition of potential values (before measurement) has the potential to collapse into one of those values, under the Copenhagen interpretation of quantum mechanics. In particular, the German physicist Werner Heisenberg called this "a quantitative version of the old concept of 'potentia' in Aristotelian philosophy". Entelechy in modern philosophy and biology As discussed above, terms derived from dunamis and energeia have become parts of modern scientific vocabulary with a very different meaning from Aristotle's. The original meanings are not used by modern philosophers unless they are commenting on classical or medieval philosophy. In contrast, entelecheia, in the form of entelechy, is a word used much less in technical senses in recent times. As mentioned above, the concept had occupied a central position in the metaphysics of Leibniz, and is closely related to his monad in the sense that each sentient entity contains its own entire universe within it. But Leibniz' use of this concept influenced more than just the development of the vocabulary of modern physics. Leibniz was also one of the main inspirations for the important movement in philosophy known as German idealism, and within this movement and schools influenced by it entelechy may denote a force propelling one to self-fulfillment.
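The quantum-mechanical reading of potentiality mentioned above can be given a toy numerical illustration: a superposition assigns each potential outcome a probability (the Born rule, probability = |amplitude|²), and a measurement actualizes exactly one of them. The two-state system and its amplitudes below are invented for illustration:

```python
import random

# Toy two-outcome superposition: real amplitudes for "up" and "down".
amplitudes = {"up": 0.6, "down": 0.8}

# Born rule: each potential outcome has probability |amplitude|^2.
probabilities = {k: a ** 2 for k, a in amplitudes.items()}

# The state is normalized: the potentialities exhaust the possibilities.
assert abs(sum(probabilities.values()) - 1.0) < 1e-9

# "Measurement": one potentiality becomes actual, at random,
# weighted by the Born-rule probabilities.
outcome = random.choices(list(probabilities),
                         weights=list(probabilities.values()))[0]
print(outcome)
```

Before the measurement both outcomes exist only as weighted potentialities, which is the sense in which Heisenberg invoked the Aristotelian 'potentia'.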
In the biological vitalism of Hans Driesch, living things develop by entelechy, a common purposive and organising field. Leading vitalists like Driesch argued that many of the basic problems of biology cannot be solved by a philosophy in which the organism is simply considered a machine. Vitalism and its concepts like entelechy have since been discarded as without value for scientific practice by the overwhelming majority of professional biologists. Important to the philosophy of Giorgio Agamben is potentiality and the notion that tied in every potentiality is the potentiality to not do something as well, and that actuality is actually the not not doing of a potentiality; Agamben notes that thought is unique in that it is the ability to reflect on this potentiality in itself rather than in a relation to an object, making the mind a sort of tabula rasa. However, in philosophy aspects and applications of the concept of entelechy have been explored by scientifically interested philosophers and philosophically inclined scientists alike. One example was the American critic and philosopher Kenneth Burke (1897–1993), whose concept of the "terministic screen" illustrates his thought on the subject. Prof. Denis Noble argues that, just as teleological causation is necessary to the social sciences, a specific teleological causation in biology, expressing functional purpose, should be restored and that it is already implicit in neo-Darwinism (e.g. "selfish gene"). Teleological analysis proves parsimonious when the level of analysis is appropriate to the complexity of the required 'level' of explanation (e.g. whole body or organ rather than cell mechanism).

See also
Actual infinity
Actus purus
Alexander of Aphrodisias
Essence–Energies distinction
First cause
Henosis
Hylomorphism
Hypokeimenon
Hypostasis (philosophy and religion)
Sumbebekos
Theosis
Unmoved movers

Bibliography
Old translations of Aristotle: this 1933 translation is reproduced online at the Perseus Project.
Relevance
Relevance is the concept of one topic being connected to another topic in a way that makes it useful to consider the second topic when considering the first. The concept of relevance is studied in many different fields, including cognitive sciences, logic, and library and information science. Most fundamentally, however, it is studied in epistemology (the theory of knowledge). Different theories of knowledge have different implications for what is considered relevant, and these fundamental views have implications for all other fields as well. Definition "Something (A) is relevant to a task (T) if it increases the likelihood of accomplishing the goal (G), which is implied by T." (Hjørland & Sejer Christensen, 2002). A thing might be relevant; a document or a piece of information may be relevant. The basic understanding of relevance does not depend on whether we speak of "things" or "information". For example, the Gandhian principles are of great relevance in today's world. Epistemology If you believe that schizophrenia is caused by bad communication between mother and child, then family interaction studies become relevant. If, on the other hand, you subscribe to a genetic theory of schizophrenia, then the study of genes becomes relevant. If you subscribe to the epistemology of empiricism, then only intersubjectively controlled observations are relevant. If, on the other hand, you subscribe to feminist epistemology, then the sex of the observer becomes relevant. Epistemology is not just one domain among others. Epistemological views are always at play in any domain. Those views determine or influence what is regarded as relevant. Logic In formal reasoning, relevance has proved an important but elusive concept. It is important because the solution of any problem requires the prior identification of the relevant elements from which a solution can be constructed.
It is elusive because the meaning of relevance appears to be difficult or impossible to capture within conventional logical systems. The obvious suggestion that q is relevant to p if q is implied by p breaks down because, under standard definitions of material implication, a false proposition implies all other propositions. However, though 'iron is a metal' may be implied by 'cats lay eggs', it does not seem relevant to it in the way that 'cats are mammals' and 'mammals give birth to living young' are relevant to each other. If one states "I love ice cream," and another person responds "I have a friend named Brad Cook," then these statements are not relevant. However, if one states "I love ice cream," and another person responds "I have a friend named Brad Cook who also likes ice cream," this statement now becomes relevant because it relates to the first person's idea. Another proposal defines relevance, or, more accurately, irrelevance, information-theoretically. It is easiest to state in terms of variables, which might reflect the values of measurable hypotheses or observation statements. The conditional entropy of an observation variable e conditioned on a variable h characterizing alternative hypotheses provides a measure of the irrelevance of the observation variable e to the set of competing hypotheses characterized by h. It is most useful when combined with measures of the information content of the variable e in terms of its entropy. One can then subtract the content of e that is irrelevant to h (given by its conditional entropy conditioned on h) from the total information content of e (given by its entropy) to calculate the amount of information the variable e contains about the set of hypotheses characterized by h. Relevance (via the concept of irrelevance) and information content then characterize the observation variable and can be used to measure its sensitivity and specificity (respectively) as a test for alternative hypotheses. 
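The information-theoretic proposal above can be sketched numerically. The following is a minimal Python sketch, assuming a small, hypothetical joint distribution over a two-valued hypothesis variable h and a two-valued observation variable e (the numbers are illustrative, not drawn from the text):

```python
from collections import defaultdict
from math import log2

# Hypothetical joint distribution P(h, e) over a hypothesis variable h
# and an observation variable e (values are illustrative only).
joint = {
    ("h1", "pos"): 0.40, ("h1", "neg"): 0.10,
    ("h2", "pos"): 0.10, ("h2", "neg"): 0.40,
}

def entropy(dist):
    """Shannon entropy, in bits, of a {value: probability} distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginal distributions of e and h.
p_e, p_h = defaultdict(float), defaultdict(float)
for (h, e), p in joint.items():
    p_h[h] += p
    p_e[e] += p

# Conditional entropy H(e | h): the part of e that is irrelevant to h.
h_e_given_h = 0.0
for (h, e), p in joint.items():
    if p > 0:
        h_e_given_h -= p * log2(p / p_h[h])

h_e = entropy(p_e)                 # total information content of e
relevant_info = h_e - h_e_given_h  # information e carries about h

print(round(h_e, 3), round(h_e_given_h, 3), round(relevant_info, 3))
```

The quantity H(e) − H(e|h) is the mutual information between e and h; it is zero exactly when e is irrelevant to the set of hypotheses in the sense described above.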
More recently a number of theorists have sought to account for relevance in terms of "possible world logics" in intensional logic. Roughly, the idea is that necessary truths are true in all possible worlds, contradictions (logical falsehoods) are true in no possible worlds, and contingent propositions can be ordered in terms of the number of possible worlds in which they are true. Relevance is argued to depend upon the "remoteness relationship" between an actual world in which relevance is being evaluated and the set of possible worlds within which it is true. Application Cognitive science and pragmatics In 1986, Dan Sperber and Deirdre Wilson drew attention to the central importance of relevance decisions in reasoning and communication. They proposed an account of the process of inferring relevant information from any given utterance. To do this work, they used what they called the "Principle of Relevance": namely, the position that any utterance addressed to someone automatically conveys the presumption of its own optimal relevance. The central idea of Sperber and Wilson's theory is that all utterances are encountered in some context, and the correct interpretation of a particular utterance is the one that allows most new implications to be made in that context on the basis of the least amount of information necessary to convey it. For Sperber and Wilson, relevance is conceived as relative or subjective, as it depends upon the state of knowledge of a hearer when they encounter an utterance. Sperber and Wilson stress that this theory is not intended to account for every intuitive application of the English word "relevance". Relevance, as a technical term, is restricted to relationships between utterances and interpretations, and so the theory cannot account for intuitions such as the one that relevance relationships obtain in problems involving physical objects. If a plumber needs to fix a leaky faucet, for example, some objects and tools are relevant (e.g. 
a wrench) and others are not (e.g. a waffle iron). Moreover, the latter seems to be irrelevant in a manner which does not depend upon the plumber's knowledge, or the utterances used to describe the problem. A theory of relevance that seems to be more readily applicable to such instances of physical problem solving has been suggested by Gorayska and Lindsay in a series of articles published during the 1990s. The key feature of their theory is the idea that relevance is goal-dependent. An item (e.g., an utterance or object) is relevant to a goal if and only if it can be an essential element of some plan capable of achieving the desired goal. This theory embraces both propositional reasoning and the problem-solving activities of people such as plumbers, and defines relevance in such a way that what is relevant is determined by the real world (because what plans will work is a matter of empirical fact) rather than by the state of knowledge or belief of a particular problem solver. Economics The economist John Maynard Keynes saw the importance of defining relevance to the problem of calculating risk in economic decision-making. He suggested that the relevance of a piece of evidence, such as a true proposition, should be defined in terms of the changes it produces in estimates of the probability of future events. Specifically, Keynes proposed that a piece of new evidence e′ is irrelevant to a proposition p, given old evidence q, if and only if P(p | q ∧ e′) = P(p | q); otherwise, the new evidence is relevant. There are technical problems with this definition; for example, the relevance of a piece of evidence can be sensitive to the order in which other pieces of evidence are received. Law The meaning of "relevance" in U.S. law is reflected in Rule 401 of the Federal Rules of Evidence. That rule defines relevance as "having any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence." 
In other words, if a fact were to have no bearing on the truth or falsity of a conclusion, it would be legally irrelevant. Library and information science This field has considered when documents (or document representations) retrieved from databases are relevant or non-relevant. Given a conception of relevance, two measures have been applied: precision and recall. Recall = a / (a + c) × 100%, where a = number of retrieved, relevant documents and c = number of non-retrieved, relevant documents (sometimes termed "silence"). Recall is thus an expression of how exhaustive a search for documents is. Precision = a / (a + b) × 100%, where a = number of retrieved, relevant documents and b = number of retrieved, non-relevant documents (often termed "noise"). Precision is thus a measure of the amount of noise in document retrieval. Relevance itself has often been based in the literature on what is termed "the system's view" and "the user's view". Hjørland (2010) criticizes these two views and defends a "subject knowledge view of relevance". Politics During the 1960s, relevance became a fashionable buzzword, meaning roughly 'relevance to social concerns', such as racial equality, poverty, social justice, world hunger, world economic development, and so on. The implication was that some subjects, e.g., the study of medieval poetry and the practice of corporate law, were not worthwhile because they did not address pressing social issues. See also Source criticism Description Distraction Information-action ratio Information overload Intention Intuitionistic logic Kripke semantics Relevance theory References Gorayska B. & R. O. Lindsay (1993). The Roots of Relevance. Journal of Pragmatics 19, 301–323. Los Alamitos: IEEE Computer Society Press. Hjørland, Birger (2010). The foundation of the concept of relevance. Journal of the American Society for Information Science and Technology, 61(2), 217–237. Keynes, J. M. (1921). Treatise on Probability. London: Macmillan. Lindsay, R. 
& Gorayska, B. (2002). Relevance, Goals and Cognitive Technology. International Journal of Cognitive Technology, 1(2), 187–232. Sperber, D. & D. Wilson (1986/1995). Relevance: Communication and Cognition. 2nd edition. Oxford: Blackwell. Sperber, D. & D. Wilson (1987). Précis of Relevance: Communication and Cognition. Behavioral and Brain Sciences, 10, 697–754. Sperber, D. & D. Wilson (2004). Relevance Theory. In Horn, L.R. & Ward, G. (eds.) The Handbook of Pragmatics. Oxford: Blackwell, 607–632. http://www.dan.sperber.fr/?p=93 Zhang, X. H. (1993). A Goal-Based Relevance Model and its Application to Intelligent Systems. Ph.D. Thesis, Oxford Brookes University, Department of Mathematics and Computer Science, October 1993. External links Malcolm Gladwell – Blink – full show: TVOntario interview regarding "snap judgements" and Blink Information science Library science terminology Pragmatics Logic Descriptive technique
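The recall and precision measures defined in the library and information science section above can be illustrated with a short Python sketch, using hypothetical retrieval counts:

```python
# Hypothetical document-retrieval outcome:
# a = retrieved and relevant, b = retrieved but non-relevant ("noise"),
# c = relevant but not retrieved ("silence").
a, b, c = 30, 10, 20

recall = a / (a + c) * 100     # how exhaustive the search is
precision = a / (a + b) * 100  # how free of noise the result set is

print(f"recall = {recall:.1f}%, precision = {precision:.1f}%")
```

With these counts the search finds 60% of the relevant documents, and 75% of what it returns is relevant, illustrating the usual trade-off between exhaustiveness and noise.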
Genetic epistemology
Genetic epistemology or 'developmental theory of knowledge' is a study of the origins (genesis) of knowledge (epistemology) established by the Swiss psychologist Jean Piaget. This theory opposes traditional epistemology and unites constructivism and structuralism. Piaget took epistemology as the starting point and adopted a genetic, i.e. developmental, method, arguing that all of the child's knowledge is generated through interaction with the environment. Aims The goal of genetic epistemology is to link knowledge to the model of its construction – i.e., the context in which knowledge is gained affects its perception, quality, and degree of retention. Further, genetic epistemology seeks to explain the process of cognitive development (from birth) in four primary stages: sensorimotor (birth to age 2), pre-operational (2–7), concrete operational (7–11), and formal operational (11 years onward). As an example, consider that for children in the sensorimotor stage, teachers should try to provide a rich and stimulating environment with ample objects to play with. For children in the concrete operational stage, learning activities should involve problems of classification, ordering, location, and conservation using concrete objects. The main focus is on the younger years of development. Assimilation occurs when a learner incorporates the perception of a new event or object into an existing scheme; it is usually discussed in the context of self-motivation. In accommodation, the learner adjusts existing schemes to fit the outcomes of new experiences. The highest form of development is equilibration, which encompasses both assimilation and accommodation as the learner changes how they think to get a better answer. Piaget believed that knowledge is a biological function that results from the actions of an individual through change. He also stated that knowledge consists of structures, and comes about by the adaptation of these structures to the environment. 
Types of knowledge Piaget proposes three types of knowledge: physical, logical-mathematical, and social knowledge. Physical knowledge refers to knowledge related to objects in the world, which can be acquired through perceptual properties. The acquisition of physical knowledge has been equated with learning in Piaget's theory (Gruber and Voneche, 1995). In other words, thought is fit directly to experience. Piaget also called his view constructivism, because he firmly believed that knowledge acquisition is a process of continuous self-construction. That is, knowledge is not out there, external to the child and waiting to be discovered. But neither is it wholly preformed within the child, ready to emerge as the child develops with the world surrounding her ... Piaget believed that children actively approach their environments and acquire knowledge through their actions. See also Constructivist epistemology Cognitive psychology Educational psychology Evolutionary epistemology General semantics Genetic structuralism Learning styles Learning theory Ontogeny recapitulates phylogeny Theory of cognitive development Notes References Developmental psychology Educational psychology History of psychology Social epistemology
Ordinary language philosophy
Ordinary language philosophy (OLP) is a philosophical methodology that sees traditional philosophical problems as rooted in misunderstandings philosophers develop by distorting or forgetting how words are ordinarily used to convey meaning in non-philosophical contexts. "Such 'philosophical' uses of language, on this view, create the very philosophical problems they are employed to solve." This approach typically involves eschewing philosophical "theories" in favor of close attention to the details of the use of everyday "ordinary" language. Its earliest forms are associated with the later work of Ludwig Wittgenstein and a number of mid-20th century philosophers who can be split into two main groups, neither of which could be described as an organized "school". In its earlier stages, contemporaries of Wittgenstein at Cambridge University such as Norman Malcolm, Alice Ambrose, Friedrich Waismann, Oets Kolk Bouwsma and Morris Lazerowitz started to develop ideas recognisable as ordinary language philosophy. These ideas were further elaborated from 1945 onwards through the work of some Oxford University philosophers led initially by Gilbert Ryle, then followed by J. L. Austin and Paul Grice. This Oxford group also included H. L. A. Hart, Geoffrey Warnock, J. O. Urmson and P. F. Strawson. The close association between ordinary language philosophy and these later thinkers has led to it sometimes being called "Oxford philosophy". The posthumous publication of Wittgenstein's Philosophical Investigations in 1953 further solidified the notion of ordinary language philosophy. Philosophers a generation after Austin who made use of the method of ordinary language philosophy include Antony Flew, Stanley Cavell, John Searle and Oswald Hanfling. 
Today, Alice Crary, Nancy Bauer, Sandra Laugier, as well as literary theorists Toril Moi, Rita Felski, and Shoshana Felman have adopted the teachings of Cavell in particular, generating a resurgence of interest in ordinary language philosophy. Central ideas The later Wittgenstein held that the meanings of words reside in their ordinary uses and that this is why philosophers trip over words taken in abstraction. From this came the idea that philosophy had gotten into trouble by trying to use words outside of the context of their use in ordinary language. For example, "understanding" is what you mean when you say "I understand". "Knowledge" is what you mean when you say "I know". The point is that you already know what "understanding" or "knowledge" are, at least implicitly. Philosophers are ill-advised to construct new definitions of these terms, because this is necessarily a redefinition, and the argument may unravel into self-referential nonsense. Rather, philosophers must explore the definitions these terms already have, without forcing convenient redefinitions onto them. The controversy really begins when ordinary language philosophers apply the same leveling tendency to questions such as What is Truth? or What is Consciousness? Philosophers in this school would insist that we cannot assume that (for example) truth 'is' a 'thing' (in the same sense that tables and chairs are 'things') that the word 'truth' represents. Instead, we must look at the differing ways in which the words 'truth' and 'conscious' actually function in ordinary language. We may well discover, after investigation, that there is no single entity to which the word 'truth' corresponds, something Wittgenstein attempts to get across via his concept of a 'family resemblance' (cf. Philosophical Investigations). Therefore, ordinary language philosophers tend to be anti-essentialist. History Early analytic philosophy had a less positive view of ordinary language. 
Bertrand Russell tended to dismiss language as being of little philosophical significance, and ordinary language as just too confused to help solve metaphysical and epistemological problems. Gottlob Frege, the Vienna Circle (especially Rudolf Carnap), the young Wittgenstein, and W. V. O. Quine all attempted to improve upon it, in particular using the resources of modern logic. In his Tractatus Logico-Philosophicus Wittgenstein more or less agreed with Russell that language ought to be reformulated so as to be unambiguous, so as to accurately represent the world, so that we can better deal with philosophical questions. By contrast, Wittgenstein later described his task as bringing "words back from their metaphysical to their everyday use". The sea change brought on by his unpublished work in the 1930s centered largely on the idea that there is nothing wrong with ordinary language as it stands, and that many traditional philosophical problems are only illusions brought on by misunderstandings about language and related subjects. The former idea led to rejecting the approaches of earlier analytic philosophy—arguably, of any earlier philosophy—and the latter led to replacing them with careful attention to language in its normal use, in order to "dissolve" the appearance of philosophical problems, rather than attempt to solve them. At its inception, ordinary language philosophy (also called linguistic philosophy) was taken as either an extension of or as an alternative to analytic philosophy. Ordinary language analysis largely flourished and developed at Oxford University in the 1940s, under Austin and Ryle, and was quite widespread for a time before declining rapidly in popularity in the late 1960s and early 1970s. Despite this decline, Stanley Cavell and John Searle (both students of Austin) published seminal texts which draw significantly from the ordinary language tradition in 1969. 
Cavell more explicitly adopted the banner of ordinary language philosophy and inspired a generation of philosophers and literary theorists to reexamine the merits of this philosophical approach, all the while distancing himself from the limitations of traditional analytic philosophy. This caused a relatively recent resurgence of interest in the methodology, which, with some updates due particularly to the literature and teachings of Cavell, has also become a mainstay of what might be called postanalytic philosophy. Seeking to avoid the increasingly metaphysical and abstruse language found in mainstream analytic philosophy, posthumanism, and post-structuralism, a number of feminist philosophers have adopted the methods of ordinary language philosophy. Many of these philosophers were students or colleagues of Cavell. There are some affinities between contemporary ordinary language philosophy and philosophical pragmatism (or neopragmatism). The pragmatist philosopher F. C. S. Schiller might be seen as a forerunner to ordinary language philosophy, especially in his noted publication Riddles of the Sphinx. Seneca the Younger described the activities of other philosophers in ways that reflect some of the same concerns as ordinary language philosophers. For these men, too, have left to us, not positive discoveries, but problems whose solution is still to be sought. They might perhaps have discovered the essentials, had they not sought the superfluous also. They lost much time in quibbling about words and in sophistical argumentation; all that sort of thing exercises the wit to no purpose. We tie knots and bind up words in double meanings, and then try to untie them. Have we leisure enough for this? Do we already know how to live, or die? We should rather proceed with our whole souls towards the point where it is our duty to take heed lest things, as well as words, deceive us. 
Why, pray, do you discriminate between similar words, when nobody is ever deceived by them except during the discussion? It is things that lead us astray: it is between things that you must discriminate. Criticism One of the most ardent critics of ordinary language philosophy was a student at Oxford (and later a philosopher himself), Ernest Gellner, who attacked the movement in his book Words and Things, published in 1959. See also Definitions of philosophy Ideal language philosophy Linguistic phenomenology References Further reading Primary sources Austin, J. L. How to Do Things with Words, ed. J. O. Urmson and Marina Sbisà. Cambridge, MA: Harvard University Press, 1975. -----. "A Plea for Excuses". In Austin, Philosophical Papers, ed. J. O. Urmson & G. J. Warnock. Oxford: Oxford UP, 1961. -----. Sense and Sensibilia, ed. G. J. Warnock. Oxford: Oxford University Press, 1962. Hanfling, Oswald. Philosophy and Ordinary Language. Hart, H. L. A. "The Ascription of Responsibility and Rights". Proceedings of the Aristotelian Society, 1949. Ryle, Gilbert. The Concept of Mind. New York: Barnes and Noble, 1965. -----. Dilemmas. Strawson, P. F. Individuals: An Essay in Descriptive Metaphysics. Garden City, NY: Doubleday, 1963. -----. "On Referring". Reprinted in Meaning and Reference, ed. A. W. Moore. Oxford: Oxford University Press, 1993. Wisdom, John. Other Minds, 1952; Philosophy & Psychoanalysis, 1953; Paradox and Discovery, 1965. Wittgenstein, Ludwig. The Blue and Brown Books. -----. Philosophical Investigations, trans. G. E. M. Anscombe. New York: Macmillan, 1953. Secondary sources Forguson, Lynd. "Oxford and the "Epidemic" of Ordinary Language Philosophy", The Monist 84: 325–345, 2001. Passmore, John. A Hundred Years of Philosophy, revised edition. New York: Basic Books, 1966. See chapter 18, "Wittgenstein and Ordinary Language Philosophy". Soames, Scott. Philosophical Analysis in the Twentieth Century: Volume Two, The Age of Meaning. 
Princeton, Princeton University Press, 2005. Ordinary Language Philosophy: A Reappraisal – edited by Anthony Coleman & Ivan Welty. External links "Ordinary Language Philosophy". Internet Encyclopedia of Philosophy. Steven Pinker on Concepts & Reasoning Analytic philosophy Philosophical methodology Philosophy of language
Anti-foundationalism
Anti-foundationalism (also called nonfoundationalism) is any philosophy which rejects a foundationalist approach. An anti-foundationalist is one who does not believe that there is some fundamental belief or principle which is the basic ground or foundation of inquiry and knowledge. The foundationalism rejected may be metaphysical (positing a ground of being or metaphysical foundation), ethical (positing some value or virtue as fundamental), or epistemological (e.g., the foundationalist theory of justification), or it may belong to some other field with foundationalist theories. Anti-essentialism Anti-foundationalists use logical, historical, or genealogical attacks on foundational concepts (see especially Nietzsche and Foucault), often coupled with alternative methods for justifying and forwarding intellectual inquiry, such as the pragmatic subordination of knowledge to practical action. Foucault dismissed the search for a return to origins as Platonic essentialism, preferring to stress the contingent nature of human practices. Anti-foundationalists oppose metaphysical methods. Moral and ethical anti-foundationalists are often criticized for moral relativism, but anti-foundationalists often dispute this charge, offering alternative methods of moral thought that they claim do not require foundations. Thus while Charles Taylor accused Foucault of having "no order of human life, or way we are, or human nature, that one can appeal to in order to judge or evaluate between ways of life", Foucault nevertheless insisted on the need for continuing ethical enquiry without any universal system to appeal to. Niklas Luhmann used cybernetics to challenge the role of foundational unities and canonical certainties. Totalisation and legitimation Anti-foundationalists oppose totalising visions of social, scientific or historical reality, considering them to lack legitimation, and preferring local narratives instead. 
No social totality but a multitude of local and concrete practices; "not a history but at best histories". In such neopragmatism, there is no overall truth, merely an ongoing process of better and more fruitful methods of edification. Even our most taken-for-granted categories for social analysis—of gender, sex, race, and class—are considered by anti-essentialists like Marjorie Garber as social constructs. Hope and fear Stanley Fish distinguishes between what he calls "antifoundationalist theory hope" and "antifoundationalist theory fear"—finding them however both equally illusory. Fear of the corrosive effects of antifoundationalism was widespread in the late twentieth century, anticipating such things as a cultural meltdown and moral anarchy, or (at the least) a loss of the necessary critical distance to allow for leverage against the status quo. For Fish, however, the threat of a loss of objective standards of rational enquiry with the disappearance of any founding principle was a false fear: far from opening the way to an unbridled subjectivity, antifoundationalism leaves the individual firmly entrenched within the conventional context and standards of enquiry/dispute of the discipline/profession/habitus within which s/he is irrevocably placed. By the same token, however, the antifoundationalist hope of escaping local situations through awareness of the contingency of all such situations—through recognition of the conventional/rhetorical nature of all claims to master principles—that hope is to Fish equally foredoomed by the very nature of the situational consciousness, the all-embracing social and intellectual context, in which every individual is separately enclosed. Fish has also noted how, in contradistinction to hopes of an emancipatory outcome from antifoundationalism, anti-essentialist theories arguing for the absence of a transcontextual point of reference have been put to conservative and neo-conservative, as well as progressive, ends. 
Thus, for example, John Searle has offered an account of the construction of social reality fully compatible with the acceptance stance of "the man who is at home in his society, the man who is chez lui in the social institutions of the society...as comfortable as the fish in the sea". Criticism Anti-foundationalists have been criticised for attacking all general claims except their own; for offering a localizing rhetoric contradicted in practice by their own globalizing style. Edward Said condemned radical anti-foundationalism for excessive cultural relativism and overdependence on the linguistic turn at the expense of human realities. Anti-foundationalists See also References Further reading Katherine N. Hayles, Chaos Bound (1990) W. J. T. Mitchell, Against Theory (1985) Richard Rorty, Consequences of Pragmatism (1982) Edward Said, Humanism and Democratic Criticism (2004) External links Anti-Foundationalism Skepticism Theories of justification Coherentism Foundationalism
Constructivism (philosophy of education)
Constructivism in education is a theory that suggests that learners do not passively acquire knowledge through direct instruction. Instead, they construct their understanding through experiences and social interaction, integrating new information with their existing knowledge. This theory originates from the Swiss developmental psychologist Jean Piaget's theory of cognitive development. Background Constructivism in education is rooted in epistemology, a theory of knowledge concerned with the logical categories of knowledge and its justification. It acknowledges that learners bring prior knowledge and experiences shaped by their social and cultural environment and that learning is a process of students "constructing" knowledge based on their experiences. While behaviorism focuses on understanding what students are doing, constructivism emphasizes the importance of understanding what students are thinking and how to enrich their thinking. Constructivism in educational psychology can be attributed to the work of Jean Piaget (1896–1980) and his theory of cognitive development. Piaget's focus was on how humans make meaning by integrating experiences with ideas, emphasizing human development as distinct from external influences. Another influential figure, Lev Vygotsky (1896–1934), emphasized the importance of sociocultural learning in his theory of social constructivism, highlighting how interactions with adults, peers, and cognitive tools contribute to the formation of mental constructs. Building upon Vygotsky's work, Jerome Bruner and other educational psychologists introduced the concept of instructional scaffolding, where the learning environment provides support that is gradually removed as learners internalize the knowledge. Views more focused on human development within the social sphere include the sociocultural or socio-historical perspective of Lev Vygotsky and the situated cognition perspectives of Mikhail Bakhtin, Jean Lave, and Etienne Wenger. 
Further contributions include the works of Brown, Collins, and Duguid, as well as those of Newman, Griffin, Cole, and Barbara Rogoff. The concept of constructivism has influenced a number of disciplines, including psychology, sociology, education, and the history of science. In its early stages, constructivism focused on the relationship between human experiences and their reflexes or behavior patterns. Piaget referred to these systems of knowledge as "schemes." Piaget's theory of constructivist learning has significantly influenced learning theories and teaching methods in education. It serves as a foundational concept in education reform movements within cognitive science and neuroscience. Overview The formalization of constructivism from a within-the-human perspective is commonly credited to Jean Piaget. Piaget described the mechanisms by which information from the environment and ideas from the individual interact to form internalized structures developed by learners. He identified the processes of assimilation and accommodation as crucial in this interaction, as individuals construct new knowledge from their experiences. When individuals assimilate new information, they integrate it into their existing framework without altering that framework. This can happen when their experiences align with their internal view of the world, but it can also occur if they fail to update a flawed understanding. Accommodation is the process of adjusting one's mental representation of the external world to fit new experiences. It can be understood as the mechanism by which failure leads to learning. It is important to note that constructivism is not a specific pedagogy, but rather a theory explaining how learning occurs, regardless of the learning environment. However, constructivism is often associated with pedagogic approaches that promote active learning, or learning by doing. 
While there is much enthusiasm for constructivism as a design strategy, some experts believe that it is more of a philosophical framework than a theory that can precisely describe instruction or prescribe design strategies. Constructivist pedagogy The nature of the learner Social constructivism recognizes and embraces the individuality and complexity of each learner, actively encouraging and rewarding it as a vital component of the learning process. The importance of the background and culture of the learner Social constructivism, also known as socioculturalism, emphasizes the role of an individual's background, culture, and worldview in shaping their understanding of truth. According to this theory, learners inherit historical developments and symbol systems from their culture and continue to learn and develop these throughout their lives. This approach highlights the significance of a learner's social interactions with knowledgeable members of society. It suggests that without such interactions, it is challenging to grasp the social meaning of important symbol systems and learn how to effectively use them. Social constructivism also points out that young children develop their thinking abilities through interactions with peers, adults, and the physical world. Therefore, it is essential to consider the learner's background and culture throughout the learning process, as these factors help shape the knowledge and truth that the learner acquires. Responsibility for learning Social constructivism emphasizes the importance of the student being actively involved in the learning process, unlike previous educational viewpoints where the responsibility rested with the instructor to teach and where the learner played a passive, receptive role. Von Glasersfeld (1989) emphasized that learners construct their own understanding and that they do not simply mirror and reflect what they read. 
Learners look for meaning and will try to find regularity and order in the events of the world even in the absence of full or complete information. The motivation for learning When considering students' learning, it is essential to take into account their motivation and confidence. According to Von Glasersfeld, a student's motivation to learn is strongly influenced by their belief in their potential for learning. This belief is shaped by their past experiences of successfully mastering problems, which is more influential than external acknowledgment and motivation. This idea aligns with Vygotsky's concept of the "zone of proximal development," where students are challenged at a level slightly above their current development. By successfully completing challenging tasks, students build confidence and motivation to take on even more complex challenges. According to a study on the impact that COVID-19 had on the learning process in Australian university students, a student's motivation and confidence depend on self-determination theory. This theory requires support from the educational environment to fulfill three basic needs to achieve growth, including autonomy, relatedness, and competency. During the COVID-19 pandemic, these basic needs were hindered, as the shift from traditional in-person classes to online classes left students with significantly fewer opportunities for social interaction and active learning. The role of the instructor Instructors as facilitators According to the social constructivist approach, instructors are expected to adapt to the role of facilitators rather than traditional teachers. While a teacher gives a didactic lecture that covers the subject matter, a facilitator assists the student in developing their own understanding of the content.
This shift in roles places the focus on the student's active involvement in the learning process, as opposed to the instructor and the content itself. As a result, a facilitator requires a different set of skills compared to a teacher. For instance, a teacher imparts information, whereas a facilitator encourages questions; a teacher leads from the front, while a facilitator provides support from the background; and a teacher delivers answers based on a set curriculum, whereas a facilitator offers guidance and creates an environment for the learner to form their own conclusions. Furthermore, a teacher typically engages in a monologue, whereas a facilitator maintains an ongoing dialogue with the learners. Additionally, a facilitator should be able to dynamically adapt the learning experience by taking the lead in guiding the experience to align with the learners' interests and needs in order to create value. The learning environment should be created in a way that both supports and challenges the student's thinking. While it is advocated to give the student ownership of the problem and solution process, it is not the case that any and all activities or solutions are adequate. The critical goal is to support the student in developing effective thinking skills. Relationship between instructor and students In the social constructivist viewpoint, the role of the facilitator involves both the instructor and the students being actively engaged in learning from each other. This dynamic interaction requires that the instructor's culture, values, and background play a significant part in shaping the learning experience. Students compare their own thoughts with those of the instructor and their peers, leading to the development of a new, socially validated understanding of the subject matter. The task or problem serves as the interface between the instructor and the student, creating a dynamic interaction.
As a result, both students and instructors need to develop an awareness of each other's viewpoints and consider their own beliefs, standards, and values, making the learning experience both subjective and objective at the same time. Several studies highlight the significance of mentoring in the learning process. The social constructivist model underscores the importance of the relationship between the student and the instructor in facilitating learning. Interactive learning can be facilitated through various approaches such as reciprocal teaching, peer collaboration, cognitive apprenticeship, problem-based instruction, Anchored Instruction, and other methods that involve collaborative learning. Learning is an active process Social constructivism, which is strongly influenced by Vygotsky's work, proposes that knowledge is initially built within a social setting and is then taken in by individuals. According to social constructivists, the act of sharing individual viewpoints, known as collaborative elaboration, leads to learners jointly constructing understanding that would not be achievable on their own. Social constructivist scholars view learning as an active process in which students are encouraged to discover principles, concepts, and facts independently. Therefore, it is crucial to promote speculation and intuitive thinking in students. According to other constructivist scholars, individuals create meanings through their interactions with each other and the environment they inhabit. Knowledge is created by people and is shaped by social and cultural influences. McMahon (1997) also emphasizes the social nature of learning, stating that it is not solely a mental process or a result of external factors shaping behavior. Instead, meaningful learning occurs when individuals participate in social activities. According to Vygotsky (1978), an important aspect of intellectual development is the convergence of speech and practical activity. 
He emphasized that as children engage in practical activities, they construct meaning on an individual level, and through speech, they connect this meaning to their culture and the interpersonal world they share with others. Collaboration among learners Another tenet of social constructivism is that collaboration among individuals with diverse skills and backgrounds is essential for developing a comprehensive understanding of a particular subject or field. In some social constructivist models, there is an emphasis on the importance of collaboration among learners, which contrasts with traditional competitive approaches. One concept from Vygotsky that is particularly relevant to peer collaboration is the zone of proximal development. This is defined as the gap between a learner's actual developmental level, determined by independent problem-solving, and the level of potential development, determined through problem-solving under adult guidance or in collaboration with more capable peers. It differs from Piaget's fixed biological stages of development. Through a process called "scaffolding," a learner can be extended beyond the limitations of physical maturation, allowing the development process to catch up to the learning process. When students present and teach new material to their peers, it fosters a non-linear process of collective knowledge construction. The importance of context The social constructivist paradigm emphasizes that the environment in which learning takes place plays a crucial role in the learning process. The concept of the learner as an active processor is based on the idea that there are no universal learning laws that apply to all domains. When individuals possess decontextualized knowledge, they may struggle to apply their understanding to real-world tasks. 
This is due to the lack of engagement with the concept in its complex, real-world environment, as well as the absence of experience with the intricate interrelationships that influence the application of the concept. One concept within social constructivism is authentic or situated learning, which involves students participating in activities directly related to the practical application of their learning within a culture similar to the real-world setting. Cognitive apprenticeship has been suggested as an effective model of constructivist learning that aims to immerse students in authentic practices through activity and social interaction, similar to the successful methods used in craft apprenticeship. Holt and Willard-Holt (2000) highlight the concept of dynamic assessment, which offers a distinct approach to evaluating learners compared to traditional tests. Dynamic assessment extends the interactive nature of learning to the assessment process, emphasizing interaction between the assessor and the learner. It involves a dialogue between the assessor and the learner to understand the current performance level on a task and explore ways to improve future performance. This approach views assessment and learning as interconnected processes, rather than separate entities. According to this viewpoint, instructors should approach assessment as an ongoing and interactive process that evaluates the learner's achievements, the quality of the learning experience, and course materials. The feedback generated by the assessment process is crucial for driving further development. The selection, scope, and sequencing of the subject matter Knowledge should be discovered as an integrated whole The organization of knowledge should prioritize integration over division into separate subjects or compartments. This again emphasizes the significance of presenting learning within a specific context.
The world in which learners operate is not divided into separate subjects but rather comprises a complex array of facts, problems, dimensions, and perceptions. Engaging and challenging the student Students benefit from being challenged with tasks that require them to apply skills and knowledge slightly beyond their current level of mastery. This approach can help to maintain their motivation and build on past achievements to boost their confidence. This is in line with Vygotsky's zone of proximal development, which refers to the gap between a person's current level of ability and their potential level of development under the guidance of adults or more capable peers. Vygotsky (1978) argued that effective instruction should be slightly ahead of a learner's current developmental stage. By doing so, instruction can stimulate the development of a range of functions that are in the learner's zone of proximal development. This highlights the crucial role of instruction in fostering development. In order to effectively engage and challenge students, it is important that the tasks and learning environment mirror the complexity of the real-world environment in which the students are expected to operate upon completing their education. Students should not only take ownership of the learning and problem-solving process but also take ownership of the problems themselves. When it comes to organizing subject matter, the constructivist perspective suggests that the fundamental principles of any subject can be taught to anyone at any point, in some capacity. This approach entails introducing the foundational concepts that make up topics or subject areas initially and then consistently revisiting and expanding on these ideas. Instructors should recognize that while they are given a set curriculum to follow, they inevitably personalize it to reflect their own beliefs, thoughts, and emotions about the subject matter and their students.
As a result, the learning experience becomes a collaborative effort, influenced by the emotions and life experiences of all involved. It's important to consider the student's motivation as central to the learning process. The structuredness of the learning process Incorporating an appropriate balance between structure and flexibility into the learning process is essential. According to Savery (1994), a highly structured learning environment may pose challenges for learners in constructing meaning based on their existing conceptual understandings. A facilitator should strive to provide adequate structure to offer clear guidance and parameters for achieving learning objectives, while also allowing for an open and flexible learning experience that enables learners to discover, interact, and arrive at their own understanding of truth. Teaching Techniques A few strategies for cooperative learning include: Reciprocal Questioning: students work together to ask and answer questions Jigsaw Classroom: students become "experts" on one part of a group project and teach it to the others in their group Structured Controversies: Students work together to research a particular controversy The Harkness discussion method The "Harkness" discussion method is named after Edward Harkness, who funded its development at Phillips Exeter Academy in the 1930s. This method involves students sitting in a circle, guiding their own discussion. The teacher's role is minimized, with the students initiating, directing, and focusing the discussion. They work together as a team, sharing responsibility and goals. The ultimate aim is to illuminate the subject, interpret different viewpoints, and piece together a comprehensive understanding. Discussion skills are crucial, and every participant is expected to contribute to keeping the discussion engaging and productive. 
Criticism Many cognitive psychologists and educators have raised concerns about the core principles of constructivism, arguing that these theories may be misleading or inconsistent with well-established findings. In neo-Piagetian theories of cognitive development, it is proposed that learning is influenced by the processing and representational resources available at a particular age. This implies that if the demands of a concept to be learned exceed the available processing efficiency and working memory resources, then the concept is considered unlearnable. This approach to learning can impact the understanding of essential theoretical concepts and reasoning. Therefore, for effective learning to occur, a child must operate in an environment that aligns with their developmental and individual learning constraints, taking into account any deviations from the norm for their age. If this condition is not met, the learning process may not progress as intended. Many educators have raised concerns about the effectiveness of this approach to instructional design, particularly when it comes to creating instruction for beginners. While some proponents of constructivism claim that "learning by doing" improves learning, critics argue that there is insufficient empirical evidence to support this assertion, especially for novice learners. Sweller and his colleagues argue that novices do not possess the underlying mental models, or "schemas" necessary for "learning by doing". Additionally, Mayer (2004) conducted a review of the literature and concluded that fifty years of empirical data do not support the use of pure discovery as a constructivist teaching technique. In situations requiring discovery, he recommends the use of guided discovery instead. Some researchers, such as Kirschner et al. 
(2006), have characterized the constructivist teaching methods as "unguided methods of instruction" and have suggested more structured learning activities for learners with little to no prior knowledge. Slezak has expressed skepticism about constructivism, describing it as "fashionable but thoroughly problematic doctrines that can have little benefit for practical pedagogy or teacher education." Similar views have been stated by Meyer, Boden, Quale and others. Kirschner et al. grouped several learning theories together, including Discovery, Problem-Based, Experiential, and Inquiry-Based learning, and suggested that highly scaffolded constructivist methods such as problem-based learning and inquiry learning may be ineffective. They described several research studies that were favorable to problem-based learning, given that learners were provided with some level of guidance and support. Confusion between constructivist and maturationist views Many people confuse constructivist with maturationist views. The constructivist (or cognitive-developmental) stream "is based on the idea that the dialectic or interactionist process of development and learning through the student's active construction should be facilitated and promoted by adults". The romantic maturationist stream emphasizes the natural development of students without adult interventions in a permissive environment. In contrast, constructivism involves adults actively guiding learning while allowing children to take charge of their own learning process. Subtypes Contextual constructivism According to William Cobern (1991), contextual constructivism is "about understanding the fundamental, culturally based beliefs that both students and teachers bring to class, and how these beliefs are supported by culture. Contextual constructivists not only raise new research questions, they also call for a new research paradigm.
The focus on contextualization means that qualitative, especially ethnographic, techniques are to be preferred" (p. 3). Radical constructivism Ernst von Glasersfeld developed radical constructivism by coupling Piaget's theory of learning and philosophical viewpoint about the nature of knowledge with Kant's rejection of an objective reality independent of human perception or reason. Radical constructivism does not view knowledge as an attempt to generate ideas that match an independent, objective reality. Instead, theories and knowledge about the world, as generated by our senses and reason, either fit within the constraints of whatever reality may exist and, thus, are viable or do not and are not viable. As a theory of education, radical constructivism emphasizes the experiences of the learner, differences between learners and the importance of uncertainty. Relational constructivism Björn Kraus' relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads and maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception. Despite the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions that apply to human perceptional processes. Social constructivism In recent decades, constructivist theorists have extended the traditional focus on individual learning to address collaborative and social dimensions of learning. It is possible to see social constructivism as a bringing together of aspects of the work of Piaget with that of Bruner and Vygotsky. 
Communal constructivism The concept of communal constructivism was developed by Leask and Younie in 1995 through their research on the European SchoolNet, which demonstrated the value of experts collaborating to push the boundaries of knowledge, including communal construction of new knowledge between experts, rather than the social construction of knowledge, as described by Vygotsky, where there is a learner to teacher scaffolding relationship. "Communal constructivism," as a concept, applies to those situations in which there is currently no expert knowledge or research to underpin knowledge in an area. "Communal constructivism" refers, specifically, to the process of experts working together to create, record, and publish new knowledge in emerging areas. In the seminal European SchoolNet research where, for the first time, academics were testing out how the internet could support classroom practice and pedagogy, experts from a number of countries set up test situations to generate and understand new possibilities for educational practice. Bryan Holmes, in 2001, applied this to student learning, as described in an early paper: "in this model, students will not simply pass through a course like water through a sieve but instead leave their own imprint in the learning process." Influence on computer science and robotics Constructivism has influenced the course of programming and computer science. Some famous programming languages have been created, either wholly or in part, for educational use, to support the constructionist theory of Seymour Papert. These languages have been dynamically typed and reflective. Logo and its successor, Scratch, are the best known of them. Constructivism has also informed the design of interactive machine learning systems, while radical constructivism has been explored as a paradigm to design experiments in rehabilitation robotics, and more precisely in prosthetics.
List of notable constructivists Writers who influenced constructivism include: John Dewey (1859–1952) Maria Montessori (1870–1952) Władysław Strzemiński (1893–1952) Jean Piaget (1896–1980) Lev Vygotsky (1896–1934) Heinz von Foerster (1911–2002) George Kelly (1905–1967) Jerome Bruner (1915–2016) Herbert Simon (1916–2001) Paul Watzlawick (1921–2007) Ernst von Glasersfeld (1917–2010) Edgar Morin (born 1921) Humberto Maturana (1928–2021) Paulo Freire (1921–1997) See also Autodidactism Connectivism Constructivist epistemology Constructivist teaching methods Critical pedagogy Cultural-historical activity theory (CHAT) Educational psychology Learning styles Philosophy of education Reform mathematics Situated cognition Socratic method Teaching for social justice Vocational education APOS Theory References Further reading Dalgarno, B. (1996) Constructivist computer assisted learning: theory and technique, ASCILITE Conference, 2–4 December 1996, retrieved from https://web.archive.org/web/20140902003411/http://www.ascilite.org.au/conferences/adelaide96/papers/21.html Hilbert, T. S., & Renkl, A. (2007). Learning how to Learn by Concept Mapping: A Worked-Example Effect. Oral presentation at the 12th Biennial Conference EARLI 2007 in Budapest, Hungary Jeffery, G. (ed) (2005) The creative college: building a successful learning culture in the arts, Stoke-on-Trent: Trentham Books. Jonassen, D., Mayes, T., & McAleese, R. (1993). A manifesto for a constructivist approach to uses of technology in higher education. In T.M. Duffy, J. Lowyck, & D.H. Jonassen (Eds.), Designing environments for constructive learning (pp. 231–247). Heidelberg: Springer-Verlag. Piaget, Jean. (1950). The Psychology of Intelligence. New York: Routledge. Jean Piaget (1967). Logique et Connaissance scientifique, Encyclopédie de la Pléiade. External links A journey into Constructivism by Martin Dougiamas, 1998–11. 
Cognitively Guided Instruction reviewed on the Promising Practices Network Sample Online Activity Objects Designed with Constructivist Approach (2007) Liberal Exchange learning resources offering a constructivist approach to learning English as a second/foreign language (2009) Lutz, S., & Huitt, W. (2018). "Connecting cognitive development and constructivism." In W. Huitt (Ed.), Becoming a Brilliant Star: Twelve core ideas supporting holistic education (pp. 45–63). IngramSpark. Definition of Constructivism by Martin Ryder (a footnote to the book chapter The Cyborg and the Noble Savage where Ryder discusses One Laptop Per Child's XO laptop from a constructivist educator's point of view)
Philosophical pessimism
Philosophical pessimism is a family of philosophical views that assign a negative value to life or existence. Philosophical pessimists commonly argue that the world contains an empirical prevalence of pains over pleasures, that existence is ontologically or metaphysically adverse to living beings, and that life is fundamentally meaningless or without purpose. Philosophical pessimism is not a single coherent movement, but rather a loosely associated group of thinkers with similar ideas and a resemblance to each other. Their responses to the condition of life are widely varied. Philosophical pessimists usually do not advocate for suicide as a solution to the human predicament; though many favour the adoption of antinatalism, that is, non-procreation. Definitions The word pessimism comes from Latin pessimus, meaning "the worst". Philosophers define the position in a variety of ways. In Pessimism: A History and a Criticism, James Sully describes the essence of philosophical pessimism as "the denial of happiness or the affirmation of life's inherent misery". Byron Simmons writes, "[p]essimism is, roughly, the view that life is not worth living". Frederick C. Beiser writes, "pessimism is the thesis that life is not worth living, that nothingness is better than being, or that it is worse to be than not be". According to Paul Prescott, it is the view that "the bad prevails over the good". Olga Plümacher identifies two fundamental claims of philosophical pessimism: "The sum of displeasure outweighs the sum of pleasure" and "Consequently the non-being of the world would be better than its being". Ignacio L. Moya defines pessimism as a position that holds that the essence of existence can be known (at least partially); that life is essentially characterized by needs, wants, and pain, and hence suffering is inescapable; that there are no ultimate reasons for, no cosmic plan or purpose to suffering; and that, ultimately, non-existence is preferable to existence. 
Themes Reaching a pessimistic conclusion can be approached in various ways, with numerous arguments reinforcing this perspective. However, certain recurring themes consistently emerge: Life is not worth living — one of the most common arguments of pessimists is that life is not worth living. In short, pessimists view existence, overall, as having a deleterious effect on living beings: to be alive is to be put in a bad position. The bad prevails over the good — generally, the bad wins over the good. This can be understood in two ways. Firstly, one can make a case that — irrespective of the quantities of goods and evils — the suffering cannot be compensated for by the good. Secondly, one can make a case that there is a predominance of bad things over good things. Non-existence is preferable to existence — since existence is bad, it would have been better had it not been. This point can be understood in one of the two following ways. Firstly, one can argue that, for any individual being, it would have been better had they never existed. Secondly, various pessimists have argued that the non-existence of the whole world would be better than its existence. Development of pessimist thought Pessimistic sentiments can be found throughout religions and in the works of various philosophers. The major developments in the tradition started with the works of German philosopher Arthur Schopenhauer, who was the first to provide an explanation for why there is so much misery in the world and construct a complete philosophical system in which pessimism played a major role. Ancient times One of the central points of Buddhism, which originated in ancient India, is the claim that life is full of suffering and unsatisfactoriness. This is known as dukkha from the Four Noble Truths.
In Ecclesiastes from the Abrahamic religions, which originated in the Middle East, the author laments the meaninglessness of human life, views life as worse than death and expresses antinatalistic sentiments towards coming into existence. These views are made central in Gnosticism, a religious movement stemming from Christianity, where the body is seen as a type of "prison" for the soul, and the world as a type of hell. Hegesias of Cyrene, who lived in ancient Greece, argued that lasting happiness cannot be realized because of constant bodily ills and the impossibility of achieving all our goals. 19th-century Germany Arthur Schopenhauer was the first philosopher who constructed an entire philosophical system, where he presented an explanation of the world through metaphysics, aesthetics, epistemology, and ethics — all connected with a pessimistic view of the world. Schopenhauer viewed the world as having two sides — Will and representation. Will is pure striving, aimless, incessant, with no end; it is the inner essence of all things. Representation is how we view the world with our particular perceptual and cognitive endowment; it is how we build objects from our perceptions. In living creatures, the Will takes the form of the will to life — self-preservation or the survival instinct appearing as striving to satisfy desires. And since this will to life is our inner nature, we are doomed to be always dissatisfied, as one satisfied desire makes room for striving for yet another thing. There is, however, something we can do with that ceaseless willing. We can take temporary respite during aesthetic contemplation or through cultivating a moral attitude. We can also defeat the will to life more permanently through asceticism, achieving equanimity. 20th and 21st centuries In the 20th and 21st centuries, a number of thinkers have revisited and revitalized philosophical pessimism — drawing in large part from the works of Arthur Schopenhauer and his contemporaries.
Examples of 20th- and 21st-century authors who espoused philosophically pessimistic views include Emil Cioran, Peter Wessel Zapffe, Eugene Thacker, Thomas Ligotti, David Benatar, and Julio Cabrera. Main arguments The most common arguments for the tenets of philosophical pessimism are briefly presented here. Duḥkha as the mark of existence Constant dissatisfaction — duḥkha — is an intrinsic mark of all sentient existence. All living creatures have to undergo the sufferings of birth, aging, sickness and death; want what they do not have, avoid what they do not like, and feel loss for the positive things they have lost. All of these types of striving (taṇhā) are sources of suffering, and they are not external but are rather inherent vices (such as greed, lust, envy, self-indulgence) of all living creatures. Since in Buddhism one of the central concepts is that of liberation or nirvana, this highlights the miserable character of existence, as there would be no need to make such a great effort to free oneself from a mere "less than ideal state". Since enlightenment is the goal of Buddhist practices through the Noble Eightfold Path, the value of life itself, under this perspective, appears as doubtful. Pleasure doesn't add anything positive to our experience A number of philosophers have put forward criticisms of pleasure, essentially denying that it adds anything positive to our well-being above the neutral state. Pleasure as the mere removal of pain A particular strand of criticism of pleasure goes as far back as Plato, who said that most of the pleasures we experience are forms of relief from pain, and that the unwise confuse the neutral painless state with happiness. Epicurus pushed this idea to its limit and claimed that, "[t]he limit of the greatness of the pleasures is the removal of everything which can give pain". As such, according to Epicureans, one cannot be better off than being free from pain, anxiety, distress, fear, irritation, regret, worry, etc.
— in the state of tranquillity. According to Knutsson, there are several reasons why we might think that. Firstly, we can say that one experience is better than another by recognizing that the first one lacks a particular discomfort. And we can do that with any number of experiences, thus explaining what it means to feel better while relying only on the removal of disturbances. Secondly, it's difficult to find a particular quality of experience that would make it better than a completely undisturbed state. Thirdly, we can explain behavior without invoking positive pleasures. Fourthly, it's easy to understand what it means for an experience to have certain imperfections (aversive qualities), while it's not clear what it would mean for an experience to be genuinely better than neutral. And lastly, a model with only negative and neutral states is theoretically simpler than one containing an additional class of positive experiences. No genuine positive states A stronger version of this view is that there may be no states that are undisturbed or neutral. It's at least plausible that in every state we could notice some dissatisfactory quality such as tiredness, irritation, boredom, worry, feeling uncomfortable, etc. Instead of neutral states, there may simply be "default" states — states with recurrent but minor frustrations and discomforts that, over time, we got used to and learned not to do anything about. Pleasure as the mere relief from striving Schopenhauer maintained that only pain is positive. That is, only pain is directly felt — it's experienced as something which is immediately added to our consciousness. On the other hand, pleasure is only ever negative, which means it only takes away something already present in our experience — and thus is only experienced in an indirect or mediate way. He put forward his negativity thesis — that pleasure is only ever a relief from pain.
Later German pessimists — Julius Bahnsen, Eduard von Hartmann, and Philipp Mainländer — held very similar views. Pain can be removed in one of two ways. One way is to satisfy a desire: since to strive is to suffer, once a desire is satisfied, suffering momentarily stops. The second way is through distraction: when we're not paying attention to what we lack — and hence, desire — we are temporarily at peace. This happens in cases of intellectual and aesthetic experiences. A craving may arise when we direct our attention towards some external object, or when we notice something unwanted about our current situation. This is experienced as a visceral need to change something about the current state. When we do not feel any such cravings, we are content or tranquil — we feel no urgency or need to change anything about our experience.

No genuine counterpart to suffering

Alternatively, it can be argued that, for any purported pleasant state, we never find — under closer inspection — anything that would make it a positive or genuine counterpart to suffering. For an experience to be genuinely positive, it would have to be an experiential opposite of suffering. However, it's difficult to understand what it would take for one experience to be the opposite of another — there just seem to be separate axes of experience (hot and cold, loud and silent), which are noticed as contrasting. And even if we granted that the idea of an experiential opposite makes sense, it's difficult — if not impossible — to actually find a clear example of such an experience that would survive scrutiny. There is some neuroscientific evidence that positive and negative experiences do not lie on the same axis, but rather comprise two distinct — albeit interacting — systems.

Life contains uncompensated evils

One argument for the negative view of life is the recognition that evils are unconditionally unacceptable: a good life is not possible with evils in it.
This line of thinking is based on Schopenhauer's statement in The World as Will and Representation that "the ill and evil in the world... even if they stood in the most just relation to each other, indeed even if they were far outweighed by the good, are nevertheless things that should absolutely never exist in any way, shape or form". The idea here is that no good can ever erase the experienced evils, because they are of a different quality or kind of importance. Schopenhauer elaborates on the vital difference between the good and the bad, saying that, "it is fundamentally beside the point to argue whether there is more good or evil in the world: for the very existence of evil already decides the matter since it can never be cancelled out by any good that might exist alongside or after it, and cannot therefore be counterbalanced", and adding that, "even if thousands had lived in happiness and delight, this would never annul the anxiety and tortured death of a single person; and my present wellbeing does just as little to undo my earlier suffering." One way of interpreting the argument is by focusing on how one thing could compensate another. The goods can only compensate the evils when they a) happen to the same subject, and b) happen at the same time. The reason why the good has to happen to the same subject is that the miserable cannot feel the happiness of the joyful, and hence it has no effect on them. The reason why the good has to happen at the same time is that future joy does not act backwards in time, and so it has no effect on the present state of the suffering individual. But these conditions are not met in life, and hence life is not worth living. Here, it doesn't matter whether there are any genuine positive pleasures, because, since pleasures and pains are experientially separated, the evils are left unrepaid.
Another interpretation of the negativity thesis — that goods are merely negative in character — uses metaphors of debt and repayment, and of crime and punishment. Here, merely ceasing an evil does not count as paying it off, just as ceasing to commit a crime does not amount to making amends for it. The bad can only be compensated by something positively good, just as a crime has to be answered for by some punishment, or a debt has to be paid off by something valuable. If the good is merely the taking away of an evil, then it cannot compensate for the bad, since it's not of the appropriate kind — it's not a positive thing that could "repay the debt" of the bad.

Suffering is essential to life because of perpetual striving

Arthur Schopenhauer introduces an a priori argument for pessimism. The basis of the argument is the recognition that sentient organisms—animals—are embodied and inhabit specific niches in the environment. They struggle for their self-preservation. Striving to satisfy wants, Schopenhauer posits, is the essence of all organic life; and all striving, he argues, involves suffering. Thus, he concludes that suffering is unavoidable and inherent to existence. Given this, he says that the balance of good and bad is on the whole negative. There are several reasons why suffering is a fundamental aspect of life:

Satisfaction is elusive: organisms strive towards various things all the time. Whenever they satisfy one desire, they want something else and the striving begins anew.

Happiness is negative: while needs come to us seemingly of themselves, we have to exert ourselves in order to experience some degree of joy. Moreover, pleasure is only ever a satisfaction—or elimination—of a particular desire. Therefore, it is only a negative experience, as it temporarily takes away a striving or need.

Striving is suffering: as long as striving is not satisfied, it is experienced as suffering.
Boredom is suffering: the lack of an object of desire is experienced as a discomforting state.

The terminality of human life

According to Julio Cabrera's ontology, human life has a structurally negative value. Under this view, human life does not provoke discomfort in humans due to the particular events that happen in the life of each individual, but due to the very being or nature of human existence as such. This structurally negative value is what Cabrera calls the "terminality of being". For Cabrera, this situation is further worsened by a phenomenon he calls "moral impediment", that is, the structural impossibility of acting in the world without harming or manipulating someone at some given moment. According to him, moral impediment happens not necessarily because of a moral fault in us, but due to the structural situation in which we have been placed. The positive values that are created in human life come into being within a narrow and anxious environment. Human beings are cornered by the presence of their decaying bodies as well as pain and discouragement, in a complicated and holistic web of actions, in which we are forced to quickly understand diversified social situations and make relevant decisions. In our urgent need to build our own positive values, it is difficult not to end up harming the projects of other humans who are anxiously trying to do the same, that is, to build their own positive values.

The asymmetry between harms and benefits

David Benatar argues that there is a significant difference between the lack/presence of harms and benefits when comparing a situation in which a person exists with one in which said person never exists. The starting point of the argument is the following noncontroversial observation:

1. The presence of pain is bad.
2. The presence of pleasure is good.

However, the symmetry breaks when we consider the absence of pain and pleasure:

3.
The absence of pain is good, even if that good is not enjoyed by anyone.
4. The absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.

Based on the above, Benatar infers the following: the absence of pain is better in the case where a person never exists than the presence of pain where a person does exist; and the absence of pleasure is not worse in the case where a person never exists than the presence of pleasure where a person does exist. In short, the absence of pain is good, while the absence of pleasure is not bad. From this it follows that not coming into existence has advantages over coming into existence for the one who would be affected by coming into the world. This is the cornerstone of his argument for antinatalism — the view that coming into existence is bad.

Empirical differences between the pleasures and pains in life

To support his case for pessimism, Benatar mentions a series of empirical differences between the pleasures and pains in life. In a strictly temporal respect, the most intense pleasures that can be experienced are short-lived (e.g. orgasms), whereas the most severe pains can be much more enduring, lasting for days, months, and even years. The worst pains that can be experienced are also worse in quality or magnitude than the best pleasures are good; Benatar offers as an example the thought experiment of whether one would accept "an hour of the most delightful pleasures in exchange for an hour of the worst tortures".
Benatar also cites Schopenhauer, who made a similar argument when asking his readers to "compare the feelings of an animal that is devouring another with those of that other". Among the further differences Benatar mentions are: the amount of time it may take for one's desires to be fulfilled, with some of our desires never being satisfied; the quickness with which one's body can be injured, damaged, or fall ill, and the comparative slowness of recovery, with full recovery sometimes never being attained; the existence of chronic pain, but the comparative non-existence of chronic pleasure; the gradual and inevitable physical and mental decline to which every life is subjected through the process of ageing; the effortless way in which the bad things in life naturally come to us, and the efforts one needs to muster in order to ward them off and obtain the good things; and the lack of a cosmic or transcendent meaning to human life as a whole (borrowing a term from Spinoza, Benatar holds that our lives lack meaning from the perspective of the universe, that is, sub specie aeternitatis). Benatar concludes that, even if one argues that the bad things in life are in some sense necessary for human beings to appreciate the good things in life, or at least to appreciate them fully, it is not clear that this appreciation requires as much bad as there is, and that our lives are worse than they would be if the bad things were not in such a sense necessary. Human life would be vastly better if pain were fleeting and pleasure protracted; if the pleasures were much better than the pains were bad; if it were really difficult to be injured or get sick; if recovery were swift when injury or illness did befall us; and if our desires were fulfilled instantly and did not give way to new desires. Human life would also be immensely better if we lived for many thousands of years in good health and if we were much wiser, cleverer, and morally better than we are.
Responses to the evils of existence

Pessimistic philosophers have proposed a variety of ways of dealing with the suffering and misery of life.

Schopenhauer's renunciation of the will to life

Arthur Schopenhauer regarded his philosophy not only as a condemnation of existence, but also as a doctrine of salvation that allows one to counteract the suffering that comes from the will to life and attain tranquillity. According to Schopenhauer, suffering comes from willing (striving, desiring). One's willing is proportional to one's focus on oneself, one's needs, fears, individuality, etc. So, Schopenhauer reasons, to interrupt suffering, one has to interrupt willing; and to diminish willing, one has to diminish the focus on oneself. This can be accomplished in several ways.

Aesthetic contemplation

Aesthetic contemplation is the focused appreciation of a piece of art, music, or even an idea. It is disinterested — one's interests give way to a devotion to the object, which is considered as an end in itself. It is impersonal — not constrained by one's own likes and dislikes. Aesthetic appreciation evokes a universal idea of an object, rather than the perception of the object as unique. During that time, one "loses oneself" in the object of contemplation, and the sense of individuation temporarily dissolves. This is because the universality of the object of contemplation passes onto the subject. One's consciousness becomes will-less. One becomes — if only for a brief moment — a neutral spectator or a "pure subject", unencumbered by one's own self, needs, and suffering.

Compassionate moral outlook

For Schopenhauer, a proper moral attitude towards others comes from the recognition that the separation between living beings occurs only in the realm of representation, originating from the principium individuationis. Underneath the representational realm, we are all one.
Each person is, in fact, the same Will — only manifested through different objectifications. The suffering of another being is thus our own suffering. The recognition of this metaphysical truth allows one to attain a more universal, rather than individualistic, consciousness. In such a universal consciousness, one relinquishes one's exclusive focus on one's own well-being and woe in favour of that of all other beings.

Asceticism

Schopenhauer explains that one may go through a transformative experience in which one recognizes that the perception of the world as constituted of separate things, impermanent and constantly striving, is illusory. This can come about through knowledge of the workings of the world or through an experience of extreme suffering. One sees through the veil of Maya: one no longer identifies oneself as a separate individual, but rather recognizes oneself in all things. One sees the source of all misery — the Will as the thing-in-itself, which is the kernel of all reality. One can then change one's attitude to life towards the renunciation of the will to life and practice self-denial (not giving in to desires). The person who attains this state of mind lives in complete peace and equanimity: he is not bothered by desires or lack, and he accepts everything as it is. This path of redemption, Schopenhauer argues, is more permanent, since it's grounded in a profound recognition that changes one's attitude; it's not merely a fleeting moment, as in the case of an aesthetic experience. The ascetic way of life, however, is not available to everyone — only a few rare and heroic individuals may be able to live as ascetics and attain such a state. More importantly, Schopenhauer explains, asceticism requires virtue; and virtue can be cultivated but not taught.
Defence mechanisms

Peter Wessel Zapffe viewed humans as animals with an overly developed consciousness, who yearn for justice and meaning in a fundamentally meaningless and unjust universe — constantly struggling against feelings of existential dread as well as the knowledge of their own mortality. He identified four defence mechanisms that allow people to cope with disturbing thoughts about the nature of human existence:

Isolation: the troublesome facts of existence are simply repressed — they are not spoken about in public, and are not even thought about in private.

Anchoring: one fixates (anchors) oneself on cultural projects, religious beliefs, ideologies, etc., and pursues goals appropriate to the objects of one's fixation. By dedicating oneself to a cause, one focuses one's attention on a specific value or ideal, thus achieving a communal or cultural sense of stability and safety from unsettling existential musings.

Distraction: through entertainment, career, status, etc., one distracts oneself from existentially disturbing thoughts. By constantly chasing new pleasures, new goals, and new things to do, one is able to evade a direct confrontation with mankind's vulnerable and ill-fated situation in the cosmos.

Sublimation: artistic expression may act as a temporary means of respite from feelings of existential angst by transforming them into works of art that can be aesthetically appreciated from a distance.

Non-procreation and extinction

Concern for those who will be coming into this world has been present throughout the history of pessimism. Notably, Arthur Schopenhauer asked:

One should try to imagine that the act of procreation were neither a need, nor accompanied by sexual pleasure, but instead a matter of pure, rational reflection; could the human race even continue to exist?
Would not everyone, on the contrary, have so much compassion for the coming generation that he would rather spare it the burden of existence, or at least refuse to take it upon himself to cold-bloodedly impose it on them?

Schopenhauer also compares life to a debt that is collected through urgent needs and torturing wants. We live by paying off the interest on this debt by constantly satisfying the desires of life; and the entirety of the debt is contracted in procreation: when we come into the world.

Anthropocentric antinatalism

Some pessimists, most notably Peter Wessel Zapffe and David Benatar, prescribe abstention from procreation as the best response to the ills of life. A person can only do so much to shield themselves from suffering or to help others in need. The best course of action, they argue, is not to bring others into a world where discomfort is guaranteed. They also suggest a scenario in which humanity decides not to continue to exist, but instead chooses to go down the route of phased extinction. The resulting extinction of the human species would not be regrettable but a good thing. They go as far as to prescribe non-procreation as the morally right — or even obligatory — course of action. Zapffe conveys this position through the words of the titular Last Messiah: "Know yourselves – be infertile and let the earth be silent after ye".

Wildlife antinatalism

Antinatalism can be extended to animals. Benatar clearly notes that his "argument applies not only to humans but also to all other sentient beings" and that "coming into existence harms all sentient beings". He reinforces this view when discussing extinction, saying that "it would be better, all things considered, if there were no more people (and indeed no more conscious life)."
It can be argued that since we have a prima facie obligation to help humans in need, and preventing future humans from coming into existence is helping them, and there is no justification for treating animals worse, we have a similar obligation towards animals living in the wild. That is, we should also help alleviate their suffering and introduce certain interventions to prevent them from coming into the world — a position which may be called "wildlife antinatalism".

Suicide

Some pessimists, including David Benatar and Julio Cabrera, argue that in some extreme situations, such as intense pain, terror, and slavery, people are morally justified in ending their own lives. Although this will not resolve the human predicament, it may at the very least stop further suffering or moral degradation of the person in question. Cabrera says that dying is usually neither pleasant nor dignified, so suicide is the only way to choose the manner of one's death. He writes, "If you want to die well, you must be the artist of your own death; nobody can replace you in that." Arthur Schopenhauer rejects various objections to suicide stemming from religion, as well as those based on accusations of cowardice or insanity directed at the person who decides to end their own life. In this perspective, we should be compassionate towards those who take their own lives — we should understand that someone may not be able to bear the sufferings present in their own life, and that one's own life is something to which one has an indisputable right. Nevertheless, Schopenhauer does not see suicide as a solution to the sufferings of existence. His opposition to suicide is rooted in his metaphysical system. Schopenhauer focuses on human nature — which is governed by the Will. This means that we are in a never-ending cycle of striving to achieve our ends, feeling dissatisfied, feeling bored, and once again desiring something else.
Yet because the Will is the inner essence of existence, the source of our suffering is not exactly in us, but in the world itself. Taking one's life is a mistake, for one would still like to live, simply in better conditions. The suicidal person still desires goods in life — a "person who commits suicide stops living precisely because he cannot stop willing". It is not one's own individual life that is the source of one's suffering, but the Will, the ceaselessly striving nature of existence. The mistake lies in annihilating an individual life rather than the Will itself. The Will cannot be negated by ending one's life, so suicide is not a solution to the sufferings embedded in existence itself. David Benatar considers many objections against suicide, such as it being a violation of the sanctity of human life, a violation of the person's right to life, unnatural, or a cowardly act, to be unconvincing. The only relevant considerations that should be taken into account in the matter of suicide are those regarding people to whom we hold some special obligations, such as our family members. In general, for Benatar the question of suicide is more a question of dealing with the particular miseries of one's life than a moral problem per se. Consequently, he argues that, in certain situations, suicide is not only morally justified but is also a rational course of action. Benatar's arguments regarding the poor quality of human life do not lead him to the conclusion that death is generally preferable to the continuation of life. But they do serve to clarify why there are cases in which one's continued existence would be worse than death, as they make it explicit that suicide is justified in a greater variety of situations than we would normally grant.
Every person's situation is different, and the question of the rationality of suicide must be considered from the perspective of each particular individual, based on their own hardships and prospects for the future. Jiwoon Hwang argued that the hedonistic interpretation of David Benatar's axiological asymmetry of harms and benefits entails promortalism — the view that it is always preferable to cease to exist than to continue to live. Hwang argues that the absence of pleasure is not bad in all of the following cases: for the one who never exists, for the one who exists, and for the one who has ceased to exist. Here, "not bad" means that it is not worse than the presence of pleasure for the one who exists. This is consistent with Benatar's statement that the presence of pleasure for the existing person is not an advantage over the absence of pleasure for the never-existing, and vice versa.

Collective ending of all life

Eduard von Hartmann was against all individualistic forms of the abolition of suffering, prominent in Buddhism and in Schopenhauer's philosophy, arguing that these approaches fail to address the problem of continued suffering for others. Instead, he opted for a collective solution: he believed that life progresses towards greater rationality — culminating in humankind — and that as humans became more educated and more intelligent, they would see through various illusions regarding the abolition of suffering, eventually realizing that the problem lies ultimately in existence itself. Thus, humanity as a whole would recognize that the only way to end the suffering present in life is to end life itself. This would happen in the future, when people would have advanced technologically to the point where they could destroy the whole of nature. That, for von Hartmann, would be the ultimate negation of the Will by Reason.
Pessimism and other philosophical topics

Animals

Aside from the human predicament, many philosophical pessimists also emphasize the negative quality of the lives of non-human animals, criticizing the notion of nature as a "wise and benevolent" creator. In his 1973 Pulitzer Prize-winning book The Denial of Death, Ernest Becker describes it thus:

What are we to make of a creation in which the routine activity is for organisms to be tearing others apart with teeth of all types—biting, grinding flesh, plant stalks, bones between molars, pushing the pulp greedily down the gullet with delight, incorporating its essence into one's own organization, and then excreting with foul stench and gasses the residue. Everyone reaching out to incorporate others who are edible to him. The mosquitoes bloating themselves on blood, the maggots, the killer-bees attacking with a fury and a demonism, sharks continuing to tear and swallow while their own innards are being torn out—not to mention the daily dismemberment and slaughter in "natural" accidents of all types (...) Creation is a nightmare spectacular taking place on a planet that has been soaked for hundreds of millions of years in the blood of all its creatures. The soberest conclusion that we could make about what has actually been taking place on the planet for about three billion years is that it is being turned into a vast pit of fertilizer. But the sun distracts our attention, always baking the blood dry, making things grow over it, and with its warmth giving the hope that comes with the organism's comfort and expansiveness.

The theory of evolution by natural selection can be said to justify a form of philosophical pessimism based on a negative evaluation of the lives of animals in the wild. In 1887, Charles Darwin expressed a feeling of revolt at the notion that God's benevolence is limited, stating: "for what advantage can there be in the sufferings of millions of the lower animals throughout almost endless time?"
The animal activist and moral philosopher Oscar Horta argues that because of evolutionary processes, not only is suffering in nature inevitable, but it actually prevails over happiness. For evolutionary biologist Richard Dawkins, nature is in no way benevolent. He argues that what is at stake in biological processes is nothing more than the survival of the DNA sequences of genes. Dawkins also asserts that as long as DNA is transmitted, it does not matter how much suffering such transmission entails, and that genes do not care about the amount of suffering they cause because nothing affects them emotionally. In other words, nature is indifferent to unhappiness, unless it has an impact on the survival of DNA. Although Dawkins does not explicitly establish the prevalence of suffering over well-being, he considers unhappiness to be the "natural state" of wild animals:

The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying of starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.... In a universe of blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but blind, pitiless indifference.

Abortion

Even though pessimists agree on the judgment that life is bad, and some pessimistic antinatalists criticise procreation, their views on abortion differ.
Pro-death view

David Benatar holds a "pro-death" stance on abortion. He argues that in the earlier stages of pregnancy, when the fetus has not yet developed consciousness and has no morally relevant interests, we should adopt a presumption against carrying the fetus to term. What demands justification is not the act of abortion, but the failure to abort the fetus (in the early stages of pregnancy). Benatar does not argue that such early abortions should be mandatory, only that it would be preferable to perform them.

Anti-abortion view

Julio Cabrera notes that abortion requires consideration of, and action upon, something that is already there. He argues that we must take it into our moral deliberations, regardless of the nature of that thing. He gives the following argument against abortion:

P1. From the perspective of negative ethics, it is wrong to eliminate another human being only for our benefit, hence treating him as an obstacle to be removed.
P2. It's morally good to act in favor of those who cannot defend themselves.
P3. A fetus is something that begins to terminate from the very beginning, and it terminates as a human being.
P4. A human fetus is, within the context of gestation, pregnancy and birth, the most helpless being involved.
Conclusion: Therefore, from the perspective of negative ethics, it is morally wrong to eliminate (abort) a human being.

Cabrera further elaborates on the argument with several points. Since we are all valueless, the victimizer has no greater value than the victim to justify the killing. It's better to err on the side of caution and not abort, because it's difficult to say when a fetus becomes a human. A fetus has the potential to become a rational agent with consciousness, feelings, preferences, thoughts, etc. We can think of humans as beings who are always in self-construction, and a fetus is such a type of being. Furthermore, a fetus is — like any other human being — in a process of "decay".
Finally, we should also debate the status of those who perform abortions and of the women who undergo them, not just the status of the fetus.

Death

For Arthur Schopenhauer, every action (eating, sleeping, breathing, etc.) was a struggle against death, although one which always ends with death's triumph over the individual. Since other animals also fear death, the fear of death is not rational, but more akin to an instinct or a drive, which he called the will to life. In the end, however, death dissolves the individual and, with it, all fears, pains, and desires. Schopenhauer views death as a "great opportunity not to be I any longer". Our inner essence is not destroyed, though, since we are a manifestation of the universal Will. David Benatar has a negative view not only of coming into existence, but also of ceasing to exist. Even though it is a harm for us to come into existence, once we do exist we have an interest in continuing to exist: we have plans for the future, we want to achieve our goals, and there may be future goods we could benefit from if we continue to exist. But death annihilates us, thereby robbing us of our future and of the possibility of realizing our plans.

Criticism

Plümacher's criticisms of Schopenhauer

Olga Plümacher criticizes Schopenhauer's system on a variety of points. According to Schopenhauer, an individual person is itself a manifestation of the Will. But if that is the case, then the negation of the Will is also an illusion, since if it were genuine, it would bring about the end of the world. Furthermore, she notices that, for Schopenhauer, the non-existence of the world is preferable to its existence. However, this is not an absolute statement (one claiming that the world is the worst possible), but a comparative one (claiming that the world is worse than something else, namely its non-existence).
Against the claim that pleasures are only ever negative

A claim pessimists often make is that pleasures are negative in nature — they are mere satisfactions of desires or removals of pains. Some object to this by providing intuitive counterexamples, in which we are engaged in something pleasurable that seems to add some genuine pleasure above the neutral, undisturbed state. This objection can be presented like this:

Imagine that I am enjoying the state of being hydrated, full and warm. Then somebody offers me a small chocolate bon-bon, and I greatly enjoy the delicious taste of the dark chocolate. Why am I not experiencing more pleasure now than I was before (...)?

The objection here is that we can clearly introspect that we feel something added to our experience, not merely that we no longer feel some pain, boredom, or desire. Such experiences include pleasant surprises, waking up in a good mood, savoring delicious meals, anticipating something good that will likely happen to us, and others. The response to these objections from counterexamples can run as follows. Usually, we do not focus enough on our present state to notice all disturbances (discontentment). It's likely we could notice some disturbances had we paid enough attention — even in situations where we think we experience genuine pleasure. Thus, it's at least plausible that these seemingly positive states have various imperfections, and that we are not, in fact, undisturbed; and, therefore, we are below the hedonic neutral state.

Influence outside philosophy

TV and cinema

The character of Rust Cohle in the first season of the television series True Detective is noted for expressing a philosophically pessimistic worldview; the creator of the series was inspired by the works of Thomas Ligotti, Emil Cioran, Eugene Thacker and David Benatar when creating the character.

Literature

Dostoevsky, Fyodor (1864). Notes from Underground
Le Guin, Ursula K. (1973).
The Ones Who Walk Away from Omelas Leopardi, Giacomo (1835). Canti Ligotti, Thomas (2018). The Conspiracy Against the Human Race: A Contrivance of Horror. Penguin Books McCarthy, Cormac (1992/1985). Blood Meridian; or, The Evening Redness in the West. Knopf Doubleday Publishing Group McCarthy, Cormac (2006). The Road. Knopf Doubleday Publishing Group Pessoa, Fernando (1982). The Book of Disquiet Thacker, Eugene (2018). Infinite Resignation. Repeater Thomson, James "B.V." (1874). The City of Dreadful Night Yalom, Irvin D. (2005). The Schopenhauer Cure. HarperCollins (The novel switches between the current events happening around a therapy group and the psychobiography of Arthur Schopenhauer). Voltaire (1759). Candide Novels and short stories by Guy de Maupassant. Labadie, Laurance (2014). Anarcho-Pessimism: the collected writings of Laurance Labadie. Ardent Press. Andreyev, Leonid (1906). "Lazarus" See also References Notes Citations Bibliography Primary literature Books Bahnsen, Julius (1877). Das Tragische als Weltgesetz und der Humor als ästhetische Gestalt des Metaphysischen. Lauenberg: Verlag von F. Ferley Benatar, David (2017). The Human Predicament: A Candid Guide to Life's Biggest Questions. Oxford University Press Cabrera, Julio (2019). Discomfort and Moral Impediment: The Human Situation, Radical Bioethics and Procreation. Cambridge Scholars Publishing Cioran, Emil (2012/1949). A Short History of Decay. Arcade Publishing Leopardi, Giacomo (1882/1827). Essays and Dialogues. Mainländer, Philipp (2024/1876). Philosophy of Redemption. Brisbane: Irukandji Press Schopenhauer, Arthur (2010/1818). The World as Will and Representation, Volume 1. Cambridge University Press Schopenhauer, Arthur (2018/1844). The World as Will and Representation, Volume 2. Cambridge University Press von Hartmann, Eduard (2014/1884). Philosophy of the Unconscious: Speculative Results According to the Inductive Method of Physical Science. Routledge Zapffe, Peter Wessel (2024/1941). 
On the Tragic. Peter Lang Inc., International Academic Publishers Essays Academic papers Secondary literature Books Beiser, Frederick C. (2016). Weltschmerz: Pessimism in German Philosophy, 1860–1900. Oxford: Oxford University Press Coates, Ken (2014). Anti-Natalism: Rejectionist Philosophy from Buddhism to Benatar. First Edition Design Publishing Dienstag, Joshua Foa (2006). Pessimism: Philosophy, Ethic, Spirit. Princeton University Press Feltham, Colin (2016). Depressive Realism: Interdisciplinary Perspectives. Routledge Ligotti, Thomas (2011). The Conspiracy Against the Human Race: A Contrivance of Horror. Hippocampus Press Saltus, Edgar (2022/1885). The Philosophy of Disenchantment. Legare Street Press Segev, Mor (2022). The Value of the World and of Oneself: Philosophical Optimism and Pessimism from Aristotle to Modernity. Oxford: Oxford University Press Sully, James (2022/1877). Pessimism: A History and a Criticism. Legare Street Press Tsanoff, Radoslav A. (1931). The Nature of Evil. New York: The MacMillan Company van der Lugt, Mara (2021). Dark Matters: Pessimism and the Problem of Suffering. Princeton University Press Book chapters Academic papers External links Pessimism by Mara Van der Lugt in The Philosopher APA series on philosophical pessimism Metzinger, Thomas (2017). Benevolent Artificial Anti-Natalism (BAAN). Edge Revista Hénadas, Spanish magazine about philosophical pessimism Listing of papers and books on pessimism on PhilPapers Pessimism
Moral constructivism
Moral constructivism or ethical constructivism is a view both in meta-ethics and in normative ethics. Metaethical constructivism holds that the correctness of moral judgments, principles, and values is determined by their being the result of a suitable constructivist procedure. In other words, normative values are not something discovered by the use of theoretical reason, but a construction of human practical reason. In normative ethics, moral constructivism is the view that principles and values within a given normative domain can be justified by the very fact that they are the result of a suitable constructivist device or procedure. See also Constructivism in Practical Philosophy Ethical subjectivism Moral rationalism Pragmatic ethics References External links Epistemological theories Metaethics Normative ethics Rationalism A priori Constructivism Ethical theories
Equifinality
Equifinality is the principle that in open systems a given end state can be reached by many potential means. The term and concept are due to the German developmental biologist Hans Driesch, later applied by the Austrian Ludwig von Bertalanffy, the founder of general systems theory, and by William T. Powers, the founder of perceptual control theory. Driesch and von Bertalanffy preferred this term, in contrast to "goal", in describing complex systems' similar or convergent behavior. Powers simply emphasized the flexibility of response: the same end state may be achieved via many different paths or trajectories. In closed systems, a direct cause-and-effect relationship exists between the initial condition and the final state of the system: when a computer's 'on' switch is pushed, the system powers up. Open systems (such as biological and social systems), however, operate quite differently. The idea of equifinality suggests that similar results may be achieved from different initial conditions and in many different ways. This phenomenon has also been referred to as isotelesis (from Greek ἴσος isos "equal" and τέλεσις telesis "the intelligent direction of effort toward the achievement of an end") when it occurs in games involving superrationality. Overview In business, equifinality implies that firms may establish similar competitive advantages based on substantially different competencies. In psychology, equifinality refers to how different early experiences in life (e.g., parental divorce, physical abuse, parental substance abuse) can lead to similar outcomes (e.g., childhood depression). In other words, there are many different early experiences that can lead to the same psychological disorder. In archaeology, equifinality refers to how different historical processes may lead to a similar outcome or social formation. 
For example, the development of agriculture or the bow and arrow occurred independently in many different areas of the world, yet for different reasons and through different historical trajectories. This highlights that generalizations based on cross-cultural comparisons cannot be made uncritically. In Earth and environmental sciences, two general types of equifinality are distinguished: process equifinality (concerned with real-world open systems) and model equifinality (concerned with conceptual open systems). For example, process equifinality in geomorphology indicates that similar landforms might arise as a result of quite different sets of processes. Model equifinality refers to a condition where distinct configurations of model components (e.g., distinct model parameter values) can lead to similar or equally acceptable simulations (or representations of the real-world process of interest). This similarity or equal acceptability is conditional on the objective functions and criteria of acceptability defined by the modeler. While model equifinality has various facets, parameter and structural equifinality are the most widely recognized and studied in modeling work. Equifinality (particularly parameter equifinality) and Monte Carlo experiments are the foundation of the GLUE method, which was the first generalized method for uncertainty assessment in hydrological modeling. GLUE is now widely used within and beyond environmental modeling. See also GLUE – Generalized Likelihood Uncertainty Estimation (when modeling environmental systems there are many different model structures and parameter sets that may be behavioural or acceptable in reproducing the behaviour of that system) TMTOWTDI – Computer programming maxim: "there is more than one way to do it" Underdetermination Consilience Convergent evolution Teleonomy Degeneracy (biology) Kruskal's principle Multicollinearity References Publications Bertalanffy, Ludwig von, General Systems Theory, 1968 Beven, K.J. 
and Binley, A.M., 1992. The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6, pp. 279–298. Beven, K.J. and Freer, J., 2001a. Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems, Journal of Hydrology, 249, 11–29. Croft, Gary W., Glossary of Systems Theory and Practice for the Applied Behavioral Sciences, Syntropy Incorporated, Freeland, WA, Prepublication Review Copy, 1996 Durkin, James E. (ed.), Living Groups: Group Psychotherapy and General System Theory, Brunner/Mazel, New York, 1981 Mash, E. J., & Wolfe, D. A. (2005). Abnormal Child Psychology (3rd edition). Wadsworth Canada. pp. 13–14. Weisbord, Marvin R., Productive Workplaces: Organizing and Managing for Dignity, Meaning, and Community, Jossey-Bass Publishers, San Francisco, 1987 Tang, J.Y. and Zhuang, Q. (2008). Equifinality in parameterization of process-based biogeochemistry models: A significant uncertainty source to the estimation of regional carbon dynamics, J. Geophys. Res., 113, G04010. Systems theory
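The parameter equifinality underlying GLUE can be illustrated with a toy Monte Carlo experiment. This is only a sketch under assumed conditions (the linear model, the Nash-Sutcliffe likelihood measure, and the acceptability threshold of 0.9 are illustrative choices, not taken from the cited papers): many distinct parameter sets turn out to be equally acceptable, or "behavioural", in reproducing the same observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations" from a simple linear model with noise.
x = np.linspace(0, 10, 50)
y_obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sampling of parameter sets (a, b) over broad prior ranges.
samples = rng.uniform([0.0, -5.0], [4.0, 5.0], size=(5000, 2))
scores = np.array([nse(a * x + b, y_obs) for a, b in samples])

# "Behavioural" sets: every parameterisation above the chosen threshold
# is treated as equally acceptable -- this is parameter equifinality.
behavioural = samples[scores > 0.9]
print(f"{len(behavioural)} distinct parameter sets are equally acceptable")
```

In a full GLUE analysis the behavioural sets would then be likelihood-weighted to produce prediction bounds; the point here is only that acceptability defines a region of parameter space, not a single best set.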
Convergent thinking
Convergent thinking is a term coined by Joy Paul Guilford as the opposite of divergent thinking. It generally means the ability to give the "correct" answer to questions that do not require novel ideas, for instance on standardized multiple-choice tests for intelligence. Relevance Convergent thinking is the type of thinking that focuses on coming up with the single, well-established answer to a problem. It is oriented toward deriving the single best, or most often correct answer to a question. Convergent thinking emphasizes speed, accuracy, and logic and focuses on recognizing the familiar, reapplying techniques, and accumulating stored information. It is most effective in situations where an answer readily exists and simply needs to be either recalled or worked out through decision making strategies. A critical aspect of convergent thinking is that it leads to a single best answer, leaving no room for ambiguity. In this view, answers are either right or wrong. The solution that is derived at the end of the convergent thinking process is the best possible answer the majority of the time. Convergent thinking is also linked to knowledge as it involves manipulating existing knowledge by means of standard procedures. Knowledge is another important aspect of creativity. It is a source of ideas, suggests pathways to solutions, and provides criteria of effectiveness and novelty. Convergent thinking is used as a tool in creative problem solving. When an individual is using critical thinking to solve a problem they consciously use standards or probabilities to make judgments. This contrasts with divergent thinking where judgment is deferred while looking for and accepting many possible solutions. Convergent thinking is often used in conjunction with divergent thinking. Divergent thinking typically occurs in a spontaneous, free-flowing manner, where many creative ideas are generated and evaluated. 
Multiple possible solutions are explored in a short amount of time, and unexpected connections are drawn. After the process of divergent thinking has been completed, ideas and information are organized and structured using convergent thinking; decision-making strategies are then applied, leading to a single best, or most often correct, answer. Examples of divergent thinking include using brainstorming, free writing and creative thinking at the beginning of the problem solving process to generate possible solutions that can be evaluated later. Once a sufficient number of ideas have been explored, convergent thinking can be used. Knowledge, logic, probabilities and other decision-making strategies are taken into consideration as the solutions are evaluated individually in a search for a single best answer, which when reached is unambiguous. Convergent vs. divergent thinking Personality The personality correlates of divergent and convergent thinking have been studied. Results indicate that many personality traits are associated with divergent thinking (e.g., ideational fluency). Two of the most commonly identified correlates are Openness and Extraversion, which have been found to facilitate divergent thinking production. Openness assesses intellectual curiosity, imagination, artistic interests, liberal attitudes, and originality. See the Divergent thinking page for further details. The fact that Openness was found to be one of the strongest personality correlates of divergent thinking is not surprising, as previous studies have suggested that Openness be interpreted as a proxy for creativity. Although Openness conceptualizes individual differences in facets other than creativity, the high correlation between Openness and divergent thinking is indicative of two different ways of measuring the same aspects of creativity. Openness is a self-report of one's preference for thinking "outside the box". Divergent thinking tests represent a performance-based measure of the same. 
While some studies have found no personality effects on convergent thinking, large-scale meta-analyses have found numerous personality traits to be related to such reasoning abilities (e.g., corrected r = .31 with openness and -.30 with the volatility aspect of neuroticism). Brain activity Changes in brain activity have been studied in subjects during both convergent and divergent thinking. To do this, researchers studied electroencephalography (EEG) patterns of subjects during convergent and divergent thinking tasks. Different patterns of change in the EEG parameters were found during each type of thinking. Compared with a resting control group, both convergent and divergent thinking produced significant desynchronization of the Alpha 1 and 2 rhythms. Meanwhile, convergent thinking induced coherence increases in the Theta 1 band that were more caudal and right-sided. On the other hand, divergent thinking demonstrated amplitude decreases in the caudal regions of the cortex in the Theta 1 and 2 bands. The large increase in amplitude and coherence indicates a close synchronization between the two hemispheres of the brain. The successful generation of hypotheses during divergent thinking seems to induce positive emotions, which may be due in part to increases in the complexity and performance measures of creative thinking and in inter-hemispheric coherence. Finally, the observed dominance of the right hemisphere, and a 'cognitive axis' coupling the left occipital and right frontal regions (in contrast to the right occipital – left frontal 'axis' characterizing analytic thinking), may reflect the EEG pattern of unconscious mental processing during successful divergent thinking. Convergent and divergent thinking depend on the locus coeruleus neurotransmission system, which modulates noradrenaline levels in the brain. This system plays important roles in cognitive flexibility and the explore/exploit tradeoff problem (multi-armed bandit problem). 
Intellectual ability A series of standard intelligence tests were used to measure both the convergent and divergent thinking abilities of adolescents. Results indicate that subjects classified as high on divergent thinking had significantly higher word fluency and reading scores than subjects classified as low on divergent thinking. Furthermore, those who were high in divergent thinking also demonstrated higher anxiety and penetration scores. Thus, subjects who are high in divergent thinking can be characterized as having perceptual processes that have matured and become adequately controlled in an unconventional way. Conversely, subjects in the high convergent thinking group had higher grade averages for the previous school year, less difficulty with homework, and also indicated that their parents pressed them towards post-secondary education. These were the only significant relationships regarding the convergent thinking measures. This suggests that these cognitive dimensions are independent of one another. Future investigations into this topic should focus more upon the developmental, cognitive and perceptual aspects of personality among divergent and convergent thinkers, rather than their attitude structures. Creative ability Creative ability was measured in a study using convergent tasks, which require a single correct answer, and divergent tasks, which require producing many different answers of varying correctness. Two types of convergent tasks were used: the first was a remote associates task, which gives the subject three words and asks what word the three words are all related to. The second type of convergent task was insight problems, which give the subjects some contextual facts and then ask them a question requiring interpretation. For the remote associates tasks, the convergent thinkers correctly solved more of the five remote associates problems than did those using divergent thinking. 
This was demonstrated to be significantly different by a one-way ANOVA. In addition, when responding to insight problems, participants using convergent thinking solved more insight problems than did the control group; however, there was no significant difference between subjects using convergent or divergent thinking. For the divergent thinking tasks, although together all of the divergent tasks demonstrated a correlation, they were not significant when examined between conditions. Mood With increasing evidence suggesting that emotions can affect underlying cognitive processes, recent approaches have also explored the opposite: that cognitive processes can also affect one's mood. Research indicates that preparing for a creative thinking task induces mood swings depending on what type of thinking is used for the task. The results demonstrate that carrying out a task requiring creative thinking does have an effect on one's mood. This provides considerable support for the idea that mood and cognition are not only related, but also that this relation is reciprocal. Additionally, divergent and convergent thinking impact mood in opposite ways. Divergent thinking led to a more positive mood, whereas convergent thinking had the opposite effect, leading to a more negative mood. Practical use Convergent thinking is a fundamental tool in a child's education. Today, most educational opportunities are tied to one's performance on standardized tests that are often multiple choice in nature. When a student contemplates the possible answers available, they use convergent thinking to weigh alternatives within a construct. This allows one to find a single best solution that is measurable. Examples of convergent questions in teaching in the classroom: On reflecting over the entirety of the play Hamlet, what were the main reasons why Ophelia went mad? What is the chemical reaction for photosynthesis? What are signs of nitrogen deficiency in plants? 
Which breeds of livestock would be best adapted for South Texas? Criticism The idea of convergent thinking has been critiqued by researchers who claim that not all problems have solutions that can be effectively ranked. Convergent thinking assigns a position to one solution over another. The problem is that when one is dealing with more complex problems, the individual may not be able to appropriately rank the solutions available to them. In these instances, researchers indicate that when dealing with complex problems, other variables such as one's gut feeling or instinctive problem solving abilities also have a role in determining a solution to a given problem. Furthermore, convergent thinking has also been said to devalue minority arguments. In a study where experimental manipulations were used to motivate subjects to engage in convergent or divergent thinking when presented with either majority or minority support for persuasive arguments, a pattern emerged under the convergent thinking condition whereby majority support produced more positive attitudes on the focal issue. Conversely, minority support for the argument had no effect on the subjects. Convergent thinkers may be so focused on selecting the best answer that they fail to appropriately evaluate minority opinion, and could end up dismissing accurate solutions. See also Divergent thinking References Problem solving skills
Monism
Monism attributes oneness or singleness to a concept, such as existence. Various kinds of monism can be distinguished: Priority monism states that all existing things go back to a source that is distinct from them; e.g., in Neoplatonism everything is derived from The One. In this view only the One is ontologically basic or prior to everything else. Existence monism posits that, strictly speaking, there exists only a single thing, the universe, which can only be artificially and arbitrarily divided into many things. Substance monism asserts that the variety of existing things can be explained in terms of a single reality or substance: only one kind of substance exists, although many things may be made up of this substance, e.g., matter or mind. Dual-aspect monism is the view that the mental and the physical are two aspects of, or perspectives on, the same substance. Neutral monism holds that the fundamental nature of reality is neither mental nor physical; in other words, it is "neutral". Definitions There are two sorts of definitions for monism: The wide definition: a philosophy is monistic if it postulates unity of the origin of all things; all existing things return to a source that is distinct from them. The restricted definition: this requires not only unity of origin but also unity of substance and essence. Although the term monism is derived from Western philosophy to typify positions in the mind–body problem, it has also been used to typify religious traditions. In modern Hinduism, the term "absolute monism" has been applied to Advaita Vedanta, though Philip Renard points out that this may be a Western interpretation, bypassing the intuitive understanding of a nondual reality. It is more generally categorized by scholars as a form of absolute nondualism. 
History Material monism can be traced back to the pre-Socratic philosophers who sought to understand the arche or basic principle of the universe in terms of different material causes. These included Thales, who argued that the basis of everything was water, Anaximenes, who claimed it was air, and Heraclitus, who believed it to be fire. Later, Parmenides described the world as "One", which could not change in any way. Zeno of Elea defended this view of everything being a single entity through his paradoxes, which aim to show the existence of time, motion and space to be illusory. Baruch Spinoza argued that 'God or Nature' (Deus sive Natura) is the only substance of the universe, the two names being interchangeable. This is because God/Nature has all the possible attributes and no two substances can share an attribute, which means there can be no substances other than God/Nature. Monism has been discussed thoroughly in Indian philosophy and Vedanta throughout their history, starting as early as the Rig Veda. The term monism was introduced in the 18th century by Christian von Wolff in his work Logic (1728), to designate types of philosophical thought in which the attempt was made to eliminate the dichotomy of body and mind and explain all phenomena by one unifying principle, or as manifestations of a single substance. The mind–body problem in philosophy examines the relationship between mind and matter, and in particular the relationship between consciousness and the brain. The problem was addressed by René Descartes in the 17th century, resulting in Cartesian dualism, and by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian and more specifically Indian traditions. The term was later also applied to the theory of absolute identity set forth by Hegel and Schelling. Thereafter the term was more broadly used, for any theory postulating a unifying principle. 
The opposing thesis, dualism, was also broadened, to include pluralism. According to Urmson, as a result of this extended use, the term is "systematically ambiguous". According to Jonathan Schaffer, monism lost popularity due to the emergence of analytic philosophy in the early twentieth century, which revolted against the neo-Hegelians. Rudolf Carnap and A. J. Ayer, who were strong proponents of positivism, "ridiculed the whole question as incoherent mysticism". The mind–body problem has reemerged in social psychology and related fields, with interest in mind–body interaction and the rejection of Cartesian mind–body dualism in the identity thesis, a modern form of monism. Monism is also still relevant to the philosophy of mind, where various positions are defended. Types Different types of monism include: Substance monism, "the view that the apparent plurality of substances is due to different states or appearances of a single substance" Attributive monism, "the view that whatever the number of substances, they are of a single ultimate kind" Epistemological monism, where "ultimately, everything that can be thought, observed and engaged, shares one conceptual system of interaction, however complex." Partial monism, "within a given realm of being (however many there may be) there is only one substance" Existence monism, "the view that there is only one concrete object token (The One, "Τὸ Ἕν" or the Monad)" Priority monism, "the whole is prior to its parts" or "the world has parts, but the parts are dependent fragments of an integrated whole" Property monism, "the view that all properties are of a single type (e.g., only physical properties exist)" Genus monism, "the doctrine that there is a highest category; e.g., being" Views contrasting with monism are: Metaphysical dualism, which asserts that there are two ultimately irreconcilable substances or realities such as Good and Evil, for example, Gnosticism and Manichaeism. 
Metaphysical pluralism, which asserts three or more fundamental substances or realities. Metaphysical nihilism, which negates any of the above categories (substances, properties, concrete objects, etc.). Monism in modern philosophy of mind can be divided into three broad categories. Certain positions do not fit easily into these categories, such as functionalism, anomalous monism, and reflexive monism. Moreover, they do not define the meaning of "real". Monistic philosophers Pre-Socratic While the lack of information makes it difficult in some cases to be sure of the details, the following pre-Socratic philosophers thought in monistic terms: Thales: Water Anaximander: Apeiron (meaning 'the undefined infinite'). Reality is some one thing, but we cannot know what. Anaximenes of Miletus: Air Heraclitus: Change, symbolized by fire (in that everything is in constant flux). Parmenides: Being or Reality is an unmoving perfect sphere, unchanging, undivided. Post-Socrates Neopythagoreans such as Apollonius of Tyana centered their cosmologies on the Monad or One. Stoics taught that there is only one substance, identified as God. Middle Platonism, in works such as those by Numenius, taught that the Universe emanates from the Monad or One. Neoplatonism is monistic. Plotinus taught that there was an ineffable transcendent god, 'The One,' of which subsequent realities were emanations. From The One emanates the Divine Mind (Nous), the Cosmic Soul (Psyche), and the World (Cosmos). Modern Monistic neuroscientists György Buzsáki Francis Crick Karl Friston Eric Kandel Mark Solms Rodolfo Llinás Ivan Pavlov Roger Sperry Religion Pantheism Pantheism is the belief that everything composes an all-encompassing, immanent God, or that the universe (or nature) is identical with divinity. Pantheists thus do not believe in a personal or anthropomorphic god, though interpretations of the term differ. 
Pantheism was popularized in the modern era as both a theology and philosophy based on the work of the 17th-century philosopher Baruch Spinoza, whose Ethics was an answer to Descartes' famous dualist theory that the body and spirit are separate. Spinoza held that the two are the same, and this monism is a fundamental quality of his philosophy. He was described as a "God-intoxicated man," and used the word God to describe the unity of all substance. Although the term pantheism was not coined until after his death, Spinoza is regarded as its most celebrated advocate. H. P. Owen claimed that pantheism is closely related to monism, as pantheists too believe all of reality is one substance, called Universe, God or Nature. Panentheism, a slightly different concept (explained below), however, is dualistic. Some of the most famous pantheists are the Stoics, Giordano Bruno and Spinoza. Panentheism Panentheism (from Greek (pân) "all"; (en) "in"; and (theós) "God"; "all-in-God") is a belief system that posits that the divine (be it a monotheistic God, polytheistic gods, or an eternal cosmic animating force) interpenetrates every part of nature, but is not one with nature. Panentheism differentiates itself from pantheism, which holds that the divine is synonymous with the universe. In panentheism, there are two types of substance: "pan", the universe, and God. The universe and the divine are not ontologically equivalent. God is viewed as the eternal animating force within the universe. In some forms of panentheism, the cosmos exists within God, who in turn "transcends", "pervades" or is "in" the cosmos. While pantheism asserts that 'All is God', panentheism claims that God animates all of the universe, and also transcends the universe. In addition, some forms indicate that the universe is contained within God, as in the Judaic concept of Tzimtzum. Much Hindu thought is highly characterized by panentheism and pantheism. 
Paul Tillich has argued for such a concept within Christian theology, as has liberal biblical scholar Marcus Borg and mystical theologian Matthew Fox, an Episcopal priest. Pandeism Pandeism or pan-deism (from Greek pan "all" and Latin deus "god", in the sense of deism) is a term describing beliefs coherently incorporating or mixing logically reconcilable elements of pantheism (that "God", or a metaphysically equivalent creator deity, is identical to Nature) and classical deism (that the creator-god who designed the universe no longer exists in a status where it can be reached, and can instead be confirmed only by reason). It is therefore most particularly the belief that the creator of the universe actually became the universe, and so ceased to exist as a separate entity. Through this synergy pandeism claims to answer primary objections to deism (why would God create and then not interact with the universe?) and to pantheism (how did the universe originate and what is its purpose?). Indian and East Asian religions Characteristics The central problem in Asian (religious) philosophy is not the body-mind problem, but the search for an unchanging Real or Absolute beyond the world of appearances and changing phenomena, and the search for liberation from dukkha and from the cycle of rebirth. In Hinduism, substance-ontology prevails, seeing Brahman as the unchanging real beyond the world of appearances. In Buddhism, process ontology is prevalent, seeing reality as empty of an unchanging essence. Characteristic of various Asian philosophies and religions is the discernment of levels of truth, an emphasis on intuitive-experiential understanding of the Absolute, such as jnana, bodhi and jianxing (Chinese: 見性), the use of yin and yang within East Asian medicine, and an emphasis on the integration of these levels of truth and their understanding. 
Hinduism

Vedanta

Vedanta is the inquiry into and systematisation of the Vedas and Upanishads, seeking to harmonise the various and contrasting ideas found in those texts. Within Vedanta, different schools exist:

Vishishtadvaita, qualified monism, is from the school of Ramanuja;
Shuddhadvaita, in-essence monism, is the school of Vallabha;
Dvaitadvaita, differential monism, is a school founded by Nimbarka;
Achintya Bheda Abheda, a school of Vedanta founded by Chaitanya Mahaprabhu, representing the philosophy of inconceivable one-ness and difference. It can be understood as an integration of the strict dualist (dvaita) theology of Madhvacharya and the qualified monism (vishishtadvaita) of Ramanuja.

Modern Hinduism

The colonisation of India by the British had a major impact on Hindu society. In response, leading Hindu intellectuals began to study western culture and philosophy, integrating several western notions into Hinduism. This modernised Hinduism has, in its turn, gained popularity in the west. A major role was played in the 19th century by Swami Vivekananda in the revival of Hinduism and the spread of Advaita Vedanta to the west via the Ramakrishna Mission. His interpretation of Advaita Vedanta has been called Neo-Vedanta. In Advaita, Shankara suggests that meditation and Nirvikalpa Samadhi are means to gain knowledge of the already existing unity of Brahman and Atman, not the highest goal itself. Vivekananda, according to Gavin Flood, was "a figure of great importance in the development of a modern Hindu self-understanding and in formulating the West's view of Hinduism." Central to his philosophy is the idea that the divine exists in all beings, that all human beings can achieve union with this "innate divinity", and that seeing this divine as the essence of others will further love and social harmony. According to Vivekananda, there is an essential unity to Hinduism, which underlies the diversity of its many forms.
According to Flood, Vivekananda's view of Hinduism is the most common among Hindus today. This monism, according to Flood, is at the foundation of the earlier Upanishads, of theosophy in the later Vedanta tradition, and of modern Neo-Hinduism.

Buddhism

According to the Pāli Canon, both pluralism (nānatta) and monism (ekatta) are speculative views. A Theravada commentary notes that the former is similar to or associated with nihilism (ucchēdavāda), and the latter is similar to or associated with eternalism (sassatavada).

Levels of truth

Within Buddhism, a rich variety of philosophical and pedagogical models can be found. Various schools of Buddhism discern levels of truth:

The Two truths doctrine of the Madhyamaka
The Three Natures of the Yogacara
Essence-Function, or Absolute-relative, in Chinese and Korean Buddhism
The Trikaya formula, consisting of: the Dharmakāya or Truth body, which embodies the very principle of enlightenment and knows no limits or boundaries; the Sambhogakāya or body of mutual enjoyment, which is a body of bliss or clear light manifestation; and the Nirmāṇakāya or created body, which manifests in time and space.

The Prajnaparamita-sutras and Madhyamaka emphasize the non-duality of form and emptiness: "form is emptiness, emptiness is form", as the Heart Sutra says. In Chinese Buddhism this was understood to mean that ultimate reality is not a transcendental realm, but equal to the daily world of relative reality. This idea was well suited to the existing Chinese culture, which emphasized the mundane world and society. But this does not tell how the absolute is present in the relative world; this question is answered in such schemata as the Five Ranks of Tozan, the Oxherding Pictures, and Hakuin's Four Ways of Knowing.

Sikhism

Sikhism accords with the concept of absolute monism. Sikh philosophy advocates that all that our senses comprehend is an illusion; God is the ultimate reality. Forms, being subject to time, shall pass away.
God's Reality alone is eternal and abiding. The thought is that Atma (soul) is born from, and a reflection of, ParamAtma (Supreme Soul), and "will again merge into it", in the words of the fifth guru of the Sikhs, Guru Arjan, "just as water merges back into the water." God and Soul are fundamentally the same, identical in the same way as fire and its sparks: "Atam meh Ram, Ram meh Atam", meaning "The Ultimate Eternal reality resides in the Soul and the Soul is contained in Him". As from one stream millions of waves arise, and yet the waves, made of water, again become water, so all souls have sprung from the Universal Being and will blend into it again.

Abrahamic faiths

Judaism

Jewish thought considers God as separate from all physical, created things and as existing outside of time. According to Maimonides, God is an incorporeal being that caused all other existence; to admit corporeality to God is tantamount to admitting complexity to God, which contradicts God's status as the first cause and constitutes heresy. While Hasidic mystics considered the existence of the physical world a contradiction to God's simplicity, Maimonides saw no contradiction. According to Hasidic thought (particularly as propounded by the late 18th- and early 19th-century founder of Chabad, Shneur Zalman of Liadi), God is held to be immanent within creation for two interrelated reasons: A very strong Jewish belief is that "[t]he Divine life-force which brings [the universe] into existence must constantly be present ... were this life-force to forsake [the universe] for even one brief moment, it would revert to a state of utter nothingness, as before the creation ..." Simultaneously, Judaism holds as axiomatic that God is an absolute unity and that he is perfectly simple; thus, if his sustaining power is within nature, then his essence is also within nature.
The Vilna Gaon was very much against this philosophy, for he felt that it would lead to pantheism and heresy. According to some, this is the main reason for the Gaon's ban on Chasidism.

Christianity

Creator–creature distinction

Christians maintain that God created the universe ex nihilo and not from his own substance, so that the creator is not to be confused with creation but rather transcends it. There is a movement of "Christian panentheism".

Rejection of radical dualism

In On Free Choice of the Will, Augustine argued, in the context of the problem of evil, that evil is not the opposite of good, but rather merely the absence of good, something that does not have existence in itself. Likewise, C. S. Lewis described evil as a "parasite" in Mere Christianity, as he viewed evil as something that cannot exist without good to provide it with existence. Lewis went on to argue against dualism from the basis of moral absolutism, and rejected the dualistic notion that God and Satan are opposites, arguing instead that God has no equal, hence no opposite; Lewis rather viewed Satan as the opposite of Michael the archangel. Due to this, Lewis instead argued for a more limited type of dualism. Other theologians, such as Greg Boyd, have argued in more depth that the Biblical authors held a "limited dualism", meaning that God and Satan do engage in real battle, but only due to free will given by God, for the duration that God allows.

Mormonism

Latter Day Saint theology also expresses a form of dual-aspect monism via materialism and eternalism, claiming that creation was ex materia (as opposed to ex nihilo in conventional Christianity), as expressed by Parley Pratt and echoed by the movement's founder Joseph Smith, making no distinction between the spiritual and the material, these being not just similarly eternal but ultimately two manifestations of the same reality or substance.
Parley Pratt implies a vitalism paired with evolutionary adaptation, noting, "these eternal, self-existing elements possess in themselves certain inherent properties or attributes, in a greater or less degree; or, in other words, they possess intelligence, adapted to their several spheres." Parley Pratt's view is also similar to Gottfried Leibniz's monadology, which holds that "reality consists of mind atoms that are living centers of force." Brigham Young anticipates a proto-mentality of elementary particles with his vitalist view: "there is life in all matter, throughout the vast extent of all the eternities; it is in the rock, the sand, the dust, in water, air, the gases, and in short, in every description and organization of matter; whether it be solid, liquid, or gaseous, particle operating with particle." The LDS conception of matter is "essentially dynamic rather than static, if indeed it is not a kind of living energy, and that it is subject at least to the rule of intelligence." John A. Widtsoe held a similar, more vitalist view, that "Life is nothing more than matter in motion; that, therefore, all matter possess a kind of life... Matter... [is] intelligence... hence everything in the universe is alive." However, Widtsoe resisted outright affirming a belief in panpsychism.

Islam

Quran

Vincent Cornell argues that the Quran provides a monist image of God by describing reality as a unified whole, with God being a single concept that would describe or ascribe all existing things. But most argue that Abrahamic religious scriptures, especially the Quran, see creation and God as two separate existences. It explains that everything has been created by God and is under his control, but at the same time distinguishes creation as being dependent on the existence of God.

Sufism

Some Sufi mystics advocate monism. One of the most notable was the 13th-century Persian poet Rumi (1207–1273), who espoused monism in his didactic poem Masnavi.
Other Sufi mystics, however, such as Ahmad Sirhindi, upheld dualistic monotheism (the separation of God and the Universe). The most influential of the Islamic monists was the Sufi philosopher Ibn Arabi (1165–1240). He developed the concept of 'unity of being' (Arabic: waḥdat al-wujūd), which some argue is a monistic philosophy. Born in al-Andalus, he made an enormous impact on the Muslim world, where he was crowned "the great Master". In the centuries following his death, his ideas became increasingly controversial. Ahmad Sirhindi criticised the monistic understanding of 'unity of being', advocating the dualistic-compatible 'unity of witness' (Arabic: wahdat ash-shuhud) and maintaining the separation of creator and creation. Later, Shah Waliullah Dehlawi reconciled the two ideas, maintaining that their differences are semantic: he argued that the universal existence (which is distinct in creation from the creator) and the divine essence are different, that the universal existence emanates (in a non-platonic sense) from the divine essence, and that the relationship between them is similar to the relationship between the number four and evenness.

Shi'ism

The doctrine of waḥdat al-wujūd also enjoys considerable following in the rationalist philosophy of Twelver Shi'ism, with the most famous modern-day adherent being Ruhollah Khomeini.

Baháʼí Faith

Although the teachings of the Baháʼí Faith have a strong emphasis on social and ethical issues, there exist a number of foundational texts that have been described as mystical. Some of these include statements of a monist nature (e.g., The Seven Valleys and the Hidden Words). The differences between dualist and monist views are reconciled by the teaching that these opposing viewpoints are caused by differences in the observers themselves, not in that which is observed. This is not a 'higher truth/lower truth' position. God is unknowable.
For man it is impossible to acquire any direct knowledge of God or the Absolute, because any knowledge that one has is relative.

See also

Cosmic pluralism
Dialectical monism
Henosis
Holism
Indefinite monism
Neoplatonism
Material monism
Monadology
Monistic idealism
Ontological pluralism
Realistic monism
Taoism
Univocity of being
Wuji

Notes

References

Sources

Further reading

External links

Catholic Encyclopedia - Monism
Hinduism's Online Lexicon – (search for Monism)
The Monist

Philosophy of religion
Metaphysical theories
Theory of mind
The Order of Things
The Order of Things: An Archaeology of the Human Sciences (Les Mots et les Choses: Une archéologie des sciences humaines) is a book by the French philosopher Michel Foucault. It proposes that every historical period has underlying epistemic assumptions, ways of thinking, which determine what is truth and what is acceptable discourse about a subject, and it traces these through the origins of biology, economics, and linguistics. The introduction to the origins of the human sciences begins with a detailed, forensic analysis and discussion of the complex networks of sightlines, hiddenness, and representation that exist in the group painting Las Meninas (The Ladies-in-Waiting, 1656) by Diego Velázquez. Foucault's application of the analysis shows the structural parallels in the similar developments in perception that occurred in researchers' ways of seeing the subject in the human sciences.

The concept of episteme

In The Order of Things: An Archaeology of the Human Sciences, Foucault wrote that a historical period is characterized by epistemes, ways of thinking about truth and about discourse, which are common to the fields of knowledge and determine what ideas it is possible to conceptualize and what ideas it is acceptable to affirm as true. That the acceptable ideas change and develop in the course of time, manifested as paradigm shifts of intellectualism, for instance between the "Classical Age" and "Modernity" (from Kant onwards), the period considered by Foucault in the book, supports the thesis that every historical period has underlying epistemic assumptions, ways of thinking that determined what is truth and what is rationally acceptable.
These shifts occurred in three fields of knowledge:

Concerning language: from general grammar to linguistics
Concerning living organisms: from natural history to biology
Concerning money: from the science of wealth to economics

Foucault analyzes three epistemes:

The episteme of the Renaissance, characterized by resemblance and similitude
The episteme of the Classical era, characterized by representation and ordering, identity and difference, as categorization and taxonomy
The episteme of the Modern era, the character of which is the subject of the book

In the Classical-era episteme, the concept of "man" was not yet defined. Man was not subject to a distinct epistemological awareness. Classical thought, and the thought that preceded it, could speak about the mind and the body, about the human being and his very limited place in the universe, and about the limits of knowledge and of his freedom, but none of them knew man as modern thought does. The humanism of the Renaissance and the rationalism of the "classics" assigned human beings a privileged place in the order of the world, but they did not think of man. This happened only with Kant's Critique of Pure Reason, when the entire Western episteme was overturned. The connection between "positivity and finitude", the duplication of the empirical and the transcendental, the "perpetual reference of the cogito to the unthought", and the "retreat and the return of the origin" define, for Foucault, man's way of being, because reflection now tries to found the possibility of knowledge philosophically on the analysis of this way of being, and no longer on that of representation.

Epistemic interpretation

The Order of Things (1966) is about the "cognitive status of the modern human sciences" in the production of knowledge, that is, the ways of seeing that researchers apply to a subject under examination.
Foucault's introduction to the epistemic origins of the human sciences is a forensic analysis of the painting Las Meninas (The Ladies-in-Waiting, 1656), by Diego Velázquez, treated as an objet d'art. For the detailed descriptions, Foucault uses language that is "neither prescribed by, nor filtered through the various texts of art-historical investigation." Ignoring the 17th-century social context of the painting (the subject, a royal family; the artist's biography, technical acumen, artistic sources and stylistic influences; and the relationship with his patrons, King Philip IV of Spain and Queen Mariana of Austria), Foucault analyzes the conscious, artistic artifice of Las Meninas as a work of art, to show the network of complex, visual relationships that exist among the painter, the subjects, and the spectator viewing the painting. As a representational painting, Las Meninas marks a new episteme (way of thinking) at the midpoint between two "great discontinuities" in European intellectualism, the Classical and the modern: "Perhaps there exists, in this painting by Velázquez, the representation, as it were, of Classical representation, and the definition of the space it opens up to us . . . representation freed, finally, from the relation that was impeding it, can offer itself as representation, in its pure form." The Order of Things concludes with Foucault's explanation of why he undertook this forensic analysis.

Influence

The critique of epistemic practices presented in The Order of Things: An Archaeology of the Human Sciences expanded and deepened the research methodology of cultural history.
Foucault's presentation and explanation of cultural shifts in awareness about ways of thinking prompted the historian of science Theodore Porter to investigate and examine the contemporary bases for the production of knowledge, which yielded a critique of the scientific researcher's psychological projection of modern categories of knowledge upon past people and things that remain intrinsically unintelligible, despite contemporary historical knowledge of the past under examination. In France, The Order of Things established Foucault's intellectual pre-eminence among the national intelligentsia; in a review, the philosopher Jean-Paul Sartre said that Foucault was "the last barricade of the bourgeoisie." Responding to Sartre, Foucault said, "poor bourgeoisie; if they needed me as a 'barricade', then they had already lost power!" In the book Structuralism (Le Structuralisme, 1968), Jean Piaget compared Foucault's episteme to the concept of the paradigm shift, which the philosopher of science Thomas Kuhn presented in The Structure of Scientific Revolutions (1962).

See also

Le Monde 100 Books of the Century
The Archaeology of Knowledge

Notes

External links

English translation of the Preface

1966 non-fiction books
Books about discourse analysis
Éditions Gallimard books
French non-fiction books
Philosophy books
Works by Michel Foucault
Degrowth
Degrowth is an academic and social movement critical of the concept of growth in gross domestic product as a measure of human and economic development. The idea of degrowth is based on ideas and research from economic anthropology, ecological economics, environmental sciences, and development studies. It argues that modern capitalism's unitary focus on growth causes widespread ecological damage and is unnecessary for the further increase of human living standards. Degrowth theory has been met with both academic acclaim and considerable criticism. Degrowth's main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being. Degrowth theorists posit that this would increase human living standards and ecological preservation even as GDP growth slows. Degrowth theory is highly critical of free market capitalism, and it highlights the importance of extensive public services, care work, self-organization, commons, relational goods, community, and work sharing. Degrowth theory partly orients itself as a critique of green capitalism or as a radical alternative to the market-based, sustainable development goal (SDG) model of addressing ecological overshoot and environmental collapse. A 2024 review of degrowth studies from the preceding decade found that most were of poor quality: almost 90% were opinion pieces rather than analysis, few used quantitative or qualitative data, and even fewer used formal modelling; those that did relied on small samples or focused on non-representative cases.
Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policies.

Background

The "degrowth" movement arose from concerns over the consequences of the productivism and consumerism associated with industrial societies (whether capitalist or socialist), including:

The reduced availability of energy sources (see peak oil);
The destabilization of Earth's ecosystems upon which all life on Earth depends (see Holocene extinction, Anthropocene, global warming, pollution, current biodiversity loss);
The rise of negative societal side-effects (unsustainable development, poorer health, poverty); and
The ever-expanding use of resources by Global North countries to satisfy lifestyles that consume more food and energy, and produce greater waste, at the expense of the Global South (see neocolonialism).

A 2017 review of the research literature on degrowth found that it focused on three main goals: (1) reduction of environmental degradation; (2) redistribution of income and wealth locally and globally; and (3) promotion of a social transition from economic materialism to participatory culture.

Decoupling

The concept of decoupling denotes separating economic growth, usually measured as growth in GDP, GDP per capita, or GNI per capita, from the use of natural resources and greenhouse gas (GHG) emissions. Absolute decoupling refers to GDP growth coinciding with a reduction in natural resource use and GHG emissions, while relative decoupling describes an increase in resource use and GHG emissions that is lower than the increase in GDP. The degrowth movement heavily critiques this idea and argues that absolute decoupling is only possible for short periods, in specific locations, or with small mitigation rates.
In 2021, the NGO European Environmental Bureau stated that "not only is there no empirical evidence supporting the existence of a decoupling of economic growth from environmental pressures on anywhere near the scale needed to deal with environmental breakdown", and that reported cases of eco-economic decoupling either depict relative decoupling and/or are observed only temporarily and/or only on a local scale, arguing that alternatives to eco-economic decoupling are needed. This is supported by several other studies which state that absolute decoupling is highly unlikely to be achieved fast enough to prevent global warming over 1.5 °C or 2 °C, even under optimistic policy conditions. A major criticism of this view points out that degrowth is politically unpalatable, defaulting towards the more free-market green growth orthodoxy as a set of solutions that is more politically tenable. The problems with the SDG process are political rather than technical, as Ezra Klein of The New York Times puts it in summarizing these criticisms, and degrowth has less plausibility than green growth as a democratic political platform. However, a 2023 review of progress toward the Sustainable Development Goals by the Council on Foreign Relations found that progress toward 50% of the minimum viable SDGs had stalled and that 30% of these targets had reversed (that is, were getting worse rather than better). Thus, while it may be true that degrowth will be "a difficult sell" (per Klein) to introduce via democratic voluntarism, the critique of the SDGs and of decoupling leveled against green capitalism by degrowth theorists appears to have predictive power.

Resource depletion

Degrowth proponents argue that economic expansion entails a corresponding increase in resource consumption. Non-renewable resources, like petroleum, have a limited supply and can eventually be exhausted.
Similarly, renewable resources can also be depleted if they are harvested at unsustainable rates for prolonged periods. An example of this depletion is evident in the case of caviar production in the Caspian Sea. Supporters of degrowth contend that reducing demand is the only permanent way to close the demand gap. To sustain renewable resources, both demand and production must be regulated to levels that avert depletion and ensure environmental sustainability. Transitioning to a society less reliant on oil is crucial for averting societal collapse as non-renewable resources dwindle. Degrowth can also be interpreted as a plea for resource reallocation, aiming to halt unsustainable practices of transforming certain entities into resources, such as non-renewable natural resources. Instead, the focus shifts towards identifying and utilizing alternative resources, such as renewable human capabilities.

Ecological footprint

The ecological footprint measures human demand on the Earth's ecosystems by comparing human demand with the Earth's ecological capacity to regenerate. It represents the amount of biologically productive land and sea area required to regenerate the resources a human population consumes and to absorb and render harmless the corresponding waste. According to a 2005 Global Footprint Network report, inhabitants of high-income countries live off of 6.4 global hectares (gHa), while those from low-income countries live off of a single gHa. For example, while each inhabitant of Bangladesh lives off of what they produce from 0.56 gHa, a North American requires 12.5 gHa; each inhabitant of North America thus uses 22.3 times as much land as a Bangladeshi. According to the same report, the average number of global hectares available per person was 2.1, while current consumption levels have reached 2.7 global hectares per person.
For the world's population to attain the living standards typical of European countries, the resources of between three and eight planet Earths would be required with current levels of efficiency and means of production. For world economic equality to be achieved with the currently available resources, proponents say rich countries would have to reduce their standard of living through degrowth. The constraints on resources would eventually lead to a forced reduction in consumption; a controlled reduction of consumption would reduce the trauma of this change, assuming no technological changes increase the planet's carrying capacity. Multiple studies now demonstrate that in many affluent countries per-capita energy consumption could be decreased substantially while quality living standards are maintained.

Sustainable development

Degrowth ideology opposes all manifestations of productivism, which advocates that economic productivity and growth should be the primary objectives of human organization. Consequently, it stands in opposition to the prevailing model of sustainable development. While the concept of sustainability aligns with some aspects of degrowth philosophy, sustainable development, as conventionally understood, is based on mainstream development principles focused on augmenting economic growth and consumption. Degrowth views sustainable development as contradictory, because any development reliant on growth in a finite, environmentally stressed world is deemed intrinsically unsustainable. Critics of degrowth argue that a slowing of economic growth would result in increased unemployment, increased poverty, and decreased income per capita. Many who believe in the negative environmental consequences of growth still advocate for economic growth in the South, even if not in the North.
Degrowth proponents reply that slowing economic growth alone would fail to deliver the benefits of degrowth, such as self-sufficiency and material responsibility, and would indeed lead to decreased employment. Rather, they advocate the complete abandonment of the current growth-based economic model, suggesting that relocalizing and abandoning the global economy in the Global South would allow people of the South to become more self-sufficient and would end the overconsumption and exploitation of Southern resources by the North. Supporters of degrowth view it as a potential method to shield ecosystems from human exploitation. Within this concept, there is an emphasis on communal stewardship of the environment, fostering a symbiotic relationship between humans and nature. Degrowth recognizes ecosystems as valuable entities beyond their utility as mere sources of resources. During the Second International Conference on Degrowth, discussions encompassed concepts like implementing a maximum wage and promoting open borders. Degrowth advocates an ethical shift that challenges the notion that high-resource-consumption lifestyles are desirable. Additionally, alternative perspectives on degrowth include addressing perceived historical injustices perpetrated by the Global North through centuries of colonization and exploitation, advocating for wealth redistribution. Determining the appropriate scale of action remains a focal point of debate within degrowth movements. Some researchers believe that the world is poised to experience a Great Transformation, either through disastrous events or by intentional design. They maintain that ecological economics must incorporate postdevelopment theories, buen vivir, and degrowth to effect the change necessary to avoid these potentially catastrophic events. A 2022 paper by Mark Diesendorf found that limiting global warming to 1.5 °C with no overshoot would require a reduction of energy consumption.
It describes (chapters 4–5) degrowth toward a steady-state economy as possible and probably positive. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation."

Rebound effect

Technologies designed to reduce resource use and improve efficiency are often touted as sustainable or green solutions. Degrowth literature, however, warns about these technological advances due to the "rebound effect", also known as the Jevons paradox. This concept is based on observations that when a less resource-exhaustive technology is introduced, behavior surrounding the use of that technology may change, and consumption of that technology could increase or even offset any potential resource savings. In light of the rebound effect, proponents of degrowth hold that the only effective "sustainable" solutions must involve a complete rejection of the growth paradigm and a move to a degrowth paradigm. There are also fundamental limits to technological solutions in the pursuit of degrowth, as all engagements with technology increase the cumulative matter-energy throughput. However, the convergence of digital commons of knowledge and design with distributed manufacturing technologies may arguably hold potential for building degrowth future scenarios.
Mitigation of climate change and determinants of 'growth'

Scientists report that degrowth scenarios, in which economic output either declines or declines in terms of contemporary economic metrics such as current GDP, have been neglected in considerations of 1.5 °C scenarios reported by the Intergovernmental Panel on Climate Change (IPCC), finding that the investigated degrowth scenarios "minimize many key risks for feasibility and sustainability compared to technology-driven pathways", with a core problem being feasibility in the context of contemporary political decision-making and of globalized rebound and relocation effects. However, structural realignment of how 'economic growth' and socioeconomic activity are determined may not be widely debated either in the degrowth community or in degrowth research, which may largely focus on reducing economic growth, either in general or through nonsystemic political interventions rather than structural alternatives. Similarly, many green growth advocates suggest that contemporary socioeconomic mechanisms and metrics, including those for economic growth, can be continued with forms of nonstructural "energy-GDP decoupling". A study concluded that public services are associated with higher human need satisfaction and lower energy requirements, while contemporary forms of economic growth are linked with the opposite: the contemporary economic system is fundamentally misaligned with the twin goals of meeting human needs and ensuring ecological sustainability, suggesting that prioritizing human well-being and ecological sustainability would be preferable to growth in current metrics of economic growth. The word 'degrowth' was mentioned 28 times in the United Nations IPCC Sixth Assessment Report by Working Group III, published in April 2022.

Open localism

Open localism is a concept that has been promoted by the degrowth community when envisioning an alternative set of social relations and economic organization.
It builds upon the political philosophies of localism and is based on values such as diversity, ecologies of knowledge, and openness. Open localism does not seek to create an enclosed community but rather to circulate production locally in an open and integrative manner. Open localism is a direct challenge to the acts of closure characteristic of identitarian politics. By producing and consuming as much as possible locally, community members enhance their relationships with one another and with the surrounding environment. Degrowth's ideas around open localism share similarities with the commons while also having clear differences. On the one hand, open localism promotes localized, common production in cooperative-like styles similar to some versions of how commons are organized. On the other hand, open localism does not impose any set of rules or regulations creating a defined boundary; rather, it favours a cosmopolitan approach. Feminism The degrowth movement builds on feminist economics, which has criticized measures of economic growth like GDP because they exclude work mainly done by women, such as unpaid care work (the work performed to fulfill people's needs) and reproductive work (the work sustaining life), as first argued by Marilyn Waring. Further, degrowth draws on the critique of socialist feminists like Silvia Federici and Nancy Fraser, who claim that capitalist growth builds on the exploitation of women's work. Instead of devaluing it, degrowth centers the economy around care, proposing that care work should be organized as a commons. Centering care goes hand in hand with changing society's time regimes. Degrowth scholars propose a working time reduction. Since this alone does not necessarily lead to gender justice, the redistribution of care work has to be pushed for equally.
A concrete proposal by Frigga Haug is the 4-in-1 perspective, which proposes 4 hours of wage work per day, freeing time for 4 hours of care work, 4 hours of political activities in a direct democracy, and 4 hours of personal development through learning. Furthermore, degrowth draws on materialist ecofeminisms, which point to the parallel between the exploitation of women and of nature in growth-based societies and propose a subsistence perspective, as conceptualized by Maria Mies and Ariel Salleh. Synergies and opportunities for cross-fertilization between degrowth and feminism were proposed in 2022, through networks including the Feminisms and Degrowth Alliance (FaDA). FaDA argued that the 2023 launch of Degrowth Journal created "a convivial space for generating and exploring knowledge and practice from diverse perspectives". Decolonialism A relevant concept within the theory of degrowth is decolonialism, which refers to putting an end to the perpetuation of political, social, economic, religious, racial, gender, and epistemological relations of power, domination, and hierarchy of the global north over the global south. The foundation of this relationship lies in the claim that the imminent socio-ecological collapse is caused by capitalism, which is sustained by economic growth. This economic growth in turn can only be maintained under the auspices of colonialism and extractivism, perpetuating asymmetric power relationships between territories. Colonialism is understood as the appropriation of common goods, resources, and labor, which is antagonistic to degrowth principles. Through colonial domination, capital depresses the prices of inputs, and this colonial cheapening occurs to the detriment of the oppressed countries. Degrowth criticizes these mechanisms of appropriation and the enclosure of one territory by another, and proposes a provision of human needs through disaccumulation, de-enclosure, and decommodification.
Degrowth also allies itself with social movements and seeks recognition of the ecological debt, arguing that catch-up development for the global south is impossible without decolonization. In practice, decolonial practices close to degrowth can be observed, such as the movement of Buen vivir or sumak kawsay among various indigenous peoples. Policies There is a wide range of policy proposals associated with degrowth. In 2022, Nick Fitzpatrick, Timothée Parrique and Inês Cosme conducted a comprehensive survey of degrowth literature from 2005 to 2020 and found 530 specific policy proposals with "50 goals, 100 objectives, 380 instruments". The survey found that the ten most frequently cited proposals were: universal basic incomes, work-time reductions, job guarantees with a living wage, maximum income caps, declining caps on resource use and emissions, not-for-profit cooperatives, holding deliberative forums, reclaiming the commons, establishing ecovillages, and housing cooperatives. To address the common criticism that such policies are not realistically financeable, economic anthropologist Jason Hickel sees an opportunity to learn from modern monetary theory, which argues that monetarily sovereign states can issue the money needed to pay for anything available in the national economy without first taxing their citizens for the requisite funds. Taxation, credit regulations and price controls could be used to mitigate the inflation this may generate, while also reducing consumption. Origins of the movement The contemporary degrowth movement can trace its roots back to the anti-industrialist trends of the 19th century, developed in Great Britain by John Ruskin, William Morris and the Arts and Crafts movement (1819–1900), in the United States by Henry David Thoreau (1817–1862), and in Russia by Leo Tolstoy (1828–1910). Degrowth movements draw on the values of humanism, enlightenment, anthropology and human rights.
Club of Rome reports In 1968, the Club of Rome, a think tank headquartered in Winterthur, Switzerland, asked researchers at the Massachusetts Institute of Technology for a report on the limits of our world system and the constraints it puts on human numbers and activity. The report, called The Limits to Growth, published in 1972, became the first significant study to model the consequences of economic growth. The reports (also known as the Meadows Reports) are not strictly the founding texts of the degrowth movement, as these reports only advise zero growth, and have also been used to support the sustainable development movement. Still, they are considered the first studies explicitly presenting economic growth as a key reason for the increase in global environmental problems such as pollution, shortage of raw materials, and the destruction of ecosystems. The Limits to Growth: The 30-Year Update was published in 2004, and in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years. In 2021, Club of Rome committee member Gaya Herrington published an article comparing the proposed models' predictions against empirical data trends. The BAU2 ("Business as Usual 2") scenario, predicting "collapse through pollution", as well as the CT ("Comprehensive Technology") scenario, predicting exceptional technological development and gradual decline, were found to align most closely with data observed as of 2019. In September 2022, the Club of Rome released updated predictive models and policy recommendations in a general-audience book titled Earth for All: A Survival Guide to Humanity. Lasting influence of Georgescu-Roegen The degrowth movement recognises Romanian American mathematician, statistician and economist Nicholas Georgescu-Roegen as the main intellectual figure inspiring the movement.
In his 1971 work, The Entropy Law and the Economic Process, Georgescu-Roegen argues that economic scarcity is rooted in physical reality; that all natural resources are irreversibly degraded when put to use in economic activity; that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse. Georgescu-Roegen's intellectual inspiration to degrowth dates back to the 1970s. When Georgescu-Roegen delivered a lecture at the University of Geneva in 1974, he made a lasting impression on the young, newly graduated French historian and philosopher, Jacques Grinevald, who had earlier been introduced to Georgescu-Roegen's works by an academic advisor. Georgescu-Roegen and Grinevald became friends, and Grinevald devoted his research to a closer study of Georgescu-Roegen's work. As a result, in 1979, Grinevald published a French translation of a selection of Georgescu-Roegen's articles entitled Demain la décroissance: Entropie – Écologie – Économie ('Tomorrow, the Decline: Entropy – Ecology – Economy'). Georgescu-Roegen, who spoke French fluently, approved the use of the term décroissance in the title of the French translation. The book gained influence in French intellectual and academic circles from the outset. Later, the book was expanded and republished in 1995 and once again in 2006; however, the word Demain ('tomorrow') was removed from the book's title in the second and third editions. 
By the time Grinevald suggested the term décroissance for the title of the French translation of Georgescu-Roegen's work, the term had already permeated French intellectual circles since the early 1970s, signifying a deliberate political action to downscale the economy on a permanent and voluntary basis. Simultaneously, but independently, Georgescu-Roegen criticised the ideas of The Limits to Growth and Herman Daly's steady-state economy in his article "Energy and Economic Myths", delivered as a series of lectures from 1972 but not published until 1975. In the article, Georgescu-Roegen argued that not even a steady-state economy can persist indefinitely on Earth's finite stock of resources. Reading this argument, Grinevald realised that no professional economist of any orientation had ever reasoned like this before. Grinevald also realised the congruence of Georgescu-Roegen's viewpoint with the French debates occurring at the time; this resemblance was captured in the title of the French edition. The translation of Georgescu-Roegen's work into French both fed on and gave further impetus to the concept of décroissance in France—and everywhere else in the francophone world—thereby creating something of an intellectual feedback loop. By the 2000s, when décroissance was to be translated from French back into English as the catchy banner for the new social movement, the original term "decline" was deemed inappropriate and misdirected for the purpose: "decline" usually refers to an unexpected, unwelcome, and temporary economic recession, something to be avoided or quickly overcome. Instead, the neologism "degrowth" was coined to signify a deliberate political action to downscale the economy on a permanent, conscious basis—as in the prevailing French usage of the term—something good to be welcomed and maintained, or so followers believe. When the first international degrowth conference was held in Paris in 2008, the participants honoured Georgescu-Roegen and his work.
In his manifesto Petit traité de la décroissance sereine (published in English as Farewell to Growth), the leading French champion of the degrowth movement, Serge Latouche, credited Georgescu-Roegen as the "main theoretical source of degrowth". Likewise, Italian degrowth theorist Mauro Bonaiuti considered Georgescu-Roegen's work to be "one of the analytical cornerstones of the degrowth perspective". Schumacher and Buddhist economics E. F. Schumacher's 1973 book Small Is Beautiful predates a unified degrowth movement but nonetheless serves as an important basis for degrowth ideas. In this book he critiques the neo-liberal model of economic development, arguing that an ever-increasing "standard of living" based on consumption is an absurd goal of economic activity and development. Instead, under what he refers to as Buddhist economics, we should aim to maximize well-being while minimizing consumption. Ecological and social issues In January 1972, Edward Goldsmith and Robert Prescott-Allen—editors of The Ecologist—published A Blueprint for Survival, which called for a radical programme of decentralisation and deindustrialisation to prevent what the authors referred to as "the breakdown of society and the irreversible disruption of the life-support systems on this planet". In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. Its main conclusions: over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate. The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers are, in turn, caused by societal behaviors, from consumption to governance.
Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals on poverty, hunger, health, water, cities, climate, oceans and land. It can cause problems with humanity's food, water and air supplies. To fix the problem, humanity needs transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. Page 8 of the report proposes "enabling visions of a good quality of life that do not entail ever-increasing material consumption" as one of the main measures. The report states that "Some pathways chosen to achieve the goals related to energy, economic growth, industry and infrastructure and sustainable consumption and production (Sustainable Development Goals 7, 8, 9 and 12), as well as targets related to poverty, food security and cities (Sustainable Development Goals 1, 2 and 11), could have substantial positive or negative impacts on nature and therefore on the achievement of other Sustainable Development Goals". In a June 2020 paper published in Nature Communications, a group of scientists argue that "green growth" or "sustainable growth" is a myth: "we have to get away from our obsession with economic growth—we really need to start managing our economies in a way that protects our climate and natural resources, even if this means less, no or even negative growth." They conclude that a change in economic paradigms is imperative to prevent environmental destruction, and suggest a range of ideas from the reformist to the radical, with the latter consisting of degrowth, eco-socialism and eco-anarchism. In June 2020, the official site of one of the organizations promoting degrowth published an article by Vijay Kolinjivadi, an expert in political ecology, arguing that the emergence of COVID-19 is linked to the ecological crisis.
The 2019 World Scientists' Warning of a Climate Emergency and its 2021 update have asserted that economic growth is a primary driver of the overexploitation of ecosystems, and to preserve the biosphere and mitigate climate change civilization must, in addition to other fundamental changes including stabilizing population growth and adopting largely plant-based diets, "shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality." In an opinion piece published in Al Jazeera, Jason Hickel states that this paper, which has more than 11,000 scientist cosigners, demonstrates that there is a "strong scientific consensus" towards abandoning "GDP as a measure of progress." In a 2022 comment published in Nature, Hickel, Giorgos Kallis, Juliet Schor, Julia Steinberger and others say that both the IPCC and the IPBES "suggest that degrowth policies should be considered in the fight against climate breakdown and biodiversity loss, respectively". Movement Conferences The movement has included international conferences promoted by the network Research & Degrowth (R&D). The First International Conference on Economic Degrowth for Ecological Sustainability and Social Equity in Paris (2008) was a discussion about the financial, social, cultural, demographic, and environmental crisis caused by the deficiencies of capitalism and an explanation of the main principles of degrowth. Further conferences were in Barcelona (2010), Montreal (2012), Venice (2012), Leipzig (2014), Budapest (2016), Malmö (2018), and Zagreb (2023). The 10th International Degrowth Conference will be held in Pontevedra in June 2024. Separately, two conferences have been organised as cross-party initiatives of Members of the European Parliament: the Post-Growth 2018 Conference and the Beyond Growth 2023 Conference, both held in the European Parliament in Brussels. 
International Degrowth Network The conferences have also been accompanied by informal degrowth assemblies since 2018, to build community between degrowth groups across countries. The 4th Assembly in Zagreb in 2023 discussed a proposal to create a more intentional organisational structure and led to the creation of the International Degrowth Network, which organised the 5th assembly in June 2024. Relation to other social movements The degrowth movement has a variety of relations to other social movements and alternative economic visions, ranging from collaboration to partial overlap. The Konzeptwerk Neue Ökonomie (Laboratory for New Economic Ideas), which hosted the 2014 international degrowth conference in Leipzig, published a project entitled "Degrowth in movement(s)" in 2017, which maps relationships with 32 other social movements and initiatives. The relation to the environmental justice movement is especially visible. Although not explicitly called degrowth, movements inspired by similar concepts and terminologies can be found around the world, including Buen Vivir in Latin America, the Zapatistas in Mexico, the Kurdish Rojava, Eco-Swaraj in India, and the sufficiency economy in Thailand. The Cuban economic situation has also been of interest to degrowth advocates because its limits on growth were socially imposed (albeit as a result of geopolitics) and have resulted in positive health changes. Another set of movements with which the degrowth movement finds synergy is the wave of initiatives and networks inspired by the commons, in which resources are sustainably shared in a decentralised and self-managed manner instead of through capitalist organization. Examples of commons-inspired initiatives include food cooperatives, open-source platforms, and group management of resources such as energy or water.
Commons-based peer production also guides the role of technology in degrowth, where conviviality and socially useful production are prioritised over capital gain. This could happen in the form of cosmolocalism, which offers a framework for localising collaborative forms of production while sharing resources globally as digital commons, to reduce dependence on global value chains. Criticisms, challenges and dilemmas Critiques of degrowth concern the poor quality of degrowth studies, the negative connotation that the term "degrowth" imparts, the misapprehension that growth is seen as unambiguously bad, the challenges and feasibility of a degrowth transition, as well as the entanglement of desirable aspects of modernity with the growth paradigm. Criticisms According to a highly cited scientific paper by environmental economist Jeroen C. J. M. van den Bergh, degrowth is often seen as an ambiguous concept due to its various interpretations, which can lead to confusion rather than a clear and constructive debate on environmental policy. Many interpretations of degrowth do not offer effective strategies for reducing environmental impact or transitioning to a sustainable economy. Additionally, degrowth is unlikely to gain significant social or political support, making it an ineffective strategy for achieving environmental sustainability. Ineffectiveness and better alternatives In his paper, van den Bergh concludes that a degrowth strategy, which focuses on reducing the overall scale of the economy or of consumption, tends to overlook the significance of changes in production composition and technological innovation. Van den Bergh also highlights that a focus solely on reducing consumption (or consumption degrowth) may lead to rebound effects. For instance, reducing consumption of certain goods and services might result in an increase in spending on other items, as disposable income remains unchanged.
Alternatively, it could lead to savings, which would provide additional funds for others to borrow and spend. He emphasizes the importance of (global) environmental policies, such as pricing externalities through taxes or permits, which incentivize behavior changes that reduce environmental impact, provide essential information to consumers, and help manage rebound effects. Effective environmental regulation through pricing is crucial for transitioning from polluting to cleaner consumption patterns. Study quality A 2024 review of degrowth studies over the past 10 years found that most were of poor quality: almost 90% were opinions rather than analysis, few used quantitative or qualitative data, and even fewer used formal modelling; those that did relied on small samples or a focus on non-representative cases. Moreover, most studies offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policies. Negative connotation The use of the term "degrowth" is criticized as detrimental to the degrowth movement because it can carry a negative connotation, in opposition to the positively perceived "growth". "Growth" is associated with the "up" direction and positive experiences, while "down" generates the opposite associations. Research in political psychology has shown that an initial negative association with a concept, such as that of "degrowth" with the negatively perceived "down", can bias how subsequent information about that concept is integrated at the unconscious level. At the conscious level, degrowth can be interpreted negatively as the contraction of the economy, although this is not the goal of a degrowth transition but rather one of its expected consequences. In the current economic system, a contraction of the economy is associated with a recession and its ensuing austerity measures, job cuts, or lower salaries.
Noam Chomsky commented on the use of the term: "When you say 'degrowth' it frightens people. It's like saying you're going to have to be poorer tomorrow than you are today, and it doesn't mean that." Since "degrowth" contains the term "growth", there is also a risk of the term having a backfire effect, which would reinforce the initial positive attitude toward growth. "Degrowth" is also criticized for being a confusing term, since its aim is not to halt economic growth, as the word implies. Instead, "a-growth" is proposed as an alternative concept emphasizing that growth ceases to be an important policy objective but can still be achieved as a side-effect of environmental and social policies. Systems theoretical critique In stressing the negative rather than the positive side(s) of growth, the majority of degrowth proponents remain focused on (de-)growth, thus keeping attention on the issue of growth and, with it, on the arguments that sustainable growth is possible. One way to avoid giving attention to growth might be to extend from the economic concept of growth, which proponents of both growth and degrowth commonly adopt, to a broader concept of growth that allows for the observation of growth in other sociological characteristics of society. A corresponding "recoding" of "growth-obsessed", capitalist organizations was proposed by Steffen Roth. Marxist critique Traditional Marxists distinguish between two types of value creation: that which is useful to mankind, and that which only serves the purpose of accumulating capital. Traditional Marxists consider the exploitative nature and control of the capitalist relations of production to be the determining factor, not the quantity of production. According to Jean Zin, while the justification for degrowth is valid, it is not a solution to the problem. Other Marxist writers have adopted positions close to the degrowth perspective.
For example, John Bellamy Foster and Fred Magdoff, in common with David Harvey, Immanuel Wallerstein, Paul Sweezy and others, focus on endless capital accumulation as the basic principle and goal of capitalism. This is the source of economic growth and, in the view of these writers, results in an unsustainable growth imperative. Foster and Magdoff develop Marx's own concept of the metabolic rift, something he noted in the exhaustion of soils by capitalist systems of food production, though this is not unique to capitalist systems of food production, as the case of the Aral Sea shows. Many degrowth theories and ideas are based on neo-Marxist theory. Foster emphasizes that degrowth "is not aimed at austerity, but at finding a 'prosperous way down' from our current extractivist, wasteful, ecologically unsustainable, maldeveloped, exploitative, and unequal, class-hierarchical world." Challenges Lack of macroeconomics for sustainability It is reasonable for societies to worry about recession, as economic growth has been the near-unanimous goal around the globe in past decades. However, in some advanced countries there are attempts to develop a model for a regrowth economy. For instance, the Cool Japan strategy has proven instructive for Japan, which has been a largely static economy for decades. Political and social spheres According to some scholars in sociology, the growth imperative is so deeply entrenched in market capitalist societies that it is necessary for their stability. Moreover, the institutions of modern societies, such as the nation state, welfare, the labor market, education, academia, law and finance, have co-evolved with growth to sustain them. A degrowth transition thus requires not only a change of the economic system but of all the systems on which it relies. As most people in modern societies are dependent on these growth-oriented institutions, the challenge of a degrowth transition also lies in individual resistance to moving away from growth.
Land privatisation Baumann, Alexander and Burdon suggest that "the Degrowth movement needs to give more attention to land and housing costs, which are significant barriers hindering true political and economic agency and any grassroots driven degrowth transition." They claim that the privatisation of land – a necessity, like water and air – creates an absolute determinant of economic growth. They point out that even someone fully committed to degrowth has no option but decades of participation in market growth to pay rent or a mortgage. Because of this, land privatisation is a structural impediment that makes degrowth economically and politically unviable. They conclude that without addressing land privatisation (the market's inaugural privatisation – primitive accumulation), the degrowth movement's strategies cannot succeed: just as land enclosure (privatisation) initiated capitalism (economic growth), degrowth must start with reclaiming the land commons. Agriculture When it comes to agriculture, a degrowth society would require a shift from industrial agriculture to less intensive and more sustainable agricultural practices such as permaculture or organic agriculture. Still, it is not clear whether any of these alternatives could feed the current and projected global population. In the case of organic agriculture, Germany, for example, would not be able to feed its population under ideal organic yields over all of its arable land without meaningful changes to patterns of consumption, such as reducing meat consumption and food waste. Moreover, the labour productivity of non-industrial agriculture is significantly lower due to the reduced use or absence of fossil fuels, which leaves much less labour available for other sectors. Potential solutions to this challenge include scaling up approaches such as community-supported agriculture (CSA).
Dilemmas Given that modernity emerged with high levels of energy and material throughput, there is an apparent trade-off between desirable aspects of modernity (e.g., social justice, gender equality, long life expectancy, low infant mortality) and unsustainable levels of energy and material use. Some researchers, however, argue that the decline in income inequality and the rise in social mobility occurring under capitalism from the late 1940s to the 1960s were a product of the strong bargaining power of labor unions and increased wealth and income redistribution during that time, while also pointing to the rise in income inequality in the 1970s following the collapse of labor unions and the weakening of state welfare measures. Others argue that modern capitalism maintains gender inequalities by means of advertising, messaging in consumer goods, and social media. Another way of looking at the argument that the development of desirable aspects of modernity requires unsustainable energy and material use is through the lens of the Marxist tradition, which relates the superstructure (culture, ideology, institutions) to the base (material conditions of life, division of labor). A degrowth society, with its drastically different material conditions, could produce equally drastic changes in society's cultural and ideological spheres. The political economy of global capitalism has generated many social and environmental bads, such as socioeconomic inequality and ecological devastation, but it has also generated goods through individualization and increased spatial and social mobility.
At the same time, some argue that the widespread individualization promulgated by a capitalist political economy is a bad because it undermines solidarity – which is aligned with democracy as well as with collective, secondary, and primary forms of caring – while encouraging mistrust of others, highly competitive interpersonal relationships, the blaming of failure on individual shortcomings, the prioritization of self-interest, and the peripheralization of the human work required to create and sustain people. In this view, the widespread individualization resulting from capitalism may impede degrowth measures, requiring a change in actions to benefit society rather than the individual self. Some argue the political economy of capitalism has allowed social emancipation at the level of gender equality, disability, sexuality and anti-racism that has no historical precedent. However, others dispute that social emancipation is a direct product of capitalism, or question the emancipation that has resulted. The feminist writer Nancy Holmstrom, for example, argues that capitalism's negative impacts on women outweigh the positive, and that women tend to be hurt by the system. In her examination of China following the Chinese Communist Revolution, Holmstrom notes that women were granted state-assisted freedoms to equal education, childcare, healthcare, abortion, marriage, and other social supports. Thus, whether the social emancipation achieved in Western society under capitalism can coexist with degrowth is ambiguous. Doyal and Gough allege that the modern capitalist system is built on the exploitation of female reproductive labor as well as that of the Global South, and that sexism and racism are embedded in its structure. Therefore, some theories (such as ecofeminism or political ecology) argue that there cannot be equality regarding gender, or in the hierarchy between the Global North and South, within capitalism.
The structural properties of growth present another barrier to degrowth, as growth both shapes and is enforced by institutions, norms, culture, technology, and identities. The social ingraining of growth manifests in people's aspirations, thinking, bodies, mindsets, and relationships. Together, growth's role in social practices and in socio-economic institutions presents unique challenges to the success of the degrowth movement. Another potential barrier is the need for a rapid transition to a degrowth society due to climate change, alongside the potential negative impacts of rapid social transition, including disorientation, conflict, and decreased well-being. In the United States, a large barrier to support for the degrowth movement is the modern education system, including both primary and higher learning institutions. Beginning in the second term of the Reagan administration, the education system in the US was restructured to enforce neoliberal ideology by means of privatization schemes such as commercialization and performance contracting; the implementation of standards and accountability measures incentivizing schools to adopt a uniform curriculum; and higher-education accreditation and curricula designed to affirm market values and current power structures while avoiding critical thought concerning the relations between those in power, ethics, authority, history, and knowledge. The degrowth movement, based on the empirical assumption that resources are finite and growth is limited, clashes with the limitless-growth ideology associated with neoliberalism and the market values affirmed in schools, and therefore faces a major social barrier to gaining widespread support in the US. Moreover, the co-evolving aspects of global capitalism, liberal modernity, and the market society are closely tied and will be difficult to separate if liberal and cosmopolitan values are to be maintained in a degrowth society.
At the same time, the goal of the degrowth movement is progression rather than regression, and researchers point out that neoclassical economic models indicate that neither negative nor zero growth would harm economic stability or full employment. Several assert that the main barriers to the movement are the social and structural factors that clash with implementing degrowth measures. Healthcare It has been pointed out that there is an apparent trade-off between the ability of modern healthcare systems to treat individual bodies to their last breath and the broader global ecological risk of such energy- and resource-intensive care. If this trade-off exists, a degrowth society must choose between prioritizing ecological integrity, with the collective health that follows from it, and maximizing the healthcare provided to individuals. However, many degrowth scholars argue that the current system produces both psychological and physical damage to people. They insist that societal prosperity should be measured by well-being, not GDP. See also A Blueprint for Survival Agrowth Anti-consumerism Critique of political economy Degrowth advocates (category) Political ecology Postdevelopment theory Power Down: Options and Actions for a Post-Carbon World Paradox of thrift The Path to Degrowth in Overdeveloped Countries Post-capitalism Productivism Prosperity Without Growth Slow movement Steady-state economy Transition town Uneconomic growth References Reference details Further reading External links List of International Degrowth conferences on degrowth.info Research and Degrowth International Degrowth Network Degrowth Journal Planned Degrowth: Ecosocialism and Sustainable Human Development. Monthly Review issue on "Planned Degrowth". July 1, 2023. Simple living Sustainability Green politics Ecological economics Environmental movements Environmental ethics Environmental economics Environmental social science concepts
Ethical naturalism
Ethical naturalism (also called moral naturalism or naturalistic cognitivistic definism) is the meta-ethical view which claims that: Ethical sentences express propositions. Some such propositions are true. Those propositions are made true by objective features of the world. These moral features of the world are reducible to some set of non-moral features. Overview The versions of ethical naturalism which have received the most sustained philosophical interest, for example, Cornell realism, differ from the position that "the way things are is always the way they ought to be", which few ethical naturalists hold. Ethical naturalism does, however, reject the fact-value distinction: it suggests that inquiry into the natural world can increase our moral knowledge in just the same way it increases our scientific knowledge. Indeed, proponents of ethical naturalism have argued that humanity needs to invest in the science of morality, a broad and loosely defined field that uses evidence from biology, primatology, anthropology, psychology, neuroscience, and other areas to classify and describe moral behavior. Ethical naturalism encompasses any reduction of ethical properties, such as 'goodness', to non-ethical properties; there are many different examples of such reductions, and thus many different varieties of ethical naturalism. Hedonism, for example, is the view that goodness is ultimately just pleasure. Ethical theories that can be naturalistic Altruism Consequentialism Consequentialist libertarianism Cornell realism Ethical egoism/ Objectivism Evolutionary ethics Hedonism Humanistic ethics Natural law Natural-rights libertarianism Utilitarianism Virtue ethics Criticisms Ethical naturalism has been criticized most prominently by ethical non-naturalist G. E. Moore, who formulated the open-question argument. 
Garner and Rosen say that a common definition of "natural property" is one "which can be discovered by sense observation or experience, experiment, or through any of the available means of science." They also say that a good definition of "natural property" is problematic but that "it is only in criticism of naturalism, or in an attempt to distinguish between naturalistic and nonnaturalistic definist theories, that such a concept is needed." R. M. Hare also criticised ethical naturalism because of what he considered its fallacious definition of the terms 'good' or 'right', saying that value-terms being part of our prescriptive moral language are not reducible to descriptive terms: "Value-terms have a special function in language, that of commending; and so they plainly cannot be defined in terms of other words which themselves do not perform this function". Moral nihilism Moral nihilists maintain that there are no such entities as objective values or objective moral facts. Proponents of moral science like Ronald A. Lindsay have counter-argued that their way of understanding "morality" as a practical enterprise is the way we ought to have understood it in the first place. He holds the position that the alternative seems to be the elaborate philosophical reduction of the word "moral" into a vacuous, useless term. Lindsay adds that it is important to reclaim the specific word "morality" because of the connotations it holds with many individuals. Morality as a science Author Sam Harris has argued that we overestimate the relevance of many arguments against the science of morality, arguments he believes scientists happily and rightly disregard in other domains of science like physics. For example, scientists may find themselves attempting to argue against philosophical skeptics, when Harris says they should be practically asking – as they would in any other domain – "why would we listen to a solipsist in the first place?" 
This, Harris contends, is part of what it means to practice a science of morality. In modern times, many thinkers discussing the fact–value distinction and the is–ought problem have settled on the idea that one cannot derive ought from is. Conversely, Harris maintains that the fact-value distinction is a confusion, proposing that values are really a certain kind of fact. Specifically, Harris suggests that values amount to empirical statements about "the flourishing of conscious creatures in a society". He argues that there are objective answers to moral questions, even if some are difficult or impossible to obtain in practice. In this way, he says, science can tell us what to value. Harris adds that we do not demand absolute certainty from predictions in physics, so we should not demand that of a science studying morality (see The Moral Landscape). Physicist Sean Carroll believes that conceiving of morality as a science could be a case of scientific imperialism and insists that what is "good for conscious creatures" is not an adequate working definition of "moral". In opposition, John Shook, vice president of the Center for Inquiry, claims that this working definition is more than adequate for science at present and that disagreement should not immobilize the scientific study of ethics. References Other sources External links Philosophy 302: Naturalistic Ethics Metaethics Naturalism (philosophy) Ethical theories
An Enquiry Concerning Human Understanding
An Enquiry Concerning Human Understanding is a book by the Scottish empiricist philosopher David Hume, first published in English in 1748 under the title Philosophical Essays Concerning Human Understanding; a 1757 edition adopted the now-familiar name. It was a revision of an earlier effort, Hume's A Treatise of Human Nature, published anonymously in London in 1739–40. Hume was disappointed with the reception of the Treatise, which "fell dead-born from the press," as he put it, and so tried again to disseminate his more developed ideas to the public by writing a shorter and more polemical work. The end product of his labours was the Enquiry. The Enquiry dispensed with much of the material from the Treatise, in favor of clarifying and emphasizing its most important aspects. For example, Hume's views on personal identity do not appear. However, more vital propositions, such as Hume's argument for the role of habit in a theory of knowledge, are retained. This book has proven highly influential, both in the years immediately following its publication and today. Immanuel Kant points to it as the book which woke him from his self-described "dogmatic slumber." The Enquiry is widely regarded as a classic in modern philosophical literature. Content The argument of the Enquiry proceeds by a series of incremental steps, separated into chapters which logically succeed one another. After expounding his epistemology, Hume explains how to apply his principles to specific topics. 1. Of the different species of philosophy In the first section of the Enquiry, Hume provides a rough introduction to philosophy as a whole. For Hume, philosophy can be split into two general parts: natural philosophy and the philosophy of human nature (or, as he calls it, "moral philosophy"). The latter investigates both actions and thoughts.
He emphasizes in this section, by way of warning, that philosophers with nuanced thoughts will likely be cast aside in favor of those whose conclusions more intuitively match popular opinion. However, he insists, precision helps art and craft of all kinds, including the craft of philosophy. 2. Of the origin of ideas Next, Hume discusses the distinction between impressions and ideas. By "impressions", he means sensations, while by "ideas", he means memories and imaginings. According to Hume, the difference between the two is that ideas are less vivacious than impressions. For example, the idea of the taste of an orange is far inferior to the impression (or sensation) of actually eating one. Writing within the tradition of empiricism, he argues that impressions are the source of all ideas. Hume accepts that ideas may be either the product of mere sensation or of the imagination working in conjunction with sensation. According to Hume, the creative faculty makes use of (at least) four mental operations that produce imaginings out of sense-impressions. These operations are compounding (or the addition of one idea onto another, such as a horn on a horse to create a unicorn); transposing (or the substitution of one part of a thing with the part from another, such as with the body of a man upon a horse to make a centaur); augmenting (as with the case of a giant, whose size has been augmented); and diminishing (as with Lilliputians, whose size has been diminished). (Hume 1974:317) In a later chapter, he also mentions the operations of mixing, separating, and dividing. (Hume 1974:340) However, Hume admits that there is one objection to his account: the problem of "The Missing Shade of Blue". In this thought-experiment, he asks us to imagine a man who has experienced every shade of blue except for one (see Fig. 1). He predicts that this man will be able to divine the color of this particular shade of blue, despite the fact that he has never experienced it. 
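The thought experiment can be given a loose computational gloss. The sketch below is a hypothetical analogy of my own, not anything in Hume: a spectrum of blue shades is represented as RGB triples with one shade missing, and the gap is "divined" by combining its neighbors, roughly in the spirit of deriving a novel idea from combinations of previous impressions.

```python
def interpolate_missing(shades):
    """Fill a single interior None entry by averaging its two neighbors.

    `shades` is a list of RGB triples ordered from dark to light,
    with exactly one interior gap marked as None.
    """
    filled = list(shades)
    for i, shade in enumerate(filled):
        if shade is None:
            left, right = filled[i - 1], filled[i + 1]
            # Component-wise midpoint of the neighboring shades.
            filled[i] = tuple((a + b) // 2 for a, b in zip(left, right))
    return filled

# Shades of blue with one gap: the missing shade comes out as (0, 0, 200).
spectrum = [(0, 0, 100), (0, 0, 150), None, (0, 0, 250)]
print(interpolate_missing(spectrum))
```

The interpolation is only a metaphor for the mental operation Hume describes; it says nothing about whether such an operation counts as a new impression or a combination of old ones, which is the point at issue.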
This seems to pose a serious problem for the empirical account, though Hume brushes it aside as an exceptional case by stating that one may experience a novel idea that itself is derived from combinations of previous impressions. (Hume 1974:319) 3. Of the association of ideas In this chapter, Hume discusses how thoughts tend to come in sequences, as in trains of thought. He explains that there are at least three kinds of associations between ideas: resemblance, contiguity in space-time, and cause-and-effect. He argues that there must be some universal principle accounting for the various sorts of connections that exist between ideas. However, he does not immediately show what this principle might be. (Hume 1974:320-321) 4. Sceptical doubts concerning the operations of the understanding (in two parts) In the first part, Hume discusses how the objects of inquiry are either "relations of ideas" or "matters of fact", which is roughly the distinction between analytic and synthetic propositions. The former, he tells the reader, are proved by demonstration, while the latter are given through experience. (Hume 1974:322) In explaining how matters of fact are entirely a product of experience, he dismisses the notion that they may be arrived at through a priori reasoning. For Hume, every effect only follows its cause arbitrarily—they are entirely distinct from one another. (Hume 1974:324) In part two, Hume inquires into how anyone can justifiably believe that experience yields any conclusions about the world: "When it is asked, What is the nature of all our reasonings concerning matter of fact? the proper answer seems to be, that they are founded on the relation of cause and effect. When again it is asked, What is the foundation of all our reasonings and conclusions concerning that relation? it may be replied in one word, experience. But if we still carry on our sifting humor, and ask, What is the foundation of all conclusions from experience?
this implies a new question, which may be of more difficult solution and explication." (Hume 1974:328) He shows how a satisfying argument for the validity of experience can be based neither on demonstration (since "it implies no contradiction that the course of nature may change") nor experience (since that would be a circular argument). (Hume 1974:330-332) Here he is describing what would become known as the problem of induction. 5. Sceptical solution of these doubts (in two parts) According to Hume, we assume that experience tells us something about the world because of habit or custom, which human nature forces us to take seriously. This is also, presumably, the "principle" that organizes the connections between ideas. Indeed, one of the many famous passages of the Enquiry is on the topic of the incorrigibility of human custom. In Section XII, Of the academical or sceptical philosophy, Hume will argue, "The great subverter of Pyrrhonism or the excessive principles of skepticism is action, and employment, and the occupations of common life. These principles may flourish and triumph in the schools; where it is, indeed, difficult, if not impossible, to refute them. But as soon as they leave the shade, and by the presence of the real objects, which actuate our passions and sentiments, are put in opposition to the more powerful principles of our nature, they vanish like smoke, and leave the most determined skeptic in the same condition as other mortals." (Hume 1974:425) In the second part, he provides an account of beliefs. He explains that the difference between belief and fiction is that the former produces a certain feeling of confidence which the latter doesn't. (Hume 1974:340) 6. Of probability This short chapter begins with the notions of probability and chance. For him, "probability" means a higher chance of occurring, and brings about a higher degree of subjective expectation in the viewer. 
By "chance", he means all those particular comprehensible events which the viewer considers possible in accord with the viewer's experience. However, further experience takes these equal chances, and forces the imagination to observe that certain chances arise more frequently than others. These gentle forces upon the imagination cause the viewer to have strong beliefs in outcomes. This effect may be understood as another case of custom or habit taking past experience and using it to predict the future. (Hume 1974:346-348) 7. Of the idea of necessary connection (in two parts) By "necessary connection", Hume means the power or force which necessarily ties one idea to another. He rejects the notion that any sensible qualities are necessarily conjoined, since that would mean we could know something prior to experience. Unlike his predecessors, Berkeley and Locke, Hume rejects the idea that volitions or impulses of the will may be inferred to necessarily connect to the actions they produce by way of some sense of the power of the will. He reasons that, 1. if we knew the nature of this power, then the mind-body divide would seem totally unmysterious to us; 2. if we had immediate knowledge of this mysterious power, then we would be able to intuitively explain why it is that we can control some parts of our bodies (e.g., our hands or tongues), and not others (e.g., the liver or heart); 3. we have no immediate knowledge of the powers which allow an impulse of volition to create an action (e.g., of the "muscles, and nerves, and animal spirits" which are the immediate cause of an action). (Hume 1974:353-354) He produces like arguments against the notion that we have knowledge of these powers as they affect the mind alone. (Hume 1974:355-356) He also argues in brief against the idea that causes are mere occasions of the will of some god(s), a view associated with the philosopher Nicolas Malebranche. 
(Hume 1974:356-359) Having dispensed with these alternative explanations, he identifies the source of our knowledge of necessary connections as arising out of observation of constant conjunction of certain impressions across many instances. In this way, people know of necessity through rigorous custom or habit, and not from any immediate knowledge of the powers of the will. (Hume 1974:361) 8. Of liberty and necessity (in two parts) Here Hume tackles the problem of how liberty may be reconciled with metaphysical necessity (otherwise known as a compatibilist formulation of free will). Hume believes that all disputes on the subject have been merely verbal arguments—that is to say, arguments which are based on a lack of prior agreement on definitions. He first shows that it is clear that most events are deterministic, but human actions are more controversial. However, he thinks that these too occur out of necessity since an outside observer can see the same regularity that he would in a purely physical system. To show the compatibility of necessity and liberty, Hume defines liberty as the ability to act on the basis of one's will e.g. the capacity to will one's actions but not to will one's will. He then shows (quite briefly) how determinism and free will are compatible notions, and have no bad consequences on ethics or moral life. 9. Of the reason of animals Hume insists that the conclusions of the Enquiry will be very powerful if they can be shown to apply to animals and not just humans. He believed that animals were able to infer the relation between cause and effect in the same way that humans do: through learned expectations. (Hume 1974:384) He also notes that this "inferential" ability that animals have is not through reason, but custom alone. Hume concludes that there is an innate faculty of instincts which both beasts and humans share, namely, the ability to reason experimentally (through custom). 
Nevertheless, he admits, humans and animals differ in mental faculties in a number of ways, including: differences in memory and attention, inferential abilities, ability to make deductions in a long chain, ability to grasp ideas more or less clearly, the human capacity to worry about conflating unrelated circumstances, a sagely prudence which arrests generalizations, a capacity for a greater inner library of analogies to reason with, an ability to detach oneself and scrap one's own biases, and an ability to converse through language (and thus gain from the experience of others' testimonies). (Hume 1974:385, footnote 17.) 10. Of miracles (in two parts) The next topic Hume treats is the reliability of human testimony, and the role that testimony plays in epistemology. This was not an idle concern for Hume. Depending on its outcome, the entire treatment would give the epistemologist a degree of certitude in the treatment of miracles. True to his empirical thesis, Hume tells the reader that, though testimony does have some force, it is never quite as powerful as the direct evidence of the senses. That said, he provides some reasons why we may have a basis for trust in the testimony of persons: because a) human memory can be relatively tenacious, and b) people are inclined to tell the truth, and are ashamed of telling falsehoods. Needless to say, these reasons are only to be trusted to the extent that they conform to experience. (Hume 1974:389) And there are a number of reasons to be skeptical of human testimony, also based on experience. If a) testimonies conflict with one another, b) there are a small number of witnesses, c) the speaker has no integrity, d) the speaker is overly hesitant or bold, or e) the speaker is known to have motives for lying, then the epistemologist has reason to be skeptical of the speaker's claims.
(Hume 1974:390) There is one final criterion that Hume thinks gives us warrant to doubt any given testimony, and that is f) if the propositions being communicated are miraculous. Hume understands a miracle to be any event which contradicts the laws of nature. He argues that the laws of nature have an overwhelming body of evidence behind them, and are so well demonstrated to everyone's experience, that any deviation from those laws necessarily flies in the face of all evidence. (Hume 1974:391-392) Moreover, he stresses that talk of the miraculous has no surface validity, for four reasons. First, he explains that in all of history there has never been a miracle which was attested to by a wide body of disinterested experts. Second, he notes that human beings delight in a sense of wonder, and this provides a villain with an opportunity to manipulate others. Third, he thinks that those who hold onto the miraculous have tended towards barbarism. Finally, since testimonies tend to conflict with one another when it comes to the miraculous—that is, one man's religious miracle may be contradicted by another man's miracle—any testimony relating to the fantastic is self-denunciating. (Hume 1974:393-398) Still, Hume takes care to warn that historians are generally to be trusted with confidence, so long as their reports on facts are extensive and uniform. However, he seems to suggest that historians are as fallible at interpreting the facts as the rest of humanity. Thus, if every historian were to claim that there was a solar eclipse in the year 1600, then though we might at first naively regard that as in violation of natural laws, we'd come to accept it as a fact. But if every historian were to assert that Queen Elizabeth was observed walking around happy and healthy after her funeral, and then interpreted that to mean that she had risen from the dead, then we'd have reason to appeal to natural laws in order to dispute their interpretation. (Hume 1974:400-402) 11.
Of a particular providence and of a future state Hume continues his application of epistemology to theology with an extended discussion on heaven and hell. The brunt of this chapter allegedly narrates the opinions, not of Hume, but of one of Hume's anonymous friends, who in turn presents them in an imagined speech by the philosopher Epicurus. His friend argues that, though it is possible to trace a cause from an effect, it is not possible to infer unseen effects from a cause thus traced. The friend insists, then, that even though we might postulate that there is a first cause behind all things—God—we can't infer anything about the afterlife, because we don't know anything of the afterlife from experience, and we can't infer it from the existence of God. (Hume 1974:408) Hume offers his friend an objection: if we see an unfinished building, then can't we infer that it has been created by humans with certain intentions, and that it will be finished in the future? His friend concurs, but indicates that there is a relevant disanalogy: we can't pretend to know the contents of the mind of God, while we can know the designs of other humans. Hume seems essentially persuaded by his friend's reasoning. (Hume 1974:412-414) 12. Of the academical or skeptical philosophy (in three parts) The first section of the last chapter is organized as an outline of various skeptical arguments. The treatment includes the arguments of atheism, Cartesian skepticism, "light" skepticism, and rationalist critiques of empiricism. Hume shows that even light skepticism leads to crushing doubts about the world which - while they ultimately are philosophically justifiable - may only be combated through the non-philosophical adherence to custom or habit. He ends the section with his own reservations towards Cartesian and Lockean epistemologies. In the second section he returns to the topic of hard skepticism by sharply denouncing it.
"For here is the chief and most confounding objection to excessive skepticism, that no durable good can ever result from it; while it remains in its full force and vigor. We need only ask such a skeptic, What his meaning is? And what he proposes by all these curious researches? He is immediately at a loss, and knows not what to answer... a Pyrrhonian cannot expect, that his philosophy will have any constant influence on the mind: or if it had, that its influence would be beneficial to society. On the contrary, he must acknowledge, if he will acknowledge anything, that all human life must perish, were his principles universally and steadily to prevail." (Hume 1974:426) He concludes the volume by setting out the limits of knowledge once and for all. "When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion." Critiques and rejoinders The criteria Hume lists in his examination of the validity of human testimony are roughly upheld in modern social psychology, under the rubric of the communication-persuasion paradigm. Supporting literature includes: the work of social impact theory, which discusses persuasion in part through the number of persons engaging in influence; as well as studies made on the relative influence of communicator credibility in different kinds of persuasion; and examinations of the trustworthiness of the speaker. The "custom" view of learning can in many ways be likened to associationist psychology. This point of view has been subject to severe criticism in the research of the 20th century. Still, testing on the subject has been somewhat divided. 
Tests on certain animals, such as cats, have concluded that they do not possess any faculty that allows their minds to grasp an insight into cause and effect. However, it has been shown that some animals, like chimpanzees, were able to generate creative plans of action to achieve their goals, and thus would seem to have a causal insight which transcends mere custom. References External links An Enquiry Concerning Human Understanding: Mirrored at eBooks@Adelaide A version of this work, slightly edited for easier reading An Enquiry Concerning Human Understanding The Enquiry hosted at infidels.org 1748 non-fiction books Books by David Hume Epistemology literature
Culture
Culture is a concept that encompasses the social behavior, institutions, and norms found in human societies, as well as the knowledge, beliefs, arts, laws, customs, capabilities, attitudes, and habits of the individuals in these groups. Culture often originates from or is attributed to a specific region or location. Humans acquire culture through the learning processes of enculturation and socialization, which is shown by the diversity of cultures across societies. A cultural norm codifies acceptable conduct in society; it serves as a guideline for behavior, dress, language, and demeanor in a situation, and as a template for expectations in a social group. Accepting only a monoculture in a social group can carry risks, just as a single species can wither in the face of environmental change, for lack of functional responses to the change. Thus in military culture, valor is counted a typical behavior for an individual, and duty, honor, and loyalty to the social group are counted as virtues or functional responses in the continuum of conflict. In the practice of religion, analogous attributes can be identified in a social group. Cultural change, or repositioning, is the reconstruction of a cultural concept of a society. Cultures are internally affected by both forces encouraging change and forces resisting change. Cultures are externally affected via contact between societies. Organizations like UNESCO attempt to preserve culture and cultural heritage. Description Culture is considered a central concept in anthropology, encompassing the range of phenomena that are transmitted through social learning in human societies. Cultural universals are found in all human societies. These include expressive forms like art, music, dance, ritual, religion, and technologies like tool usage, cooking, shelter, and clothing.
The concept of material culture covers the physical expressions of culture, such as technology, architecture and art, whereas the immaterial aspects of culture such as principles of social organization (including practices of political organization and social institutions), mythology, philosophy, literature (both written and oral), and science comprise the intangible cultural heritage of a society. In the humanities, one sense of culture as an attribute of the individual has been the degree to which they have cultivated a particular level of sophistication in the arts, sciences, education, or manners. The level of cultural sophistication has also sometimes been used to distinguish civilizations from less complex societies. Such hierarchical perspectives on culture are also found in class-based distinctions between a high culture of the social elite and a low culture, popular culture, or folk culture of the lower classes, distinguished by the stratified access to cultural capital. In common parlance, culture is often used to refer specifically to the symbolic markers used by ethnic groups to distinguish themselves visibly from each other such as body modification, clothing or jewelry. Mass culture refers to the mass-produced and mass mediated forms of consumer culture that emerged in the 20th century. Some schools of philosophy, such as Marxism and critical theory, have argued that culture is often used politically as a tool of the elites to manipulate the proletariat and create a false consciousness. Such perspectives are common in the discipline of cultural studies. In the wider social sciences, the theoretical perspective of cultural materialism holds that human symbolic culture arises from the material conditions of human life, as humans create the conditions for physical survival, and that the basis of culture is found in evolved biological dispositions. 
When used as a count noun, a "culture" is the set of customs, traditions, and values of a society or community, such as an ethnic group or nation. Culture is the set of knowledge acquired over time. In this sense, multiculturalism values the peaceful coexistence and mutual respect between different cultures inhabiting the same planet. Sometimes "culture" is also used to describe specific practices within a subgroup of a society, a subculture (e.g. "bro culture"), or a counterculture. Within cultural anthropology, the ideology and analytical stance of cultural relativism holds that cultures cannot easily be objectively ranked or evaluated because any evaluation is necessarily situated within the value system of a given culture.

Etymology

The modern term "culture" is based on a term used by the ancient Roman orator Cicero in his Tusculanae Disputationes, where he wrote of a cultivation of the soul or "cultura animi", using an agricultural metaphor for the development of a philosophical soul, understood teleologically as the highest possible ideal for human development. Samuel Pufendorf took over this metaphor in a modern context, meaning something similar, but no longer assuming that philosophy was man's natural perfection. His use, and that of many writers after him, "refers to all the ways in which human beings overcome their original barbarism, and through artifice, become fully human." In 1986, philosopher Edward S. Casey wrote, "The very word culture meant 'place tilled' in Middle English, and the same word goes back to Latin colere, 'to inhabit, care for, till, worship' and cultus, 'A cult, especially a religious one.' To be cultural, to have a culture, is to inhabit a place sufficiently intensely to cultivate it—to be responsible for it, to respond to it, to attend to it caringly."

Culture as described by Richard Velkley: ...
originally meant the cultivation of the soul or mind, acquires most of its later modern meaning in the writings of the 18th-century German thinkers, who were on various levels developing Rousseau's criticism of "modern liberalism and Enlightenment." Thus a contrast between "culture" and "civilization" is usually implied in these authors, even when not expressed as such. In the words of anthropologist E.B. Tylor, it is "that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society." Alternatively, in a contemporary variant, "Culture is defined as a social domain that emphasizes the practices, discourses and material expressions, which, over time, express the continuities and discontinuities of social meaning of a life held in common." The Cambridge English Dictionary states that culture is "the way of life, especially the general customs and beliefs, of a particular group of people at a particular time."

Terror management theory posits that culture is a series of activities and worldviews that provide humans with the basis for perceiving themselves as "person[s] of worth within the world of meaning"—raising themselves above the merely physical aspects of existence, in order to deny the animal insignificance and death that Homo sapiens became aware of when they acquired a larger brain.

The word is used in a general sense as the evolved ability to categorize and represent experiences with symbols and to act imaginatively and creatively. This ability arose with the evolution of behavioral modernity in humans around 50,000 years ago and is often thought to be unique to humans. However, some other species have demonstrated similar, though much less complicated, abilities for social learning.
It is also used to denote the complex networks of practices and accumulated knowledge and ideas that are transmitted through social interaction and exist in specific human groups, or cultures, using the plural form.

Change

Raimon Panikkar identified 29 ways in which cultural change can be brought about, including growth, development, evolution, involution, renovation, reconception, reform, innovation, revivalism, revolution, mutation, progress, diffusion, osmosis, borrowing, eclecticism, syncretism, modernization, indigenization, and transformation. In this context, modernization could be viewed as the adoption of Enlightenment-era beliefs and practices, such as science, rationalism, industry, commerce, democracy, and the notion of progress. Rein Raud, building on the work of Umberto Eco, Pierre Bourdieu and Jeffrey C. Alexander, has proposed a model of cultural change based on claims and bids, which are judged by their cognitive adequacy and endorsed or not endorsed by the symbolic authority of the cultural community in question. Cultural invention has come to mean any innovation that is new and found to be useful to a group of people and expressed in their behavior but which does not exist as a physical object. Humanity is in a global "accelerating culture change period," driven by the expansion of international commerce, the mass media, and above all, the human population explosion, among other factors. Culture repositioning means the reconstruction of the cultural concept of a society.

Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to both social structures and natural events, and are involved in the perpetuation of cultural ideas and practices within current structures, which themselves are subject to change.
Social conflict and the development of technologies can produce changes within a society by altering social dynamics, promoting new cultural models, and spurring or enabling generative action. These social shifts may accompany ideological shifts and other types of cultural change. For example, the U.S. feminist movement involved new practices that produced a shift in gender relations, altering both gender and economic structures. Environmental conditions may also enter as factors. For example, after tropical forests returned at the end of the last ice age, plants suitable for domestication were available, leading to the invention of agriculture, which in turn brought about many cultural innovations and shifts in social dynamics.

Cultures are externally affected via contact between societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer from one society to another, through diffusion or acculturation. In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another. For example, Western restaurant chains and culinary brands sparked curiosity and fascination among the Chinese as China opened its economy to international trade in the late 20th century. "Stimulus diffusion" (the sharing of ideas) refers to an element of one culture leading to an invention or propagation in another. "Direct borrowing", on the other hand, tends to refer to technological or tangible diffusion from one culture to another. Diffusion of innovations theory presents a research-based model of why and when individuals and cultures adopt new ideas, practices, and products. Acculturation has different meanings.
Still, in this context, it refers to the replacement of traits of one culture with another, such as what happened to certain Native American tribes and to many indigenous peoples across the globe during the process of colonization. Related processes on an individual level include assimilation (adoption of a different culture by an individual) and transculturation. The transnational flow of culture has played a major role in merging different cultures and sharing thoughts, ideas, and beliefs.

Early modern discourses

German Romanticism

Immanuel Kant (1724–1804) formulated an individualist definition of "enlightenment" similar to the concept of Bildung: "Enlightenment is man's emergence from his self-incurred immaturity." He argued that this immaturity comes not from a lack of understanding, but from a lack of courage to think independently. Against this intellectual cowardice, Kant urged: "Sapere aude!" ("Dare to be wise!"). In reaction to Kant, German scholars such as Johann Gottfried Herder (1744–1803) argued that human creativity, which necessarily takes unpredictable and highly diverse forms, is as important as human rationality. Moreover, Herder proposed a collective form of Bildung: "For Herder, Bildung was the totality of experiences that provide a coherent identity, and sense of common destiny, to a people." In 1795, the Prussian linguist and philosopher Wilhelm von Humboldt (1767–1835) called for an anthropology that would synthesize Kant's and Herder's interests. During the Romantic era, scholars in Germany, especially those concerned with nationalist movements—such as the nationalist struggle to create a "Germany" out of diverse principalities, and the nationalist struggles by ethnic minorities against the Austro-Hungarian Empire—developed a more inclusive notion of culture as "worldview". According to this school of thought, each ethnic group has a distinct worldview that is incommensurable with the worldviews of other groups.
Although more inclusive than earlier views, this approach to culture still allowed for distinctions between "civilized" and "primitive" or "tribal" cultures. In 1860, Adolf Bastian (1826–1905) argued for "the psychic unity of mankind." He proposed that a scientific comparison of all human societies would reveal that distinct worldviews consisted of the same basic elements. According to Bastian, all human societies share a set of "elementary ideas"; different cultures, or different "folk ideas", are local modifications of the elementary ideas. This view paved the way for the modern understanding of culture. Franz Boas (1858–1942) was trained in this tradition, and he brought it with him when he left Germany for the United States.

English Romanticism

In the 19th century, humanists such as the English poet and essayist Matthew Arnold (1822–1888) used the word "culture" to refer to an ideal of individual human refinement, of "the best that has been thought and said in the world." This concept of culture is also comparable to the German concept of Bildung: "...culture being a pursuit of our total perfection by means of getting to know, on all the matters which most concern us, the best which has been thought and said in the world." In practice, culture referred to an elite ideal and was associated with such activities as art, classical music, and haute cuisine. As these forms were associated with urban life, "culture" was identified with "civilization" (from the Latin civitas). Another facet of the Romantic movement was an interest in folklore, which led to identifying a "culture" among non-elites. This distinction is often characterized as that between high culture, namely that of the ruling social group, and low culture. In other words, the idea of "culture" that developed in Europe during the 18th and early 19th centuries reflected inequalities within European societies.
Matthew Arnold contrasted "culture" with anarchy; other Europeans, following philosophers Thomas Hobbes and Jean-Jacques Rousseau, contrasted "culture" with "the state of nature." According to Hobbes and Rousseau, the Native Americans who were being conquered by Europeans from the 16th century on were living in a state of nature; this opposition was expressed through the contrast between "civilized" and "uncivilized." According to this way of thinking, one could classify some countries and nations as more civilized than others and some people as more cultured than others. This contrast led to Herbert Spencer's theory of Social Darwinism and Lewis Henry Morgan's theory of cultural evolution. Just as some critics have argued that the distinction between high and low cultures is an expression of the conflict between European elites and non-elites, other critics have argued that the distinction between civilized and uncivilized people is an expression of the conflict between European colonial powers and their colonial subjects. Other 19th-century critics, following Rousseau, accepted this differentiation between higher and lower culture, but saw the refinement and sophistication of high culture as corrupting and unnatural developments that obscure and distort people's essential nature. These critics considered folk music (as produced by "the folk," i.e., rural, illiterate peasants) to honestly express a natural way of life, while classical music seemed superficial and decadent. Equally, this view often portrayed indigenous peoples as "noble savages" living authentic and unblemished lives, uncomplicated and uncorrupted by the highly stratified capitalist systems of the West.

In 1870 the anthropologist Edward Tylor (1832–1917) applied these ideas of higher versus lower culture to propose a theory of the evolution of religion. According to this theory, religion evolves from more polytheistic to more monotheistic forms.
In the process, he redefined culture as a diverse set of activities characteristic of all human societies. This view paved the way for the modern understanding of religion.

Anthropology

Although anthropologists worldwide refer to Tylor's definition of culture, in the 20th century "culture" emerged as the central and unifying concept of American anthropology, where it most commonly refers to the universal human capacity to classify and encode human experiences symbolically, and to communicate symbolically encoded experiences socially. American anthropology is organized into four fields, each of which plays an important role in research on culture: biological anthropology, linguistic anthropology, cultural anthropology, and, in the United States and Canada, archaeology. The term Kulturbrille, or "culture glasses," coined by the German American anthropologist Franz Boas, refers to the "lenses" through which a person sees their own culture. Martin Lindstrom asserts that Kulturbrillen, which allow a person to make sense of the culture they inhabit, "can blind us to things outsiders pick up immediately."

Sociology

The sociology of culture concerns culture as manifested in society. For sociologist Georg Simmel (1858–1918), culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history." As such, culture in the sociological field can be defined as the ways of thinking, the ways of acting, and the material objects that together shape a people's way of life. Culture can be either of two types, non-material culture or material culture. Non-material culture refers to the non-physical ideas that individuals have about their culture, including values, belief systems, rules, norms, morals, language, organizations, and institutions, while material culture is the physical evidence of a culture in the objects and architecture they make or have made.
The term material culture tends to be relevant only in archeological and anthropological studies, but it specifically means all material evidence which can be attributed to culture, past or present.

Cultural sociology first emerged in Weimar Germany (1918–1933), where sociologists such as Alfred Weber used the term Kultursoziologie ('cultural sociology'). Cultural sociology was then reinvented in the English-speaking world as a product of the cultural turn of the 1960s, which ushered in structuralist and postmodern approaches to social science. This type of cultural sociology may be loosely regarded as an approach incorporating cultural analysis and critical theory. Cultural sociologists tend to reject scientific methods, instead hermeneutically focusing on words, artifacts, and symbols. Culture has since become an important concept across many branches of sociology, including resolutely scientific fields like social stratification and social network analysis. As a result, there has been a recent influx of quantitative sociologists to the field. Thus, there is now a growing group of sociologists of culture who are, confusingly, not cultural sociologists. These scholars reject the abstracted postmodern aspects of cultural sociology and instead look for a theoretical backing in the more scientific vein of social psychology and cognitive science.

Early researchers and development of cultural sociology

The sociology of culture grew from the intersection between sociology (as shaped by early theorists like Marx, Durkheim, and Weber) and the growing discipline of anthropology, wherein researchers pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world. Part of the legacy of the early development of the field lingers in the methods (much of cultural-sociological research is qualitative), in the theories (a variety of critical approaches to sociology are central to current research communities), and in the substantive focus of the field.
For instance, relationships between popular culture, political control, and social class were early and lasting concerns in the field.

Cultural studies

In the United Kingdom, sociologists and other scholars influenced by Marxism, such as Stuart Hall (1932–2014) and Raymond Williams (1921–1988), developed cultural studies. Following nineteenth-century Romantics, they identified culture with consumption goods and leisure activities (such as art, music, film, food, sports, and clothing). They saw patterns of consumption and leisure as determined by relations of production, which led them to focus on class relations and the organization of production. In the United Kingdom, cultural studies focuses largely on the study of popular culture; that is, on the social meanings of mass-produced consumer and leisure goods. Richard Hoggart coined the term in 1964 when he founded the Birmingham Centre for Contemporary Cultural Studies (CCCS). It has since become strongly associated with Stuart Hall, who succeeded Hoggart as Director. Cultural studies in this sense, then, can be viewed as a limited concentration scoped on the intricacies of consumerism, which belongs to a wider culture sometimes referred to as Western civilization or globalism. From the 1970s onward, Stuart Hall's pioneering work, along with that of his colleagues Paul Willis, Dick Hebdige, Tony Jefferson, and Angela McRobbie, created an international intellectual movement. As the field developed, it began to combine political economy, communication, sociology, social theory, literary theory, media theory, film/video studies, cultural anthropology, philosophy, museum studies, and art history to study cultural phenomena or cultural texts. In this field researchers often concentrate on how particular phenomena relate to matters of ideology, nationality, ethnicity, social class, and/or gender. Cultural studies is concerned with the meaning and practices of everyday life.
These practices comprise the ways people do particular things (such as watching television or eating out) in a given culture. It also studies the meanings and uses people attribute to various objects and practices. Specifically, culture involves those meanings and practices held independently of reason. Watching television to view a public perspective on a historical event should not be thought of as culture unless referring to the medium of television itself, which may have been selected culturally; however, schoolchildren watching television after school with their friends to "fit in" certainly qualifies since there is no grounded reason for one's participation in this practice. In the context of cultural studies, a text includes not only written language, but also films, photographs, fashion, or hairstyles: the texts of cultural studies comprise all the meaningful artifacts of culture. Similarly, the discipline widens the concept of culture. Culture, for a cultural-studies researcher, not only includes traditional high culture (the culture of ruling social groups) and popular culture, but also everyday meanings and practices. The last two, in fact, have become the main focus of cultural studies. A further and recent approach is comparative cultural studies, based on the disciplines of comparative literature and cultural studies. Scholars in the United Kingdom and the United States developed somewhat different versions of cultural studies after the late 1970s. The British version of cultural studies had originated in the 1950s and 1960s, mainly under the influence of Richard Hoggart, E.P. Thompson, and Raymond Williams, and later that of Stuart Hall and others at the Centre for Contemporary Cultural Studies at the University of Birmingham. This included overtly political, left-wing views, and criticisms of popular culture as "capitalist" mass culture; it absorbed some of the ideas of the Frankfurt School critique of the "culture industry" (i.e. mass culture). 
This emerges in the writings of early British cultural-studies scholars and their influences: see the work of (for example) Raymond Williams, Stuart Hall, Paul Willis, and Paul Gilroy. In the United States, Lindlof and Taylor write, "cultural studies [were] grounded in a pragmatic, liberal-pluralist tradition." The American version of cultural studies initially concerned itself more with understanding the subjective and appropriative side of audience reactions to, and uses of, mass culture; for example, American cultural-studies advocates wrote about the liberatory aspects of fandom. The distinction between American and British strands, however, has faded. Some researchers, especially in early British cultural studies, apply a Marxist model to the field. This strain of thinking has some influence from the Frankfurt School, but especially from the structuralist Marxism of Louis Althusser and others. The main focus of an orthodox Marxist approach concentrates on the production of meaning. This model assumes a mass production of culture and identifies power as residing with those producing cultural artifacts. In a Marxist view, the mode and relations of production form the economic base of society, which constantly interacts and influences superstructures, such as culture. Other approaches to cultural studies, such as feminist cultural studies and later American developments of the field, distance themselves from this view. They criticize the Marxist assumption of a single, dominant meaning, shared by all, for any cultural product. The non-Marxist approaches suggest that different ways of consuming cultural artifacts affect the meaning of the product. This view comes through in the book Doing Cultural Studies: The Story of the Sony Walkman (by Paul du Gay et al.), which seeks to challenge the notion that those who produce commodities control the meanings that people attribute to them. 
Feminist cultural analyst, theorist, and art historian Griselda Pollock contributed to cultural studies from the viewpoints of art history and psychoanalysis. The writer Julia Kristeva is among the influential voices at the turn of the century, contributing to cultural studies from the field of art and psychoanalytical French feminism.

Petrakis and Kostis (2013) divide cultural background variables into two main groups. The first group covers the variables that represent the "efficiency orientation" of societies: performance orientation, future orientation, assertiveness, power distance, and uncertainty avoidance. The second covers the variables that represent the "social orientation" of societies, i.e., the attitudes and lifestyles of their members. These variables include gender egalitarianism, institutional collectivism, in-group collectivism, and humane orientation.

In 2016, a new approach to culture was suggested by Rein Raud, who defines culture as the sum of resources available to human beings for making sense of their world and proposes a two-tiered approach, combining the study of texts (all reified meanings in circulation) and cultural practices (all repeatable actions that involve the production, dissemination, or transmission of purposes), thus making it possible to re-link the anthropological and sociological study of culture with the tradition of textual theory.

Psychology

Starting in the 1990s, psychological research on the influence of culture began to grow and to challenge the universality assumed in general psychology. Cultural psychologists began to explore the relationship between emotions and culture, and to ask whether the human mind is independent of culture. For example, people from collectivistic cultures, such as the Japanese, suppress their positive emotions more than their American counterparts. Culture may affect the way that people experience and express emotions.
On the other hand, some researchers try to look for differences between people's personalities across cultures. As different cultures dictate distinctive norms, culture shock is also studied to understand how people react when they are confronted with other cultures. LGBT culture is met with significantly different levels of tolerance within different cultures and nations. Cognitive tools may not be accessible, or they may function differently, across cultures. For example, people who are raised in a culture with an abacus are trained in a distinctive reasoning style. Cultural lenses may also make people view the same outcome of events differently. Westerners are more motivated by their successes than their failures, while East Asians are better motivated by the avoidance of failure. Culture is important for psychologists to consider when understanding human mental operation.

The notion of the anxious, unstable, and rebellious adolescent has been criticized by experts, such as Robert Epstein, who state that an undeveloped brain is not the main cause of teenagers' turmoil. Some have criticized this understanding of adolescence, classifying it as a relatively recent phenomenon in human history created by modern society, and have been highly critical of what they view as the infantilization of young adults in American society. According to Robert Epstein and Jennifer, "American-style teen turmoil is absent in more than 100 cultures around the world, suggesting that such mayhem is not biologically inevitable. Second, the brain itself changes in response to experiences, raising the question of whether adolescent brain characteristics are the cause of teen tumult or rather the result of lifestyle and experiences." David Moshman has also stated with regard to adolescence that brain research "is crucial for a full picture, but it does not provide an ultimate explanation."
Protection of culture

There are a number of international agreements and national laws relating to the protection of cultural heritage and cultural diversity. UNESCO and its partner organizations such as Blue Shield International coordinate international protection and local implementation. The Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict and the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions deal with the protection of culture. Article 27 of the Universal Declaration of Human Rights deals with cultural heritage in two ways: it gives people the right to participate in cultural life on the one hand and the right to the protection of their contributions to cultural life on the other.

In the 21st century, the protection of culture has been the focus of increasing activity by national and international organizations. The UN and UNESCO promote cultural preservation and cultural diversity through declarations and legally binding conventions or treaties. The aim is not to protect a person's property, but rather to preserve the cultural heritage of humanity, especially in the event of war and armed conflict. According to Karl von Habsburg, President of Blue Shield International, the destruction of cultural assets is also part of psychological warfare: the target of the attack is the identity of the opponent, which is why symbolic cultural assets become a main target. Such destruction is also intended to affect the particularly sensitive cultural memory, the growing cultural diversity, and the economic basis (such as tourism) of a state, region, or municipality.

Tourism is having an increasing impact on the various forms of culture. On the one hand, this can be physical impact on individual objects or the destruction caused by increasing environmental pollution; on the other hand, there are socio-cultural effects on society.
See also

Animal culture
Anthropology
Cultural area
Cultural studies
Cultural identity
Cultural tourism
Culture 21 – United Nations plan of action
Outline of culture
Recombinant culture
Semiotics of culture

Further reading

Books

Arnold, Matthew (1869). Culture and Anarchy. New York: Macmillan. Third edition, 1882, available online. Retrieved 2006-06-28.
Bakhtin, M.M. (1981). The Dialogic Imagination: Four Essays. Ed. Michael Holquist. Trans. Caryl Emerson.
Barzilai, Gad (2003). Communities and Law: Politics and Cultures of Legal Identities. University of Michigan Press.
Bastian, Adolf (2009). Encyclopædia Britannica Online.
Bourdieu, Pierre (1977). Outline of a Theory of Practice. Cambridge University Press.
Carhart, Michael C. (2007). The Science of Culture in Enlightenment Germany. Cambridge: Harvard University Press.
Cohen, Anthony P. (1985). The Symbolic Construction of Community. New York: Routledge.
Dawkins, R. (1999) [1982]. The Extended Phenotype: The Long Reach of the Gene. Oxford Paperbacks.
Findley & Rothney (1986). Twentieth-Century World. Houghton Mifflin.
Geertz, Clifford (1973). The Interpretation of Cultures: Selected Essays. New York.
Goodall, J. (1986). The Chimpanzees of Gombe: Patterns of Behavior. Cambridge, Massachusetts: Belknap Press of Harvard University Press.
Hoult, T.F., ed. (1969). Dictionary of Modern Sociology. Totowa, New Jersey: Littlefield, Adams & Co.
Jary, D. and J. Jary (1991). The HarperCollins Dictionary of Sociology. New York: HarperCollins.
Keiser, R. Lincoln (1969). The Vice Lords: Warriors of the Streets. Holt, Rinehart, and Winston.
Kim, Uichol (2001). "Culture, science and indigenous psychologies: An integrated analysis." In D. Matsumoto (ed.), Handbook of Culture and Psychology. Oxford: Oxford University Press.
Kroeber, A.L. and C. Kluckhohn (1952). Culture: A Critical Review of Concepts and Definitions. Cambridge, Massachusetts: Peabody Museum.
McClenon, James (1998). "Tylor, Edward B(urnett)." In Encyclopedia of Religion and Society, ed. William Swatos and Peter Kivisto. Walnut Creek: AltaMira, pp. 528–529.
Middleton, R. (1990). Studying Popular Music. Philadelphia: Open University Press.
O'Neil, D. (2006). Cultural Anthropology Tutorials. Behavioral Sciences Department, Palomar College, San Marcos, California. Retrieved 2006-07-10.
Reagan, Ronald. "Final Radio Address to the Nation", January 14, 1989. Retrieved June 3, 2006.
Reese, W.L. (1980). Dictionary of Philosophy and Religion: Eastern and Western Thought. New Jersey, US; Sussex, UK: Humanities Press.
UNESCO (2002). Universal Declaration on Cultural Diversity, issued on International Mother Language Day.
White, L. (1949). The Science of Culture: A Study of Man and Civilization. New York: Farrar, Straus and Giroux.
Wilson, Edward O. (1998). Consilience: The Unity of Knowledge. New York: Vintage.

Articles

Rothman, Joshua (2014-12-27). "The Meaning of 'Culture'." The New Yorker.

External links

Cultura: International Journal of Philosophy of Culture and Axiology
What is Culture?
Neopragmatism
Neopragmatism is a variant of pragmatism that holds that the meaning of words is a result of how they are used, rather than of the objects they represent. The Blackwell Dictionary of Western Philosophy (2004) defines "neo-pragmatism" as "A postmodern version of pragmatism developed by the American philosopher Richard Rorty and drawing inspiration from authors such as John Dewey, Martin Heidegger, Wilfrid Sellars, W. V. O. Quine, and Jacques Derrida". It is a contemporary term for a philosophy which reintroduces many concepts from pragmatism. While traditional pragmatism focuses on experience, Rorty centers on language. The self is regarded as a "centerless web of beliefs and desires". Neopragmatism repudiates the notions of universal truth, epistemological foundationalism, representationalism, and epistemic objectivity. It is a nominalist approach that denies that natural kinds and linguistic entities have substantive ontological implications. Rorty denies that the subject matter of the human sciences can be studied in the same way as we study the natural sciences. Neopragmatism has been associated with a variety of other thinkers, including Hilary Putnam, W. V. O. Quine, and Donald Davidson, though none of these figures has called himself a "neopragmatist". The following contemporary philosophers are also often considered to be neopragmatists: Nicholas Rescher (a proponent of methodological pragmatism and pragmatic idealism), Jürgen Habermas, Susan Haack, Robert Brandom, and Cornel West.

Background

"Anglo-analytic" influences

Neopragmatists, particularly Rorty and Putnam, draw on the ideas of classical pragmatists such as Charles Sanders Peirce, William James, and John Dewey. Putnam, in Words and Life (1994), enumerates the ideas in the classical pragmatist tradition which newer pragmatists find most compelling.
To paraphrase Putnam:
Rejection of skepticism (pragmatists hold that doubt requires justification just as much as belief);
Fallibilism (the view that there are no metaphysical guarantees against the need to revise a belief);
Antidualism about "facts" and "values";
That practice, properly construed, is primary in philosophy. (WL 152)
Neopragmatism is distinguished from classical pragmatism (the pragmatism of James, Dewey, Peirce, and Mead) primarily due to the influence of the linguistic turn in philosophy that occurred in the early and mid-twentieth century. The linguistic turn reduced talk of mind, ideas, and the world to talk of language and the world. Philosophers stopped talking about the ideas or concepts one may have present in one's mind and started talking about the "mental language" and the terms used to employ these concepts. In the early twentieth century, philosophers of language (e.g. A. J. Ayer, Bertrand Russell, G. E. Moore) thought that analyzing language would clarify meaning and objectivity and, ultimately, yield truth concerning external reality. In this tradition, it was thought that truth was obtained when linguistic terms stood in a proper correspondence relation to non-linguistic objects (this can be called "representationalism"). The thought was that in order for a statement or proposition to be true, it must state facts which correspond to what is actually present in reality. This is called the correspondence theory of truth and is to be distinguished from a neo-pragmatic conception of truth. Many philosophical inquiries during the mid-twentieth century began to undermine the legitimacy of the methodology of the early Anglo-analytic philosophers of language. W. V. O. Quine, in Word and Object (originally published in 1960), attacked the notion that our concepts have any strong correspondence to reality.
Quine argued for ontological relativity, which attacks the idea that language could ever describe or paint a purely non-subjective picture of reality. More specifically, ontological relativity is the thesis that the things we believe to exist in the world are wholly dependent on our subjective "mental languages". A 'mental language' is simply the way words which denote concepts in our minds are mapped to objects in the world. Quine's argument for ontological relativity is roughly as follows:
All ideas and perceptions concerning reality are given to our minds in terms of our own mental language.
Mental languages specify how objects in the world are to be constructed from our sense data.
Different mental languages will specify different ontologies (different objects existing in the world).
There is no way to perfectly translate between two different mental languages; there will always be several consistent ways in which the terms in each language can be mapped onto the other.
Reality apart from our perceptions of it can be thought of as constituting a true, object language, that is, the language which specifies how things actually are.
There is no difference between translating between two mental languages and translating between the object language of reality and one's own mental language.
Therefore, just as there is no objective way of translating between two mental languages (no one-to-one mapping of terms in one to terms in the other), there is no way of objectively translating (or fitting) the true, object language of reality into our own mental language.
And therefore, there are many ontologies (possibly an infinite number) that can be consistently held to represent reality. (See Chapter 2 of Word and Object.)
The above argument is reminiscent of the theme in neopragmatism against the picture theory of language, the idea that the goal of inquiry is to represent reality correctly with one's language.
A second philosopher critically influential for the neopragmatists is Thomas Kuhn, who argued that our languages for representing reality, or what he called "paradigms", are only as good as the future experiments and observations they make possible. Kuhn, being a philosopher of science, argued in The Structure of Scientific Revolutions that "scientific progress" was a kind of misnomer; for Kuhn, we make progress in science whenever we throw off old scientific paradigms with their associated concepts and methods in favor of new paradigms which offer novel experiments to be done and new scientific ontologies. For Kuhn, 'electrons' exist just insofar as they are useful in providing us with novel experiments which will allow us to uncover more about the new paradigm we have adopted. Kuhn believes that different paradigms posit different things to exist in the world and are therefore incommensurable with each other. Another way of viewing this is that paradigms describe new languages, which allow us to describe the world in new ways. Kuhn was a fallibilist; he believed that all scientific paradigms (e.g. classical Newtonian mechanics, Einsteinian relativity) should be assumed to be, on the whole, false but good for a time, as they give scientists new ideas to play around with. Kuhn's fallibilism, holism, emphasis on incommensurability, and ideas concerning objective reality are themes which often show up in neopragmatist writings. Wilfrid Sellars argued against foundationalist justification in epistemology and was therefore also highly influential for the neopragmatists, especially Rorty. "Continental" influences Philosophers such as Derrida and Heidegger and their views on language have been highly influential on neopragmatist thinkers like Richard Rorty. Rorty has also emphasised the value of "historicist" or "genealogical" methods of philosophy typified by Continental thinkers such as Foucault.
Wittgenstein and language games The "later" Ludwig Wittgenstein of the Philosophical Investigations argues, contrary to his earlier views in the Tractatus Logico-Philosophicus, that the role of language is not to describe reality but rather to perform certain actions in communities. The language-game is the concept Wittgenstein used to emphasize this. Wittgenstein believed roughly that:
Languages are used to obtain certain ends within communities.
Each language has its own set of rules and objects to which it refers.
Just as board games have rules guiding what moves may be made, so do languages within communities, where the moves to be made within a language game are the types of objects that may be talked about intelligibly.
Two people participating in two different language-games cannot be said to communicate in any relevant way.
Many of the themes found in Wittgenstein are found in neopragmatism. Wittgenstein's emphasis on the importance of "use" in language to accomplish communal goals, and the problems associated with trying to communicate between two different language games, find much traction in neopragmatist writings. Richard Rorty and anti-representationalism Richard Rorty was influenced by James, Dewey, Sellars, Quine, Kuhn, Wittgenstein, Derrida, and Heidegger. He found common implications in the writings of many of these philosophers, as he believed that they were all in one way or another trying to hit on the thesis that our language does not represent things in reality in any relevant way. Rather than trying to situate our language so as to get things right or correct, Rorty says in the Introduction to the first volume of his philosophical papers that we should believe that beliefs are only habits we use to react and adapt to the world. To Rorty, getting things right as they are "in themselves" is useless if not downright meaningless.
In 1995, Rorty wrote: "I linguisticize as many pre-linguistic-turn philosophers as I can, in order to read them as prophets of the utopia in which all metaphysical problems have been dissolved, and religion and science have yielded their place to poetry." This "linguistic turn" strategy aims to avoid what Rorty sees as the essentialisms ("truth," "reality," "experience") still extant in classical pragmatism. Rorty wrote: "Analytic philosophy, thanks to its concentration on language, was able to defend certain crucial pragmatist theses better than James and Dewey themselves. [...] By focusing our attention on the relation between language and the rest of the world rather than between experience and nature, post-positivistic analytic philosophy was able to make a more radical break with the philosophical tradition." See also Conceptual pragmatism Confirmation holism Constructivist epistemology Direct and indirect realism Fallibilism Linguistic turn Ontological pluralism Philosophy and the Mirror of Nature, the foundational text of the tradition Postanalytic philosophy Contingency, Irony, and Solidarity Notes References Hylton, Peter, "Willard van Orman Quine", The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.). Further reading Randall Auxier, Eli Kramer, and Krzysztof Piotr Skowroński, eds., (2019). Rorty and Beyond, Lexington. Krzysztof Piotr Skowroński (2015). Values, Valuations, and Axiological Norms in Richard Rorty's Neopragmatism, Lexington. External links Neo-pragmatist Philosophy of Education Epistemological theories Pragmatism Metatheory
Validity (statistics)
Validity is the extent to which a concept, conclusion, or measurement is well-founded and likely corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (for example, a test in education) is the degree to which the tool measures what it claims to measure. Validity is based on the strength of a collection of different types of evidence (e.g. face validity, construct validity, etc.) described in greater detail below. In psychometrics, validity has a particular application known as test validity: "the degree to which evidence and theory support the interpretations of test scores" ("as entailed by proposed uses of tests"). It is generally accepted that the concept of scientific validity addresses the nature of reality in terms of statistical measures, and as such it is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the relationship between the premises and conclusion of an argument. In logic, validity refers to the property of an argument whereby if the premises are true then the truth of the conclusion follows by necessity. The conclusion of an argument is true if the argument is sound, which is to say if the argument is valid and its premises are true. By contrast, "scientific or statistical validity" is not a deductive claim that is necessarily truth-preserving, but an inductive claim whose truth or falsity remains undecided. This is why a claim of "scientific or statistical validity" is qualified as either strong or weak in nature; it is never necessarily or certainly true. This has the effect of making claims of "scientific or statistical validity" open to interpretation as to what, in fact, the facts of the matter mean.
Validity is important because it can help determine what types of tests to use, and help to ensure researchers are using methods that are not only ethical and cost-effective, but also ones that truly measure the ideas or constructs in question. Test validity Validity (accuracy) Validity of an assessment is the degree to which it measures what it is supposed to measure. This is not the same as reliability, which is the extent to which a measurement gives consistent results. Unlike reliability, validity does not require that repeated measurements be similar. Moreover, a measure that is reliable is not necessarily valid: for example, a scale that is consistently 5 pounds off is reliable but not valid. A test cannot, however, be valid unless it is reliable. Validity also depends on the measurement measuring what it was designed to measure, and not something else instead. Validity (similar to reliability) is a relative concept; validity is not an all-or-nothing idea. There are many different types of validity. Construct validity Construct validity refers to the extent to which operationalizations of a construct (e.g., practical tests developed from a theory) measure a construct as defined by a theory. It subsumes all other types of validity. For example, the extent to which a test measures intelligence is a question of construct validity. A measure of intelligence presumes, among other things, that the measure is associated with things it should be associated with (convergent validity), and not associated with things it should not be associated with (discriminant validity). Construct validity evidence involves the empirical and theoretical support for the interpretation of the construct. Such lines of evidence include statistical analyses of the internal structure of the test, including the relationships between responses to different test items. They also include relationships between the test and measures of other constructs.
As currently understood, construct validity is not distinct from the support for the substantive theory of the construct that the test is designed to measure. As such, experiments designed to reveal aspects of the causal role of the construct also contribute to construct validity evidence. Content validity Content validity is a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997, p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature? Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves a subject matter expert (SME) evaluating test items against the test specifications. Experts should pay attention to any cultural differences. For example, when a driving assessment questionnaire is adapted from England (e.g. the DBQ), the experts should take into account the right-hand-drive convention in Britain. Some studies have found this to be critical for obtaining a valid questionnaire. Before the final administration of a questionnaire, the researcher should check the validity of its items against each of the constructs or variables and modify the measurement instruments accordingly on the basis of the SMEs' opinions. A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Foxcroft, Paterson, le Roux & Herbst (2004, p.
49)<ref>Foxcroft, C., Paterson, H., le Roux, N., & Herbst, D. Human Sciences Research Council, (2004). 'Psychological assessment in South Africa: A needs analysis: The test use patterns and needs of psychological assessment practitioners: Final Report: July. Retrieved from website: http://www.hsrc.ac.za/research/output/outputDocuments/1716_Foxcroft_Psychologicalassessmentin%20SA.pdf</ref> note that the content validity of a test can be improved by using a panel of experts to review the test specifications and the selection of items. The experts will be able to review the items and comment on whether the items cover a representative sample of the behavior domain. Face validity Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. A measure may have high validity, but if the test does not appear to measure what it in fact measures, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid. Since one may get more honest answers with lower face validity, it is sometimes important to make a test appear to have low face validity whilst administering the measures. Face validity is very closely related to content validity. While content validity depends on a theoretical basis for assuming whether a test assesses all domains of a certain criterion (e.g. does assessing addition skills yield a good measure of mathematical skills? To answer this you have to know which kinds of arithmetic skills mathematical skill includes), face validity relates to whether a test appears to be a good measure or not. This judgment is made on the "face" of the test, and thus it can also be made by an amateur.
Face validity is a starting point, but should never be taken to establish that a test is valid for any given purpose, as the "experts" have been wrong before: the Malleus Maleficarum (Hammer of Witches) had no support for its conclusions other than the self-imagined competence of two "experts" in "witchcraft detection", yet it was used as a "test" to condemn and burn at the stake tens of thousands of men and women as "witches". Criterion validity Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion). If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data are collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence. Concurrent validity Concurrent validity refers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time. When the measure is compared to another measure of the same type, they will be related (or correlated). Returning to the selection test example, this would mean that the tests are administered to current employees and then correlated with their scores on performance reviews. Predictive validity Predictive validity refers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the future.
Again, with the selection test example, this would mean that the tests are administered to applicants, all applicants are hired, their performance is reviewed at a later time, and then their scores on the two measures are correlated. In other words, the measurement is used to predict whether or not something else will happen in the future. High correlation between ex-ante predicted and ex-post actual outcomes is the strongest evidence of validity. Experimental validity The validity of the design of experimental research studies is a fundamental part of the scientific method, and a concern of research ethics. Without a valid design, valid scientific conclusions cannot be drawn. Statistical conclusion validity Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or 'reasonable'. This began as being solely about whether the statistical conclusion about the relationship of the variables was correct, but there is now a movement towards 'reasonable' conclusions that draw on quantitative, statistical, and qualitative data. Statistical conclusion validity involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. As this type of validity is concerned solely with the relationship that is found among variables, the relationship may be solely a correlation. Internal validity Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g. cause and effect), based on the measures used, the research setting, and the whole research design. Good experimental techniques, in which the effect of an independent variable on a dependent variable is studied under highly controlled conditions, usually allow for higher degrees of internal validity than, for example, single-case designs.
Eight kinds of confounding variable can interfere with internal validity (i.e. with the attempt to isolate causal relationships):
History, the specific events occurring between the first and second measurements in addition to the experimental variables
Maturation, processes within the participants as a function of the passage of time (not specific to particular events), e.g., growing older, hungrier, more tired, and so on
Testing, the effects of taking a test upon the scores of a second testing
Instrumentation, changes in calibration of a measurement tool or changes in the observers or scorers, which may produce changes in the obtained measurements
Statistical regression, operating where groups have been selected on the basis of their extreme scores
Selection, biases resulting from differential selection of respondents for the comparison groups
Experimental mortality, or differential loss of respondents from the comparison groups
Selection-maturation interaction, e.g., in multiple-group quasi-experimental designs
External validity External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, for example to different people, places or times. In other words, it is about whether findings can be validly generalized. If the same research study was conducted in those other cases, would it get the same results? A major factor in this is whether the study sample (e.g. the research participants) is representative of the general population along relevant dimensions. Other factors jeopardizing external validity are:
Reactive or interaction effect of testing, where a pretest might increase the scores on a posttest
Interaction effects of selection biases and the experimental variable
Reactive effects of experimental arrangements, which would preclude generalization about the effect of the experimental variable upon persons being exposed to it in non-experimental settings
Multiple-treatment interference, where effects of earlier treatments are not erasable
Ecological validity Ecological validity is the extent to which research results can be applied to real-life situations outside of research settings. This issue is closely related to external validity but covers the question of to what degree experimental findings mirror what can be observed in the real world (ecology = the science of interaction between an organism and its environment). To be ecologically valid, the methods, materials and setting of a study must approximate the real-life situation that is under investigation. Ecological validity is partly related to the issue of experiment versus observation. Typically in science, there are two domains of research: observational (passive) and experimental (active). The purpose of experimental designs is to test causality, so that you can infer that A causes B or that B causes A. But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal; it is correlational. You can only conclude that A occurs together with B. Both techniques have their strengths and weaknesses. Relationship to internal validity At first glance, internal and external validity seem to contradict each other – to get an experimental design you have to control for all interfering variables. That is why you often conduct your experiment in a laboratory setting. While gaining internal validity (excluding interfering variables by keeping them constant) you lose ecological or external validity because you establish an artificial laboratory setting.
On the other hand, with observational research you cannot control for interfering variables (low internal validity), but you can measure in the natural (ecological) environment, at the place where behavior normally occurs. However, in doing so, you sacrifice internal validity. The apparent contradiction of internal validity and external validity is, however, only superficial. The question of whether results from a particular study generalize to other people, places or times arises only when one follows an inductivist research strategy. If the goal of a study is to deductively test a theory, one is only concerned with factors which might undermine the rigor of the study, i.e. threats to internal validity. In other words, the relevance of external and internal validity to a research study depends on the goals of the study. Furthermore, conflating research goals with validity concerns can lead to the mutual-internal-validity problem, where theories are able to explain only phenomena in artificial laboratory settings but not the real world. Diagnostic validity In psychiatry there is a particular issue with assessing the validity of the diagnostic categories themselves. In this context: content validity may refer to symptoms and diagnostic criteria; concurrent validity may be defined by various correlates or markers, and perhaps also treatment response; predictive validity may refer mainly to diagnostic stability over time; discriminant validity may involve delimitation from other disorders. Robins and Guze proposed in 1970 what were to become influential formal criteria for establishing the validity of psychiatric diagnoses.
They listed five criteria:
distinct clinical description (including symptom profiles, demographic characteristics, and typical precipitants)
laboratory studies (including psychological tests, radiology and postmortem findings)
delimitation from other disorders (by means of exclusion criteria)
follow-up studies showing a characteristic course (including evidence of diagnostic stability)
family studies showing familial clustering
These were incorporated into the Feighner Criteria and Research Diagnostic Criteria that have since formed the basis of the DSM and ICD classification systems. Kendler in 1980 distinguished between:
antecedent validators (familial aggregation, premorbid personality, and precipitating factors)
concurrent validators (including psychological tests)
predictive validators (diagnostic consistency over time, rates of relapse and recovery, and response to treatment)
Nancy Andreasen (1995) listed several additional validators – molecular genetics and molecular biology, neurochemistry, neuroanatomy, neurophysiology, and cognitive neuroscience – that are all potentially capable of linking symptoms and diagnoses to their neural substrates. Kendell and Jablensky (2003) emphasized the importance of distinguishing between validity and utility, and argued that diagnostic categories defined by their syndromes should be regarded as valid only if they have been shown to be discrete entities with natural boundaries that separate them from other disorders. Kendler (2006) emphasized that to be useful, a validating criterion must be sensitive enough to validate most syndromes that are true disorders, while also being specific enough to invalidate most syndromes that are not true disorders.
On this basis, he argues that a Robins and Guze criterion of "runs in the family" is inadequately specific because most human psychological and physical traits would qualify: for example, an arbitrary syndrome comprising a mixture of "height over 6 ft, red hair, and a large nose" will be found to "run in families" and be "hereditary", but this should not be considered evidence that it is a disorder. Kendler has further suggested that "essentialist" gene models of psychiatric disorders, and the hope that we will be able to validate categorical psychiatric diagnoses by "carving nature at its joints" solely as a result of gene discovery, are implausible. In the United States federal court system, the validity and reliability of evidence are evaluated using the Daubert standard: see Daubert v. Merrell Dow Pharmaceuticals. Perri and Lichtenwald (2010) provide a starting point for a discussion about a wide range of reliability and validity topics in their analysis of a wrongful murder conviction. See also All models are wrong Concurrent validity Content validity Construct validity Cross-validation (statistics) External validity Face validity Internal validity Predictive validity Regression model validation Statistical conclusion validity Statistical model validation Validity (logic) Validity scale Validation (disambiguation) Sensitivity and specificity References Further reading Philosophy of science Psychometrics
Modality (semantics)
In linguistics and philosophy, modality refers to the ways language can express various relationships to reality or truth. For instance, a modal expression may convey that something is likely, desirable, or permissible. Quintessential modal expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" or "probable". However, modal components have been identified in the meanings of countless natural language expressions, including counterfactuals, propositional attitudes, evidentials, habituals, and generics. Modality has been intensely studied from a variety of perspectives. Within linguistics, typological studies have traced crosslinguistic variation in the strategies used to mark modality, with a particular focus on its interaction with tense–aspect–mood marking. Theoretical linguists have sought to analyze both the propositional content and discourse effects of modal expressions using formal tools derived from modal logic. Within philosophy, linguistic modality is often seen as a window into broader metaphysical notions of necessity and possibility. Force and flavor Modal expressions come in different categories called flavors. Flavors differ in how the possibilities they discuss relate to reality. For instance, an expression like "might" is said to have epistemic flavor, since it discusses possibilities compatible with some body of knowledge. An expression like "obligatory" is said to have deontic flavor, since it discusses possibilities which are required given the laws or norms obeyed in reality. (1) Agatha must be the murderer. (expressing epistemic modality) (2) Agatha must go to jail. (expressing deontic modality) The sentence in (1) might be spoken by someone who has decided that all of the relevant facts in a particular murder investigation point to the conclusion that Agatha was the murderer, even though it may or may not actually be the case. 
The 'must' in this sentence thus expresses epistemic modality: "'for all we know', Agatha must be the murderer", where 'for all we know' is relative to some knowledge the speakers possess. In contrast, (2) might be spoken by someone who has decided that, according to some standard of conduct, Agatha has committed a vile crime, and therefore the correct course of action is to jail Agatha. In classic formal approaches to linguistic modality, an utterance expressing modality is one that can always roughly be paraphrased to fit the following template: (3) According to [a set of rules, wishes, beliefs,...] it is [necessary, possible] that [the main proposition] is the case. The set of propositions which forms the basis of evaluation is called the modal base. The result of the evaluation is called the modal force. For example, the utterance in (4) expresses that, according to what the speaker has observed, it is necessary to conclude that John has a rather high income: (4) John must be earning a lot of money. The modal base here is the knowledge of the speaker, the modal force is necessity. By contrast, (5) could be paraphrased as 'Given his abilities, the strength of his teeth, etc., it is possible for John to open a beer bottle with his teeth'. Here, the modal base is defined by a subset of John's abilities, the modal force is possibility. (5) John can open a beer bottle with his teeth. Formal semantics Linguistic modality has been one of the central concerns in formal semantics and philosophical logic. Research in these fields has led to a variety of accounts of the propositional content and conventional discourse effects of modal expressions. The predominant approaches in these fields are based on modal logic. In these approaches, modal expressions such as must and can are analyzed as quantifiers over a set of possible worlds. In classical modal logic, this set is identified as the set of worlds accessible from the world of evaluation. 
Since the seminal work of Angelika Kratzer, formal semanticists have adopted a more finely grained notion of this set, as determined by two conversational background functions called the modal base and the ordering source respectively. For an epistemic modal like English must or might, this set is understood to contain exactly those worlds compatible with the knowledge that the speaker has in the actual world. Assume for example that the speaker of sentence (4) above knows that John just bought a new luxury car and has rented a huge apartment. The speaker also knows that John is an honest person with a humble family background and doesn't play the lottery. The set of accessible worlds is then the set of worlds in which all these propositions which the speaker knows about John are true.

The notions of necessity and possibility are then defined along the following lines: a proposition p follows necessarily from the set of accessible worlds if all accessible worlds are part of p (that is, if p is true in all of these worlds). Applied to the example in (4), this means that in all the worlds which are defined by the speaker's knowledge about John, it is the case that John earns a lot of money (assuming there is no other explanation for John's wealth). In a similar way, a proposition p is possible according to the set of accessible worlds (i.e., the modal base) if some of these worlds are part of p.

Recent work has departed from this picture in a variety of ways. In dynamic semantics, modals are analyzed as tests which check whether their prejacent is compatible with (or follows from) the information in the conversational common ground. Probabilistic approaches motivated by gradable modal expressions provide a semantics which appeals to speaker credence in the prejacent. Illocutionary approaches assume a sparser view of modals' propositional content and look to conventional discourse effects to explain some of the nuances of modals' use.
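The quantificational picture above can be made concrete in a few lines of code. The following sketch (all names are illustrative, not part of any standard formalism) models a world as the set of atomic propositions true in it, takes the modal base to be the worlds compatible with the speaker's knowledge, and treats necessity and possibility as universal and existential quantification over that base:

```python
# Minimal possible-worlds sketch: a world is a frozenset of the atomic
# propositions true in it; "knowledge" is the set of propositions the
# speaker knows. Function and proposition names are illustrative only.

def accessible_worlds(worlds, knowledge):
    """The modal base: worlds in which everything the speaker knows holds."""
    return [w for w in worlds if knowledge <= w]

def necessarily(p, worlds, knowledge):
    """'must p': p is true in every accessible world."""
    return all(p in w for w in accessible_worlds(worlds, knowledge))

def possibly(p, worlds, knowledge):
    """'might p': p is true in at least one accessible world."""
    return any(p in w for w in accessible_worlds(worlds, knowledge))

# Example (4): the speaker knows John bought a luxury car, rented a huge
# apartment, and is honest; in every world compatible with that knowledge,
# John earns a lot of money.
knowledge = {"luxury_car", "huge_apartment", "honest"}
worlds = [
    frozenset({"luxury_car", "huge_apartment", "honest", "earns_a_lot"}),
    frozenset({"luxury_car", "huge_apartment", "honest", "earns_a_lot", "tall"}),
    frozenset({"luxury_car", "lottery_win", "earns_a_lot"}),  # excluded: speaker knows John is honest, rents the apartment
]

print(necessarily("earns_a_lot", worlds, knowledge))  # True  ("John must be earning a lot")
print(possibly("tall", worlds, knowledge))            # True  ("John might be tall")
print(necessarily("tall", worlds, knowledge))         # False
```

Note that Kratzer's ordering source, which ranks the worlds of the modal base by how close they come to some ideal, is omitted here; the sketch covers only the simple quantificational core described above.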
Grammatical expression of modality

Verbal morphology

In many languages modal categories are expressed by verbal morphology – that is, by alterations in the form of the verb. If these verbal markers of modality are obligatory in a language, they are called mood markers. Well-known examples of moods in some European languages are referred to as subjunctive, conditional, and indicative, as illustrated below with examples from French, all three with the verb 'to have'. As in most Standard European languages, the shape of the verb conveys not only information about modality, but also about other categories such as the person and number of the subject.

An example of a non-European language with a similar encoding of modality is Manam. Here, a verb is prefixed by a morpheme which encodes the number and person of the subject. These prefixes come in two versions, realis and irrealis. Which one is chosen depends on whether the verb refers to an actual past or present event (realis), or merely to a possible or imagined event (irrealis).

Auxiliaries

Modal auxiliary verbs, such as the English words may, can, must, ought, will, shall, need, dare, might, could, would, and should, are often used to express modality, especially in the Germanic languages. Ability, desirability, permission, obligation, and probability can all be exemplified by the usage of auxiliary modal verbs in English:

Ability: I can ride a bicycle (in the present); I could ride a bicycle (in the past)
Desirability: I should go; I ought to go
Permission: I may go
Obligation: I must go
Likelihood: He might be there; He may be there; He must be there

Lexical expression

Verbs such as "want", "need", or "belong" can be used to express modality lexically, as can adverbs.

(9) It belongs in a museum!

Other

Complementizers (e.g. in Russian) and conjunctions (e.g. in Central Pomo) can be used to convey modality.

See also

Angelika Kratzer
Counterfactuals
Dynamic semantics
Evidentiality
Frank R. Palmer
Free choice inference
Modal logic
Modal subordination
Modality (semiotics)
Possible world
Tense–aspect–mood
English modal adverbs at Wiktionary

Further reading

Asher, R. E. (Ed.). The Encyclopedia of language and linguistics (pp. 2535–2540). Oxford: Pergamon Press.
Blakemore, D. (1994). Evidence and modality. In R. E. Asher (Ed.), The Encyclopedia of language and linguistics (pp. 1183–1186). Oxford: Pergamon Press.
Bybee, Joan; Perkins, Revere; & Pagliuca, William (1994). The evolution of grammar: Tense, aspect, and modality in the languages of the world. Chicago: University of Chicago Press.
Calbert, J. P. (1975). Toward the semantics of modality. In J. P. Calbert & H. Vater (Eds.), Aspekte der Modalität. Tübingen: Gunter Narr.
Callaham, Scott N. (2010). Modality and the Biblical Hebrew Infinitive Absolute. Abhandlungen für die Kunde des Morgenlandes 71. Wiesbaden: Harrassowitz.
Chung, Sandra; & Timberlake, Alan (1985). Tense, aspect and mood. In T. Shopen (Ed.), Language typology and syntactic description: Grammatical categories and the lexicon (Vol. 3, pp. 202–258). Cambridge: Cambridge University Press.
Kratzer, A. (1981). The notional category of modality. In H.-J. Eikmeyer & H. Rieser (Eds.), Words, worlds, and contexts: New approaches in word semantics. Berlin: Walter de Gruyter.
Palmer, F. R. (1979). Modality and the English modals. London: Longman.
Palmer, F. R. (1994). Mood and modality. Cambridge: Cambridge University Press. Second edition 2001.
Saeed, John I. (2003). Sentence semantics 1: Situations: Modality and evidentiality. In J. I. Saeed, Semantics (2nd ed.) (Sec. 5.3, pp. 135–143). Malden, MA: Blackwell Publishing.
Sweetser, E. E. (1982). Root and epistemic modality: Causality in two worlds. Berkeley Linguistic Papers, 8, 484–507.

External links

Modality and Evidentiality
What is mood and modality? SIL International, Glossary of linguistic terms.
Semiotics
Semiotics is the systematic study of sign processes and the communication of meaning. In semiotics, a sign is defined as anything that communicates intentional and unintentional meaning or feelings to the sign's interpreter. Semiosis is any activity, conduct, or process that involves signs. Signs can be communicated through thought itself or through the senses.

Contemporary semiotics is a branch of science that studies meaning-making and various types of knowledge. The semiotic tradition explores the study of signs and symbols as a significant part of communications. Unlike linguistics, semiotics also studies non-linguistic sign systems. Semiotics includes the study of indication, designation, likeness, analogy, allegory, metonymy, metaphor, symbolism, signification, and communication.

Semiotics is frequently seen as having important anthropological and sociological dimensions. Some semioticians regard every cultural phenomenon as being able to be studied as communication. Semioticians also focus on the logical dimensions of semiotics, examining biological questions such as how organisms make predictions about, and adapt to, their semiotic niche in the world. Fundamental semiotic theories take signs or sign systems as their object of study. Applied semiotics analyzes cultures and cultural artifacts according to the ways they construct meaning through their being signs. The communication of information in living organisms is covered in biosemiotics, including zoosemiotics and phytosemiotics.

History and terminology

The importance of signs and signification has been recognized throughout much of the history of philosophy and psychology. The term derives from the Greek sēmeiōtikos, "observant of signs". For the Greeks, 'signs' occurred in the world of nature and 'symbols' in the world of culture. As such, Plato and Aristotle explored the relationship between signs and the world. It would not be until Augustine of Hippo that the nature of the sign would be considered within a conventional system.
Augustine introduced a thematic proposal for uniting the two under the notion of 'sign' as transcending the nature–culture divide, identifying symbols as no more than a species (or sub-species) of sign. A monograph study on this question was done by Manetti (1987). These theories have had a lasting effect in Western philosophy, especially through scholastic philosophy. The general study of signs that began in Latin with Augustine culminated with the 1632 Tractatus de Signis of John Poinsot, and then began anew in late modernity with the attempt in 1867 by Charles Sanders Peirce to draw up a "new list of categories". More recently, Umberto Eco, in his Semiotics and the Philosophy of Language, has argued that semiotic theories are implicit in the work of most, perhaps all, major thinkers.

John Locke

John Locke (1690), himself a man of medicine, was familiar with this "semeiotics" as naming a specialized branch within medical science. In his personal library were two editions of Scapula's 1579 abridgement of Henricus Stephanus' Greek thesaurus, which listed the term as the name for the branch of medicine concerned with interpreting symptoms of disease ("symptomatology"). The physician and scholar Henry Stubbe (1670) had transliterated this term of specialized science into English precisely as "semeiotics", marking the first use of the term in English.

Locke would use the term sem(e)iotike in An Essay Concerning Human Understanding (Book IV, Chapter 21), in which he explains how science may be divided into three parts. Locke then elaborates on the nature of this third category, explaining it as "the doctrine of signs". Juri Lotman introduced Eastern Europe to semiotics and adopted Locke's coinage as the subtitle of Sign Systems Studies, the first semiotics journal, which he founded at the University of Tartu in Estonia in 1964.
Ferdinand de Saussure

Ferdinand de Saussure founded his semiotics, which he called semiology, in the social sciences. Thomas Sebeok would assimilate semiology to semiotics as a part to a whole, and was involved in choosing the name Semiotica for the first international journal devoted to the study of signs. Saussurean semiotics has exercised a great deal of influence on the schools of structuralism and post-structuralism. Jacques Derrida, for example, takes as his object the Saussurean relationship of signifier and signified, asserting that signifier and signified are not fixed, and coining the expression différance, relating to the endless deferral of meaning and to the absence of a "transcendent signified".

Charles Sanders Peirce

In the nineteenth century, Charles Sanders Peirce defined what he termed "semiotic" (which he would sometimes spell as "semeiotic") as the "quasi-necessary, or formal doctrine of signs", which abstracts "what must be the characters of all signs used by…an intelligence capable of learning by experience", and which is philosophical logic pursued in terms of signs and sign processes. Peirce's perspective is thus philosophical logic studied in terms of signs that are not always linguistic or artificial, together with sign processes, modes of inference, and the inquiry process in general. The Peircean semiotic addresses not only the external communication mechanism, as per Saussure, but the internal representation machine, investigating sign processes and modes of inference, as well as the whole inquiry process in general.

Peircean semiotic is triadic, comprising sign, object, and interpretant, as opposed to the dyadic Saussurean tradition (signifier, signified). Peircean semiotics further subdivides each of the three triadic elements into three sub-types, positing the existence of signs that are symbols; semblances ("icons"); and "indices", i.e., signs that are such through a factual connection to their objects. The Peircean scholar and editor Max H.
Fisch (1978) would claim that "semeiotic" was Peirce's own preferred rendering of Locke's σημιωτική. Charles W. Morris followed Peirce in using the term "semiotic" and in extending the discipline beyond human communication to animal learning and the use of signals. While the Saussurean semiotic is dyadic (sign/syntax, signal/semantics), the Peircean semiotic is triadic (sign, object, interpretant), being conceived as philosophical logic studied in terms of signs that are not always linguistic or artificial.

Peirce's list of categories

Peirce aimed to base his new list directly upon experience precisely as constituted by the action of signs, in contrast with the list of Aristotle's categories, which aimed to articulate within experience the dimension of being that is independent of experience and knowable as such through human understanding. The estimative powers of animals interpret the environment as sensed to form a "meaningful world" of objects, but the objects of this world (or Umwelt, in Jakob von Uexküll's term) consist exclusively of objects related to the animal as desirable (+), undesirable (–), or "safe to ignore" (0).

In contrast to this, human understanding adds to the animal Umwelt a relation of self-identity within objects, which transforms objects experienced into 'things' as well as +, –, 0 objects. Thus the generically animal objective world as Umwelt becomes a species-specifically human objective world or Lebenswelt, wherein linguistic communication, rooted in the biologically underdetermined Innenwelt of humans, makes possible the further dimension of cultural organization within the otherwise merely social organization of non-human animals, whose powers of observation may deal only with directly sensible instances of objectivity. This further point, that human culture depends upon language understood first of all not as communication but as a biologically underdetermined aspect or feature of the human animal, was originally clearly identified by Thomas A.
Sebeok (Sebeok, Thomas A. 1986. "Communication, Language, and Speech: Evolutionary Considerations." Pp. 10–16 in I Think I Am a Verb: More Contributions to the Doctrine of Signs. New York: Plenum Press. Published lecture; original lecture title "The Evolution of Communication and the Origin of Language," International Summer Institute for Semiotic and Structural Studies colloquium on 'Phylogeny and Ontogeny of Communication Systems', June 1–3, 1984).

Sebeok also played the central role in bringing Peirce's work to the center of the semiotic stage in the twentieth century: first with his expansion of the human use of signs (anthroposemiosis) to include the generically animal use of signs (zoösemiosis), then with his further expansion of semiosis to include the vegetative world (phytosemiosis). The latter expansion was initially based on the work of Martin Krampen, but takes advantage of Peirce's point that an interpretant, as the third item within a sign relation, "need not be mental" (Peirce, Charles Sanders. 1977 [1908]. Letter to Lady Welby, 23 December 1908. Pp. 73–86 in Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby, edited by C. S. Hardwick and J. Cook. Bloomington, IN: Indiana University Press).

Peirce distinguished between the interpretant and the interpreter. The interpretant is the internal, mental representation that mediates between the object and its sign; the interpreter is the human who is creating the interpretant. Peirce's notion of the interpretant opened the way to understanding an action of signs beyond the realm of animal life (study of phytosemiosis + zoösemiosis + anthroposemiosis = biosemiotics), which was his first advance beyond Latin Age semiotics. Other early theorists in the field of semiotics include Charles W. Morris.
Writing in 1951, Jozef Maria Bochenski surveyed the field in this way: "Closely related to mathematical logic is the so-called semiotics (Charles Morris) which is now commonly employed by mathematical logicians. Semiotics is the theory of symbols and falls in three parts; logical syntax, the theory of the mutual relations of symbols, logical semantics, the theory of the relations between the symbol and what the symbol stands for, and logical pragmatics, the relations between symbols, their meanings and the users of the symbols." Max Black argued that the work of Bertrand Russell was seminal in the field.

Formulations and subfields

Semioticians classify signs or sign systems in relation to the way they are transmitted. This process of carrying meaning depends on the use of codes that may be the individual sounds or letters that humans use to form words, the body movements they make to show attitude or emotion, or even something as general as the clothes they wear. To coin a word to refer to a thing, the community must agree on a simple meaning (a denotative meaning) within their language, but that word can transmit that meaning only within the language's grammatical structures and codes. Codes also represent the values of the culture, and are able to add new shades of connotation to every aspect of life.

To explain the relationship between semiotics and communication studies, communication is defined as the process of transferring data and/or meaning from a source to a receiver. Hence, communication theorists construct models based on codes, media, and contexts to explain the biology, psychology, and mechanics involved. Both disciplines recognize that the technical process cannot be separated from the fact that the receiver must decode the data, i.e., be able to distinguish the data as salient and make meaning out of it. This implies that there is a necessary overlap between semiotics and communication.
Indeed, many of the concepts are shared, although in each field the emphasis is different. In Messages and Meanings: An Introduction to Semiotics, Marcel Danesi (1994) suggested that semioticians' priorities were to study signification first, and communication second. A more extreme view is offered by Jean-Jacques Nattiez who, as a musicologist, considered the theoretical study of communication irrelevant to his application of semiotics.

Syntactics

Semiotics differs from linguistics in that it generalizes the definition of a sign to encompass signs in any medium or sensory modality. Thus it broadens the range of sign systems and sign relations, and extends the definition of language in what amounts to its widest analogical or metaphorical sense. The branch of semiotics that deals with such formal relations between signs or expressions in abstraction from their signification and their interpreters, or—more generally—with formal properties of symbol systems (specifically, with reference to linguistic signs, syntax) is referred to as syntactics.

Peirce's definition of the term semiotic as the study of necessary features of signs also has the effect of distinguishing the discipline from linguistics as the study of contingent features that the world's languages happen to have acquired in the course of their evolutions.

From a subjective standpoint, perhaps more difficult is the distinction between semiotics and the philosophy of language. In a sense, the difference lies between separate traditions rather than subjects. Different authors have called themselves "philosopher of language" or "semiotician." This difference does not match the separation between analytic and continental philosophy. On a closer look, there may be found some differences regarding subjects. Philosophy of language pays more attention to natural languages or to languages in general, while semiotics is deeply concerned with non-linguistic signification.
Philosophy of language also bears connections to linguistics, while semiotics might appear closer to some of the humanities (including literary theory) and to cultural anthropology.

Cognitive semiotics

Semiosis or semeiosis is the process that forms meaning from any organism's apprehension of the world through signs. Scholars who have talked about semiosis in their subtheories of semiotics include C. S. Peirce, John Deely, and Umberto Eco. Cognitive semiotics combines methods and theories developed in the disciplines of semiotics and the humanities, providing new insight into human signification and its manifestation in cultural practices. Research in cognitive semiotics brings together semiotics from linguistics, cognitive science, and related disciplines on a common meta-theoretical platform of concepts, methods, and shared data. Cognitive semiotics may also be seen as the study of meaning-making, employing and integrating methods and theories developed in the cognitive sciences. This involves conceptual and textual analysis as well as experimental investigations. Cognitive semiotics was initially developed at the Center for Semiotics at Aarhus University (Denmark), with an important connection with the Center of Functionally Integrated Neuroscience (CFIN) at Aarhus Hospital. Among the prominent cognitive semioticians are Per Aage Brandt, Svend Østergaard, Peer Bundgård, Frederik Stjernfelt, Mikkel Wallentin, Kristian Tylén, Riccardo Fusaroli, and Jordan Zlatev. Zlatev later, in co-operation with Göran Sonesson, established the Center for Cognitive Semiotics (CCS) at Lund University, Sweden.

Finite semiotics

Finite semiotics, developed by Cameron Shackell (2018, 2019; see Shackell, Cameron. 2018. "Finite semiotics: A new theoretical basis for the information age." Cross-Inter-Multi-Trans: Proceedings of the 13th World Congress of the International Association for Semiotic Studies (IASS/AIS). IASS Publications & International Semiotics Institute), aims to unify existing theories of semiotics for application to the post-Baudrillardian world of ubiquitous technology. Its central move is to place the finiteness of thought at the root of semiotics, with the sign as a secondary but fundamental analytical construct. The theory contends that the levels of reproduction that technology is bringing to human environments demand this reprioritisation if semiotics is to remain relevant in the face of effectively infinite signs. The shift in emphasis allows practical definitions of many core constructs in semiotics, which Shackell has applied to areas such as human–computer interaction, creativity theory, and a computational semiotics method for generating semiotic squares from digital texts.

Pictorial semiotics

Pictorial semiotics is intimately connected to art history and theory. It goes beyond them both in at least one fundamental way, however. While art history has limited its visual analysis to a small number of pictures that qualify as "works of art", pictorial semiotics focuses on the properties of pictures in a general sense, and on how the artistic conventions of images can be interpreted through pictorial codes. Pictorial codes are the way in which viewers of pictorial representations seem automatically to decipher the artistic conventions of images by being unconsciously familiar with them.

According to Göran Sonesson, a Swedish semiotician, pictures can be analyzed by three models: the narrative model, which concentrates on the relationship between pictures and time in a chronological manner, as in a comic strip; the rhetoric model, which compares pictures with different devices, as in a metaphor; and the Laokoon model, which considers the limits and constraints of pictorial expressions by comparing textual mediums that utilize time with visual mediums that utilize space.
The break from traditional art history and theory—as well as from other major streams of semiotic analysis—leaves open a wide variety of possibilities for pictorial semiotics. Some influences have been drawn from phenomenological analysis, cognitive psychology, structuralist and cognitivist linguistics, and visual anthropology and sociology.

Globalization

Studies have shown that semiotics may be used to make or break a brand. Culture codes strongly influence whether a population likes or dislikes a brand's marketing, especially internationally. If a company is unaware of a culture's codes, it runs the risk of failing in its marketing. Globalization has caused the development of a global consumer culture where products have similar associations, whether positive or negative, across numerous markets.

Mistranslations may lead to instances of "Engrish" or "Chinglish", terms for unintentionally humorous cross-cultural slogans intended to be understood in English. When translating surveys, the same symbol may mean different things in the source and target language, leading to potential errors. For example, the symbol "x" is used to mark a response in English-language surveys, but "x" usually means "wrong" in the Chinese convention. This may be caused by a sign that, in Peirce's terms, mistakenly indexes or symbolizes something in one culture that it does not in another. In other words, it creates a connotation that is culturally bound, and that violates some culture code.

Theorists who have studied humor (such as Schopenhauer) suggest that contradiction or incongruity creates absurdity and therefore humor. Violating a culture code creates this construct of ridiculousness for the culture that owns the code. Intentional humor also may fail cross-culturally because jokes are not on code for the receiving culture. A good example of branding according to cultural code is Disney's international theme park business.
Disney fits well with Japan's cultural code because the Japanese value "cuteness", politeness, and gift-giving as part of their culture code; Tokyo Disneyland sells the most souvenirs of any Disney theme park. In contrast, Disneyland Paris failed when it launched as Euro Disney because the company did not research the codes underlying European culture. Its storybook retelling of European folktales was taken as elitist and insulting, and the strict appearance standards that it had for employees resulted in discrimination lawsuits in France. Disney souvenirs were perceived as cheap trinkets. The park was a financial failure because its code violated the expectations of European culture in ways that were offensive.

However, some researchers have suggested that it is possible to successfully pass a sign perceived as a cultural icon, such as the logos for Coca-Cola or McDonald's, from one culture to another. This may be accomplished if the sign is migrated from a more economically developed to a less developed culture. The intentional association of a product with another culture has been called "foreign consumer culture positioning" (FCCP). Products also may be marketed using global trends or culture codes, for example, saving time in a busy world; but even these may be fine-tuned for specific cultures.

Research has also found that, as airline industry brandings grow and become more international, their logos become more symbolic and less iconic. The iconicity and symbolism of a sign depend on the cultural convention and are, on that ground, in relation with each other. If the cultural convention has greater influence on the sign, the sign acquires more symbolic value.

Semiotics of dreaming

The flexibility of human semiotics is well demonstrated in dreams. Sigmund Freud spelled out how meaning in dreams rests on a blend of images, affects, sounds, words, and kinesthetic sensations.
In his chapter on "The Means of Representation", he showed how the most abstract sorts of meaning and logical relations can be represented by spatial relations. Two images in sequence may indicate "if this, then that" or "despite this, that". Freud thought the dream started with "dream thoughts", which were like logical, verbal sentences. He believed that the dream thought was in the nature of a taboo wish that would awaken the dreamer. In order to safeguard sleep, the midbrain converts and disguises the verbal dream thought into an imagistic form, through processes he called the "dream-work".

Musical topic theory

Semiotics can be directly linked to the ideals of musical topic theory, which traces patterns in musical figures throughout their prevalent context in order to assign some aspect of narrative, affect, or aesthetics to the gesture. Danuta Mirka's The Oxford Handbook of Topic Theory presents a holistic recognition and overview of the subject, offering insight into the development of the theory. In recognizing the indicative and symbolic elements of a musical line, gesture, or occurrence, one can gain a greater understanding of aspects of compositional intent and identity.

The philosopher Charles Peirce discusses the relationship of icons and indexes in relation to signification and semiotics. In doing so, he draws on the elements of various ideas, acts, or styles that can be translated into a different field. Whereas indexes consist of a contextual representation of a symbol, icons directly correlate with the object or gesture that is being referenced. In his 1980 book Classic Music: Expression, Form, and Style, Leonard Ratner amends the conversation surrounding musical tropes—or "topics"—in order to create a collection of musical figures that have historically been indicative of a given style.
Robert Hatten continues this conversation in Beethoven, Markedness, Correlation, and Interpretation (1994), in which he states that "richly coded style types which carry certain features linked to affect, class, and social occasion such as church styles, learned styles, and dance styles. In complex forms these topics mingle, providing a basis for musical allusion." List of subfields Subfields that have sprouted out of semiotics include, but are not limited to, the following: Biosemiotics: the study of semiotic processes at all levels of biology, or a semiotic study of living systems (e.g., Copenhagen–Tartu School). Annual meetings ("Gatherings in Biosemiotics") have been held since 2001. Semiotic anthropology and anthropological semantics. Cognitive semiotics: the study of meaning-making by employing and integrating methods and theories developed in the cognitive sciences. This involves conceptual and textual analysis as well as experimental investigations. Cognitive semiotics initially was developed at the Center for Semiotics at Aarhus University (Denmark), with an important connection with the Center of Functionally Integrated Neuroscience (CFIN) at Aarhus Hospital. Amongst the prominent cognitive semioticians are Per Aage Brandt, Svend Østergaard, Peer Bundgård, Frederik Stjernfelt, Mikkel Wallentin, Kristian Tylén, Riccardo Fusaroli, and Jordan Zlatev. Zlatev later in co-operation with Göran Sonesson established the Center for Cognitive Semiotics (CCS) at Lund University, Sweden. Comics semiotics: the study of the various codes and signs of comics and how they are understood. Computational semiotics: attempts to engineer the process of semiosis, in the study of and design for human–computer interaction or to mimic aspects of human cognition through artificial intelligence and knowledge representation. 
Cultural and literary semiotics: examines the literary world, the visual media, the mass media, and advertising in the work of writers such as Roland Barthes, Marcel Danesi, and Juri Lotman (e.g., Tartu–Moscow Semiotic School). Cybersemiotics: built on two already-generated interdisciplinary approaches: cybernetics and systems theory, including information theory and science; and Peircean semiotics, including phenomenology and pragmatic aspects of linguistics, attempts to make the two interdisciplinary paradigms—both going beyond mechanistic and pure constructivist ideas—complement each other in a common framework. Design semiotics or product semiotics: the study of the use of signs in the design of physical products; introduced by Martin Krampen and in a practitioner-oriented version by Rune Monö while teaching industrial design at the Institute of Design, Umeå University, Sweden. Ethnosemiotics: a disciplinary perspective which links semiotics concepts to ethnographic methods. Fashion semiotics Film semiotics: the study of the various codes and signs of film and how they are understood. Key figures include Christian Metz. Finite semiotics: an approach to the semiotics of technology developed by Cameron Shackell. It is used to both trace the effects of technology on human thought and to develop computational methods for performing semiotic analysis. Gregorian chant semiology: a current avenue of palaeographical research in Gregorian chant, which is revising the Solesmes school of interpretation. Hylosemiotics: an approach to semiotics that understands meaning as inference, which is developed through exploratory interaction with the physical world. It expands the concept of communication beyond a human-centered paradigm to include other sentient beings, such as animals, plants, bacteria, fungi, etc. 
Law and semiotics: one of the more accomplished publications in this field is the International Journal for the Semiotics of Law, published by the International Association for the Semiotics of Law.
Marketing semiotics (or commercial semiotics): an application of semiotic methods and semiotic thinking to the analysis and development of advertising and brand communications in cultural context. Key figures include Virginia Valentine, Malcolm Evans, Greg Rowland, and Georgios Rossolatos. International annual conferences (Semiofest) have been held since 2012.
Music semiology: the study of signs as they pertain to music on a variety of levels.
Organisational semiotics: the study of semiotic processes in organizations (with strong ties to computational semiotics and human–computer interaction).
Pictorial semiotics: an application of semiotic methods and semiotic thinking to art history.
Semiotics of music videos: semiotics in popular music.
Social semiotics: expands the interpretable semiotic landscape to include all cultural codes, such as in slang, fashion, tattoos, and advertising. Key figures include Roland Barthes, Michael Halliday, Bob Hodge, Chris William Martin and Christian Metz.
Structuralism and post-structuralism: in the work of Jacques Derrida, Michel Foucault, Louis Hjelmslev, Roman Jakobson, Jacques Lacan, Claude Lévi-Strauss, Roland Barthes, etc.
Theatre semiotics: an application of semiotic methods and semiotic thinking to theatre studies. Key figures include Keir Elam.
Urban semiotics: the study of meaning in urban form as generated by signs, symbols, and their social connotations.
Visual semiotics: analyses visual signs; prominent modern founders of this branch are Groupe μ and Göran Sonesson.
Semiotics of photography: the observation of symbolism used within photography.
Artificial intelligence semiotics: the observation of visual symbols and the symbols' recognition by machine learning systems.
The phrase was coined by Daniel Hoeg, founder of Semiotics Mobility, due to Semiotics Mobility's design and learning process for autonomous recognition and perception of symbols by neural networks. It refers to the application, in machine learning and neural networks, of semiotic methods and semiotic machine learning to the analysis and development of robotics commands and instructions, with subsystem communications, in an autonomous-systems context.
Semiotics of mathematics: the study of signs, symbols, sign systems and their structure, meaning and use in mathematics and mathematics education.

Notable semioticians

Thomas Carlyle (1795–1881) ascribed great importance to symbols in a religious context, noting that all worship "must proceed by Symbols"; he propounded this theory in such works as "Characteristics" (1831), Sartor Resartus (1833–4), and On Heroes (1841), which have been retroactively recognized as containing semiotic theories.

Charles Sanders Peirce (1839–1914), a noted logician who founded philosophical pragmatism, defined semiosis as an irreducibly triadic process wherein something, as an object, logically determines or influences something as a sign to determine or influence something as an interpretation or interpretant, itself a sign, thus leading to further interpretants. Semiosis is logically structured to perpetuate itself. The object may be quality, fact, rule, or even fictional (Hamlet), and may be "immediate" to the sign (the object as represented in the sign) or "dynamic" (the object as it really is, on which the immediate object is founded). The interpretant may be "immediate" to the sign (all that the sign immediately expresses, such as a word's usual meaning); or "dynamic", such as a state of agitation; or "final" or "normal": the ultimate ramifications of the sign about its object, to which inquiry taken far enough would be destined and with which any interpretant, at most, may coincide.
His semiotic covered not only artificial, linguistic, and symbolic signs, but also semblances, such as kindred sensible qualities, and indices, such as reactions. Around 1903 he came to classify any sign by three interdependent trichotomies, intersecting to form ten (rather than 27) classes of sign. Signs also enter into various kinds of meaningful combinations; Peirce covered both semantic and syntactical issues in his speculative grammar. He regarded formal semiotic as logic per se and part of philosophy; as also encompassing the study of arguments (hypothetical, deductive, and inductive) and inquiry's methods, including pragmatism; and as allied to, but distinct from, logic's pure mathematics. In addition to pragmatism, Peirce provided a definition of "sign" as a representamen, in order to bring out the fact that a sign is something that "represents" something else in order to suggest it (that is, "re-present" it) in some way.

Ferdinand de Saussure (1857–1913), the "father" of modern linguistics, proposed a dualistic notion of signs, relating the signifier, as the form of the word or phrase uttered, to the signified, as the mental concept. According to Saussure, the sign is completely arbitrary—i.e., there is no necessary connection between the sign and its meaning. This sets him apart from previous philosophers, such as Plato or the scholastics, who thought that there must be some connection between a signifier and the object it signifies. In his Course in General Linguistics, Saussure credits the American linguist William Dwight Whitney (1827–1894) with insisting on the arbitrary nature of the sign. Saussure's insistence on the arbitrariness of the sign has also influenced later philosophers and theorists such as Jacques Derrida, Roland Barthes, and Jean Baudrillard. Saussure coined the term sémiologie while teaching his landmark "Course on General Linguistics" at the University of Geneva from 1906 to 1911. Saussure posited that no word is inherently meaningful.
Rather, a word is only a "signifier", i.e., the representation of something, and it must be combined in the brain with the "signified", or the thing itself, in order to form a meaning-imbued "sign". Saussure believed that dismantling signs was a real science, for in doing so we come to an empirical understanding of how humans synthesize physical stimuli into words and other abstract concepts.

Jakob von Uexküll (1864–1944) studied the sign processes in animals. He used the German word Umwelt ("environment") to describe the individual's subjective world, and he invented the concept of the functional circle as a general model of sign processes. In his Theory of Meaning (Bedeutungslehre, 1940), he described the semiotic approach to biology, thus establishing the field that is now called biosemiotics.

Valentin Voloshinov (1895–1936) was a Soviet-Russian linguist whose work has been influential in the field of literary theory and the Marxist theory of ideology. Written in the late 1920s in the USSR, Voloshinov's Marxism and the Philosophy of Language developed a counter-Saussurean linguistics, which situated language use in social process rather than in an entirely decontextualized Saussurean langue.

Louis Hjelmslev (1899–1965) developed a formalist approach to Saussure's structuralist theories. His best-known work is Prolegomena to a Theory of Language, which was expanded in Résumé of the Theory of Language, a formal development of glossematics, his scientific calculus of language.

Charles W. Morris (1901–1979): Unlike his mentor George Herbert Mead, Morris was a behaviorist and sympathetic to the Vienna Circle positivism of his colleague, Rudolf Carnap. Morris was accused by John Dewey of misreading Peirce. In his 1938 Foundations of the Theory of Signs, he defined semiotics as grouped into three branches:

Syntactics/syntax: deals with the formal properties and interrelation of signs and symbols, without regard to meaning.
Semantics: deals with the formal structures of signs, particularly the relation between signs and the objects to which they apply (i.e., signs to their designata, and the objects that they may or do denote).
Pragmatics: deals with the biotic aspects of semiosis, including all the psychological, biological, and sociological phenomena that occur in the functioning of signs. Pragmatics is concerned with the relation between the sign system and sign-using agents or interpreters (i.e., the human or animal users).

Thure von Uexküll (1908–2004), the "father" of modern psychosomatic medicine, developed a diagnostic method based on semiotic and biosemiotic analyses.

Roland Barthes (1915–1980) was a French literary theorist and semiotician. He would often critique pieces of cultural material to expose how bourgeois society used them to impose its values upon others. For instance, the portrayal of wine drinking in French society as a robust and healthy habit would be a bourgeois ideal perception contradicted by certain realities (i.e., that wine can be unhealthy and inebriating). He found semiotics useful in conducting these critiques. Barthes explained that these bourgeois cultural myths were second-order signs, or connotations. A picture of a full, dark bottle is a sign, a signifier relating to a signified: a fermented, alcoholic beverage—wine. However, the bourgeois take this signified and apply their own emphasis to it, making "wine" a new signifier, this time relating to a new signified: the idea of healthy, robust, relaxing wine. Motivations for such manipulations vary from a desire to sell products to a simple desire to maintain the status quo. These insights brought Barthes very much in line with similar Marxist theory.

Algirdas Julien Greimas (1917–1992) developed a structural version of semiotics named "generative semiotics", trying to shift the focus of the discipline from signs to systems of signification.
His theories develop the ideas of Saussure, Hjelmslev, Claude Lévi-Strauss, and Maurice Merleau-Ponty.

Thomas A. Sebeok (1920–2001), a student of Charles W. Morris, was a prolific and wide-ranging American semiotician. Although he insisted that animals are not capable of language, he expanded the purview of semiotics to include non-human signaling and communication systems, thus raising some of the issues addressed by philosophy of mind and coining the term zoosemiotics. Sebeok insisted that all communication was made possible by the relationship between an organism and the environment in which it lives. He also posed the equation between semiosis (the activity of interpreting signs) and life—a view that the Copenhagen–Tartu biosemiotic school has further developed.

Juri Lotman (1922–1993) was the founding member of the Tartu (or Tartu–Moscow) Semiotic School. He developed a semiotic approach to the study of culture—semiotics of culture—and established a communication model for the study of text semiotics. He also introduced the concept of the semiosphere. Among his Moscow colleagues were Vladimir Toporov, Vyacheslav Ivanov and Boris Uspensky.

Christian Metz (1931–1993) pioneered the application of Saussurean semiotics to film theory, applying syntagmatic analysis to scenes of films and grounding film semiotics in greater context.

Eliseo Verón (1935–2014) developed his "Social Discourse Theory", inspired by the Peircean conception of "semiosis".

Groupe μ (founded 1967) developed a structural version of rhetoric and of visual semiotics.

Umberto Eco (1932–2016) was an Italian novelist, semiotician and academic. He made a wider audience aware of semiotics through various publications, most notably A Theory of Semiotics and his novel The Name of the Rose, which includes (secondary to its plot) applied semiotic operations. His most important contributions to the field bear on interpretation, the encyclopedia, and the model reader.
In several works (A Theory of Semiotics, La struttura assente, Le signe, La production de signes) he also criticized "iconism" or "iconic signs" (taken from Peirce's most famous triadic relation, based on indexes, icons, and symbols), proposing instead four modes of sign production: recognition, ostension, replica, and invention.

Julia Kristeva (born 1941), a student of Lucien Goldmann and Roland Barthes, is a Bulgarian-French semiotician, literary critic, psychoanalyst, feminist, and novelist. She uses psychoanalytical concepts together with semiotics, distinguishing two components in signification, the symbolic and the semiotic. Kristeva also studies the representation of women and women's bodies in popular culture, such as horror films, and has had a remarkable influence on feminism and feminist literary studies.

Michael Silverstein (1945–2020) was a theoretician of semiotics and linguistic anthropology. Over the course of his career he created an original synthesis of research on the semiotics of communication, the sociology of interaction, Russian formalist literary theory, linguistic pragmatics, sociolinguistics, early anthropological linguistics and structuralist grammatical theory, together with his own theoretical contributions, yielding a comprehensive account of the semiotics of human communication and its relation to culture. His main influences were Charles Sanders Peirce, Ferdinand de Saussure, and Roman Jakobson.

Current applications

Some applications of semiotics include:

Representation of a methodology for the analysis of "texts" regardless of the medium in which it is presented.
For these purposes, "text" is any message preserved in a form whose existence is independent of both sender and receiver;
By scholars and professional researchers, as a method to interpret meanings behind symbols and how the meanings are created;
Potential improvement of ergonomic design in situations where it is important to ensure that human beings are able to interact more effectively with their environments, whether on a large scale, as in architecture, or on a small scale, such as the configuration of instrumentation for human use; and
Marketing: Epure, Eisenstat, and Dinu (2014) state that "semiotics allows for the practical distinction of persuasion from manipulation in marketing communication." Semiotics is used in marketing as a persuasive device, to influence buyers to change their attitudes and behaviors in the marketplace. Building on the work of Roland Barthes, Epure, Eisenstat, and Dinu (2014) identify two levels at which semiotics is used in marketing:
Surface: signs are used to create a personality for the product; creativity plays its foremost role at this level;
Underlying: the concealed meaning of the text, imagery, sounds, etc.

Semiotics can also be used to analyze advertising effectiveness and meaning. Cian (2020), for instance, analyzed a specific printed advertisement from two different semiotic points of view. He applied the interpretative instruments provided by Barthes' school of thought (focused on the description of explicit signs taken in isolation). He then analyzed the same advertising using Greimas' structural semiotics (where a sign has meaning only when it is interpreted as part of a system).

In some countries, the role of semiotics is limited to literary criticism and an appreciation of audio and visual media. This narrow focus may inhibit a more general study of the social and political forces shaping how different media are used and their dynamic status within modern culture.
Issues of technological determinism in the choice of media and the design of communication strategies assume new importance in this age of mass media.

Main institutions

A world organisation of semioticians, the International Association for Semiotic Studies, with its journal Semiotica, was established in 1969. The larger research centers with teaching programs include the semiotics departments at the University of Tartu, the University of Limoges, Aarhus University, and Bologna University.

Publications

Research is published both in dedicated journals such as Sign Systems Studies, established by Juri Lotman and published by Tartu University Press; Semiotica, founded by Thomas A. Sebeok and published by Mouton de Gruyter; Zeitschrift für Semiotik; European Journal of Semiotics; Versus (founded and directed by Umberto Eco); The American Journal of Semiotics; et al.; and as articles accepted in periodicals of other disciplines, especially journals oriented toward philosophy and cultural criticism, communication theory, etc. The major semiotic book series Semiotics, Communication, Cognition, published by De Gruyter Mouton (series editors Paul Cobley and Kalevi Kull), replaces the former "Approaches to Semiotics" (series editor Thomas A. Sebeok, 127 volumes) and "Approaches to Applied Semiotics" (7 volumes). Since 1980 the Semiotic Society of America has produced an annual conference series: Semiotics: The Proceedings of the Semiotic Society of America.

See also

Ecosemiotics
Ethnosemiotics
Index of semiotics articles
Language game (philosophy)
Outline of semiotics
Private language argument
Semiofest
Semiotic theory of Charles Sanders Peirce
Social semiotics
Universal language

References

Bibliography

Atkin, Albert. (2006). "Peirce's Theory of Signs", Stanford Encyclopedia of Philosophy.
Barthes, Roland. ([1957] 1987). Mythologies. New York: Hill & Wang.
Barthes, Roland. ([1964] 1967). Elements of Semiology. (Translated by Annette Lavers & Colin Smith). London: Jonathan Cape.
Chandler, Daniel. (2001/2007). Semiotics: The Basics. London: Routledge.
Clarke, D. S. (1987). Principles of Semiotic. London: Routledge & Kegan Paul.
Clarke, D. S. (2003). Sign Levels. Dordrecht: Kluwer.
Culler, Jonathan. (1975). Structuralist Poetics: Structuralism, Linguistics and the Study of Literature. London: Routledge & Kegan Paul.
Danesi, Marcel & Perron, Paul. (1999). Analyzing Cultures: An Introduction and Handbook. Bloomington: Indiana UP.
Danesi, Marcel. (1994). Messages and Meanings: An Introduction to Semiotics. Toronto: Canadian Scholars' Press.
Danesi, Marcel. (2002). Understanding Media Semiotics. London: Arnold; New York: Oxford UP.
Danesi, Marcel. (2007). The Quest for Meaning: A Guide to Semiotic Theory and Practice. Toronto: University of Toronto Press.
Decadt, Yves. (2000). On the Origin and Impact of Information in the Average Evolution: From Bit to Attractor, Atom and Ecosystem [Dutch]. Summary in English available at The Information Philosopher.
Deely, John. (2005 [1990]). Basics of Semiotics. 4th ed. Tartu: Tartu University Press.
Deely, John. (2000). The Red Book: The Beginning of Postmodern Times or: Charles Sanders Peirce and the Recovery of Signum.
Sonesson, Göran. (1989). Pictorial Concepts: Inquiries into the Semiotic Heritage and its Relevance for the Analysis of the Visual World. Lund: Lund University Press.
Deely, John. (2001). Four Ages of Understanding. Toronto: University of Toronto Press.
Deely, John. (2003). "On the Word Semiotics, Formation and Origins", Semiotica 146.1/4, 1–50.
Deely, John. (2003). The Impact on Philosophy of Semiotics. South Bend: St. Augustine Press.
Deely, John. (2004). "'Σημειον' to 'Sign' by Way of 'Signum': On the Interplay of Translation and Interpretation in the Establishment of Semiotics", Semiotica 148.1/4, 187–227.
Deely, John. (2006). "On 'Semiotics' as Naming the Doctrine of Signs", Semiotica 158.1/4, 1–33.
Derrida, Jacques. (1981). Positions. (Translated by Alan Bass). London: Athlone Press.
Eagleton, Terry. (1983). Literary Theory: An Introduction. Oxford: Basil Blackwell.
Eco, Umberto. (1976). A Theory of Semiotics. London: Macmillan; Bloomington: Indiana University Press.
Eco, Umberto. (1986). Semiotics and the Philosophy of Language. Bloomington: Indiana University Press.
Eco, Umberto. (2000). Kant and the Platypus. New York: Harcourt Brace & Company.
Emmeche, Claus & Kull, Kalevi (eds.). (2011). Towards a Semiotic Biology: Life is the Action of Signs. London: Imperial College Press.
Foucault, Michel. (1970). The Order of Things: An Archaeology of the Human Sciences. London: Tavistock.
Greimas, Algirdas. (1987). On Meaning: Selected Writings in Semiotic Theory. (Translated by Paul J. Perron & Frank H. Collins). London: Frances Pinter.
Herlihy, David. (1988–present). "2nd year class of semiotics". CIT.
Hjelmslev, Louis. (1961). Prolegomena to a Theory of Language. (Translated by Francis J. Whitfield). Madison: University of Wisconsin Press.
Hodge, Robert & Kress, Gunther. (1988). Social Semiotics. Ithaca: Cornell UP.
Lacan, Jacques. (1977). Écrits: A Selection. (Translated by Alan Sheridan). New York: Norton.
Lidov, David. (1999). Elements of Semiotics. New York: St. Martin's Press.
Liszka, J. J. (1996). A General Introduction to the Semeiotic of C.S. Peirce. Indiana University Press.
Locke, John. (1823). The Works of John Locke, A New Edition, Corrected, In Ten Volumes, Vol. III. London: T. Tegg. (Facsimile reprint by Scientia, Aalen, 1963.)
Lotman, Yuri M. (1990). Universe of the Mind: A Semiotic Theory of Culture. (Translated by Ann Shukman). London: I.B. Tauris.
Matthiessen, F. O. (1949). American Renaissance: Art and Expression in the Age of Emerson and Whitman. Boston: Harvard.
Meyers, Marvin. (1957). The Jacksonian Persuasion: Politics and Belief. California: Stanford Press.
Morris, Charles W. (1971). Writings on the General Theory of Signs. The Hague: Mouton.
Nattiez, Jean-Jacques. (1990). Music and Discourse: Toward a Semiology of Music. (Translated by Carolyn Abbate). Princeton: Princeton University Press. (Translation of: Musicologie générale et sémiologue. Collection Musique/Passé/Présent 13. Paris: C. Bourgois, 1987.)
Peirce, Charles S. (1934). Collected Papers: Volume V. Pragmatism and Pragmaticism. Cambridge, MA: Harvard University Press.
Ponzio, Augusto & Petrilli, S. (2007). Semiotics Today. From Global Semiotics to Semioethics, a Dialogic Response. New York, Ottawa, Toronto: Legas. 84 pp.
Romeo, Luigi. (1977). "The Derivation of 'Semiotics' through the History of the Discipline", Semiosis, v. 6, pp. 37–50.
Sebeok, T. A. (1976). Contributions to the Doctrine of Signs. Bloomington, IN: Indiana University Press.
Sebeok, Thomas A. (ed.). (1977). A Perfusion of Signs. Bloomington, IN: Indiana University Press.
Bundgaard, Peer & Stjernfelt, Frederik (eds.). (2009). Signs and Meaning: 5 Questions. Automatic Press / VIP. (Includes interviews with 29 leading semioticians of the world.)
Short, T. L. (2007). Peirce's Theory of Signs. Cambridge University Press.
Stubbe, Henry. (1670). The Plus Ultra reduced to a Non Plus: Or, A Specimen of some Animadversions upon the Plus Ultra of Mr. Glanvill, wherein sundry Errors of some Virtuosi are discovered, the Credit of the Aristotelians in part Re-advanced; and Enquiries made.... London.
Ward, John William. (1955). Andrew Jackson, Symbol for an Age. New York: Oxford University Press.
Ward, John William. (1969). Red, White, and Blue: Men, Books, and Ideas in American Culture. New York: Oxford University Press.
Williamson, Judith. (1978). Decoding Advertisements: Ideology and Meaning in Advertising. London: Boyars.
Zlatev, Jordan. (2009).
"The Semiotic Hierarchy: Life, Consciousness, Signs and Language", Cognitive Semiotics. Sweden: Scania.

External links

Signo — presents semiotic theories and theories closely related to semiotics.
The Semiotics of the Web
Center for Semiotics — Denmark: Aarhus University
Semiotic Society of America
Open Semiotics Resource Center — includes journals, lecture courses, etc.

Peircean focus

Arisbe: The Peirce Gateway
Semiotics according to Robert Marty, with 76 definitions of the sign by C. S. Peirce
The Commens Dictionary of Peirce's Terms

Journals and book series

American Journal of Semiotics, edited by J. Deely and C. Morrissey. US: Semiotic Society of America.
Applied Semiotics / Sémiotique appliquée (AS/SA), edited by P. G. Marteinson & P. G. Michelucci. CA: University of Toronto.
Approaches to Applied Semiotics (2000–09 series), edited by T. Sebeok, et al. Berlin: De Gruyter.
Approaches to Semiotics (1969–97 series), edited by T. A. Sebeok, A. Rey, R. Posner, et al. Berlin: De Gruyter.
Biosemiotics, journal of the International Society for Biosemiotic Studies.
Cybernetics and Human Knowing, edited by S. Brier (chief).
International Journal of Marketing Semiotics, edited by G. Rossolatos (chief).
International Journal of Signs and Semiotic Systems (IJSSS), edited by A. Loula & J. Queiroz.
The Public Journal of Semiotics, edited by P. Bouissac (eic), A. Cienki (assoc.), R. Jorna, and W. Nöth.
S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) (2001–7), edited by E. Taborsky. Toronto: SEE.
The Semiotic Review of Books, edited by G. Genosko (gen.) and P. Bouissac (founding ed.).
Semiotica, edited by M. Danesi (chief). International Association for Semiotic Studies.
Semiotiche, edited by A. Valle and M. Visalli.
Semiotics, Communication and Cognition (series), edited by P. Cobley and K. Kull.
Semiotics: Yearbook of the Semiotic Society of America, edited by J. Pelkey. US: Semiotic Society of America.
SemiotiX New Series: A Global Information Bulletin, edited by P. Bouissac, et al.
Sign Systems Studies, edited by O. Puumeister, K. Kull, et al. Estonia: Dept. of Semiotics, University of Tartu.
Signs and Society, edited by R. J. Parmentier.
Signs: International Journal of Semiotics, edited by M. Thellefsen, T. Thellefsen, and B. Sørensen (chief eds.).
Tartu Semiotics Library (series), edited by P. Torop, K. Kull, S. Salupere.
Transactions of the Charles S. Peirce Society, edited by C. de Waal (chief). The Charles S. Peirce Society.
Versus: Quaderni di studi semiotici, founded by U. Eco.
Perennial philosophy
The perennial philosophy, also referred to as perennialism and perennial wisdom, is a school of thought in philosophy and spirituality which posits that the recurrence of common themes across world religions illuminates universal truths about the nature of reality, humanity, ethics, and consciousness. Some perennialists emphasise common themes in religious experiences and mystical traditions across time and culture, while others argue that religious traditions share a single, metaphysical truth or origin from which all esoteric and exoteric knowledge and doctrine has grown. Perennialism has its roots in the Renaissance interest in neo-Platonism and its idea of the One, from which all existence emerges. Marsilio Ficino (1433–1499) sought to integrate Hermeticism with Greek and Christian thought, discerning a prisca theologia which could be found in all ages. Giovanni Pico della Mirandola (1463–1494) suggested that truth could be found in many, rather than just two, traditions. He proposed a harmony between the thought of Plato and Aristotle, and saw aspects of the prisca theologia in Averroes (Ibn Rushd), the Quran, the Kabbalah and other sources. Agostino Steuco (1497–1548) coined the term philosophia perennis. Developments in the 19th and 20th centuries integrated Eastern religions and universalism, the idea that all religions, underneath seeming differences, point to the same Truth. In the early 19th century the Transcendentalists propagated the idea of a metaphysical Truth and universalism, which inspired the Unitarians, who proselytized among Indian elites. Towards the end of the 19th century, the Theosophical Society further popularized universalism, not only in the western world, but also in western colonies. In the 20th century, this form of universalist perennialism was further popularized by Aldous Huxley and his book The Perennial Philosophy, which was inspired by Neo-Vedanta.
Huxley and some other perennialists ground their point of view in the commonalities of mystical experience and generally accept religious syncretism. Also in the 20th century, the anti-modern Traditionalist School emerged in contrast to the universalist approach to perennialism. Inspired by Advaita Vedanta, Sufism and 20th-century works critical of modernity such as René Guénon's The Crisis of the Modern World, Traditionalism emphasises a metaphysical, single origin of the orthodox religions and rejects syncretism, scientism and secularism as deviations from the truth contained in their concept of Tradition.

Definition

There is no universally agreed-upon definition of the term "perennial philosophy", and various thinkers have employed the term in different ways. For all perennialists, the term denotes a common wisdom at the heart of world religions, but exponents across time and place have differed on whether, or how, it can be defined. Some perennialists emphasise a sense of participation in an ineffable truth discovered in mystical experience, though ultimately beyond the scope of complete human understanding. Others seek a more well-developed metaphysics. Drawing upon the same Renaissance foundations, in the 20th century the mystical universalist interpretation popularised by Aldous Huxley and the metaphysical approach of the Traditionalist School became particularly influential.

Renaissance

The idea of a perennial philosophy originated with a number of Renaissance theologians who took inspiration from neo-Platonism and from the theory of Forms. Marsilio Ficino (1433–1499) argued that there is an underlying unity to the world, the soul or love, which has a counterpart in the realm of ideas. According to Giovanni Pico della Mirandola (1463–1494), a student of Ficino, truth could be found in many, rather than just two, traditions.
According to Agostino Steuco (1497–1548) there is "one principle of all things, of which there has always been one and the same knowledge among all peoples."

Aldous Huxley and mystical universalism

Aldous Huxley, author of the popular book The Perennial Philosophy, propagated a universalist interpretation of the world religions, inspired by Vivekananda's neo-Vedanta and his own use of psychedelic drugs. Huxley's approach to perennialism is grounded in ineffable mystical experience, which ego can obscure. In his 1944 essay in Vedanta and the West, Huxley proposes "The Minimum Working Hypothesis", a basic outline which an individual can adopt to achieve the "Godhead".

Traditionalist School

For the Traditionalist Seyyed Hossein Nasr, the perennial philosophy is rooted in the concept of Tradition.

Origins

The perennial philosophy originates from a blending of neo-Platonism and Christianity. Neo-Platonism itself has diverse origins in the syncretic culture of the Hellenistic period, and was an influential philosophy throughout the Middle Ages.

Classical world

Hellenistic period: religious syncretism

During the Hellenistic period, Alexander the Great's campaigns brought about an exchange of cultural ideas along their path throughout most of the known world of his era. The Greek Eleusinian Mysteries and Dionysian Mysteries mixed with such influences as the Cult of Isis, Mithraism and Hinduism, along with some Persian influences. Such cross-cultural exchange was not new to the Greeks; the Egyptian god Osiris and the Greek god Dionysus had been equated as Osiris-Dionysus by the historian Herodotus as early as the 5th century BCE.

Roman world: Philo of Alexandria

Philo of Alexandria attempted to reconcile Greek rationalism with the Torah, which helped pave the way for Christianity's engagement with neoplatonism and its adoption of the Old Testament, as opposed to the Gnostic roots of Christianity.
Philo translated Judaism into terms of Stoic, Platonic and neopythagorean elements, and held that God is "supra-rational" and can be reached only through "ecstasy". He also held that the oracles of God supply the material of moral and religious knowledge.

Neoplatonism

Neoplatonism arose in the 3rd century CE and persisted until shortly after the closing of the Platonic Academy in Athens in 529 CE by Justinian I. Neoplatonists were heavily influenced by Plato, but also by the Platonic tradition that thrived during the six centuries which separated the first of the neoplatonists from Plato. The work of neoplatonic philosophy involved describing the derivation of the whole of reality from a single principle, "the One." Neoplatonism was founded by Plotinus, and has been very influential throughout history. In the Middle Ages, neoplatonic ideas were integrated into the philosophical and theological works of many of the most important medieval Islamic, Christian, and Jewish thinkers.

Renaissance

Ficino and Pico della Mirandola

Marsilio Ficino (1433–1499) believed that Hermes Trismegistus, the supposed author of the Corpus Hermeticum, was a contemporary of Moses and the teacher of Pythagoras, and the source of both Greek and Christian thought. He argued that there is an underlying unity to the world, the soul or love, which has a counterpart in the realm of ideas. Platonic philosophy and Christian theology both embody this truth. Ficino was influenced by a variety of sources, including Aristotelian Scholasticism and various pseudonymous and mystical writings. He saw his thought as part of a long development of philosophical truth, of ancient pre-Platonic philosophers (including Zoroaster, Hermes Trismegistus, Orpheus, Aglaophemus and Pythagoras) who reached their peak in Plato. The prisca theologia, or venerable and ancient theology, which embodied the truth and could be found in all ages, was a vitally important idea for Ficino.
Giovanni Pico della Mirandola (1463–1494), a student of Ficino, went further than his teacher by suggesting that truth could be found in many, rather than just two, traditions. He proposed a harmony between the thought of Plato and Aristotle, and saw aspects of the prisca theologia in Averroes, the Koran and the Kabbalah, among other sources. After the deaths of Pico and Ficino this line of thought expanded, and included Symphorien Champier and Francesco Giorgio.

Steuco: De perenni philosophia libri X

The term philosophia perennis was first used by Agostino Steuco (1497–1548), who used it to title a treatise, De perenni philosophia libri X, published in 1540. De perenni philosophia was the most sustained attempt at philosophical synthesis and harmony. Steuco represents the Renaissance humanist side of 16th-century Biblical scholarship and theology, although he rejected Luther and Calvin. De perenni philosophia is a complex work which contains the term philosophia perennis only twice. It states that there is "one principle of all things, of which there has always been one and the same knowledge among all peoples." This single knowledge (or sapientia) is the key element in his philosophy. In that he emphasises continuity over progress, Steuco's idea of philosophy is not one conventionally associated with the Renaissance. Indeed, he tends to believe that the truth is lost over time and is preserved only in the prisca theologia. Steuco preferred Plato to Aristotle and saw greater congruence between the former and Christianity than the latter philosopher. He held that philosophy works in harmony with religion and should lead to knowledge of God, and that truth flows from a single source, more ancient than the Greeks. Steuco was strongly influenced by Iamblichus's statement that knowledge of God is innate in all, and also gave great importance to Hermes Trismegistus.
Influence

Steuco's perennial philosophy was highly regarded by some scholars for the two centuries after its publication, then largely forgotten until it was rediscovered by Otto Willmann in the late 19th century. Overall, De perenni philosophia was not particularly influential, and its readership was largely confined to those with an orientation similar to Steuco's own. The work was not put on the Index of works banned by the Roman Catholic Church, although his Cosmopoeia, which expressed similar ideas, was. Religious criticisms tended to the conservative view that Christian teachings should be understood as unique, rather than as perfect expressions of truths that are found everywhere. More generally, this philosophical syncretism was set out at the expense of some of the doctrines included within it, and it is possible that Steuco's critical faculties were not up to the task he had set himself. Further, placing so much confidence in the prisca theologia proved a shortcoming, as many of the texts used in this school of thought later turned out to be spurious. In the following two centuries the most favourable responses were largely Protestant and often in England. Gottfried Leibniz later picked up on Steuco's term. The German philosopher stands in the tradition of this concordistic philosophy; his philosophy of harmony especially had affinity with Steuco's ideas. Leibniz knew of Steuco's work by 1687, but thought that a work by the Huguenot philosopher Philippe du Plessis-Mornay expressed the same truth better. Steuco's influence can be found throughout Leibniz's works, but the German was the first philosopher to refer to the perennial philosophy without mentioning the Italian.

Popularisation and later developments

Transcendentalism and Unitarian Universalism

Ralph Waldo Emerson (1803–1882) was a pioneer of the idea of spirituality as a distinct field.
He was one of the major figures in Transcendentalism, which was rooted in English and German Romanticism, the Biblical criticism of Herder and Schleiermacher, and the skepticism of Hume. The Transcendentalists emphasised an intuitive, experiential approach to religion. Following Schleiermacher, an individual's intuition of truth was taken as the criterion for truth. The Transcendentalists were largely inspired by Thomas Carlyle (1795–1881), whose Critical and Miscellaneous Essays popularised German Romanticism in English and whose Sartor Resartus (1833–34) was a pioneering work of Western perennialism. They also read and were influenced by Hindu texts, the first translations of which appeared in the late 18th and early 19th centuries. They also endorsed universalist and Unitarian ideas, leading in the 20th century to Unitarian Universalism. Universalism holds that there must be truth in other religions as well, since a loving God would redeem all living beings, not just Christians.

Theosophical Society

By the end of the 19th century, the idea of a perennial philosophy was popularized by leaders of the Theosophical Society such as H. P. Blavatsky and Annie Besant, under the name of "Wisdom-Religion" or "Ancient Wisdom". The Theosophical Society took an active interest in Asian religions, not only bringing those religions to the attention of a Western audience but also influencing Hinduism and Buddhism in Sri Lanka and Japan.

Neo-Vedanta

Many perennialist thinkers (including Armstrong, Gerald Heard, Aldous Huxley, Huston Smith and Joseph Campbell) were influenced by the Hindu mystics Ramakrishna and Swami Vivekananda, who had themselves adopted Western notions of universalism. They regarded Hinduism as a token of this perennial philosophy. This notion has influenced thinkers who have proposed versions of the perennial philosophy in the 20th century.
The unity of all religions was a central impulse among Hindu reformers in the 19th century, who in turn influenced many 20th-century thinkers of the perennial-philosophy type. Key figures in this reforming movement included two Bengali Brahmins. Ram Mohan Roy, a philosopher and the founder of the modernising Brahmo Samaj religious organisation, reasoned that the divine was beyond description and thus that no religion could claim a monopoly in its understanding of it. The mystic Ramakrishna's spiritual ecstasies included experiencing his identity with Christ, Mohammed and his own Hindu deity. Ramakrishna's most famous disciple, Swami Vivekananda, travelled to the United States in the 1890s, where he formed the Vedanta Society. Roy, Ramakrishna and Vivekananda were all influenced by the Hindu school of Advaita Vedanta, which they saw as the exemplification of a universalist Hindu religiosity.

Traditionalist School

The Traditionalist School is a group of 20th- and 21st-century thinkers concerned with what they consider to be the demise of traditional forms of knowledge, both aesthetic and spiritual, within Western society. Its early proponents are René Guénon, Ananda Coomaraswamy and Frithjof Schuon. Other important thinkers in this tradition include Titus Burckhardt, Martin Lings, Seyyed Hossein Nasr, Jean-Louis Michon, Marco Pallis, Huston Smith, Jean Borella, and Elémire Zolla. According to the Traditionalist School, orthodox religions are based on a singular metaphysical origin, and the "philosophia perennis" designates a worldview that is opposed to the scientism of modern secular societies and which promotes the rediscovery of the wisdom traditions of the pre-secular world. This view is exemplified by René Guénon in his 1945 book The Reign of Quantity and the Signs of the Times, one of the founding works of the Traditionalist School.
According to Frithjof Schuon: The Traditionalist School continues this metaphysical orientation. According to this school, the perennial philosophy is "absolute Truth and infinite Presence". Absolute Truth is "the perennial wisdom (sophia perennis) that stands as the transcendent source of all the intrinsically orthodox religions of humankind." Infinite Presence is "the perennial religion (religio perennis) that lives within the heart of all intrinsically orthodox religions." The Traditionalist School discerns a transcendent and an immanent dimension: the discernment of the Real or Absolute, i.e. that which is permanent, and the intentional "mystical concentration on the Real". According to Soares de Azevedo, the perennialist philosophy states that the universal truth is the same within each of the world's orthodox religious traditions, and is the foundation of their religious knowledge and doctrine. Each world religion is an interpretation of this universal truth, adapted to cater for the psychological, intellectual, and social needs of a given culture in a given period of history. This perennial truth has been rediscovered in each epoch by mystics of all kinds, who have revived already existing religions when these had fallen into empty platitudes and hollow ceremonialism. Shipley further notes that the Traditionalist School is oriented toward orthodox traditions, and rejects the modern syncretism and universalism which create new religions from older ones and thereby compromise the standing traditions.

Aldous Huxley

The term was popularized in the mid-twentieth century by Aldous Huxley, who was profoundly influenced by Vivekananda's Neo-Vedanta and Universalism.
In his 1945 book The Perennial Philosophy he defined the perennial philosophy as: In contrast to the Traditionalist School, Huxley emphasized mystical experience over metaphysics: According to Aldous Huxley, in order to apprehend the divine reality, one must choose to fulfill certain conditions: "making themselves loving, pure in heart and poor in spirit." Huxley argues that very few people can achieve this state. Those who have fulfilled these conditions, grasped the universal truth and interpreted it have generally been given the name of saint, prophet, sage or enlightened one. Huxley argues that those who have "modified their merely human mode of being," and have thus been able to comprehend "more than merely human kind and amount of knowledge," have also achieved this enlightened state.

New Age

The idea of a perennial philosophy is influential in the New Age, a loosely defined Western spiritual movement that developed in the second half of the 20th century. Its central precepts have been described as "drawing on both Eastern and Western spiritual and metaphysical traditions and infusing them with influences from self-help and motivational psychology, holistic health, parapsychology, consciousness research and quantum physics". The term New Age refers to the coming astrological Age of Aquarius. The New Age aims to create "a spirituality without borders or confining dogmas" that is inclusive and pluralistic. It holds to "a holistic worldview", emphasising that Mind, Body and Spirit are interrelated and that there is a form of monism and unity throughout the universe. It attempts to create "a worldview that includes both science and spirituality" and embraces a number of forms of mainstream science as well as other forms of science that are considered fringe.

Academic discussions

Mystical experience

The idea of a perennial philosophy, sometimes called perennialism, is a key area of debate in the academic discussion of mystical experience.
Huston Smith notes that the Traditionalist School's vision of a perennial philosophy is based not on mystical experiences, but on metaphysical intuitions. The discussion of mystical experience has shifted the emphasis in the perennial philosophy from these metaphysical intuitions to religious experience and the notion of nonduality or altered states of consciousness. William James popularized the use of the term "religious experience" in his 1902 book The Varieties of Religious Experience, which also influenced the understanding of mysticism as a distinctive experience that supplies knowledge. Writers such as W. T. Stace, Huston Smith, and Robert Forman argue that there are core similarities to mystical experience across religions, cultures and eras. For Stace the universality of this core experience is a necessary, although not sufficient, condition for one to be able to trust the cognitive content of any religious experience. Wayne Proudfoot traces the roots of the notion of "religious experience" further back to the German theologian Friedrich Schleiermacher (1768–1834), who argued that religion is based on a feeling of the infinite. The notion of "religious experience" was used by Schleiermacher to defend religion against the growing scientific and secular critique. It was adopted by many scholars of religion, of whom William James was the most influential. Critics point out that the emphasis on "experience" favours the atomic individual over the community. It also fails to distinguish between episodic experience and mysticism as a process, embedded in a total religious matrix of liturgy, scripture, worship, virtues, theology, rituals and practices. Richard King also points to a disjunction between "mystical experience" and social justice:

Religious pluralism

Religious pluralism holds that the various world religions are limited by their distinctive historical and cultural contexts, and thus that there is no single, true religion.
There are only many equally valid religions. Each religion is a direct result of humanity's attempt to grasp and understand the incomprehensible divine reality. Therefore, each religion has an authentic but ultimately inadequate perception of divine reality, producing a partial understanding of the universal truth, which requires syncretism to achieve a complete understanding as well as a path towards salvation or spiritual enlightenment. Although perennial philosophy also holds that there is no single true religion, it differs when discussing divine reality. Perennial philosophy states that a divine reality can be understood, and that its existence is what allows the universal truth to be understood. Each religion provides its own interpretation of the universal truth, based on its historical and cultural context, potentially providing everything required to observe the divine reality and achieve a state in which one will be able to confirm the universal truth and attain salvation or spiritual enlightenment.

Evidence for perennial philosophy

Cognitive archeology, such as the analysis of cave paintings and other prehistoric art and customs, suggests that a form of perennial philosophy or shamanic metaphysics may stretch back to the birth of behavioral modernity, all around the world. Similar beliefs are found in present-day "stone age" cultures such as Aboriginal Australians. Perennial philosophy postulates the existence of a spirit or concept world alongside the day-to-day world, and interactions between these worlds during dreaming and ritual, or on special days or at special places. It has been argued that perennial philosophy formed the basis for Platonism, with Plato articulating, rather than creating, much older widespread beliefs.

Perennial trends in religions

Perennialists often ground their position in what they call a "common core" of religious wisdom which is found across traditions.
They argue that since many of these themes developed independently of contact between the cultures concerned, they are likely to point to deeper truths from anthropological, phenomenological and/or metaphysical perspectives. Perennialists generally make a distinction between the exoteric and esoteric dimensions of the various religions, arguing that the exoteric doctrinal differences are cultural in nature, but that the mystical traditions of these religions use the language of these doctrines and cultural forms to express identical or similar things. The perennialist rabbi Rami Shapiro expresses it from a Jewish perspective in this way: Religions are like languages: no language is true or false; all languages are of human origin; each language reflects and shapes the civilization that speaks it; there are things you can say in one language that you cannot say, or say as well, in another; and the more languages you speak, the more nuanced your understanding of life becomes. Yet it is silence that reveals the ultimate Truth: אֵ֥ין ע֖וד מִלְבַדֽו/ein od milvado There is nothing other than God (Deut 4:35). What follows is a summary of some of the perennialist currents which have emerged in various religions.

Hinduism

The famous Hindu mystic Sri Ramakrishna stated that God can be realized through many different means, and therefore that all religions are true, because each religion is simply a different means towards the ultimate goal.

Christianity

Clement of Alexandria, who had both knowledge of and admiration for Greek philosophy, thought that Greek wisdom did not contradict Christianity because it shared its source with it. According to him, philosophy is not secular knowledge but sacred knowledge derived from the reason revealed in Christ.

Islam/Sufism

In general, Muslims have shown a tendency towards religious exclusivism, as in the other Abrahamic religions. However, there have been some exceptions to this in history. Hallaj was one of the leading Sufis with a perennial perspective.
Hallaj said the following about a co-religionist who insulted a Jew: The Sufi Inayat Khan, who lived in the 20th century, presented Sufism to the public in its universal aspect, stated that it repeated the same common message as the mystical branches of other religions, and frequently referred to different religious and mystical traditions in his speeches and writings.

Criticism

Criticism of perennialism has come from academic and traditional religious circles. Academic critiques include the contention that perennialists make ontological claims about Divinity, God(s), and supernatural powers that cannot be verified in practice, and that they take an ahistorical or transhistorical view, overemphasizing similarities and downplaying differences between religions. Craig Martin argues that perennialism involves empirical claims, but that perennialists circumvent these issues and make unfalsifiable claims that resemble the No true Scotsman fallacy. Religious criticism has emerged from within various traditions, including Christianity, Islam, and Hinduism. Tom Facchine argues that by prioritizing mystical experience over revelation and sacred texts, perennialists neglect, ignore, or reinterpret the truth claims found in the religious traditions they engage with, or interpret or distort the words of certain historical religious figures to confirm their own views. Gary Stogsdill argues that perennialism can have negative social consequences, perceiving it as anthropocentric and individualistic, and arguing that concepts such as "enlightenment" can be abused by unethical gurus and teachers. Some thinkers of the Traditionalist School have been criticised for their influence on far-right politics. Julius Evola, in particular, was active in Italian fascist politics during his lifetime and counted Benito Mussolini among his admirers. References to Evola are widespread in the alt-right movement; Steve Bannon has called him an influence.
Paul Furlong argues that "Evola's initial writings in the inter-war period were from an ideological position close to the Fascist regime in Italy, though not identical to it". Over his active years, Furlong writes, he "synthesized" the spiritual bearings of writers like Guénon with his political concerns of the "European authoritarian Right". Evola tried to develop a tradition different from that of Guénon and thus attempted a "strategy of active revolt as a counterpart to the spiritual withdrawal favoured by Guénon". Evola, as Furlong puts it, wanted political influence in both the Fascist and Nazi regimes, something he failed to achieve. Renaud Fabbri, a Traditionalist scholar, argues that "certain figures such as Mircea Eliade (1907–1986), Henry Corbin (1903–1978) and Julius Evola (1898–1974) cannot be considered as members of the Perennialist school, despite the fact that they have been influenced at some levels by Perennialism and may have used some of their ideas to support their own views." World Wisdom, a major publisher of Traditionalist literature, does not print or sell the works of these authors.

Further reading

Aldous Huxley, The Perennial Philosophy (Harper Perennial Modern Classics, 2009).
Frithjof Schuon, The Transcendent Unity of Religions (Quest Books, 1984).
William W. Quinn, Jr., The Only Tradition, S.U.N.Y. Series in Western Esoteric Traditions (Albany, NY: State University of New York Press, 1997).
Samuel Bendeck Sotillos (ed.), Psychology and the Perennial Philosophy, Studies in Comparative Religion (Bloomington, IN: World Wisdom Books, 2013).
Zachary Markwith, "Muslim Intellectuals and the Perennial Philosophy in the Twentieth Century", Sophia Perennis Vol. 1, No. 1 (Tehran: Iranian Institute of Philosophy, 2009).
Inayat Khan, The Unity of Religious Ideals (Sufi Order Publications, 1979).
Toponymy
Toponymy, toponymics, or toponomastics is the study of toponyms (proper names of places, also known as place names and geographic names), including their origins, meanings, usage and types. Toponym is the general term for a proper name of any geographical feature, and the full scope of the term also includes proper names of all cosmographical features. In a more specific sense, the term toponymy refers to an inventory of toponyms, while the discipline researching such names is referred to as toponymics or toponomastics. Toponymy is a branch of onomastics, the study of proper names of all kinds. A person who studies toponymy is called a toponymist.

Etymology

The term toponymy comes from Greek τόπος / tópos, 'place', and ὄνομα / ónoma, 'name'. The Oxford English Dictionary records toponymy (meaning "place name") first appearing in English in 1876. Since then, toponym has come to replace the term place-name in professional discourse among geographers.

Toponymic typology

Toponyms can be divided into two principal groups:
geonyms - proper names of all geographical features on planet Earth.
cosmonyms - proper names of cosmographical features outside Earth.
Various types of geographical toponyms (geonyms) include, in alphabetical order:
agronyms - proper names of fields and plains.
choronyms - proper names of regions or countries.
dromonyms - proper names of roads or any other transport routes by land, water or air.
drymonyms - proper names of woods and forests.
econyms - proper names of inhabited locations, like houses, villages, towns or cities, including:
comonyms - proper names of villages.
astionyms - proper names of towns and cities.
hydronyms - proper names of various bodies of water, including:
helonyms - proper names of swamps, marshes and bogs.
limnonyms - proper names of lakes and ponds.
oceanonyms - proper names of oceans.
pelagonyms - proper names of seas.
potamonyms - proper names of rivers and streams.
insulonyms - proper names of islands.
metatoponyms - proper names of places containing recursive elements (e.g. Red River Valley Road).
oronyms - proper names of relief features, like mountains, hills and valleys, including:
speleonyms - proper names of caves or other subterranean features.
petronyms - proper names of rock climbing routes.
urbanonyms - proper names of urban elements (streets, squares etc.) in settlements, including:
agoronyms - proper names of squares and marketplaces.
hodonyms - proper names of streets and roads.
Various types of cosmographical toponyms (cosmonyms) include:
asteroidonyms - proper names of asteroids.
astronyms - proper names of stars and constellations.
cometonyms - proper names of comets.
meteoronyms - proper names of meteors.
planetonyms - proper names of planets and planetary systems.

History

Probably the first toponymists were the storytellers and poets who explained the origin of specific place names as part of their tales; sometimes place names served as the basis for their etiological legends. The process of folk etymology usually took over, whereby a false meaning was extracted from a name based on its structure or sounds. Thus, for example, the toponym Hellespont was explained by Greek poets as being named after Helle, daughter of Athamas, who drowned there as she crossed it with her brother Phrixus on a flying golden ram. The name, however, is probably derived from an older language, such as Pelasgian, which was unknown to those who explained its origin. In his Names on the Globe, George R. Stewart theorizes that Hellespont originally meant something like 'narrow Pontus' or 'entrance to Pontus', Pontus being an ancient name for the region around the Black Sea, and by extension, for the sea itself. Especially in the 19th century, the age of exploration, many toponyms were given different names out of national pride.
Thus the famous German cartographer Petermann thought that the naming of newly discovered physical features was one of the privileges of a map-editor, especially as he was fed up with forever encountering toponyms like 'Victoria', 'Wellington', 'Smith', 'Jones', etc. He writes: "While constructing the new map to specify the detailed topographical portrayal and after consulting with and authorization of Messrs. Theodor von Heuglin and Count Karl Graf von Waldburg-Zeil I have entered 118 names in the map: partly they are the names derived from celebrities of arctic explorations and discoveries, arctic travellers anyway as well as excellent friends, patrons, and participants of different nationalities in the newest northpolar expeditions, partly eminent German travellers in Africa, Australia, America ...". Toponyms may change through time, due to developments in languages, political events and border adjustments, to name but a few causes. More recently, many postcolonial countries have reverted to their own nomenclature for toponyms that had been named by colonial powers.

Toponomastics

Place names provide the most useful geographical reference system in the world. Consistency and accuracy are essential in referring to a place to prevent confusion in everyday business and recreation. A toponymist, through well-established local principles and procedures developed in cooperation and consultation with the United Nations Group of Experts on Geographical Names (UNGEGN), applies the science of toponymy to establish officially recognized geographical names. A toponymist relies not only on maps and local histories, but also on interviews with local residents to determine names with established local usage. The exact application of a toponym, its specific language, its pronunciation, and its origins and meaning are all important facts to be recorded during name surveys.
Scholars have found that toponyms provide valuable insight into the historical geography of a particular region. In 1954, F. M. Powicke said of place-name study that it "uses, enriches and tests the discoveries of archaeology and history and the rules of the philologists." Toponyms not only illustrate ethnic settlement patterns, but can also help identify discrete periods of immigration. Toponymists are responsible for the active preservation of their region's culture through its toponymy. They typically ensure the ongoing development of a geographical names database and associated publications, for recording and disseminating authoritative hard-copy and digital toponymic data. This data may be disseminated in a wide variety of formats, including hard-copy topographic maps as well as digital formats such as geographic information systems, Google Maps, or thesauri like the Getty Thesaurus of Geographic Names.

Toponymic commemoration

In 2002, the United Nations Conference on the Standardization of Geographical Names acknowledged that, while common, the practice of naming geographical places after living persons (toponymic commemoration) could be problematic. The United Nations Group of Experts on Geographical Names therefore recommends that it be avoided and that national authorities set their own guidelines as to the time required after a person's death before a commemorative name may be used. In the same vein, the writers Pinchevski and Torgovnik (2002) consider the naming of streets a political act in which holders of the legitimate monopoly to name aspire to engrave their ideological views in the social space. Similarly, the revisionist practice of renaming streets, as both the celebration of triumph and the repudiation of the old regime, is another issue of toponymy.
Also, in the context of Slavic nationalism, the name of Saint Petersburg was changed to the more Slavic-sounding Petrograd from 1914 to 1924, then to Leningrad following the death of Vladimir Lenin, and back to Saint Petersburg in 1991 following the dissolution of the Soviet Union. After 1830, in the wake of the Greek War of Independence and the establishment of an independent Greek state, Turkish, Slavic and Italian place names were Hellenized, as an effort of "toponymic cleansing". This nationalization of place names can also manifest itself in a postcolonial context. In Canada, there have been initiatives in recent years "to restore traditional names to reflect the Indigenous culture wherever possible". Indigenous mapping is a process that can include restoring place names by Indigenous communities themselves. Frictions sometimes arise between countries because of toponymy, as illustrated by the Macedonia naming dispute, in which Greece claimed the name Macedonia, the Sea of Japan naming dispute between Japan and Korea, and the Persian Gulf naming dispute. On 20 September 1996 a note on the internet reflected a query by a Canadian surfer, who wrote: 'One producer of maps labeled the water body "Persian Gulf" on a 1977 map of Iran, and then "Arabian Gulf", also in 1977, in a map which focused on the Gulf States. I would gather that this is an indication of the "politics of maps", but I would be interested to know if this was done to avoid upsetting users of the Iran map and users of the map showing Arab Gulf States'. This symbolizes a further aspect of the topic, namely the spilling over of the problem from the purely political to the economic sphere.

Geographic names boards

A geographic names board is an official body established by a government to decide on official names for geographical areas and features. Most countries have such a body, which is commonly (but not always) known under this name.
Also, in some countries (especially those organised on a federal basis), subdivisions such as individual states or provinces will have individual boards. Individual geographic names boards include:

Antarctic Place-names Commission
Commission nationale de toponymie (National Commission on Toponymy, France)
Geographical Names Board of Canada
Geographical Names Board of New South Wales
New Zealand Geographic Board
South African Geographical Names Council
United States Board on Geographic Names

Notable toponymists

Marcel Aurousseau (1891–1983), Australian geographer, geologist, war hero, historian and translator
Guido Borghi (born 1969), Italian historical linguist and toponymist
Andrew Breeze (born 1954), English linguist
William Bright (1928–2006), American linguist
Richard Coates (born 1949), English linguist
Joan Coromines (1905–1997), etymologist, dialectologist and toponymist
Albert Dauzat (1877–1955), French linguist
Eilert Ekwall (1877–1964), Sweden
Henry Gannett (1846–1914), American geographer
Margaret Gelling (1924–2009), English toponymist
Michel Grosclaude (1926–2002), philosopher and French linguist
Erwin Gustav Gudde
Joshua Nash, Australian linguist and toponymist
Ernest Nègre (1907–2000), French toponymist
W. F. H. Nicolaisen (1927–2016), folklorist, linguist and medievalist
Oliver Padel (born 1948), English medievalist and toponymist
Francesco Perono Cacciafoco (born 1980), Italian historical linguist and toponymist
Robert L. Ramsay (1880–1953), American linguist
Adrian Room (1933–2010), British toponymist and onomastician
Charles Rostaing (1904–1999), French linguist
Henry Schoolcraft (1793–1864), American geographer, geologist and ethnologist
Walter Skeat (1835–1912), British philologist
Albert Hugh Smith (1903–1967), scholar of Old English and Scandinavian languages
Frank Stenton (1880–1967), historian of Anglo-Saxon England
George R. Stewart (1895–1980), American historian, toponymist and novelist
Jan Paul Strid (1947–2018), Swedish toponymist
Isaac Taylor (1829–1901), philologist, toponymist and Anglican canon of York
Jan Tent, Australian linguist and toponymist
James Hammond Trumbull (1821–1897), American scholar and philologist
William J. Watson (1865–1948), Scottish scholar

See also

Related concepts
Anthroponymy
Demonymy
List of demonyms for US states and territories
Ethnonymy
Exonym and endonym
Gazetteer
Lists of places
Oeconym
Toponymy of the Kerguelen Islands
Toponymy
Toponymic surname
Planetary nomenclature

Hydronymy
Latin names of European rivers
Latin names of rivers
List of river name etymologies
Old European hydronymy

Regional toponymy
Biblical toponyms in the United States
Celtic toponymy
German toponymy
Germanic toponymy
Historical African place names
Japanese place names
Korean toponymy and list of place names
List of English exonyms for German toponyms
List of French exonyms for Dutch toponyms
List of French exonyms for German toponyms
List of French exonyms for Italian toponyms
List of Latin place names in Europe
List of modern names for biblical place names
List of renamed places in the United States
List of U.S. place names connected to Sweden
List of U.S. States and Territorial demonyms
List of U.S. state name etymologies
List of U.S. state nicknames
Maghreb toponymy
Names of European cities in different languages
New Zealand place names
Norman toponymy
Oikonyms in Western and South Asia
Place names of Palestine
Hebraization of Palestinian place names
Place names in Sri Lanka
Roman place names
Toponyms of Finland
Toponyms of Turkey
Toponymy in the United Kingdom and Ireland
List of British places with Latin names
List of generic forms in place names in the British Isles
List of places in the United Kingdom
List of Roman place names in Britain
Place names in Irish
Welsh place names
Territorial designation
Toponymical list of counties of the United Kingdom

Other
Labeling (map design)
List of adjectival forms of place names
List of double placenames
List of long place names
List of names in English with counterintuitive pronunciations
List of places named after peace
List of places named after Lenin
List of places named after Stalin
List of places named for their main products
List of political entities named after people
List of short place names
List of tautological place names
List of words derived from toponyms
Lists of things named after places
List of geographic acronyms and initialisms
List of geographic portmanteaus
List of geographic anagrams and ananyms
United Nations Group of Experts on Geographical Names
UNGEGN Toponymic Guidelines

References

Sources

Further reading

Berg, Lawrence D. and Jani Vuolteenaho. 2009. Critical Toponymies (Re-Materialising Cultural Geography). Ashgate Publishing.
Cablitz, Gabriele H. 2008. "When 'what' is 'where': A linguistic analysis of landscape terms, place names and body part terms in Marquesan (Oceanic, French Polynesia)." Language Sciences 30(2/3):200–26.
Desjardins, Louis-Hébert. 1973. Les noms géographiques: lexique polyglotte, suivi d'un glossaire de 500 mots. Leméac.
Hargitai, Henrik I. 2006. "Planetary Maps: Visualization and Nomenclature." Cartographica 41(2):149–64.
Hargitai, Henrik I., Hugh S. Gregory, Jan Osburg, and Dennis Hands. 2007.
"Development of a Local Toponym System at the Mars Desert Research Station." Cartographica 42(2):179–87.
Hercus, Luise, Flavia Hodges, and Jane Simpson. 2009. The Land is a Map: Placenames of Indigenous Origin in Australia. Pandanus Books.
Kadmon, Naftali. 2000. Toponymy: The Lore, Laws, and Language of Geographical Names. Vantage Press.
Perono Cacciafoco, Francesco and Francesco Paolo Cavallaro. 2023. Place Names: Approaches and Perspectives in Toponymy and Toponomastics. Cambridge University Press.

External links

Who Was Who in North American Name Study
Forgotten Toponymy Board (German)
The origins of British place names (archived 1 March 2012)
An Index to the Historical Place Names of Cornwall
Celtic toponymy (archived 10 February 2012)
The Doukhobor Gazetteer, Doukhobor Heritage website, by Jonathan Kalmakoff
O'Brien Jr., Francis J. (Moondancer), "Indian Place Names—Aquidneck Indian Council"
Ghana Place Names Index
Anatolicus: Toponyms of Turkey
The University of Nottingham's Key to English Place-names searchable map
The Etymology of Mars crater names on Internet Archive
Philomath
A philomath is a lover of learning and studying. The term is from Greek philos ("beloved", "loving", as in philosophy or philanthropy) and manthanein, math- ("to learn", as in polymath). Philomathy is similar to, but distinguished from, philosophy in that -sophia, the latter suffix, specifies "wisdom" or "knowledge", rather than the process of acquisition thereof. Philomath is not synonymous with polymath, as a polymath is someone who possesses great and detailed knowledge and facts from a variety of disciplines, while a philomath is someone who greatly enjoys learning and studying.

Overview

The shift in meaning for mathēma is likely a result of the rapid categorization, during the time of Plato and Aristotle, of their mathēmata in terms of education: arithmetic, geometry, astronomy, and music (the quadrivium), which the Greeks found to create a "natural grouping" of mathematical precepts. In a philosophical dialogue, King James penned the character Philomathes to debate whether the ancient religious concept of witchcraft should be punished in a politically fueled Christian society. The arguments King James poses through the character Epistemon are based on concepts of theological reasoning regarding society's belief, while his opponent, Philomathes, takes a philosophical stance on society's legal aspects but seeks to obtain the knowledge of Epistemon. This philosophical approach signified a philomath seeking to obtain greater knowledge through epistemology. The dialogue was used by King James to educate society on various concepts, including the history and etymology of the subjects debated.
See also

Benjamin Franklin, who used the pen name Philomath
Philomath, Oregon
Philomathean Literary Society (Erskine College)
Philomathean Society, a literary society at the University of Pennsylvania
Philomathean Society (New York University)
Philomaths, a Polish secret student organization that existed from 1817 to 1823 at the Imperial University of Vilnius

References
Feasibility study
A feasibility study is an assessment of the practicality of a project or system. A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, the opportunities and threats present in the natural environment, the resources required to carry the project through, and ultimately its prospects for success. In its simplest terms, the two criteria by which to judge feasibility are the cost required and the value to be attained. A well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements and tax obligations. Generally, feasibility studies precede technical development and project implementation. A feasibility study evaluates the project's potential for success; therefore, perceived objectivity is an important factor in the credibility of the study for potential investors and lending institutions. It must therefore be conducted with an objective, unbiased approach to provide information upon which decisions can be based.

Formal definition

A project feasibility study is a comprehensive report that examines in detail the five frames of analysis of a given project. It also takes into consideration the project's four Ps, its risks and POVs, and its constraints (calendar, costs, and norms of quality). The goal is to determine whether the project should go ahead, be redesigned, or be abandoned altogether. The five frames of analysis are: the frame of definition; the frame of contextual risks; the frame of potentiality; the parametric frame; and the frame of dominant and contingency strategies. The four Ps are traditionally defined as Plan, Processes, People, and Power.
The risks are considered to be external to the project (e.g., weather conditions) and are divided into eight categories: (Plan) financial and organizational (e.g., government structure for a private project); (Processes) environmental and technological; (People) marketing and sociocultural; and (Power) legal and political. POVs are points of vulnerability: they differ from risks in that they are internal to the project and can be controlled or eliminated. The constraints are the standard constraints of calendar, costs and norms of quality, each of which can be objectively determined and measured along the entire project lifecycle. Depending on the project, portions of the study may suffice to produce a feasibility study; smaller projects, for example, may not require an exhaustive environmental assessment.

Common factors

TELOS is an acronym in project management used to define five areas of feasibility that determine whether a project should run or not.

T - Technical — Is the project technically possible?
E - Economic — Can the project be afforded? Will it increase profit?
L - Legal — Is the project legal?
O - Operational — How will the current operations support the change?
S - Scheduling — Can the project be done in time?

Technical feasibility

This assessment is based on an outline design of system requirements, to determine whether the company has the technical expertise to handle completion of the project. When writing a feasibility report, the following should be taken into consideration:

A brief description of the business, to assess further factors that could affect the study
The part of the business being examined
The human and economic factors
The possible solutions to the problem

At this level, the concern is whether the proposal is both technically and legally feasible (assuming moderate cost).
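The TELOS factors above can be treated as a simple go/no-go checklist. The sketch below is only an illustration of that idea: the factor questions come from the acronym as listed, while the example answers and the pass/fail rule are invented for this sketch, not part of any standard.

```python
# Illustrative TELOS checklist. Factor questions follow the acronym;
# the answers and the "fail any factor -> flag project" rule are
# hypothetical, for demonstration only.

TELOS_FACTORS = {
    "Technical": "Is the project technically possible?",
    "Economic": "Can the project be afforded? Will it increase profit?",
    "Legal": "Is the project legal?",
    "Operational": "How will the current operations support the change?",
    "Scheduling": "Can the project be done in time?",
}

def telos_summary(answers):
    """answers maps factor name -> True (feasible) / False (not feasible)."""
    failed = [f for f in TELOS_FACTORS if not answers.get(f, False)]
    return {"feasible": not failed, "failed_factors": failed}

answers = {"Technical": True, "Economic": True, "Legal": True,
           "Operational": True, "Scheduling": False}
print(telos_summary(answers))
# A project failing any factor would be flagged for redesign or abandonment.
```

A failed factor does not by itself doom a project; as the formal definition above notes, the outcome may be redesign rather than abandonment.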
The technical feasibility assessment is focused on gaining an understanding of the present technical resources of the organization and their applicability to the expected needs of the proposed system. It is an evaluation of the hardware and software and of how they meet the needs of the proposed system.

Method of production

The selection among a number of methods to produce the same commodity should be undertaken first. Factors that make one method preferred over another in agricultural projects include:

Availability of inputs or raw materials, and their quality and prices.
Availability of markets for the outputs of each method, and the expected prices for these outputs.
Various efficiency factors, such as the expected return from one additional unit of fertilizer, or the productivity of a specified crop per unit of input.

Production technique

After the appropriate method of production of a commodity is determined, it is necessary to look for the optimal technique to produce it.

Project requirements

Once the method of production and its technique are determined, technical people have to determine the project's requirements during the investment and operating periods. These include:

Determination of the tools and equipment needed for the project, such as drinkers and feeders, pumps or pipes, etc.
Determination of the project's construction requirements, such as buildings, storage, and roads, etc., in addition to internal designs for these requirements.
Determination of the project's requirements for skilled and unskilled labor, and for managerial and financial labor.
Determination of the construction period, covering the costs of designs and consultations and the costs of construction and other tools.
Determination of the minimum storage of inputs and the cash needed to cope with operating and contingency costs.

Project location

The most important factors that determine the selection of project location are the following:

Availability of land (proper acreage and reasonable costs).
The impact of the project on the environment, and the approval of the concerned institutions for a license.
The costs of transporting inputs and outputs to the project's location (i.e., the distance from the markets).
Availability of various services related to the project, such as extension services, veterinary services, water, electricity, good roads, etc.

Legal feasibility

This determines whether the proposed system conflicts with legal requirements (e.g., a data processing system must comply with the local data protection regulations) and whether the proposed venture is acceptable under the laws of the land.

Operational feasibility study

Operational feasibility is the measure of how well a proposed system solves problems and takes advantage of the opportunities identified during scope definition, and how it satisfies the requirements identified in the requirements-analysis phase of system development. The operational feasibility assessment focuses on the degree to which the proposed development project fits in with the existing business environment and objectives with regard to the development schedule, delivery date, corporate culture and existing business processes. To ensure success, desired operational outcomes must be imparted during design and development. These include such design-dependent parameters as reliability, maintainability, supportability, usability, producibility, disposability, sustainability and affordability. These parameters must be considered at the early stages of design if the desired operational behaviours are to be realised. A system design and development requires appropriate and timely application of engineering and management efforts to meet the previously mentioned parameters. A system may serve its intended purpose most effectively when its technical and operating characteristics are engineered into the design.
Therefore, operational feasibility is a critical aspect of systems engineering that must be integral to the early design phases.

Time feasibility

A time feasibility study takes into account the period the project will need up to its completion. A project will fail if it takes too long to be completed before it is useful. Typically this means estimating how long the system will take to develop, and whether it can be completed in a given time period using methods such as the payback period. Time feasibility is a measure of how reasonable the project timetable is. Given the available technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines, and it is necessary to determine whether those deadlines are mandatory or desirable.

Other feasibility factors

Resource feasibility

This describes how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, the type and amount of resources required, dependencies, and developmental procedures with company revenue prospectus.

Financial feasibility

In the case of a new project, financial viability can be judged on the following parameters:

Total estimated cost of the project
Financing of the project in terms of its capital structure, debt-to-equity ratio and promoter's share of total cost
Existing investment by the promoter in any other business
Projected cash flow and profitability

The financial viability of a project should provide the following information:

Full details of the assets to be financed and how liquid those assets are.
Rate of conversion to cash-liquidity (i.e., how easily the various assets can be converted to cash).
The project's funding potential and repayment terms.
Sensitivity of the repayment capability to the following factors:
  Mild slowing of sales.
  Acute reduction/slowing of sales.
  Small increase in cost.
  Large increase in cost.
  Adverse economic conditions.
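The payback period mentioned under time feasibility is simply the time until cumulative cash inflows recover the initial outlay. A minimal sketch of the standard calculation, with invented cash-flow figures:

```python
def payback_period(initial_cost, annual_cash_flows):
    """Years until cumulative cash flow recovers the initial cost.

    Interpolates within the recovery year; returns None if the cost
    is never recovered over the given horizon.
    """
    cumulative = 0.0
    for year, cash in enumerate(annual_cash_flows, start=1):
        if cumulative + cash >= initial_cost:
            # Fraction of this year needed to cover the remainder.
            return year - 1 + (initial_cost - cumulative) / cash
        cumulative += cash
    return None

# Hypothetical project: 100,000 invested, uneven annual returns.
print(payback_period(100_000, [30_000, 40_000, 40_000, 20_000]))  # 2.75
```

A shorter payback period indicates better time feasibility, though the measure ignores cash flows after recovery and the time value of money, which is why discounted indicators are also used.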
In 1983 the first generation of the Computer Model for Feasibility Analysis and Reporting (COMFAR), a computation tool for financial analysis of investments, was released. Since then, this United Nations Industrial Development Organization (UNIDO) software has been developed to also support the economic appraisal of projects. COMFAR III Expert is intended as an aid in the analysis of investment projects. The main module of the program accepts financial and economic data, produces financial and economic statements and graphical displays, and calculates measures of performance. Supplementary modules assist in the analytical process. Cost-benefit and value-added methods of economic analysis developed by UNIDO are included in the program, and the methods of major international development institutions are accommodated. The program is applicable to the analysis of investment in new projects and the expansion or rehabilitation of existing enterprises, e.g., in the case of reprivatisation projects. For joint ventures, the financial perspective of each partner or class of shareholder can be developed. Analysis can be performed under a variety of assumptions concerning inflation, currency revaluation and price escalation.

Market research

Market research is one of the most important sections of the feasibility study, as it examines the marketability of the product or service and convinces readers that there is a potential market for it. If a significant market for the product or service cannot be established, then there is no project. Typically, market studies will assess the potential sales of the product, absorption and market capture rates, and the project's timing. The feasibility study outputs the feasibility study report, a report detailing the evaluation criteria, the study findings, and the recommendations.
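Discounted measures of performance of the kind such tools report include net present value and the benefit-cost ratio. The sketch below shows the standard textbook formulas only; it is not COMFAR's implementation, and the discount rate and cash flows are invented.

```python
# Standard discounted-cash-flow measures (textbook formulas, not COMFAR code).

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0 (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def benefit_cost_ratio(rate, benefits, costs):
    """Discounted benefits divided by discounted costs (both time-indexed)."""
    pv = lambda flows: sum(f / (1 + rate) ** t for t, f in enumerate(flows))
    return pv(benefits) / pv(costs)

# Hypothetical project: 100,000 out now, 40,000 back in each of 3 years.
flows = [-100_000, 40_000, 40_000, 40_000]
print(round(npv(0.10, flows), 2))  # -525.92: marginally unattractive at 10%
```

At a 10% discount rate this invented project has a slightly negative NPV, even though its undiscounted cash surplus is 20,000, which is exactly the kind of sensitivity to assumptions (inflation, discount rate, price escalation) that such analysis is meant to expose.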
See also

Project appraisal
Environmental impact
Mining feasibility study
Proof of concept
SWOT analysis

References

Further reading

Matson, James. "Cooperative Feasibility Study Guide", United States Department of Agriculture, Rural Business-Cooperative Service. October 2000.
https://pilotandfeasibilitystudies.qmul.ac.uk/

External links

Hoagland & Williamson 2000
United Nations Industrial Development Organization (UNIDO)
Matson
Allan Thompson 2003
Casuistry
Casuistry is a process of reasoning that seeks to resolve moral problems by extracting or extending abstract rules from a particular case, and reapplying those rules to new instances. This method occurs in applied ethics and jurisprudence. The term is also used pejoratively to criticise the use of clever but unsound reasoning, especially in relation to ethical questions (as in sophistry). It has been defined as follows: Study of cases of conscience and a method of solving conflicts of obligations by applying general principles of ethics, religion, and moral theology to particular and concrete cases of human conduct. This frequently demands an extensive knowledge of natural law and equity, civil law, ecclesiastical precepts, and an exceptional skill in interpreting these various norms of conduct.... It remains a common method in applied ethics. Etymology According to the Online Etymological Dictionary, the term and its agent noun "casuist", appearing from about 1600, derive from the Latin noun , meaning "case", especially as referring to a "case of conscience". The same source says, "Even in the earliest printed uses the sense was pejorative". History Casuistry dates from Aristotle (384–322 BC), yet the peak of casuistry was from 1550 to 1650, when the Society of Jesus (commonly known as the Jesuits) used case-based reasoning, particularly in administering the Sacrament of Penance (or "confession"). The term became pejorative following Blaise Pascal's attack on the misuse of the method in his Provincial Letters (1656–57). The French mathematician, religious philosopher and Jansenist sympathiser attacked priests who used casuistic reasoning in confession to pacify wealthy church donors. Pascal charged that "remorseful" aristocrats could confess a sin one day, re-commit it the next, then generously donate to the church and return to re-confess their sin, confident that they were being assigned a penance in name only. 
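Casuistry's pattern, reasoning from settled paradigm cases to a new instance, is also the move that case-based reasoning systems make in AI and computational law. The sketch below is only a loose computational analogy, not a claim about ethical method: the cases, features and rulings are entirely invented, and real casuistic judgment is not reducible to a distance metric.

```python
# Loose computational analogy to casuistic reasoning: resolve a new case
# by finding the most similar settled (paradigm) case and carrying over
# its ruling. All cases, features and rulings are invented illustrations.

PARADIGM_CASES = [
    ({"harm": 0.9, "consent": 0.0, "intent": 0.8}, "impermissible"),
    ({"harm": 0.1, "consent": 1.0, "intent": 0.1}, "permissible"),
    ({"harm": 0.5, "consent": 0.5, "intent": 0.2}, "requires review"),
]

def similarity(a, b):
    # Negative Manhattan distance over shared features: higher is closer.
    return -sum(abs(a[k] - b[k]) for k in a)

def resolve(new_case):
    """Carry over the ruling of the most similar paradigm case."""
    features, ruling = max(PARADIGM_CASES,
                           key=lambda c: similarity(new_case, c[0]))
    return ruling

print(resolve({"harm": 0.8, "consent": 0.1, "intent": 0.7}))  # impermissible
```

The analogy captures only the retrieval step; the "extending abstract rules" part of casuistry corresponds to revising the case base and its features when a new case fits no precedent well.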
These criticisms darkened casuistry's reputation in the following centuries. For example, the Oxford English Dictionary quotes a 1738 essay by Henry St. John, 1st Viscount Bolingbroke, to the effect that casuistry "destroys, by distinctions and exceptions, all morality, and effaces the essential difference between right and wrong, good and evil". The 20th century saw a revival of interest in casuistry. In their book The Abuse of Casuistry: A History of Moral Reasoning (1988), Albert Jonsen and Stephen Toulmin argue that it is not casuistry but its abuse that has been a problem; that, properly used, casuistry is powerful reasoning. Jonsen and Toulmin offer casuistry as a method for reconciling the contradictory principles of moral absolutism and moral relativism. In addition, the ethical philosophies of utilitarianism (especially preference utilitarianism) and pragmatism have been identified as employing casuistic reasoning.

Early modernity

The casuistic method was popular among Catholic thinkers in the early modern period. Casuistic authors include Antonio Escobar y Mendoza, whose Summula casuum conscientiae (1627) enjoyed great success, Thomas Sanchez, Vincenzo Filliucci (Jesuit and penitentiary at St Peter's), Antonino Diana, Paul Laymann (Theologia Moralis, 1625), John Azor (Institutiones Morales, 1600), Etienne Bauny, Louis Cellot, Valerius Reginaldus, and Hermann Busembaum (d. 1668). The progress of casuistry was interrupted toward the middle of the 17th century by the controversy which arose concerning the doctrine of probabilism, which effectively stated that one could choose to follow a "probable opinion" (that is, an opinion supported by a theologian or another) even if it contradicted a more probable opinion or a quotation from one of the Fathers of the Church. Certain kinds of casuistry were criticised by early Protestant theologians, because it was used to justify many of the abuses that they sought to reform.
It was famously attacked by the Catholic and Jansenist philosopher Blaise Pascal, during the formulary controversy against the Jesuits, in his Provincial Letters, as the use of rhetoric to justify moral laxity, which became identified by the public with Jesuitism; hence the everyday use of the term to mean complex and sophistic reasoning to justify moral laxity. By the mid-18th century, "casuistry" had become a synonym for attractive-sounding but ultimately false moral reasoning. In 1679 Pope Innocent XI publicly condemned sixty-five of the more radical propositions (stricti mentalis), taken chiefly from the writings of Escobar, Suárez and other casuists, as propositiones laxorum moralistarum, and forbade anyone to teach them under penalty of excommunication. Despite this condemnation by a pope, both Catholicism and Protestantism permit the use of ambiguous statements in specific circumstances.

Later modernity

G. E. Moore dealt with casuistry in chapter 1.4 of his Principia Ethica, in which he claimed that "the defects of casuistry are not defects of principle; no objection can be taken to its aim and object. It has failed only because it is far too difficult a subject to be treated adequately in our present state of knowledge". Furthermore, he asserted that "casuistry is the goal of ethical investigation. It cannot be safely attempted at the beginning of our studies, but only at the end". Since the 1960s, applied ethics has revived the ideas of casuistry in applying moral reasoning to particular cases in law, bioethics, and business ethics. Its facility for dealing with situations where rules or values conflict with each other has made it a useful approach in professional ethics, and casuistry's reputation has improved somewhat as a result. Pope Francis, a Jesuit, has criticized casuistry as "the practice of setting general laws on the basis of exceptional cases" in instances where a more holistic approach would be preferred.
See also

References

Further reading

Bliton, Mark J. (1993). The Ethics of Clinical Ethics Consultation: On the Way to Clinical Philosophy (Diss., Vanderbilt).
Carney, Bridget Mary (1993). Modern Casuistry: An Essential But Incomplete Method for Clinical Ethical Decision-Making (Diss., Graduate Theological Union).
Carson, Ronald A. (1988). "Paul Ramsey, Principled Protestant Casuist: A Retrospective." Medical Humanities Review, Vol. 2, pp. 24–35.
Chidwick, Paula Marjorie (1994). Approaches to Clinical Ethical Decision-Making: Ethical Theory, Casuistry and Consultation (Diss., U of Guelph).
Drane, J.F. (1990). "Methodologies for Clinical Ethics." Bulletin of the Pan American Health Organization, Vol. 24, pp. 394–404.
Dworkin, R.B. (1994). "Emerging Paradigms in Bioethics: Symposium." Indiana Law Journal, Vol. 69, pp. 945–1122.
Elliot, Carl (1992). "Solving the Doctor's Dilemma?" New Scientist, Vol. 133, pp. 42–43.
Emanuel, Ezekiel J. (1991). The Ends of Human Life: Medical Ethics in a Liberal Polity (Cambridge).
Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal (Johns Hopkins), ch. 4.
Gallagher, Lowell (1991). Medusa's Gaze: Casuistry and Conscience in the Renaissance (Stanford).
Green, Bryan S. (1988). Literary Methods and Sociological Theory: Case Studies of Simmel and Weber (Albany).
Houle, Martha Marie (1983). The Fictions of Casuistry and Pascal's Jesuit in "Les Provinciales" (Diss., U California, San Diego).
Jonsen, Albert R. (1986). "Casuistry," in J.F. Childress and J. Macquarrie, eds., Westminster Dictionary of Christian Ethics (Philadelphia).
Jonsen, Albert R. and Stephen Toulmin (1988). The Abuse of Casuistry: A History of Moral Reasoning (California).
Keenan, James F., S.J. and Thomas A. Shannon (1995). The Context of Casuistry (Washington).
Kirk, K. (1936). Conscience and Its Problems: An Introduction to Casuistry (London).
Kuczewski, Mark G. (1994). Fragmentation and Consensus in Contemporary Neo-Aristotelian Ethics: A Study in Communitarianism and Casuistry (Diss., Duquesne U).
Long, Edward LeRoy, Jr. (1954). Conscience and Compromise: An Approach to Protestant Casuistry (Philadelphia: Westminster Press).
Mackler, Aaron Leonard. Cases of Judgments in Ethical Reasoning: An Appraisal of Contemporary Casuistry and Holistic Model for the Mutual Support of Norms and Case Judgments (Diss., Georgetown U).
McCready, Amy R. (1992). "Milton's Casuistry: The Case of 'The Doctrine and Discipline of Divorce.'" Journal of Medieval and Renaissance Studies, Vol. 22, pp. 393–428.
Odozor, Paulinus Ikechukwu (1989). Richard A. McCormick and Casuistry: Moral Decision-Making in Conflict Situations (M.A. Thesis, St. Michael's College).
Pack, Rolland W. (1988). Case Studies and Moral Conclusions: The Philosophical Use of Case Studies in Biomedical Ethics (Diss., Georgetown U).
Pascal, Blaise (1967). The Provincial Letters (London).
Río Parra, Elena del (2008). Cartografías de la conciencia española en la Edad de Oro (Mexico).
Seiden, Melvin (1990). Measure for Measure: Casuistry and Artistry (Washington).
Smith, David H. (1991). "Stories, Values, and Patient Care Decisions," in Charles Conrad, ed., The Ethical Nexus: Values in Organizational Decision Making (New Jersey).
Starr, G. (1971). Defoe and Casuistry (Princeton).
Tallmon, James Michael (2001). "Casuistry," in The Encyclopedia of Rhetoric, ed. Thomas O. Sloane. New York: Oxford University Press, pp. 83–88.
Tallmon, James Michael (1993). Casuistry and the Quest for Rhetorical Reason: Conceptualizing a Method of Shared Moral Inquiry (Diss., U of Washington).
Taylor, Richard (1984). Good and Evil – A New Direction: A Forceful Attack on the Rationalist Tradition in Ethics (Buffalo).
Toulmin, Stephen (1988). "The Recovery of Practical Philosophy." The American Scholar, Vol. 57, pp. 337–352.
Weinstein, Bruce David (1989). The Possibility of Ethical Expertise (Diss., Georgetown U).
Wildes, Kevin Wm., S.J. (1993). The View from Somewhere: Moral Judgment in Bioethics (Diss., Rice U).
Zacker, David J. (1991). Reflection and Particulars: Does Casuistry Offer Us Stable Beliefs About Ethics? (M.A. Thesis, Western Michigan U).

External links

Dictionary of the History of Ideas: "Casuistry"
Accountancy as computational casuistics, article on how modern compliance regimes in accountancy and law apply casuistry
Mortimer Adler's Great Ideas – Casuistry
Summary of casuistry by Jeramy Townsley
Casuistry – Online Guide to Ethics and Moral Philosophy
Casuistry – Oxford Encyclopedia of Rhetoric, catalogued at she-philosopher.com
Epistemic modality
Epistemic modality is a sub-type of linguistic modality that encompasses knowledge, belief, or credence in a proposition. Epistemic modality is exemplified by the English modals may, might, and must. However, it occurs cross-linguistically, encoded in a wide variety of lexical items and grammatical structures. Epistemic modality has been studied from many perspectives within linguistics and philosophy, and it is one of the most studied phenomena in formal semantics.

Realisation in speech

Epistemic modality can be realised:

(a) grammatically: through modal verbs (e.g., English may, might, must; German Er soll ein guter Schachspieler sein "He is said to be a good chess player"), particular grammatical moods on verbs (the epistemic moods), or a specific grammatical element such as an affix (Tuyuca -hīyi "reasonable to assume") or particle; or
(b) non-grammatically (often lexically): through adverbials (e.g., English perhaps, possibly), or a certain intonational pattern.

Non-canonical environments and objective epistemic modality

In 1977, John Lyons started a long discussion regarding the environments in which epistemic modal operators can be embedded and the environments from which they are banned. He argues that epistemic modal operators compete for the same position as illocutionary operators, such as the assertion operator, the question operator or the imperative operator. According to him, this explains why most epistemic modals in English are not acceptable embedded under questions or negation. As Lyons finds single lexemes of epistemic modals in English that are used in questions and under negation, he assumes that they must be part of a separate class of epistemic modality, the so-called objective epistemic modality, in contrast to subjective epistemic modality, whose operators are considered to take the same position in the clause as illocutionary operators. Which modal lexemes convey an 'objective' epistemic interpretation is subject to much controversy.
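In formal semantics, epistemic must and may are standardly analyzed as universal and existential quantification, respectively, over the worlds compatible with what the speaker knows. A minimal sketch of this textbook analysis, with an invented toy model of worlds and propositions:

```python
# Toy possible-worlds model for epistemic "must" and "may".
# A world is modeled as the set of atomic propositions true there;
# accessible_worlds are those compatible with the speaker's knowledge.
# The worlds and propositions below are invented for illustration.

accessible_worlds = [
    {"rain", "wind"},
    {"rain"},
    {"rain", "cold"},
]

def must(p):
    """Epistemic necessity: p holds in every accessible world."""
    return all(p in w for w in accessible_worlds)

def may(p):
    """Epistemic possibility: p holds in some accessible world."""
    return any(p in w for w in accessible_worlds)

print(must("rain"), may("wind"), must("wind"))  # True True False
```

The model also exhibits the duality of the two modals: must(p) holds exactly when it is not the case that may holds of not-p. Lexical items like might and perhaps receive existential analyses of the same shape.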
So far, most of the authors who are in favour of a distinct class of objective epistemic modal verbs have not explicitly stated which verbs can be interpreted in an 'objective' epistemic way and which can only be interpreted in a 'subjective' epistemic way. It is often assumed that, for languages such as English, Hungarian, Dutch and German, epistemic adverbs only involve a subjective epistemic interpretation and can never be interpreted in an objective epistemic way. Since the publication of Lyons' work, a range of environments have been suggested from which (subjective) epistemic modals are assumed to be banned. Most of these non-canonical environments were motivated by data from English:

No infinitives
No past participles
No past tenses
Excluded from the scope of a counterfactual operator
Excluded from nominalisations
No verbless directional phrase complements
No VP-anaphora
No separation in wh-clefts
May not bear sentence accent
Excluded from the scope of a negation
Excluded from polar questions
Excluded from wh-questions
Excluded from imperatives
Excluded from optatives
Excluded from complement clauses
Excluded from event-related causal clauses
Excluded from the antecedent of an event-related conditional clause
Excluded from temporal clauses
Excluded from restrictive relative clauses
Excluded from the scope of a quantifier
No assent/dissent

However, looking at languages with a more productive inflectional morphology, such as German, there is solid corpus data showing that epistemic modal verbs do occur in many of these environments. The only environments in which epistemic modal verbs do not occur in German are as follows.
they do not occur with verbless directional phrase complements
they cannot be separated from their infinitive complements in wh-clefts
they do not undergo nominalisations
they are exempt from adverbial infinitives
they cannot be embedded under circumstantial modal verbs
they cannot be embedded under predicates of desire
they cannot be embedded under imperative operators
they cannot be embedded under optative operators
This corpus data further shows that there is no consistent class of objective epistemic modal verbs, neither in English nor in German. Each of the assumed objective epistemic modals is acceptable in a different range of the environments that are supposed to hold for the entire stipulated class of objective epistemic modality. The table below illustrates in which environments the most frequent epistemic modals in German, kann `can', muss `must', dürfte `be.probable' and mögen `may', are attested in corpora (yes) or yield ungrammatical judgements (no). The lower part makes reference to classifications by various authors as to which of these epistemic modal verbs come with an objective epistemic interpretation and which are restricted to subjective epistemic modality.

Link to evidentiality
Many linguists have considered possible links between epistemic modality and evidentiality, the grammatical marking of a speaker's evidence or information source. However, there is no consensus about what such a link consists of. Some work takes epistemic modality as a starting point and tries to explain evidentiality as a subtype. Others work in the other direction, attempting to reduce epistemic modality to evidentiality. Still others recognize epistemic modality and evidentiality as two fundamentally separate categories, and posit that particular lexical items may have both an epistemic and an evidential component to their meanings. Other linguists, however, feel that evidentiality is distinct from and not necessarily related to modality.
Some languages mark evidentiality separately from epistemic modality.

See also
Alethic modality
Epistemic logic
Epistemology
Free choice inference
Hedge (linguistics)
Dynamic semantics

References
Aikhenvald, Alexandra Y. (2004). Evidentiality. Oxford: Oxford University Press.
Aikhenvald, Alexandra Y., & Dixon, R. M. W. (Eds.). (2003). Studies in evidentiality. Typological studies in language (Vol. 54). Amsterdam: John Benjamins Publishing Company.
Blakemore, D. (1994). Evidence and modality. In R. E. Asher (Ed.), The Encyclopedia of Language and Linguistics (pp. 1183–1186). Oxford: Pergamon Press.
De Haan, F. (2006). Typological approaches to modality. In W. Frawley (Ed.), The Expression of Modality (pp. 27–69). Berlin: Mouton de Gruyter.
Diewald, Gabriele (1999). Die Modalverben im Deutschen: Grammatikalisierung und Polyfunktionalität. Reihe Germanistische Linguistik, No. 208. Tübingen: Niemeyer.
Hacquard, Valentine, & Wellwood, Alexis (2012). Embedding epistemic modals in English: A corpus-based study. Semantics & Pragmatics 5(4), 1–29. http://dx.doi.org/10.3765/sp.5.4
Kiefer, Ferenc (1984). Focus and modality. Groninger Arbeiten zur Germanistischen Linguistik 24, 55–81.
Kiefer, Ferenc (1986). Epistemic possibility and focus. In W. Abraham & S. de Meij (Eds.), Topic, Focus, and Configurationality. Amsterdam: Benjamins.
Kiefer, Ferenc (1994). Modality. In R. E. Asher (Ed.), The Encyclopedia of Language and Linguistics (pp. 2515–2520). Oxford: Pergamon Press.
Lyons, John (1977). Semantics, volume 2. Cambridge: Cambridge University Press.
Maché, Jakob (2013). On Black Magic: How Epistemic Modifiers Emerge. PhD thesis. Freie Universität Berlin.
Nuyts, J. (2001). Epistemic Modality, Language, and Conceptualization: A Cognitive-Pragmatic Perspective. Amsterdam: John Benjamins Publishing Company.
Nuyts, Jan (2001b). Subjectivity as an evidential dimension in epistemic modal expression. Journal of Pragmatics 33(3), 383–400.
Öhlschläger, Günther (1989).
Zur Syntax und Semantik der Modalverben. Linguistische Arbeiten 144. Tübingen: Niemeyer.
Palmer, F. R. (1979). Modality and the English Modals. London: Longman.
Palmer, F. R. (1986). Mood and Modality. Cambridge: Cambridge University Press.
Palmer, F. R. (2001). Mood and Modality (2nd ed.). Cambridge: Cambridge University Press.
Palmer, F. R. (1994). Mood and modality. In R. E. Asher (Ed.), The Encyclopedia of Language and Linguistics (pp. 2535–2540). Oxford: Pergamon Press.
Saeed, John I. (2003). Sentence semantics 1: Situations: Modality and evidentiality. In J. I. Saeed, Semantics (2nd ed., Sec. 5.3, pp. 135–143). Malden, MA: Blackwell Publishing.
Tancredi, Christopher (2007). A Multi-Model Modal Theory of I-Semantics. Part I: Modals. Ms., University of Tokyo.
Watts, Richard J. (1984). An analysis of epistemic possibility and probability. English Studies 65(2), 129–140.

External links
Modality and Evidentiality
SIL: mood and modality
SIL: epistemic modality
SIL: judgment modality (assumptive mood, declarative mood, deductive mood, dubitative mood, hypothetical mood, interrogative mood, speculative mood)
SIL: evidentiality
modality in a machine-translation interlingua
Analytical skill
Analytical skill is the ability to deconstruct information into smaller categories in order to draw conclusions. Analytical skill consists of categories that include logical reasoning, critical thinking, communication, research, data analysis and creativity. Analytical skill is taught in contemporary education with the intention of fostering the appropriate practices for future professions. The settings that adopt analytical skill include educational institutions, public institutions, community organisations and industry. Richards J. Heuer Jr. explained that

In the article by Freed, the need for programs within the educational system to help students develop these skills is demonstrated. Workers "will need more than elementary basic skills to maintain the standard of living of their parents. They will have to think for a living, analyse problems and solutions, and work cooperatively in teams".

Logical Reasoning
Logical reasoning is a process consisting of inferences, where premises and hypotheses are formulated to arrive at a probable conclusion. It is a broad term covering three sub-classifications: deductive reasoning, inductive reasoning and abductive reasoning.

Deductive Reasoning
Deductive reasoning is a basic form of valid reasoning: it commences with a general statement or hypothesis and then examines the possibilities to reach a specific, logical conclusion. This scientific method uses deduction to test hypotheses and theories by predicting whether possible observations are correct. A logical deductive reasoning sequence can be executed by establishing an assumption, followed by another assumption, and finally drawing an inference. For example: 'All men are mortal. Harold is a man. Therefore, Harold is mortal.' For the conclusion of a deductive argument to be true, the premises must be correct; only then is the conclusion both logical and true.
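The Harold syllogism above can be sketched as code, with each premise encoded as a small predicate; the function names are illustrative, not from the source:

```python
# Deductive reasoning sketch: a general premise plus a specific premise
# yield a conclusion that must be true whenever both premises are true.

def is_man(x):
    # Premise 2 (specific): Harold is a man.
    return x == "Harold"

def is_mortal(x):
    # Premise 1 (general): all men are mortal,
    # so being a man entails mortality.
    return is_man(x)

print(is_mortal("Harold"))  # True: the conclusion follows deductively
```

Note that the code only encodes the entailment; if the general premise were false, the same valid inference would still run, which is exactly the validity-versus-truth distinction the text goes on to make.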
It is possible for deductive conclusions to be inaccurate or entirely incorrect while the reasoning itself remains logically valid. For example, 'All bald men are grandfathers. Harold is bald. Therefore, Harold is a grandfather.' is a valid and logical argument, but its conclusion is not true because the original premise is false. Deductive reasoning is an analytical skill used in many professions such as management, as the management team delegates tasks for day-to-day business operations.

Inductive Reasoning
Inductive reasoning compiles information and data to establish a general assumption that is suitable to the situation. Inductive reasoning commences with an assumption based on faithful data and leads to a generalised conclusion. For example: 'All the swans I have seen are white. (Premise) Therefore all swans are white. (Conclusion)'. The conclusion is clearly incorrect; it is therefore a weak argument. To strengthen the conclusion, it is made more probable: 'All the swans I have seen are white. (Premise) Therefore most swans are probably white. (Conclusion)'. Inductive reasoning is an analytical skill common in many professions, such as the corporate environment, where statistics and data are constantly analysed.

The six types of inductive reasoning
Generalised: uses a premise about a sample set to draw a conclusion about a population.
Statistical: uses statistics based on a large, viable and quantifiable random sample set to strengthen conclusions and observations.
Bayesian: adapts statistical reasoning to account for additional or new data.
Analogical: reasons from shared properties between two groups to the conclusion that they are likely to share further properties.
Predictive: extrapolates a conclusion about the future based on a current or past sample.
Causal inference: reasoning formed around a causal link between the premise and the conclusion.

Abductive reasoning
Abductive reasoning commences with one or more hypotheses, possibly supported by incomplete evidence, and leads to the conclusion that best explains the problem. It is a form of reasoning in which the reasoner chooses the hypothesis that best fits the given data. For example, when a patient is ill, the doctor forms a hypothesis from the patient's symptoms, or other evidence, that they deem factual and appropriate. The doctor will then go through a list of possible illnesses and attempt to assign the appropriate one. Abductive reasoning is characterised by its incompleteness, in evidence, explanation or both. This form of reasoning can be creative, intuitive and revolutionary due to its instinctive design.

Critical Thinking
Critical thinking is a skill used to interpret and explain the data given. It is the ability to think cautiously and rationally to resolve problems. This thinking is achieved by supporting conclusions without biases, having reliable evidence and reasoning, and using appropriate data and information. Critical thinking is an imperative skill as it underpins contemporary living in areas such as education and professional careers, but it is not restricted to a specific area. Critical thinking is used to solve problems, calculate likelihoods, make decisions, and formulate inferences. It requires examining information, reflective thinking, using appropriate skills, and confidence in the quality of the information given to come to a conclusion or plan. Critical thinking includes being willing to change if better information becomes available. Critical thinkers do not accept assumptions without questioning their reliability through further research and analysis of the results found.
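The task of calculating likelihoods, mentioned above, connects back to the Bayesian type of inductive reasoning listed earlier: a belief is updated as new data arrives. A minimal sketch of a Bayesian update (the hypothesis, prior and likelihood numbers are illustrative assumptions):

```python
# Bayesian inductive reasoning sketch: update the probability of a
# hypothesis H after observing evidence E, using Bayes' rule.

def bayes_update(prior, likelihood, likelihood_if_false):
    """P(H | E) for a binary hypothesis H given evidence E."""
    numerator = prior * likelihood                      # P(H) * P(E | H)
    evidence = numerator + (1 - prior) * likelihood_if_false  # P(E)
    return numerator / evidence

# Hypothesis: "this coin is biased towards heads" (prior belief 0.1).
# Evidence: one flip lands heads. P(heads | biased) = 0.8, P(heads | fair) = 0.5.
posterior = bayes_update(prior=0.1, likelihood=0.8, likelihood_if_false=0.5)
print(round(posterior, 3))  # 0.151
```

One observation of heads raises the belief in the biased-coin hypothesis from 0.10 to about 0.15; feeding the posterior back in as the next prior compounds the update as more flips are observed.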
Developing Critical Thinking
Critical thinking can be developed through establishing personal beliefs and values. It is critical that individuals are able to query authoritative bodies: teachers, specialists, textbooks, books, newspapers, television and so on. Querying these authorities allows critical thinking ability to develop as the individual gains the freedom and autonomy to think about reality and contemporary society.

Developing Critical Thinking through Probability Models
Critical thinking can be developed through probability models, in which individuals adhere to a logical, conceptual understanding of mathematics and emphasise investigation, problem-solving, mathematical literacy and the use of mathematical discourse. The student actively constructs their knowledge and understanding, while the teacher functions as a mediator, actively testing the student through querying, challenging and assigning investigation tasks, ultimately allowing the student to think in deeper ways about various concepts, ideas and mathematical contexts.

Communication
Communication is a process by which individuals transfer information to one another. It is a complex system consisting of a listener interpreting the information, understanding it and then transferring it. Communication as an analytical skill includes communicating with confidence and clarity, and staying on the point one is trying to communicate. It consists of verbal and non-verbal communication. Communication is an imperative component of analytical skill as it allows the individual to develop relationships, contribute to group decisions and organisational communication, and influence media and culture.

Verbal Communication
Verbal communication is interaction through words in linguistic form. It consists of oral communication, written communication and sign language.
It is an effective form of communication as the individuals sending and receiving the information are physically present, allowing immediate responses. In this form of communication, the sender uses words, spoken or written, to express the message to the individuals receiving the information. Verbal communication is an essential analytical skill as it allows for the development of positive relationships among individuals. This positive relationship is attributed to the notion that verbal communication between individuals fosters a depth of understanding, empathy and versatility among them, providing each other with more attention. Verbal communication is a skill commonly used in professions such as the health sector, where healthcare workers are expected to possess strong interpersonal skills; verbal communication has been linked to patient satisfaction. An effective strategy to improve verbal communication ability is debating, as it fosters communication and critical thinking.

Non-verbal Communication
Non-verbal communication is commonly known as unspoken dialogue between individuals. It is a significant analytical skill as it allows individuals to distinguish true feelings, opinions and behaviours, since people are more likely to believe non-verbal cues than verbal expressions. Non-verbal communication is able to transcend communicational barriers such as race, ethnicity and sexual orientation. One widely cited estimate attributes 93% of the meaning behind messages to non-verbal cues and 7% to words. Non-verbal communication is a critical analytical skill as it allows individuals to delve deeper into the meaning of messages, analysing another person's perceptions, expressions and social beliefs. Individuals who excel in communicating and understanding non-verbal communication are able to analyse the interconnectedness of mutualism, social beliefs and expectations.
Communication Theories
A communication theory is an abstract understanding of how information is transferred between individuals. Many communication theories have been developed to capture the ongoing, dynamic nature of how people communicate. Early models of communication were simple, such as Aristotle's model, consisting of a speaker communicating a speech to an audience, leading to an effect. This is a basic form of communication that treats communication as a linear concept in which information is not relayed back. Modern theories include Schramm's model, in which multiple individuals each encode, interpret and decode messages, and messages are transferred back and forth between them. Schramm included a further factor in his model, experience: each individual's experience influences their ability to interpret a message. Communication theories are constantly being developed to fit particular organisations or individuals. It is important for an organisation to adopt a suitable communication theory to ensure that it is able to function as desired. For example, traditional corporate hierarchies are commonly known to adopt a linear communication model, i.e. Aristotle's model of communication.

Research
Research is the practice of using tools and techniques to deconstruct and solve problems. While researching, it is important to distinguish what information is relevant to the data and to avoid excess, irrelevant data. Research involves the collection and analysis of information and data with the intention of founding new knowledge and/or deciphering a new understanding of existing data. Research ability is an analytical skill as it allows individuals to comprehend social implications, and it is valuable because it fosters transferable, employment-related skills.
Research is primarily employed in academia and higher education; it is a profession pursued by many graduates, individuals intending to supervise or teach research students, and those in pursuit of a PhD.

Research in Academia
In higher education, new research provides the most desired quality of evidence; if this is not available, then existing forms of evidence should be used. It is accepted that research provides the greatest form of knowledge, in the form of quantitative or qualitative data. Research students are highly desired by various industries due to their dynamic mental capacity. They are commonly sought after for their analysis and problem-solving ability, interpersonal and leadership skills, project management and organisation, research and information management, and written and oral communication.

Data Analysis
Data analysis is a systematic method of cleaning, transforming and modelling data with statistical or logical techniques in order to describe and evaluate it. Using data analysis as an analytical skill means being able to examine large volumes of data and identify trends within them. It is critical to be able to look at the data and determine which information is important and should be kept and which is irrelevant and can be discarded. Data analysis includes finding different patterns within the information, which allows one to narrow the research and come to a better conclusion. It is a tool to discover and decipher useful information for business decision-making, and it is imperative in inferring information from data and arriving at a conclusion or decision from that data. Data analysis can draw on past or future data. It is an analytical skill commonly adopted in business, as it allows organisations to become more efficient, internally and externally, solve complex problems and innovate.

Text Analysis
Text analysis is the discovery and understanding of valuable information in unstructured or large data.
It is a method to transform raw data into business information by extracting and examining the data, deriving patterns and finally interpreting them, allowing for strategic business decisions.

Statistical Analysis
Statistical analysis involves the collection, analysis and presentation of data to decipher trends and patterns. It is common in research, industry and government to enhance the scientific basis of the decision that needs to be made. It consists of descriptive analysis and inferential analysis.

Descriptive Analysis
Descriptive analysis provides information about a sample set that reflects the population by summarising relevant aspects of the dataset, i.e. uncovering patterns. It displays measures of central tendency and measures of spread, such as mean, deviation, proportion and frequency.

Inferential Analysis
Inferential analysis analyses a sample of the complete data, for example to compare the difference between treatment groups. Multiple conclusions can be constructed by selecting different samples. Inferential analysis can provide evidence that, with a certain percentage of confidence, there is a relationship between two variables. It is accepted that the sample will differ from the population; thus, a degree of uncertainty is also accepted.

Diagnostic Analysis
Diagnostic analysis locates the origin of a problem by finding its cause in the insights produced by statistical analysis. This form of analysis is useful for identifying behavioural patterns in data.

Predictive Analysis
Predictive analysis is an advanced form of analytics that forecasts future activity, behaviour, trends and patterns from new and historical data. Its accuracy depends on how much faithful data is present and the degree of inference that can be drawn from it.

Prescriptive Analysis
Prescriptive analytics provides firms with optimal recommendations for solving complex decisions. It is used in many industries, such as aviation, where it is used to optimise schedule selection for airline crew.
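The descriptive and inferential analyses described above can be sketched with Python's standard library; the sample data and the normal-approximation confidence interval (z = 1.96) are illustrative assumptions:

```python
import math
import statistics

# Illustrative sample of eight measurements.
sample = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]

# Descriptive analysis: summarise central tendency and spread.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)
print(f"mean={mean:.2f}, stdev={stdev:.2f}")  # mean=5.00, stdev=0.20

# Inferential analysis (sketch): a 95% confidence interval for the
# population mean, using the normal approximation with z = 1.96.
half_width = 1.96 * stdev / math.sqrt(len(sample))
print(f"95% CI: [{mean - half_width:.2f}, {mean + half_width:.2f}]")
# 95% CI: [4.86, 5.14]
```

The descriptive step only summarises the sample; the inferential step makes a hedged claim about the population, which is where the "degree of uncertainty" mentioned above enters.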
Creativity
Creativity is important when it comes to solving problems. Creative thinking works best for problems that can have multiple solutions. It is also used when there seems to be no correct answer that applies to every situation, where the answer instead varies from situation to situation. It includes being able to put the pieces of a problem together, as well as to identify pieces that may be missing; brainstorming with all the pieces and deciding which are important and which can be discarded; and then analysing the pieces found to be of worth and importance to come to a logical conclusion on how best to solve the problem. There can be multiple viable answers to a given problem. Creative thinking is often referred to as right-brain thinking. Creativity is an analytical skill as it allows individuals to use innovative methods to solve problems; individuals who adopt this skill are able to perceive problems from varying perspectives. This analytical skill is highly transferable among professions.
Law of three stages
The law of three stages is an idea developed by Auguste Comte in his work The Course in Positive Philosophy. It states that society as a whole, and each particular science, develops through three mentally conceived stages: (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage.

The progression of the three stages of sociology
(1) The Theological stage refers to the appeal to personified deities. During the earlier stages, people believed that all the phenomena of nature were the creation of the divine or supernatural. Adults and children failed to discover the natural causes of various phenomena and hence attributed them to a supernatural or divine power. Comte broke this stage into three sub-stages:
1A. Fetishism – Fetishism was the primary stage of the theological stage of thinking. Throughout this stage, primitive people believe that inanimate objects have living spirits in them, a view also known as animism. People worship inanimate objects such as trees, stones, pieces of wood and volcanic eruptions. Through this practice, people believe that all things root from a supernatural source.
1B. Polytheism – At one point, fetishism began to bring about doubt in the minds of its believers. As a result, people turned towards polytheism: the explanation of things through the use of many gods. Primitive people believe that all natural forces are controlled by different gods; a few examples would be the god of water, god of rain, god of fire, god of air, god of earth, etc.
1C. Monotheism – Monotheism means believing in one God or God in one; attributing all to a single, supreme deity. Primitive people believe a single theistic entity is responsible for the existence of the universe.
(2) The Metaphysical stage is an extension of the theological stage. It refers to explanation by impersonal abstract concepts. People often try to characterize God as an abstract being. They believe that an abstract power or force guides and determines events in the world.
Metaphysical thinking discards belief in a concrete God. For example, in classical Hindu Indian society, the principle of the transmigration of the soul and the conception of rebirth were largely governed by metaphysical thinking.
(3) The Positive stage, also known as the scientific stage, refers to scientific explanation based on observation, experiment, and comparison. Positive explanations rely upon a distinct method, the scientific method, for their justification. Today people attempt to establish cause-and-effect relationships. Positivism is a purely intellectual way of looking at the world; it also emphasizes observation and classification of data and facts. This is the highest, most evolved behavior according to Comte.
Comte, however, was conscious of the fact that the three stages of thinking may or do coexist in the same society or the same mind and may not always be successive. Comte proposed a hierarchy of the sciences based on historical sequence, with areas of knowledge passing through these stages in order of complexity. The simplest and most remote areas of knowledge, the mechanical or physical, are the first to become scientific. These are followed by the more complex sciences, those considered closest to us. The sciences, then, according to Comte's "law", developed in this order: Mathematics; Astronomy; Physics; Chemistry; Biology; Sociology. A science of society is thus the "queen science" in Comte's hierarchy, as it would be the most fundamentally complex. Since Comte saw social science as an observation of human behavior and knowledge, his definition of sociology included observing humanity's development of science itself. Because of this, Comte presented this introspective field of study as the science above all others.
Sociology would both complete the body of positive sciences by discussing humanity as the last unstudied scientific field and would link the fields of science together in human history, showing the "intimate interrelation of scientific and social development". To Comte, the law of three stages made the development of sociology inevitable and necessary. Comte saw the formation of his law as an active use of sociology, but this formation was dependent on other sciences reaching the positive stage; Comte’s three-stage law would not have evidence for a positive stage without the observed progression of other sciences through these three stages. Thus, sociology and its first law of three stages would be developed after other sciences were developed out of the metaphysical stage, with the observation of these developed sciences becoming the scientific evidence used in a positive stage of sociology. This special dependence on other sciences contributed to Comte’s view of sociology being the most complex. It also explains sociology being the last science to be developed. Comte saw the results of his three-stage law and sociology as not only inevitable but good. In Comte’s eyes, the positive stage was not only the most evolved but also the stage best for mankind. Through the continuous development of positive sciences, Comte hoped that humans would perfect their knowledge of the world and make real progress to improve the welfare of humanity. He acclaimed the positive stage as the "highest accomplishment of the human mind" and as having "natural superiority" over the other, more primitive stages. Overall, Comte saw his law of three stages as the start of the scientific field of sociology as a positive science. He believed this development was the key to completing positive philosophy and would finally allow humans to study every observable aspect of the universe. 
For Comte, sociology's human-centered studies would relate the fields of science to each other as progressions in human history and make positive philosophy one coherent body of knowledge. Comte presented the positive stage as the final state of all sciences, which would allow human knowledge to be perfected, leading to human progress.

Critiques of the law
Historian William Whewell wrote "Mr. Comte's arrangement of the progress of science as successively metaphysical and positive, is contrary to history in fact, and contrary to sound philosophy in principle." The historian of science H. Floris Cohen has made a significant effort to draw the modern eye towards this first debate on the foundations of positivism. In contrast, in an entry dated early October 1838, Charles Darwin wrote in one of his then private notebooks that "M. Comte's idea of a theological state of science [is a] grand idea."

See also
Antipositivism
Religion of Humanity
Sociological positivism

External links
History Guide
Research
Research is "creative and systematic work undertaken to increase the stock of knowledge". It involves the collection, organization, and analysis of evidence to increase understanding of a topic, characterized by a particular attentiveness to controlling sources of bias and error. A research project may be an expansion of past work in the field. To test the validity of instruments, procedures, or experiments, research may replicate elements of prior projects or the project as a whole. The primary purposes of basic research (as opposed to applied research) are documentation, discovery, interpretation, and the research and development (R&D) of methods and systems for the advancement of human knowledge. Approaches to research depend on epistemologies, which vary considerably both within and between the humanities and sciences. There are several forms of research: scientific, humanities, artistic, economic, social, business, marketing, practitioner research, life, technological, etc. The scientific study of research practices is known as meta-research. A researcher is a person who conducts research, especially in order to discover new information or to reach a new understanding. To be a social researcher or social scientist, one should have extensive knowledge of the subjects within social science in which one specializes. Similarly, a natural science researcher should have knowledge of fields related to natural science (physics, chemistry, biology, astronomy, zoology and so on). Professional associations provide one pathway to mature in the research profession.

Etymology
The word research is derived from the Middle French "recherche", which means "to go about seeking", the term itself being derived from the Old French term "recerchier", a compound word from "re-" + "cerchier", or "sercher", meaning 'search'.
The earliest recorded use of the term was in 1577.

Definitions
Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition embraced by all who engage in it. Research, in its simplest terms, is searching for knowledge and searching for truth. In a formal sense, it is a systematic study of a problem attacked by a deliberately chosen strategy: it starts with choosing an approach and preparing a blueprint (design), proceeds by acting upon it in terms of designing research hypotheses, choosing methods and techniques, selecting or developing data collection tools, processing the data and interpreting it, and ends with presenting a solution or solutions to the problem. Another definition of research is given by John W. Creswell, who states that "research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question. The Merriam-Webster Online Dictionary defines research more generally, to also include studying already existing knowledge: "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws".

Forms of research
Original research
Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of original research is to produce new knowledge rather than to present existing knowledge in a new form (e.g., summarized or classified). Original research can take various forms, depending on the discipline it pertains to.
In experimental work, it typically involves direct or indirect observation of the researched subject(s), e.g., in the laboratory or in the field, documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced, or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality lies in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher. The degree of originality of the research is among the major criteria for articles to be published in academic journals, and is usually established by means of peer review. Graduate students are commonly required to perform original research as part of a dissertation.
Scientific research
Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world. It makes practical applications possible. Scientific research may be funded by public authorities, charitable organizations, and private organizations. Scientific research can be subdivided by discipline. Generally, research is understood to follow a certain structural process. Though the order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:
Observations and formation of the topic: Consists of the subject area of one's interest and following that subject area to conduct subject-related research. The subject area should not be randomly chosen, since it requires reading a vast amount of literature on the topic to determine the gap in the literature the researcher intends to narrow.
A keen interest in the chosen subject area is advisable, and the research will have to be justified by linking its importance to already existing knowledge about the topic.
Hypothesis: A testable prediction which designates the relationship between two or more variables.
Conceptual definition: Description of a concept by relating it to other concepts.
Operational definition: Details of how the variables will be defined and measured or assessed in the study.
Gathering of data: Consists of identifying a population and selecting samples, then gathering information from or about these samples using specific research instruments. The instruments used for data collection must be valid and reliable.
Analysis of data: Involves breaking down the individual pieces of data to draw conclusions about it.
Data interpretation: This can be represented through tables, figures, and pictures, and then described in words.
Test and revision of the hypothesis
Conclusion, and reiteration if necessary
A common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. A useful hypothesis allows prediction and, within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction.
In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states that there is no relationship or difference between the independent and dependent variables.
Research in the humanities
Research in the humanities involves different methods such as, for example, hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. Other studies aim merely to examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory.
Artistic research
Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debated body of thought which offers an alternative to purely scientific methods in the search for knowledge and truth. The controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art, as in the case of other disciplines. One of the characteristics of artistic research is that it must accept subjectivity, as opposed to the classical scientific methods.
As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis. Artistic research has been defined by the School of Dance and Circus (Dans och Cirkushögskolan, DOCH), Stockholm in the following manner – "Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context." Artistic research aims to enhance knowledge and understanding with presentation of the arts. A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception. For a survey of the central problematics of today's artistic research, see Giaco Schiesser. According to artist Hakan Topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities". Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research. The Society for Artistic Research (SAR) publishes the triannual Journal for Artistic Research (JAR), an international, online, open access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies, from all arts disciplines and it runs the Research Catalogue (RC), a searchable, documentary database of artistic research, to which anyone can contribute. Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art. 
In 2016, the European League of Institutes of the Arts launched The Florence Principles on the Doctorate in the Arts. The Florence Principles, relating to the Salzburg Principles and the Salzburg Recommendations of the European University Association, name seven points of attention to specify the Doctorate / PhD in the Arts compared to a scientific doctorate / PhD. The Florence Principles have been endorsed and are supported also by AEC, CILECT, CUMULUS and SAR.
Historical research
The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines that are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are part of most formal historical research:
Identification of origin date
Evidence of localization
Recognition of authorship
Analysis of data
Identification of integrity
Attribution of credibility
Documentary research
Steps in conducting research
Research is often conducted using the hourglass model structure of research. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the method of the project (like the neck of the hourglass), then expands the research in the form of discussion and results.
The major steps in conducting research are:
Identification of research problem
Literature review
Specifying the purpose of research
Determining specific research questions
Specification of a conceptual framework, sometimes including a set of hypotheses
Choice of a methodology (for data collection)
Data collection
Verifying data
Analyzing and interpreting the data
Reporting and evaluating research
Communicating the research findings and, possibly, recommendations
The steps generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature review identifies flaws or holes in previous research, which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research. However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving "up" to identification of a research problem that emerges in the findings and literature review.
The reverse approach is justified by the transactional nature of the research endeavor, where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted. Rudolph Rummel says, "... no researcher should accept any one or two tests as definitive. It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results." Plato in Meno talks about an inherent difficulty, if not a paradox, of doing research that can be paraphrased in the following way: "If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!"
Research methods
The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure):
Exploratory research, which helps to identify and define a problem or question.
Constructive research, which tests theories and proposes solutions to a problem or question.
Empirical research, which tests the feasibility of a solution using empirical evidence.
There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer:
Qualitative research
Qualitative research is much more subjective and non-quantitative; it uses different methods of collecting, analyzing, and interpreting data for the meanings, definitions, characteristics, symbols, and metaphors of things.
Qualitative research is further classified into the following types:
Ethnography: This research focuses mainly on the culture of a group of people, which includes shared attributes, language, practices, structure, values, norms, and material things, and evaluates human lifestyles. ("Ethno": people; "grapho": to write.) This discipline may include ethnic groups, ethnogenesis, composition, resettlement, and social welfare characteristics.
Phenomenology: A powerful strategy for demonstrating methodology in health professions education, and well suited for exploring challenging problems in that field.
In addition, PMP researcher Mandy Sha argued that a project management approach is necessary to control the scope, schedule, and cost related to qualitative research design, participant recruitment, data collection, reporting, as well as stakeholder engagement.
Quantitative research
This involves systematic empirical investigation of quantitative properties and phenomena and their relationships, by asking a narrow question and collecting numerical data to analyze using statistical methods. The quantitative research designs are experimental, correlational, and survey (or descriptive). Statistics derived from quantitative research can be used to establish the existence of associative or causal relationships between variables. Quantitative research is linked with the philosophical and theoretical stance of positivism. The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that can be summarized, compared, and generalized to larger populations if the data are collected using proper sampling and data collection strategies. Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest.
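The quantitative workflow just described – a narrow question, numerical data, and a statistical test of a hypothesis against a null hypothesis of no difference – can be sketched in a short, self-contained example. The data below are invented purely for illustration, and a permutation test is used only because it needs nothing beyond the Python standard library; a real study would choose a test appropriate to its design.

```python
import random
import statistics

# Invented example: do a treatment group's scores differ from a control
# group's? Null hypothesis: no difference between the two groups.
random.seed(42)
control = [random.gauss(50, 10) for _ in range(30)]
treatment = [random.gauss(53, 10) for _ in range(30)]

observed = statistics.mean(treatment) - statistics.mean(control)

# Two-sided permutation test: repeatedly shuffle the pooled scores and
# count how often a random split yields a difference at least as large
# as the one observed.
pooled = control + treatment
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:30]) - statistics.mean(pooled[30:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
# Note the careful language: the null hypothesis is rejected or not
# rejected -- the research hypothesis is never "proven".
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")
```

The 0.05 threshold and the group sizes here are conventional illustrative choices, not prescriptions; the point is only the logic of comparing observed data against what the null hypothesis would produce by chance.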
If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants. In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data. Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible. Mixed-method research, i.e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common. This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights. Big data has had a large impact on research methods: many researchers now put less effort into primary data collection, and methods for analyzing the huge amounts of readily available data have also been developed. Types of research method include:
1. Observatory research method
2. Correlation research method
Non-empirical research
Non-empirical (theoretical) research is an approach that involves the development of theory, as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source.
This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. Non-empirical research is not an absolute alternative to empirical research, because the two may be used together to strengthen a research approach. Neither one is less effective than the other, since each has its particular purpose in science. Typically, empirical research produces observations that need to be explained; theoretical research then tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See scientific method. A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts, where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects.
Research ethics
Problems in research
Meta-research
Meta-research is the study of research through the use of research methods. Also known as "research on research", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the findings of meta-research are low rates of reproducibility across a large number of fields. This widespread difficulty in reproducing research has been termed the "replication crisis."
Methods of research
In many disciplines, Western methods of conducting research are predominant. Researchers are overwhelmingly taught Western methods of data collection and study.
The increasing participation of indigenous peoples as researchers has brought increased attention to the scientific lacuna in culturally sensitive methods of data collection. Western methods of data collection may not be the most accurate or relevant for research on non-Western societies. For example, "Hua Oranga" was created as a criterion for psychological evaluation in Māori populations, and is based on dimensions of mental health important to the Māori people – "taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)".
Bias
Research is often biased in the languages that are preferred (linguicism) and the geographic locations where research occurs. Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals. Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference. For comparative politics, Western countries are over-represented in single-country studies, with heavy emphasis on Western Europe, Canada, Australia, and New Zealand. Since 2000, Latin American countries have become more popular in single-country studies. In contrast, countries in Oceania and the Caribbean are the focus of very few studies. Patterns of geographic bias also show a relationship with linguicism: countries whose official languages are French or Arabic are far less likely to be the focus of single-country studies than countries with different official languages. Within Africa, English-speaking countries are more represented than other countries.
Generalizability
Generalization is the process of more broadly applying the valid results of one study.
Studies with a narrow scope can result in a lack of generalizability, meaning that the results may not be applicable to other populations or regions. In comparative politics, this can result from using a single-country study, rather than a study design that uses data from multiple countries. Despite the issue of generalizability, single-country studies have risen in prevalence since the late 2000s.
Publication peer review
Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Usually, the peer review process involves experts in the same field who are consulted by editors to give a review of the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer reviews being done for free has, however, brought many pitfalls, which also indicate why most peer reviewers decline many invitations to review. It has been observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe: limitations on the availability of resources, including high-quality paper and sophisticated image-rendering software and printing tools, render these publications less able to satisfy standards currently carrying formal or informal authority in the publishing industry.
These limitations in turn result in the under-representation of scholars from periphery nations among the set of publications holding prestige status relative to the quantity and quality of those scholars' research efforts, and this under-representation in turn results in disproportionately reduced acceptance of the results of their efforts as contributions to the body of knowledge available worldwide.
Influence of the open-access movement
The open access movement assumes that all information generally deemed useful should be free and belongs to a "public domain", that of "humanity". This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships. There is alleged to be a double standard in the Western knowledge system. On the one hand, "digital rights management" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while when similar functions are used by cultural groups (i.e. indigenous communities) this is denounced as "access control" and condemned as censorship.
Future perspectives
Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for "the need [for] a plural university world". Marginson argues that the East Asian Confucian model could take over the Western model. This could be due to changes in funding for research both in the East and the West. Focused on emphasizing educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged the increase of funding for research expansion.
In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some say may lead to the future decline of Western dominance in research.
Neo-colonial approaches
Professionalisation
In several national and private academic systems, the professionalisation of research has resulted in formal job titles.
In Russia
In present-day Russia, and some other countries of the former Soviet Union, the term researcher (nauchny sotrudnik) has been used both as a generic term for a person who has been carrying out scientific research, and as a job position within the frameworks of the Academy of Sciences, universities, and other research-oriented establishments. The following ranks are known:
Junior Researcher (Junior Research Associate)
Researcher (Research Associate)
Senior Researcher (Senior Research Associate)
Leading Researcher (Leading Research Associate)
Chief Researcher (Chief Research Associate)
Publishing
Academic publishing is a system that is necessary for academic scholars to peer review the work and make it available for a wider audience. The system varies widely by field and is also always changing, if often slowly. Most academic work is published in journal article or book form. There is also a large body of research that exists in either a thesis or dissertation form. These forms of research can be found in databases explicitly for theses and dissertations. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine. Most established academic fields have their own scientific journals and other outlets for publication, though many academic journals are somewhat interdisciplinary, and publish work from several distinct fields or subfields.
The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields, from the print to the electronic format. A study suggests that researchers should not give great consideration to findings that are not replicated frequently. It has also been suggested that all published studies should be subjected to some measure for assessing the validity or reliability of their procedures, to prevent the publication of unproven findings. Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access. There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web.
Research statistics and funding
Most funding for scientific research comes from three major sources: corporate research and development departments; private foundations; and government research councils such as the National Institutes of Health in the US and the Medical Research Council in the UK. These are managed primarily through universities and in some cases through military contractors. Many senior researchers (such as group leaders) spend a significant amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research but also as a source of merit. The Social Psychology Network provides a comprehensive list of U.S. Government and private foundation funding sources. The total number of researchers (full-time equivalents) per million inhabitants for individual countries is shown in the following table. Research expenditure by type of research as a share of GDP for individual countries is shown in the following table.
See also
Advertising research
European Charter for Researchers
Funding bias
Internet research
Laboratory
List of countries by research and development spending
List of words ending in ology
Logology (science)
Market research
Marketing research
Open research
Operations research
Participatory action research
Psychological research methods
Research integrity
Research-intensive cluster
Research organization
Research proposal
Research university
Scholarly research
Secondary research
Social research
Society for Artistic Research
Timeline of the history of the scientific method
Undergraduate research
References
Further reading
Soeters, Joseph; Shields, Patricia; and Rietjens, Sebastiaan (2014). Handbook of Research Methods in Military Studies. New York: Routledge.
Talja, Sanna and Mckenzie, Pamela J. (2007). "Editor's Introduction: Special Issue on Discursive Approaches to Information Seeking in Context". The University of Chicago Press.
External links
Methodology
Scientific method
Mīmāṃsā
Mīmāṁsā (Sanskrit: मीमांसा; IAST: Mīmāṃsā) is a Sanskrit word that means "reflection" or "critical investigation" and thus refers to a tradition of contemplation which reflected on the meanings of certain Vedic texts. This tradition is also known as Pūrva-Mīmāṁsā because of its focus on the earlier (pūrva) Vedic texts dealing with ritual actions, and similarly as Karma-Mīmāṁsā due to its focus on ritual action (karma). It is one of six Vedic "affirming" (āstika) schools of Hindu philosophy. This particular school is known for its philosophical theories on the nature of Dharma, based on hermeneutics of the Vedas, especially the Brāhmaṇas and Samhitas. The Mīmāṃsā school was foundational and influential for the Vedāntic schools, which were also known as Uttara-Mīmāṁsā for their focus on the "later" (uttara) portions of the Vedas, the Upanishads. While both "earlier" and "later" Mīmāṃsā investigate the aim of human action, they do so with different attitudes towards the necessity of ritual praxis. Mīmāṁsā has several sub-schools, each defined by its pramana. The Prabhākara sub-school, which takes its name from the seventh-century philosopher Prabhākara, described the five epistemically reliable means of gaining knowledge: pratyakṣa or perception; anumāna or inference; upamāṇa, comparison and analogy; arthāpatti, the use of postulation and derivation from circumstances; and shabda, the word or testimony of past or present reliable experts. The Bhāṭṭa sub-school, from philosopher Kumārila Bhaṭṭa, added a sixth means to its canon: anupalabdhi, meaning non-perception, or proof by the absence of cognition (e.g., the lack of gunpowder on a suspect's hand). The school of Mīmāṃsā consists of both non-theistic and theistic doctrines, but the school showed little interest in systematic examination of the existence of Gods. Rather, it held that the soul is an eternal, omnipresent, inherently active spiritual essence, and focused on the epistemology and metaphysics of Dharma.
For the Mīmāṃsā school, Dharma meant rituals and social duties, not Devas, or Gods, because Gods existed only in name. The Mīmāṃsakas also held that the Vedas are "eternal, author-less, [and] infallible", that Vedic vidhi, or injunctions and mantras in rituals, are prescriptive kārya or actions, and that the rituals are of primary importance and merit. They considered the Upaniṣads and other texts related to self-knowledge and spirituality as subsidiary, a philosophical view that Vedānta disagreed with. While their deep analysis of language and linguistics influenced other schools of Hinduism, their views were not shared by others. Mīmāṃsakas considered the purpose and power of language to be to clearly prescribe the proper, correct, and right. In contrast, Vedāntins extended the scope and value of language as a tool to also describe, develop, and derive. Mīmāṁsakas considered an orderly, law-driven, procedural life as the central purpose and noblest necessity of Dharma and society, and divine (theistic) sustenance as a means to that end. The Mīmāṁsā school is a form of philosophical realism. A key text of the Mīmāṁsā school is the Mīmāṁsā Sūtra of Jaimini.
Terminology
Mīmāṃsā (IAST), also romanized Mimansa or Mimamsa, means "reflection, consideration, profound thought, investigation, examination, discussion" in Sanskrit. It also refers to the "examination of the Vedic text" and to a school of Hindu philosophy that is also known as Pūrva-Mīmāṃsā ("prior" inquiry), in contrast to Uttara-Mīmāṃsā ("posterior" inquiry) – the opposing school of Vedanta. This division is based on a classification of the Vedic texts into the early sections of the Veda treating of mantras and rituals (Samhitas and Brahmanas), and the later sections dealing with meditation, reflection, and knowledge of Self, Oneness, Brahman (the Upaniṣads). Between the Samhitas and Brahmanas, the Mīmāṃsā school places greater emphasis on the Brahmanas – the part of the Vedas that is a commentary on Vedic rituals.
The word comes from the desiderative stem of √man ("to think"; Macdonell, A Sanskrit-English Dictionary, 1883), from Proto-Indo-European *men- ("to think"). Donald Davis translates Mīmāṃsā as the "desire to think", and in colloquial historical context as "how to think and interpret things". In the last centuries of the first millennium BCE, the word Mīmāṃsā began to denote the thoughts on and interpretation of the Vedas: first Pūrva-Mīmāṃsā, for the ritual portions in the earlier layers of texts in the Vedas, and Uttara-Mīmāṃsā, for the philosophical portions in the last layers. Over time, Pūrva-Mīmāṃsā came to be known simply as the Mīmāṃsā school, and Uttara-Mīmāṃsā as the Vedanta school. Mīmāṃsā scholars are referred to as Mīmāṃsakas.

Darśana (philosophy) – central concerns

Mīmāṁsā is one of the six classical Hindu darśanas and among the earliest schools of Hindu philosophy. It has attracted relatively little scholarly study, although its theories – particularly its questions on exegesis and theology – have been highly influential on all classical Indian philosophies. Its analysis of language has been of central importance to the legal literature of India.

Ancient Mīmāṁsā's central concern was epistemology (pramana): what are the reliable means to knowledge? It debated not only "how does man ever learn or know whatever he knows", but also whether the nature of all knowledge is inherently circular; whether those, such as foundationalists, who critique the validity of any "justified beliefs" and knowledge system make flawed presumptions about the very premises they critique; and how to correctly interpret, and avoid incorrectly interpreting, dharma texts such as the Vedas.
It asked questions such as "what is devata (god)?", "are rituals dedicated to devatas efficacious?", "what makes anything efficacious?", and "can it be proved that the Vedas, or any canonical text in any system of thought, are fallible or infallible (svatah pramanya, intrinsically valid)? If so, how?" To Mīmāṁsā scholars, the nature of non-empirical knowledge and the human means to it are such that one can never demonstrate certainty; one can, in some cases, only falsify knowledge claims. According to Francis Clooney, the Mīmāṁsā school is "one of the most distinctively Hindu forms of thinking; it is without real parallel elsewhere in the world".

The central text of the Mīmāṁsā school is Jaimini's Mīmāṁsā Sutras, accompanied by the historically influential commentary of Sabara and by Kumarila Bhatta's commentary (Ślokavārttika) on Sabara's commentary. Together, these texts develop and apply rules of language analysis (such as the rules of contradiction), asserting that one must not only examine injunctive propositions in any scripture but also examine the alternate, related or reverse propositions for better understanding. They suggested that to reach correct and valid knowledge it is not sufficient merely to demand proof of a proposition; it is important to give proof of a proposition's negative, as well as to declare and prove one's preferred propositions. Further, they asserted that whenever perception is not the means of direct proof and knowledge, one cannot prove such non-empirical propositions to be "true or not true"; rather, one can only prove a non-empirical proposition to be "false, not false, or uncertain". For example, Mīmāṁsakas not only welcome the demand for proof of an injunctive proposition such as "agnihotra ritual leads one to heaven", but suggest that one must examine and prove alternate propositions such as "ritual does not lead one to heaven", "something else leads one to heaven", "there is heaven", "there is no heaven" and so on.
Mīmāṁsā literature states that if satisfactory, verifiable proof for all such propositions cannot be found by their proponents and opponents, then the proposition needs to be accepted as part of a "belief system". Beliefs, such as those in the scriptures (Vedas), must be accepted as true unless their opponents can demonstrate the validity of their own texts or teachers – which the opponents presume to be prima facie justified – and can demonstrate that the scriptures they challenge are false. If the opponents do not try to do so, it is hypocrisy; if they do try, it can only lead to infinite regress, according to the Mīmāṁsakas. Any historic scripture with widespread social acceptance, according to the Mīmāṁsakas, is an activity of communication (vyavaharapravrtti) and is accepted as authoritative because it is a socially validated practice, unless perceptually verifiable evidence emerges that proves parts or all of it false or harmful.

Mīmāṁsakas were predominantly concerned with the central motivation of human beings, the highest good, and the actions that make it possible. They stated that human beings seek niratisaya priti (unending ecstatic pleasure, joy, happiness) in this life and the next. They argued that this highest good is the result of one's own ethical actions (dharma), that such actions are what the Vedic sentences contain and communicate, and that it is therefore important to properly interpret and understand Vedic sentences, words and meaning. Mīmāṁsā scholarship was centrally concerned with the philosophy of language – how human beings learn and communicate with each other and across generations – in order to act in a manner that enables them to achieve that which motivates them.
The Mīmāṁsā school focussed on dharma, deriving ethics and activity from the karma-kanda (rituals) part of the Vedas, with the argument that ethics for this life and efficacious action for svarga (heaven) cannot be derived from sense-perception, but only from experience, reflection and understanding of past teachings.

"In every human activity, the motivating force to perform an action is his innate longing for priti (pleasure, happiness), whether at the lowest level or the highest level. At the highest level, it is nothing but an unsurpassed state of priti, which is ensured only by performing ethical actions." – Sabara, 2nd-century Mīmānsā scholar

According to Daniel Arnold, Mīmāṁsā scholarship has "striking affinities" with that of William Alston, the 20th-century Western philosopher, along with some notable differences. The Mīmāṁsakas, states Francis Clooney, subjected notions such as "God", the "sacred text", the "author" and the "anthropocentric ordering of reality" to a radical critique more than two thousand years ago.

Epistemology

In the realm of epistemological studies, later Mīmāṃsā scholars made significant contributions. Unlike the Nyaya or the Vaisheshika systems, the Prabhākara sub-school of Mīmāṃsā recognizes five means of valid knowledge (Skt. pramāṇa). In addition to these, the Bhāṭṭa sub-school acknowledges a sixth means, namely anupalabdhi, as does the Advaita Vedanta school of Hinduism. The following are the six epistemically reliable means of gaining knowledge:

Pratyaksa

Pratyakṣa (प्रत्यक्ष) means perception. It is of two types in Mīmānsā and other schools of Hinduism: external and internal. External perception is described as that arising from the interaction of the five senses with worldly objects, while internal perception is described by this school as that of the inner sense, the mind.
The ancient and medieval Indian texts identify four requirements for correct perception: Indriyarthasannikarsa (direct experience by one's sensory organ(s) of the object, whatever is being studied), Avyapadesya (non-verbal; correct perception is not through hearsay, according to ancient Indian scholars, where one's sensory organ relies on accepting or rejecting someone else's perception), Avyabhicara (does not wander; correct perception does not change, nor is it the result of deception because one's sensory organ or means of observation is drifting, defective or suspect) and Vyavasayatmaka (definite; correct perception excludes judgments of doubt, whether from one's failure to observe all the details, or from mixing inference with observation – observing what one wants to observe, or not observing what one does not want to observe).

Some ancient scholars proposed "unusual perception" as a pramana and called it internal perception, a proposal contested by other Indian scholars. The internal perception concepts included pratibha (intuition), samanyalaksanapratyaksa (a form of induction from perceived specifics to a universal), and jnanalaksanapratyaksa (a form of perception of the prior processes and previous states of a 'topic of study' by observing its current state). Further, some schools of Hinduism considered and refined rules for accepting uncertain knowledge from Pratyakṣa-pramana, so as to contrast nirnaya (definite judgment, conclusion) with anadhyavasaya (indefinite judgment).

Anumana

Anumāṇa (अनुमान) means inference. It is described as reaching a new conclusion and truth from one or more observations and previous truths by applying reason. Observing smoke and inferring fire is an example of Anumana. In all but one of the Hindu philosophies, this is a valid and useful means to knowledge. The method of inference is explained by Indian texts as consisting of three parts: pratijna (hypothesis), hetu (reason), and drshtanta (examples).
The hypothesis must further be broken down into two parts, state the ancient Indian scholars: sadhya (the idea which needs to be proven or disproven) and paksha (the object on which the sadhya is predicated). The inference is conditionally true if sapaksha (positive examples as evidence) are present and vipaksha (negative examples as counter-evidence) are absent. For rigor, the Indian philosophies also state further epistemic steps. For example, they demand Vyapti – the requirement that the hetu (reason) must necessarily and separately account for the inference in "all" cases, in both sapaksha and vipaksha. A conditionally proven hypothesis is called a nigamana (conclusion).

Upamana

Upamāṇa means comparison and analogy. Some Hindu schools consider it a proper means of knowledge. Upamana, states Lochtefeld, may be explained with the example of a traveller who has never visited lands or islands with endemic wildlife. He or she is told, by someone who has been there, that in those lands one sees an animal that sort of looks like a cow, grazes like a cow, but is different from a cow in such and such a way. Such use of analogy and comparison is, state the Indian epistemologists, a valid means of conditional knowledge, as it helps the traveller identify the new animal later. The subject of comparison is formally called upameyam, the object of comparison upamanam, and the attribute(s) samanya. Thus, explains Monier Monier-Williams, if a boy says "her face is like the moon in charmingness", "her face" is upameyam, the moon is upamanam, and charmingness is samanya. The 7th-century text Bhaṭṭikāvya, in verses 10.28 through 10.63, discusses many types of comparisons and analogies, identifying when this epistemic method is more useful and reliable, and when it is not. In various ancient and medieval texts of Hinduism, 32 types of Upamana and their value in epistemology are debated.
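The structure of anumāna described above – a sadhya predicated of a paksha, supported by a hetu, with drshtanta divided into sapaksha and vipaksha – can be summarized schematically. The following Python sketch is purely an interpretive aid, not part of the Mīmāṃsā literature; the class and function names are hypothetical, and it models only the simple condition that positive examples are present and counter-examples absent.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the anumana (inference) schema: the pratijna
# (hypothesis) is the sadhya predicated of the paksha, supported by a
# hetu (reason) and drshtanta (examples) split into sapaksha and vipaksha.
@dataclass
class Anumana:
    paksha: str                                   # object on which the sadhya is predicated
    sadhya: str                                   # the idea to be proven or disproven
    hetu: str                                     # the reason offered
    sapaksha: list = field(default_factory=list)  # positive examples as evidence
    vipaksha: list = field(default_factory=list)  # negative examples as counter-evidence

    def conditionally_true(self) -> bool:
        # Conditionally true if positive evidence is present
        # and counter-evidence is absent.
        return len(self.sapaksha) > 0 and len(self.vipaksha) == 0

# The classic illustration: inferring fire on a hill from smoke.
inference = Anumana(
    paksha="the hill",
    sadhya="has fire",
    hetu="because it has smoke",
    sapaksha=["kitchen hearth: smoke was accompanied by fire"],
    vipaksha=[],  # no observed case of smoke without fire
)
print(inference.conditionally_true())  # prints: True
```

If a genuine vipaksha (a case of smoke without fire) were observed, `conditionally_true` would return False; the stricter Vyapti requirement, that the hetu account for the inference in all cases, is not modeled here.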
Arthāpatti

Arthāpatti (अर्थापत्ति) means postulation, derivation from circumstances. In contemporary logic, this pramāṇa is similar to circumstantial implication. For example, if a person left in a boat on a river earlier, and the time is now past the expected time of arrival, then the circumstances support the truth postulate that the person has arrived. Many Indian scholars considered this pramāṇa invalid, or at best weak, because the boat may have been delayed or diverted. However, in cases such as deriving the time of a future sunrise or sunset, the proponents asserted this method to be reliable. Another common example of arthāpatti found in the texts of Mīmāṃsā and other schools of Hinduism is: if "Devadatta is fat" and "Devadatta does not eat in the day", then the following must be true: "Devadatta eats in the night". This form of postulation and derivation from circumstances is, claim the Indian scholars, a means to discovery, proper insight and knowledge. The Hindu schools that accept this means of knowledge state that it is a valid means to conditional knowledge and truths about a subject and object, whether in the original premises or in different premises. The schools that do not accept this method state that postulation, extrapolation and circumstantial implication are either derivable from other pramāṇas or flawed means to correct knowledge; instead, one must rely on direct perception or proper inference.

Anupalabdhi

Anupalabdhi (अनुपलब्धि), accepted only by the Kumarila Bhatta sub-school of Mīmāṃsā, means non-perception, negative/cognitive proof. Anupalabdhi pramana suggests that knowing a negative, such as "there is no jug in this room", is a form of valid knowledge. If something can be observed, inferred or proven as non-existent or impossible, then one knows more than one did without such a means.
In the two schools of Hinduism that consider Anupalabdhi epistemically valuable, a valid conclusion is either a sadrupa (positive) or asadrupa (negative) relation – both correct and valuable. As with the other pramanas, Indian scholars refined Anupalabdhi into four types: non-perception of the cause, non-perception of the effect, non-perception of the object, and non-perception of contradiction. Only two schools of Hinduism accepted and developed the concept of "non-perception" as a pramana. The schools that endorsed Anupalabdhi affirmed it as valid and useful when the other five pramanas fail in one's pursuit of knowledge and truth.

Abhava (अभाव) means non-existence. Some scholars consider Anupalabdhi to be the same as Abhava, while others consider them different. Abhava-pramana has been discussed in ancient Hindu texts in the context of Padārtha (पदार्थ, referent of a term). A Padartha is defined as that which is simultaneously Astitva (existent), Jneyatva (knowable) and Abhidheyatva (nameable). Specific examples of padartha, states Bartley, include dravya (substance), guna (quality), karma (activity/motion), samanya/jati (universal/class property), samavaya (inherence) and vishesha (individuality). Abhava is then explained as "referents of negative expression" in contrast to "referents of positive expression" in Padartha. An absence, state the ancient scholars, is also "existent, knowable and nameable"; they give the examples of negative numbers, silence as a form of testimony, the asatkaryavada theory of causation, and the analysis of deficit as real and valuable. Abhava was further refined into four types by the schools of Hinduism that accepted it as a useful method of epistemology: dhvamsa (termination of what existed), atyanta-abhava (impossibility, absolute non-existence, contradiction), anyonya-abhava (mutual negation, reciprocal absence) and pragavasa (prior, antecedent non-existence).
Shabda

Shabda (शब्द) means relying on the word – the testimony of past or present reliable experts. Hiriyanna explains Sabda-pramana as a concept which means reliable expert testimony. The schools of Hinduism which consider it epistemically valid suggest that a human being needs to know numerous facts and, with the limited time and energy available, can learn only a fraction of those facts and truths directly. He must rely on others – his parents, family, friends, teachers, ancestors and kindred members of society – to rapidly acquire and share knowledge and thereby enrich each other's lives. This means of gaining proper knowledge, whether spoken or written, is conveyed through Sabda (words). The reliability of the source is important, and legitimate knowledge can only come from the Sabda of reliable sources. The disagreement between the schools of Hinduism has been on how to establish reliability. Some schools, such as Charvaka, state that this is never possible, and therefore Sabda is not a proper pramana. Other schools debate the means to establish reliability.

Svatah Pramanya

The doctrine of svatah pramanya in Mīmāṃsā emphasizes accepting appearances as they are. It holds that since a cognition initially appears true, it should be accepted as true unless there is concrete evidence to the contrary. If no such evidence ever appears, the cognition is considered genuinely true.

Relation to Vedanta school

A distinctive feature of the Mīmāṃsā school of philosophy is its epistemological theory of the intrinsic validity of all cognition as such. It is held that all knowledge is ipso facto true (Skt. svataḥ prāmāṇyavāda). Thus, what is to be proven is not the truth of a cognition, but its falsity. The Mīmāṃsakas advocate the self-validity of knowledge both in respect of its origin (utpatti) and its ascertainment (jñapti).
Not only did the Mīmāṃsakas make great use of this theory to establish the unchallengeable validity of the Vedas; later Vedantists also drew freely upon this particular Mīmāṃsā contribution.

Metaphysics and beliefs

The core tenets of Mīmāṃsā are ritualism (orthopraxy) and anti-asceticism. The central aim of the school is the elucidation of the nature of dharma, understood as a set of ritual obligations and prerogatives to be performed properly.

Apaurusheya

The term Apaurusheya, central to the Mīmāṃsā school, asserts that the Vedas are not of human origin. Instead, they are considered uncreated, without any specific author, and self-validating in their authority. Jaimini explains in his fifth Mīmāṃsā Sutra that the relationship between words and their meanings in the Vedas is primordial, meaning it has existed since the beginning of time.

Non-theism

Mīmāṃsā theorists decided that the evidence allegedly proving the existence of God was insufficient. They argued that there was no need to postulate a maker for the world, just as there was no need for an author to compose the Vedas or a God to validate the rituals. Mīmāṃsā argues that the gods named in the Vedas have no existence apart from the mantras that speak their names. In that regard, the power of the mantras is what is seen as the power of the gods.

Dharma

Dharma as understood by Pūrva Mīmāṃsā can be loosely translated into English as "virtue", "morality" or "duty". The Pūrva Mīmāṃsā school traces the source of the knowledge of dharma neither to sense-experience nor to inference, but to verbal cognition (i.e. knowledge of words and meanings) according to the Vedas. In this respect it is related to the Nyāya school; the latter, however, accepts only four sources of knowledge (pramāṇa) as valid. The Pūrva Mīmāṃsā school held dharma to be equivalent to following the prescriptions of the Saṃhitās and their Brāhmaṇa commentaries relating to the correct performance of Vedic rituals.
Seen in this light, Pūrva Mīmāṃsā is essentially ritualist (orthopractic), placing great weight on the performance of karma, or action, as enjoined by the Vedas.

Relation to Vedānta

The emphasis on yajñic Karmakāṇḍa in Pūrva Mīmāṃsā is erroneously interpreted by some as opposition to the Jñānakāṇḍa of Vedānta and the Upaniṣads. Pūrva Mīmāṃsā does not discuss topics related to Jñānakāṇḍa, such as salvation (mokṣa), but it never speaks against mokṣa. Vedānta quotes Jaimini's belief in Brahman as well as in mokṣa:

In Uttara-Mīmāṃsā or Vedānta (4.4.5–7), Bāḍarāyaṇa cites Jaimini as saying (ब्राह्मेण जैमिनिरूपन्यासादिभ्यः) "(The mukta Puruṣa is united with the Brahman) as if it were like the Brahman, because descriptions (in Śruti etc.) prove so". In Vedānta (1.2.28), Bāḍarāyaṇa cites Jaimini as saying that "There is no contradiction in taking Vaishvānara as the supreme Brahman". In 1.2.31, Jaimini is again quoted by Bāḍarāyana as saying that the nirguna (attribute-less) Brahman can manifest itself as having a form. In 4.3.12, Bādarāyana again cites Jaimini as saying that the mukta Purusha attains Brahman.

In Pūrva Mīmāṃsā too, Jaimini emphasises the importance of faith in and attachment to the Omnipotent Supreme Being, whom Jaimini calls "the Omnipotent Pradhāna" (the Main): Pūrva Mīmāṃsā 6.3.1 states "sarvaśaktau pravṛttiḥ syāt tathābhūtopadeśāt" (सर्वशक्तौ प्रवृत्तिः स्यात् तथाभूतोपदेशात्). The term upadeśa here means instructions of the śāstras as taught: we should tend towards the omnipotent supreme being. In the context of Pūrva Mīmāṃsā 6.3.1, the next two sutras become significant: in them this Omnipotent Being is termed "pradhāna", and keeping away from Him is said to be a "doṣa"; hence all beings are asked to become related ("abhisambandhāt" in tadakarmaṇi ca doṣas tasmāt tato viśeṣaḥ syāt pradhānenābhisambandhāt; Jaimini 6.3.3) to the "Omnipotent Main Being" (api vāpy ekadeśe syāt pradhāne hy arthanirvṛttir guṇamātram itarat tadarthatvāt; Jaimini 6.3.2).
Karma-Mīmāṃsā supports the Vedas, and the Rigveda says that the one Truth is variously named by the sages. It is irrelevant whether we call Him Pradhāna, Brahman, Vaishvānara, Shiva or God.

History

For some time in the Early Middle Ages the school exerted a near-dominant influence on learned Hindu thought, and it is credited as a major force contributing to the decline of Buddhism in India, but it fell into decline in the High Middle Ages and today is all but eclipsed by Vedanta.

Mīmāṃsā texts

The foundational text for the Mīmāṃsā school is the Purva Mīmāṃsā Sutras of Jaimini (ca. 5th to 4th century BCE). A major commentary was composed by Śabara in ca. the 5th or 6th century CE. The school reached its height with Kumārila Bhaṭṭa and Prabhākara (fl. ca. 700 CE). Both Kumarila Bhatta and Prabhākara (along with a third commentator, whose work is no longer extant) wrote extensive commentaries on Śabara's Mīmāṃsāsūtrabhāṣyam. Kumārila Bhaṭṭa, Mandana Miśra, Pārthasārathi Miśra, Sucarita Miśra, Ramakrishna Bhatta, Madhava Subhodini, Sankara Bhatta, Krsnayajvan, Anantadeva, Gaga Bhatta, Ragavendra Tirtha, VijayIndhra Tirtha, Appayya Dikshitar, Paruthiyur Krishna Sastri, Mahamahopadhyaya Sri Ramsubba Sastri, Sri Venkatsubba Sastri, Sri A. Chinnaswami Sastri and Sengalipuram Vaidhyanatha Dikshitar were some of the Mīmānsā scholars.

The Mīmāṃsā Sūtra of Jaimini (c. 3rd century BCE) summed up the general rules for Vedic interpretation. The text has 12 chapters, of which the first is of philosophical value. The commentaries on the sutras by several early scholars, including Hari, are no longer extant. Śabara (placed as early as the 1st century BCE by some accounts) is the first commentator on the sutras whose work is available to us; his bhāṣya is the basis of all later works of Mīmāṃsā. Kumārila Bhaṭṭa (7th century CE), the founder of the first school of Mīmāṃsā, commented on both the sutras and Śabara's bhāṣya; his treatise consists of three parts, the Ślokavārttika, the Tantravārttika and the Ṭupṭīkā. Maṇḍana Miśra (8th century CE) was a follower of Kumārila who wrote the Vidhiviveka and other works. There are several commentaries on the works of Kumārila: Sucarita Miśra wrote a Kāśikā (commentary) on the Ślokavārttika, and Someśvara Bhaṭṭa wrote the Nyāyasudhā, also known as Rāṇaka, a commentary on the Tantravārttika.
Pārthasārathi Miśra wrote the Nyāyaratnākara (1300 CE), another commentary on the Ślokavārttika. He also wrote the Śāstradīpikā, an independent work on Mīmāṃsā, and the Tantraratna. Prabhākara (8th century CE), the originator of the second school of Mīmāṃsā, wrote his commentary, the Bṛhatī, on Śabara's bhāṣya. Śālikanātha's Ṛjuvimalā (ninth century CE) is a commentary on the Bṛhatī; his Prakaraṇapañcikā is an independent work of this school. The founder of the third school of Mīmāṃsā was Murāri, whose works have not reached us. Āpadeva (17th century) wrote an elementary work on Mīmāṃsā, known as the Mīmāṃsānyāyaprakāśa or Āpadevī. The Arthasaṃgraha of Laugākṣi Bhāskara is based on the Āpadevī. Vedānta Deśika's Śeśvara Mīmāṃsā was an attempt to combine the views of the Mīmāṃsā and the Vedānta schools.

See also

Śrauta, Vaikhanasa, Nambudiri, Saura, Charvaka, Vaisheshika, Samkhya, Yoga, Nyaya, Vedanta, Śālikanātha, Mimamsa – IISER Pune
Ableism
Ableism (also known as ablism, disablism (British English), anapirophobia, anapirism, and disability discrimination) is discrimination and social prejudice against people with physical or mental disabilities (see also Sanism). Ableism characterizes people by their disabilities and classifies disabled people as inferior to non-disabled people. On this basis, people are assigned or denied certain perceived abilities, skills, or character orientations.

Although ableism and disablism both describe disability discrimination, their emphases differ slightly: ableism is discrimination in favor of non-disabled people, while disablism is discrimination against disabled people.

There are stereotypes which are associated either with disability in general or with specific impairments or chronic health conditions (for instance the presumption that all disabled people want to be cured, the presumption that wheelchair users also have an intellectual disability, or the presumption that blind people have some special form of insight). These stereotypes, in turn, serve as a justification for discriminatory practices, and reinforce discriminatory attitudes and behaviors toward people who are disabled. Labeling affects people when it limits their options for action or changes their identity.

In ableist societies, the lives of disabled people are considered less worth living, and disabled people are seen as less valuable, sometimes even expendable. The eugenics movement of the early 20th century is considered an expression of widespread ableism.

Ableism can be further understood by reading literature written and published by those who experience disability and ableism first-hand. Disability studies is an academic discipline that non-disabled people can also pursue to gain a better understanding of ableism.
Etymology

Originating from -able (in disable, disabled) and -ism (in racism, sexism); first recorded in 1981.

History

Canada

Ableism in Canada refers to a set of discourses, behaviors, and structures that express feelings of anxiety, fear, hostility, and antipathy towards people with disabilities in Canada. The specific types of discrimination that have occurred or are still occurring in Canada include the inability to access important facilities such as infrastructure within the transport network, restrictive immigration policies, involuntary sterilization to stop people with disabilities from having offspring, barriers to employment opportunities, wages insufficient to maintain a minimal standard of living, and the institutionalization of people with disabilities in substandard conditions. Austerity measures implemented by the government of Canada have also at times been called ableist, such as funding cuts that put people with disabilities at risk of living in abusive arrangements.

Nazi Germany

In July 1933, Hitler and the Nazi government implemented the Law for the Prevention of Progeny with Hereditary Diseased Offspring. Essentially, this law mandated sterilization for all people who had what were considered hereditary disabilities. Disabilities such as mental illness, blindness and deafness were all considered hereditary diseases; therefore, people with these disabilities were sterilized. The regime also spread propaganda against people with disabilities, who were portrayed as unimportant to the advancement of the Aryan race. In 1939 Hitler signed the secret euthanasia decree Aktion T4, which authorized the killing of selected patients diagnosed with chronic neurological and psychiatric disorders.
This program killed about 70,000 disabled people before it was officially halted by Hitler in 1941 under public pressure; it continued unofficially, out of the public eye, killing a total of 200,000 or more by the end of Hitler's reign in 1945.

United Kingdom

In the UK, disability discrimination became unlawful as a result of the Disability Discrimination Act 1995 and the Disability Discrimination Act 2005. These were later superseded, retaining the substantive law, by the Equality Act 2010. The Equality Act 2010 brought together protections against discrimination on multiple grounds (disability, race, religion and belief, sex, sexual orientation, gender identity, age and pregnancy – the so-called "protected characteristics"). Under the Equality Act 2010, there are prohibitions addressing several forms of discrimination, including direct discrimination (s.13), indirect discrimination (s.6, s.19), harassment (s.26), victimisation (s.27), discrimination arising from disability (s.15), and failure to make reasonable adjustments (s.20). Part 2, chapter 1, section 6 of the Equality Act 2010 states that "A person (P) has a disability if (a) P has a physical or mental impairment, and (b) the impairment has a substantial and long-term adverse effect on P's ability to carry out normal day-to-day activities."

United States

Much like many minority groups, disabled Americans were often segregated and denied certain rights for much of American history. In the 1800s, a shift from a religious view to a more scientific view of disability took place and led to more individuals with disabilities being examined. Public stigma began to change after World War II, when many Americans returned home with disabilities. In the 1960s, following the civil rights movement in America, the disability rights movement began. The movement was intended to give all individuals with disabilities equal rights and opportunities.
Until the 1970s, ableism in the United States was often codified into law. For example, in many jurisdictions, so-called "ugly laws" barred people from appearing in public if they had diseases or disfigurements that were considered unsightly.

UN Convention on the Rights of Persons with Disabilities

In May 2012, the UN Convention on the Rights of Persons with Disabilities was ratified. The document establishes the inadmissibility of discrimination on the basis of disability, including in employment. In addition, the amendments create a legal basis for significantly expanding opportunities to protect the rights of persons with disabilities, including in administrative procedure and in court. The convention defined specific obligations that all owners of facilities and service providers must fulfill to create conditions for disabled people equal to those of everyone else.

Workplace

In 1990, the Americans with Disabilities Act was put in place to prohibit private employers, state and local governments, employment agencies and labor unions from discriminating against qualified disabled people in job applications, hiring, firing, advancement, compensation, training, and other terms, conditions and privileges of employment. The U.S. Equal Employment Opportunity Commission (EEOC) plays a part in fighting ableism by enforcing federal laws that make it illegal to discriminate against a job applicant or an employee because of the person's race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability or genetic information. Similarly, in the UK the Equality Act 2010 provides legislation that there should be no workplace discrimination. Under the act, all employers have a duty to make reasonable adjustments for their disabled employees to help them overcome any disadvantages resulting from their impairment.
Failure to carry out reasonable adjustments amounts to disability discrimination. Employers and managers are often concerned about the potential cost of providing accommodations to employees with disabilities. However, many accommodations cost nothing (59% in a survey of employers conducted by the Job Accommodation Network (JAN)), and accommodation costs may be offset by the savings associated with employing people with disabilities (higher performance, lower turnover costs). Moreover, organizational interventions that support the workplace inclusion of the most vulnerable, such as neurodivergent individuals, are likely to benefit all employees. Idiosyncratic deals (i-deals), individually negotiated work arrangements (e.g., flexible schedules, working from home), can also serve as an important work accommodation for persons with disabilities. I-deals can create the conditions for long-term employment for people with disabilities by creating jobs that fit each employee's abilities, interests, and career aspirations. Agents can represent people with disabilities and help them negotiate their unique employment terms, but successful i-deals require resources and flexibility on the part of the employer.

Healthcare

Ableism is prevalent in many divisions of healthcare, whether in prison systems, the legal and policy side of healthcare, or clinical settings. The following subsections explore the ways in which ableism makes its way into these areas through the inaccessibility of appropriate medical treatment.

Clinical settings

Just as in every other facet of life, ableism is present in clinical healthcare settings. A 2021 study of over 700 physicians in the United States found that only 56.5% "strongly agreed that they welcomed patients with disability into their practices."
The same study also found that 82.4% of these physicians believed that people with a significant disability had a lower quality of life than those without disabilities. Data from the 1994–1995 National Health Interview Survey-Disability Supplement has shown that those with disabilities have lower life expectancies than those without them. While that can be explained by a myriad of factors, one of them is the ableism experienced by those with disabilities in clinical settings. Those with disabilities may be more hesitant to seek care when needed because of barriers created by ableism, such as dentist chairs that are not accessible or offices filled with bright lights and noises that can be triggering. In June 2020, near the start of the COVID-19 pandemic, a 46-year-old quadriplegic in Austin, Texas named Michael Hickson was denied treatment for COVID-19, sepsis, and a urinary tract infection and died six days after treatment was withheld. His physician was quoted as having said that he had a "preference to treat patients who can walk and talk." The physician also stated that Hickson's brain injury left him with little quality of life. Several complaints have since been filed with the Texas Office of Civil Rights, and many disability advocacy groups have become involved in the case. Several states, including Alabama, Arizona, Kansas, Pennsylvania, Tennessee, Utah, and Washington, allow healthcare providers, in times of crisis, to triage based on patients' perceived quality of life, which tends to be judged lower for those with disabilities.
In Alabama, a ventilator-rationing scheme put in place during the pandemic enabled healthcare providers to exclude patients with disabilities from treatment; such patients were those who required assistance with various activities of daily living, had certain mental conditions (varying degrees of mental retardation or moderate-to-severe dementia) or other preexisting conditions categorized as disabilities.

Criminal justice settings

The provision of effective healthcare for people with disabilities in criminal justice institutions is an important issue because the percentage of disabled people in such facilities has been shown to be larger than the percentage in the general population. A failure to prioritize incorporating efficient, quality medical support into prison structures endangers the health and safety of disabled prisoners. Limited access to medical care in prisons consists of long waiting times to meet with physicians and to consistently receive treatment, as well as the absence of harm reduction measures and updated healthcare protocols. Discriminatory medical treatment also takes place through the withholding of proper diets, medications, and assistance (equipment and interpreters), in addition to failures to adequately train prison staff. Insufficient medical accommodations can worsen prisoners' health conditions through greater risks of depression, HIV/AIDS and hepatitis C transmission, and unsafe drug injections. In Canada, the use of prisons as psychiatric facilities may involve issues concerning inadequate access to medical support, particularly mental health counseling, and the inability of prisoners to take part in decision-making regarding their medical treatment. The use of psychologists employed by the correctional services organization and the lack of confidentiality in therapeutic sessions also present barriers for disabled prisoners.
That makes it more difficult for prisoners with disabilities to express discontent about problems in the available healthcare, since doing so may later complicate their release from prison. In the United States, the population of older adults in the criminal justice system is growing rapidly, but older prisoners' healthcare needs are not being sufficiently met. One specific issue is a lack of preparation for correctional officers to identify geriatric disability. Given that underrecognition of disability, further improvement is needed in training programs so that officers learn when and how to provide proper healthcare intervention and treatment for older adult prisoners.

Healthcare policy

Ableism has long been a serious concern in healthcare policy, and the COVID-19 pandemic has greatly exacerbated and highlighted its prevalence. Studies frequently show what a "headache" patients with disabilities are considered to be for the healthcare system. In a 2020 study, 83.6% of healthcare providers preferred patients without disabilities to those with disabilities. This preference is especially concerning since, according to the CDC, people with disabilities are at a heightened risk of contracting COVID-19. Additionally, in the second wave of the COVID-19 pandemic in the UK, people with intellectual disabilities were told that they would not be resuscitated if they became ill with COVID-19.

Education

Ableism often makes the world inaccessible to disabled people, especially in schools. Within education systems, the use of the medical model of disability and the social model of disability contributes to the divide between students in special education and general education classrooms. The medical model of disability often conveys the overarching idea that disability can be corrected and diminished by removing children from general education classrooms.
This model of disability suggests that the impairment is more important than the person, who is helpless and should be separated from those who are not disabled. The social model of disability suggests that people with impairments are disabled as a result of the way society acts. When students with disabilities are pulled out of their classrooms to receive the support that they need, their peers often reject them socially because they do not form relationships with them in the classroom. By using the social model of disability, inclusive schools where the social norm is not to alienate students can promote more teamwork and less division throughout their campuses. Implementing the social model within modern forms of inclusive education gives children of all abilities a role in changing discriminatory attitudes within the school system. For example, a disabled student may need to read text instead of listening to a tape recording of the text. In the past, schools focused on fixing the disability; progressive reforms have shifted the focus to minimizing the impact of a student's disability and providing support. Moreover, schools are required to maximize access to their entire community. In 2004, the U.S. Congress reauthorized the Individuals with Disabilities Education Act, which entitles children with disabilities to a free and appropriate education and ensures necessary services. In 2015, Congress passed the Every Student Succeeds Act, which guarantees people with disabilities equal opportunity, full participation in society, and the tools for independent success.

Media

Common ways of framing disability in the media are heavily criticized for being dehumanizing and failing to place importance on the perspectives of disabled people.

Disabled villain

One common form of media depiction of disability is to portray villains with a mental or physical disability.
Lindsey Row-Heyveld notes, for instance, "that villainous pirates are scraggly, wizened and inevitably kitted out with a peg leg, eye patch or hook hand, whereas heroic pirates look like Johnny Depp's Jack Sparrow". The disability of the villain is meant to separate them from the average viewer and dehumanize the antagonist. As a result, stigma forms around the disability and the individuals who live with it. There are many instances in literature where the antagonist is depicted as having a disability or mental illness. Common examples include Captain Hook, Darth Vader and the Joker. Captain Hook is notorious for having a hook as a hand and seeks revenge on Peter Pan for his lost hand. Darth Vader's situation is unusual because Luke Skywalker is also disabled. Luke's prosthetic hand looks lifelike, whereas Darth Vader appears robotic and emotionless because his appearance does not resemble a human and strips away human emotion. The Joker is a villain with a mental illness, and he is an example of the typical depiction associating mental illness with violence.

Inspiration porn

Inspiration porn is the use of disabled people performing ordinary tasks as a form of inspiration. Critics of inspiration porn say that it distances disabled people from those who are not disabled and portrays disability as an obstacle to be overcome or rehabilitated. One of the most common examples of inspiration porn is the Paralympics. Athletes with disabilities often get praised as inspirational because of their athletic accomplishments. Critics of this type of inspiration porn have said, "athletic accomplishments by these athletes are oversimplified as 'inspirational' because they're such a surprise."

Pitied character

In many forms of media, such as films and articles, a disabled person is portrayed as a character who is viewed as less than able, different, and an "outcast."
Hayes and Black (2003) examine Hollywood films as a discourse of pity that frames disability as a problem of social, physical, and emotional confinement. The aspect of pity is heightened when media storylines focus on the individual's weaknesses rather than strengths, leaving audiences with a negative and ableist portrayal of disability.

Supercrip stereotype

The supercrip narrative is generally a story of a person with an apparent disability who is able to "overcome" their physical differences and accomplish an impressive task. Professor Thomas Hehir's "Eliminating Ableism in Education" gives the story of Erik Weihenmayer, a blind man who climbed Mount Everest, as an example of the supercrip narrative. The Paralympics are another example of the supercrip stereotype, since they generate a large amount of media attention and demonstrate disabled people doing extremely strenuous physical tasks. Although that may appear inspiring at face value, Hehir explains that many people with disabilities view such news stories as setting unrealistic expectations. Additionally, Hehir mentions that supercrip stories imply that disabled people are required to perform such impressive tasks to be seen as equals and to avoid pity from those without disabilities. The disability studies scholar Alison Kafer describes how those narratives reinforce the problematic idea that disability can be overcome by an individual's hard work, in contrast to other theories, which understand disability to be the result of a world that is not designed to be accessible. Supercrip stories reinforce ableism by emphasizing independence, reliance on one's body, and the role of individual will in self-cure. Other examples of the supercrip narrative include the stories of Rachael Scdoris, the first blind woman to race in the Iditarod, and Aron Ralston, who has continued to climb after the amputation of his arm.
Environmental and outdoor recreation media

Disability has often been used as a shorthand in environmental literature for representing distance from nature, in what Sarah Jaquette Ray calls the "disability-equals-alienation-from-nature trope." An example of this trope can be seen in Moby-Dick, in which Captain Ahab's lost leg symbolizes his exploitative relationship with nature. Additionally, in canonical environmental thought, figures such as Ralph Waldo Emerson and Edward Abbey used metaphors of disability to describe relationships between nature, technology, and the individual. Ableism in outdoor media can also be seen in promotional materials from the outdoor recreation industry: Alison Kafer highlighted a 2000 Nike advertisement, which ran in eleven outdoor magazines promoting a pair of running shoes. Kafer alleged that the advertisement depicted a person with a spinal cord injury and a wheelchair user as a "drooling, misshapen, non-extreme-trail-running husk of [their] former self", and said that the advertisement promised non-disabled runners and hikers the ability to protect their bodies against disability by purchasing the shoes. The advertisement was withdrawn after the company received over six hundred complaints in the first two days after its publication, and Nike apologized.

Sports

Sports are often an area of society in which ableism is evident. In sports media, disabled athletes are often portrayed as inferior. When disabled athletes are discussed in the media, there is often an emphasis on rehabilitation and the road to recovery, which is inherently a negative view of the disability. Oscar Pistorius is a South African runner who competed in the 2004, 2008, and 2012 Paralympics and the 2012 Olympic Games in London. Pistorius was the first double amputee to compete in the Olympic Games.
While media coverage focused on inspiration and competition during his time in the Paralympic Games, it shifted to questioning whether his prosthetic legs gave him an advantage when he competed in the Olympic Games.

Types of ableism

Physical ableism is hate or discrimination based on physical disability. Sanism, or mental ableism, is discrimination based on mental health conditions and cognitive disabilities. Medical ableism exists both interpersonally (as healthcare providers can be ableist) and systemically, as decisions made by medical institutions and caregivers may prevent disabled patients from exercising rights such as autonomy and decision-making. The medical model of disability can be used to justify medical ableism. Structural ableism is the failure to provide accessibility tools: ramps, wheelchairs, special education equipment, etc. (often also an example of hostile architecture). Cultural ableism comprises behavioural, cultural, attitudinal and social patterns that may discriminate against disabled people, including by denying, dismissing or rendering invisible disabled people, and by making accessibility and support unattainable. Internalised ableism is a disabled person discriminating against themselves and other disabled people by holding the view that disability is something to be ashamed of or hidden, or by refusing accessibility or support. Internalised ableism may result from mistreatment of disabled individuals. Hostile ableism is a cultural or social kind of ableism in which people are hostile towards symptoms of a disability or phenotypes of the disabled person. Benevolent ableism is when people treat a disabled person well but like a child (infantilization), instead of considering them a full-grown adult.
Examples include ignoring disabilities, not respecting the life experiences of the disabled person, microaggressions, not considering the opinion of the disabled person in important decision-making, invasion of privacy or personal boundaries, forced corrective measures, unwanted help, and not listening to the disabled person. Ambivalent ableism can be characterized as somewhere in between hostile and benevolent ableism.

Causes of ableism

Ableism may have evolutionary and existential origins (fear of contagion, fear of death). It may also be rooted in belief systems (social Darwinism, meritocracy), language (such as "suffering from" disability), or unconscious biases.

See also

Disability abuse
Disability and poverty
Disability hate crime
Disability rights movement
Inclusion (disability rights)
Mentalism (discrimination)
Medical industrial complex
Violent behavior in autistic people
Violence against people with disabilities

Further reading

Fandrey, Walter. Krüppel, Idioten, Irre: zur Sozialgeschichte behinderter Menschen in Deutschland (Cripples, Idiots, Madmen: The Social History of Disabled People in Germany).
Schweik, Susan (2009). The Ugly Laws: Disability in Public (History of Disability). NYU Press.
Shaver, James P. (1981). Handicapism and Equal Opportunity: Teaching About the Disabled in Social Studies. Library of Congress Card Catalog Number 80-70737; ERIC Number ED202185.

External links

Disablism: How to Tackle the Last Prejudice, DEMOS (2004)
Law of thought
The laws of thought are fundamental axiomatic rules upon which rational discourse itself is often considered to be based. The formulation and clarification of such rules have a long tradition in the history of philosophy and logic. Generally they are taken as laws that guide and underlie everyone's thinking, thoughts, expressions, discussions, etc. However, such classical ideas are often questioned or rejected in more recent developments, such as intuitionistic logic, dialetheism and fuzzy logic. According to the 1999 Cambridge Dictionary of Philosophy, laws of thought are laws by which or in accordance with which valid thought proceeds, or that justify valid inference, or to which all valid deduction is reducible. Laws of thought are rules that apply without exception to any subject matter of thought, etc.; sometimes they are said to be the object of logic. The term, rarely used in exactly the same sense by different authors, has long been associated with three equally ambiguous expressions: the law of identity (ID), the law of contradiction (or non-contradiction; NC), and the law of excluded middle (EM). Sometimes, these three expressions are taken as propositions of formal ontology having the widest possible subject matter, propositions that apply to entities as such: (ID), everything is (i.e., is identical to) itself; (NC) no thing having a given quality also has the negative of that quality (e.g., no even number is non-even); (EM) every thing either has a given quality or has the negative of that quality (e.g., every number is either even or non-even). Equally common in older works is the use of these expressions for principles of metalogic about propositions: (ID) every proposition implies itself; (NC) no proposition is both true and false; (EM) every proposition is either true or false. 
Beginning in the middle to late 1800s, these expressions have been used to denote propositions of Boolean algebra about classes: (ID) every class includes itself; (NC) every class is such that its intersection ("product") with its own complement is the null class; (EM) every class is such that its union ("sum") with its own complement is the universal class. More recently, the last two of the three expressions have been used in connection with the classical propositional logic and with the so-called protothetic or quantified propositional logic; in both cases the law of non-contradiction involves the negation of the conjunction ("and") of something with its own negation, ¬(A∧¬A), and the law of excluded middle involves the disjunction ("or") of something with its own negation, A∨¬A. In the case of propositional logic, the "something" is a schematic letter serving as a place-holder, whereas in the case of protothetic logic the "something" is a genuine variable. The expressions "law of non-contradiction" and "law of excluded middle" are also used for semantic principles of model theory concerning sentences and interpretations: (NC) under no interpretation is a given sentence both true and false, (EM) under any interpretation, a given sentence is either true or false. The expressions mentioned above all have been used in many other ways. Many other propositions have also been mentioned as laws of thought, including the dictum de omni et nullo attributed to Aristotle, the substitutivity of identicals (or equals) attributed to Euclid, the so-called identity of indiscernibles attributed to Gottfried Wilhelm Leibniz, and other "logical truths". The expression "laws of thought" gained added prominence through its use by Boole (1815–64) to denote theorems of his "algebra of logic"; in fact, he named his second logic book An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (1854). 
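The class-algebra reading of the three laws described above ((ID) every class includes itself; (NC) the intersection of a class with its complement is the null class; (EM) the union of a class with its complement is the universal class) can be illustrated with sets. The following is a minimal sketch, not from the source; the particular universe of ten elements is an arbitrary assumption for illustration:

```python
# Sketch: checking the class (set) readings of ID, NC and EM
# over an assumed finite universe of discourse.
UNIVERSE = frozenset(range(10))   # the "universal class" (illustrative assumption)
NULL = frozenset()                # the "null class"

def complement(cls):
    """The complement of a class relative to the universe of discourse."""
    return UNIVERSE - cls

for cls in [NULL, UNIVERSE, frozenset({1, 2, 3}), frozenset({0, 9})]:
    assert cls <= cls                         # (ID) every class includes itself
    assert cls & complement(cls) == NULL      # (NC) "product" with complement is null
    assert cls | complement(cls) == UNIVERSE  # (EM) "sum" with complement is universal

print("ID, NC and EM hold for every sample class")
```

Note that complementation here is always taken relative to a fixed universe of discourse, which anticipates the role that notion plays in Boole's treatment discussed later.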
Modern logicians, in almost unanimous disagreement with Boole, take this expression to be a misnomer; none of the above propositions classed under "laws of thought" are explicitly about thought per se, a mental phenomenon studied by psychology, nor do they involve explicit reference to a thinker or knower as would be the case in pragmatics or in epistemology. The distinction between psychology (as a study of mental phenomena) and logic (as a study of valid inference) is widely accepted.

The three traditional laws

History

Hamilton offers a history of the three traditional laws that begins with Plato, proceeds through Aristotle, and ends with the schoolmen of the Middle Ages; in addition he offers a fourth law (see entry below, under Hamilton):

"The principles of Contradiction and Excluded Middle can be traced back to Plato: The principles of Contradiction and of Excluded Middle can both be traced back to Plato, by whom they were enounced and frequently applied; though it was not till long after, that either of them obtained a distinctive appellation. To take the principle of Contradiction first. This law Plato frequently employs, but the most remarkable passages are found in the Phœdo, in the Sophista, and in the fourth and seventh books of the Republic." [Hamilton LECT. V. LOGIC. 62]

"Law of Excluded Middle: The law of Excluded Middle between two contradictories remounts, as I have said, also to Plato, though the Second Alcibiades, the dialogue in which it is most clearly expressed, must be admitted to be spurious. It is also in the fragments of Pseudo-Archytas, to be found in Stobæus." [Hamilton LECT. V. LOGIC. 65]

Hamilton further observes that "It is explicitly and emphatically enounced by Aristotle in many passages both of his Metaphysics (l. iii. (iv.) c.7.) and of his Analytics, both Prior (l. i. c. 2) and Posterior (1. i. c. 4).
In the first of these, he says: "It is impossible that there should exist any medium between contradictory opposites, but it is necessary either to affirm or to deny everything of everything." [Hamilton LECT. V. LOGIC. 65]

"Law of Identity. [Hamilton also calls this "The principle of all logical affirmation and definition"] Antonius Andreas: The law of Identity, I stated, was not explicated as a coordinate principle till a comparatively recent period. The earliest author in whom I have found this done, is Antonius Andreas, a scholar of Scotus, who flourished at the end of the thirteenth and beginning of the fourteenth century. The schoolman, in the fourth book of his Commentary of Aristotle's Metaphysics – a commentary which is full of the most ingenious and original views, – not only asserts to the law of Identity a coordinate dignity with the law of Contradiction, but, against Aristotle, he maintains that the principle of Identity, and not the principle of Contradiction, is the one absolutely first. The formula in which Andreas expressed it was Ens est ens. Subsequently to this author, the question concerning the relative priority of the two laws of Identity and of Contradiction became one much agitated in the schools; though there were also found some who asserted to the law of Excluded Middle this supreme rank." [From Hamilton LECT. V. LOGIC. 65–66]

Three traditional laws: identity, non-contradiction, excluded middle

The following states the three traditional "laws" in the words of Bertrand Russell (1912):

The law of identity

The law of identity: 'Whatever is, is.' For all a: a = a.
Regarding this law, Aristotle wrote: More than two millennia later, George Boole alluded to the very same principle as did Aristotle when Boole made the following observation with respect to the nature of language and those principles that must inhere naturally within it:

The law of non-contradiction

The law of non-contradiction (alternately the 'law of contradiction'): 'Nothing can both be and not be.' In other words: "two or more contradictory statements cannot both be true in the same sense at the same time": ¬(A∧¬A). In the words of Aristotle, "one cannot say of something that it is and that it is not in the same respect and at the same time". As an illustration of this law, he wrote:

The law of excluded middle

The law of excluded middle: 'Everything must either be or not be.' In accordance with the law of excluded middle or excluded third, for every proposition, either its positive or negative form is true: A∨¬A. Regarding the law of excluded middle, Aristotle wrote:

Rationale

As the quotations from Hamilton above indicate, in particular the "law of identity" entry, the rationale for and expression of the "laws of thought" have been fertile ground for philosophic debate since Plato. Today the debate—about how we "come to know" the world of things and our thoughts—continues; for examples of rationales see the entries below.

Plato

In one of Plato's Socratic dialogues, Socrates described three principles derived from introspection:

Indian logic

The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of Pāṇini, and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as Madhvacharya.

Locke

John Locke claimed that the principles of identity and contradiction (i.e. the law of identity and the law of non-contradiction) were general ideas and only occurred to people after considerable abstract, philosophical thought.
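The propositional forms of the three traditional laws (a = a, ¬(A∧¬A), A∨¬A) can be checked mechanically over the two classical truth values. The following is an illustrative sketch, not from the source, using Python booleans:

```python
# Sketch: verifying the propositional readings of the three traditional
# laws over the two classical truth values True and False.
for A in (True, False):
    assert A == A             # law of identity: whatever is, is
    assert not (A and not A)  # law of non-contradiction: ¬(A ∧ ¬A)
    assert A or not A         # law of excluded middle: A ∨ ¬A

print("all three laws hold under two-valued semantics")
```

Such a check only confirms the laws within two-valued semantics; as noted in the introduction, systems such as intuitionistic logic, dialetheism and fuzzy logic question or restrict one or more of them.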
He characterized the principle of identity as "Whatsoever is, is." He stated the principle of contradiction as "It is impossible for the same thing to be and not to be." To Locke, these were not innate or a priori principles.

Leibniz

Gottfried Leibniz formulated two additional principles, either or both of which may sometimes be counted as a law of thought:

principle of sufficient reason
identity of indiscernibles

In Leibniz's thought, as well as generally in the approach of rationalism, these two principles are regarded as clear and incontestable axioms. They were widely recognized in European thought of the 17th, 18th, and 19th centuries, although they were subject to greater debate in the 19th century. As turned out to be the case with the law of continuity, these two laws involve matters which, in contemporary terms, are subject to much debate and analysis (respectively on determinism and extensionality). Leibniz's principles were particularly influential in German thought. In France, the Port-Royal Logic was less swayed by them. Hegel quarrelled with the identity of indiscernibles in his Science of Logic (1812–1816).

Schopenhauer

Four laws

"The primary laws of thought, or the conditions of the thinkable, are four: – 1. The law of identity [A is A]. 2. The law of contradiction. 3. The law of exclusion; or excluded middle. 4. The law of sufficient reason." (Thomas Hughes, The Ideal Theory of Berkeley and the Real World, Part II, Section XV, Footnote, p. 38)

Arthur Schopenhauer discussed the laws of thought and tried to demonstrate that they are the basis of reason. He listed them in the following way in his On the Fourfold Root of the Principle of Sufficient Reason, §33:

A subject is equal to the sum of its predicates, or a = a.
No predicate can be simultaneously attributed and denied to a subject, or a ≠ ~a.
Of every two contradictorily opposite predicates one must belong to every subject.
Truth is the reference of a judgment to something outside it as its sufficient reason or ground.

Also: To show that they are the foundation of reason, he gave the following explanation:

Schopenhauer's four laws can be schematically presented in the following manner:

A is A.
A is not not-A.
X is either A or not-A.
If A then B (A implies B).

Two laws

Later, in 1844, Schopenhauer claimed that the four laws of thought could be reduced to two. In the ninth chapter of the second volume of The World as Will and Representation, he wrote:

Boole (1854): From his "laws of the mind" Boole derives Aristotle's "Law of contradiction"

The title of George Boole's 1854 treatise on logic, An Investigation of the Laws of Thought, indicates an alternate path. The laws are now incorporated into an algebraic representation of his "laws of the mind", honed over the years into modern Boolean algebra.

Rationale: How the "laws of the mind" are to be distinguished

Boole begins his chapter I "Nature and design of this Work" with a discussion of what characteristic distinguishes, generally, "laws of the mind" from "laws of nature":

"The general laws of Nature are not, for the most part, immediate objects of perception. They are either inductive inferences from a large body of facts, the common truth in which they express, or, in their origin at least, physical hypotheses of a causal nature. ... They are in all cases, and in the strictest sense of the term, probable conclusions, approaching, indeed, ever and ever nearer to certainty, as they receive more and more of the confirmation of experience. ..."

Contrasted with this are what he calls "laws of the mind": Boole asserts these are known in their first instance, without need of repetition: "On the other hand, the knowledge of the laws of the mind does not require as its basis any extensive collection of observations. The general truth is seen in the particular instance, and it is not confirmed by the repetition of instances. ...
we not only see in the particular example the general truth, but we see it also as a certain truth – a truth, our confidence in which will not continue to increase with increasing experience of its practical verification." (Boole 1854:4)

Boole's signs and their laws

Boole begins with the notion of "signs" representing "classes", "operations" and "identity":

"All the operations of Language, as an instrument of reasoning, may be conducted by a system of signs composed of the following elements:
"1st. Literal symbols, as x, y, etc., representing things as subjects of our conceptions;
"2nd. Signs of operation, as +, −, ×, standing for those operations of the mind by which conceptions of things are combined or resolved so as to form new conceptions involving the same elements;
"3rd. The sign of identity, =.
And these symbols of Logic are in their use subject to definite laws, partly agreeing with and partly differing from the laws of the corresponding symbols in the science of Algebra." (Boole 1854:27)

Boole then clarifies what a "literal symbol" e.g. x, y, z,... represents—a name applied to a collection of instances into "classes". For example, "bird" represents the entire class of feathered winged warm-blooded creatures. For his purposes he extends the notion of class to represent membership of "one", or "nothing", or "the universe" i.e. totality of all individuals:

"Let us then agree to represent the class of individuals to which a particular name or description is applicable, by a single letter, as z. ...
By a class is usually meant a collection of individuals, to each of which a particular name or description may be applied; but in this work the meaning of the term will be extended so as to include the case in which but a single individual exists, answering to the required name or description, as well as the cases denoted by the terms "nothing" and "universe," which as "classes" should be understood to comprise respectively 'no beings,' 'all beings.'" (Boole 1854:28)

He then defines what the string of symbols e.g. xy means [modern logical &, conjunction]: "Let it further be agreed, that by the combination xy shall be represented that class of things to which the names or descriptions represented by x and y are simultaneously applicable. Thus, if x alone stands for "white things," and y for "sheep," let xy stand for 'white sheep;'" (Boole 1854:28)

Given these definitions he now lists his laws with their justification plus examples (derived from Boole):

(1) xy = yx [commutative law] "If x represents 'estuaries,' and y 'rivers,' the expressions xy and yx will indifferently represent 'rivers that are estuaries,' or 'estuaries that are rivers.'"

(2) xx = x, alternately x² = x [absolute identity of meaning, Boole's "fundamental law of thought", cf. page 49] "Thus 'good, good' men, is equivalent to 'good' men."

Logical OR: Boole defines the "collecting of parts into a whole or separate a whole into its parts" (Boole 1854:32). Here the connective "and" is used disjunctively, as is "or"; he presents a commutative law (3) and a distributive law (4) for the notion of "collecting". The notion of separating a part from the whole he symbolizes with the "−" operation; he defines a commutative (5) and distributive law (6) for this notion:

(3) y + x = x + y [commutative law] "Thus the expression 'men and women' is ... equivalent with the expression 'women and men.' Let x represent 'men,' y, 'women' and let + stand for 'and' and 'or' ..."
(4) z(x + y) = zx + zy [distributive law] z = European, (x = "men", y = "women"): European men and women = European men and European women (5) x − y = −y + x [commutation law: separating a part from the whole] "All men (x) except Asiatics (y)" is represented by x − y. "All states (x) except monarchical states (y)" is represented by x − y (6) z(x − y) = zx − zy [distributive law] Lastly is a notion of "identity" symbolized by "=". This allows for two axioms: (axiom 1): equals added to equals results in equals, (axiom 2): equals subtracted from equals results in equals. (7) Identity ("is", "are") e.g. x = y + z, "stars" = "suns" and "the planets" Nothing "0" and Universe "1": He observes that the only two numbers that satisfy xx = x are 0 and 1. He then observes that 0 represents "Nothing" while "1" represents the "Universe" (of discourse). The logical NOT: Boole defines the contrary (logical NOT) as follows (his Proposition III): "If x represent any class of objects, then will 1 − x represent the contrary or supplementary class of objects, i.e. the class including all objects which are not comprehended in the class x" (Boole 1854:48) If x = "men" then "1 − x" represents the "universe" less "men", i.e. "not-men". The notion of a particular as opposed to a universal: To represent the notion of "some men", Boole writes the small letter "v" before the predicate-symbol: "vx" represents "some men". Exclusive- and inclusive-OR: Boole does not use these modern names, but he defines them as follows: x(1 − y) + y(1 − x) and x + y(1 − x), respectively; these agree with the formulas derived by means of modern Boolean algebra. Boole derives the law of contradiction Armed with his "system" he derives the "principle of [non]contradiction" starting with his law of identity: x² = x. He subtracts x from both sides (his axiom 2), yielding x² − x = 0. He then factors out the x: x(x − 1) = 0, equivalently x(1 − x) = 0. For example, if x = "men" then 1 − x represents NOT-men.
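Boole's reduction of class logic to arithmetic over the values 0 and 1 can be checked by brute force. The Python sketch below is an illustration, not Boole's notation: classes are modelled as the values 0 ("Nothing") and 1 ("the Universe"), xy as multiplication, and 1 − x as the contrary.

```python
from itertools import product

# Check Boole's laws over every assignment of classes to {0, 1}.
for x, y, z in product((0, 1), repeat=3):
    assert x * y == y * x                  # (1) commutative law
    assert x * x == x                      # (2) fundamental law x² = x
    assert z * (x - y) == z * x - z * y    # (6) distributive law
    assert x * (1 - x) == 0                # principle of (non)contradiction
    # Boole's exclusive- and inclusive-OR formulas agree with the
    # modern truth-functional readings:
    assert x * (1 - y) + y * (1 - x) == (x != y)   # exclusive OR
    assert x + y * (1 - x) == (x or y)             # inclusive OR
```

Note how the restriction to the two values 0 and 1 is exactly what makes x² = x, and hence x(1 − x) = 0, hold.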
So we have an example of the "Law of Contradiction": "Hence: x(1 − x) will represent the class whose members are at once "men," and "not men," and the equation [x(1 − x)=0] thus express the principle, that a class whose members are at the same time men and not men does not exist. In other words, that it is impossible for the same individual to be at the same time a man and not a man. ... this is identically that "principle of contradiction" which Aristotle has described as the fundamental axiom of all philosophy. ... what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form." (for more explanation of how this "dichotomy" comes about, cf Boole 1854:49ff) Boole defines the notion "domain (universe) of discourse" This notion is found throughout Boole's "Laws of Thought" e.g. 1854:28, where the symbol "1" (the integer 1) is used to represent "Universe" and "0" to represent "Nothing", and in far more detail later (pages 42ff): "Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. ... Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse." In his chapter "The Predicate Calculus" Kleene observes that the specification of the "domain" of discourse is "not a trivial assumption, since it is not always clearly satisfied in ordinary discourse ... in mathematics likewise, logic can become pretty slippery when no D [domain] has been specified explicitly or implicitly, or the specification of a D [domain] is too vague" (Kleene 1967:84). Hamilton (1837–38 lectures on Logic, published 1860): a 4th "Law of Reason and Consequent" As noted above, Hamilton specifies four laws—the three traditional plus the fourth "Law of Reason and Consequent"—as follows: "XIII.
The Fundamental Laws of Thought, or the conditions of the thinkable, as commonly received, are four: – 1. The Law of Identity; 2. The Law of Contradiction; 3. The Law of Exclusion or of Excluded Middle; and, 4. The Law of Reason and Consequent, or of Sufficient Reason." Rationale: "Logic is the science of the Laws of Thought as Thought" Hamilton opines that thought comes in two forms: "necessary" and "contingent" (Hamilton 1860:17). With regard to the "necessary" form he defines its study as "logic": "Logic is the science of the necessary forms of thought" (Hamilton 1860:17). To define "necessary" he asserts that it implies the following four "qualities": (1) "determined or necessitated by the nature of the thinking subject itself ... it is subjectively, not objectively, determined; (2) "original and not acquired; (3) "universal; that is, it cannot be that it necessitates on some occasions, and does not necessitate on others. (4) "it must be a law; for a law is that which applies to all cases without exception, and from which a deviation is ever, and everywhere, impossible, or, at least, unallowed. ... This last condition, likewise, enables us to give the most explicit enunciation of the object-matter of Logic, in saying that Logic is the science of the Laws of Thought as Thought, or the science of the Formal Laws of Thought, or the science of the Laws of the Form of Thought; for all these are merely various expressions of the same thing." Hamilton's 4th law: "Infer nothing without ground or reason" Here is Hamilton's fourth law from his LECT. V. LOGIC. 60–61: "I now go on to the fourth law. "Par. XVII. Law of Sufficient Reason, or of Reason and Consequent: "XVII.
The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of, the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the logical reason, ground, or antecedent; that something else which the mind is necessitated to affirm or posit, is called the logical consequent; and the relation between the reason and consequent, is called the logical connection or consequence. This law is expressed in the formula – Infer nothing without a ground or reason.¹ (¹ See Schulze, Logik, §19, and Krug, Logik, §20. – ED.) Relations between Reason and Consequent: The relations between Reason and Consequent, when comprehended in a pure thought, are the following: 1. When a reason is explicitly or implicitly given, then there must exist a consequent; and, vice versa, when a consequent is given, there must also exist a reason. 2. Where there is no reason there can be no consequent; and, vice versa, where there is no consequent (either implicitly or explicitly) there can be no reason. That is, the concepts of reason and of consequent, as reciprocally relative, involve and suppose each other. The logical significance of this law: The logical significance of the law of Reason and Consequent lies in this, – That in virtue of it, thought is constituted into a series of acts all indissolubly connected; each necessarily inferring the other.
Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science. Welton In the 19th century, the Aristotelian laws of thought, as well as sometimes the Leibnizian laws of thought, were standard material in logic textbooks, and J. Welton, among others, described them in his textbook of logic. Russell (1903–1927) The sequel to Bertrand Russell's 1903 "The Principles of Mathematics" became the three-volume work named Principia Mathematica (hereafter PM), written jointly with Alfred North Whitehead. Immediately after he and Whitehead published PM he wrote his 1912 "The Problems of Philosophy". His "Problems" reflects "the central ideas of Russell's logic". The Principles of Mathematics (1903) In his 1903 "Principles" Russell defines Symbolic or Formal Logic (he uses the terms synonymously) as "the study of the various general types of deduction" (Russell 1903:11). He asserts that "Symbolic Logic is essentially concerned with inference in general" (Russell 1903:12) and with a footnote indicates that he does not distinguish between inference and deduction; moreover he considers induction "to be either disguised deduction or a mere method of making plausible guesses" (Russell 1903:11). This opinion will change by 1912, when he deems his "principle of induction" to be on a par with the various "logical principles" that include the "Laws of Thought". In his Part I "The Indefinables of Mathematics" Chapter II "Symbolic Logic" Part A "The Propositional Calculus" Russell reduces deduction ("propositional calculus") to 2 "indefinables" and 10 axioms: "17. We require, then, in the propositional calculus, no indefinable except the two kinds of implication [simple aka "material" and "formal"] – remembering, however, that formal implication is a complex notion, whose analysis remains to be undertaken.
As regards our two indefinables, we require certain indemonstrable propositions, which hitherto I have not succeeded in reducing to less than ten" (Russell 1903:15). From these he claims to be able to derive the law of excluded middle and the law of contradiction but does not exhibit his derivations (Russell 1903:17). Subsequently, he and Whitehead honed these "primitive principles" and axioms into the nine found in PM, and here Russell actually exhibits these two derivations at ❋2.11 and ❋3.24, respectively. The Problems of Philosophy (1912) By 1912 Russell in his "Problems" pays close attention to "induction" (inductive reasoning) as well as "deduction" (inference), both of which represent just two examples of "self-evident logical principles" that include the "Laws of Thought." Induction principle: Russell devotes a chapter to his "induction principle". He describes it as coming in two parts: firstly, as a repeated collection of evidence (with no failures of association known) and therefore increasing probability that whenever A happens B follows; secondly, in a fresh instance when indeed A happens, B will indeed follow: i.e. "a sufficient number of cases of association will make the probability of a fresh association nearly a certainty, and will make it approach certainty without limit." He then collects all the cases (instances) of the induction principle (e.g. case 1: A1 = "the rising sun", B1 = "the eastern sky"; case 2: A2 = "the setting sun", B2 = "the western sky"; case 3: etc.)
into a "general" law of induction which he expresses as follows: "(a) The greater the number of cases in which a thing of the sort A has been found associated with a thing of the sort B, the more probable it is (if cases of failure of association are known) that A is always associated with B; "(b) Under the same circumstances, a sufficient number of cases of the association of A with B will make it nearly certain that A is always associated with B, and will make this general law approach certainty without limit." He makes an argument that this induction principle can neither be disproved or proved by experience, the failure of disproof occurring because the law deals with probability of success rather than certainty; the failure of proof occurring because of unexamined cases that are yet to be experienced, i.e. they will occur (or not) in the future. "Thus we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future". In his next chapter ("On Our Knowledge of General Principles") Russell offers other principles that have this similar property: "which cannot be proved or disproved by experience, but are used in arguments which start from what is experienced." He asserts that these "have even greater evidence than the principle of induction ... the knowledge of them has the same degree of certainty as the knowledge of the existence of sense-data. They constitute the means of drawing inferences from what is given in sensation". Inference principle: Russell then offers an example that he calls a "logical" principle. Twice previously he has asserted this principle, first as the 4th axiom in his 1903 and then as his first "primitive proposition" of PM: "❋1.1 Anything implied by a true elementary proposition is true". Now he repeats it in his 1912 in a refined form: "Thus our principle states that if this implies that, and this is true, then that is true. 
In other words, 'anything implied by a true proposition is true', or 'whatever follows from a true proposition is true'. This principle he places great stress upon, stating that "this principle is really involved – at least, concrete instances of it are involved – in all demonstrations". He does not call his inference principle modus ponens, but his formal, symbolic expression of it in PM (2nd edition 1927) is that of modus ponens; modern logic calls this a "rule" as opposed to a "law". In the quotation that follows, the symbol "⊦" is the "assertion-sign" (cf PM:92); "⊦" means "it is true that", therefore "⊦p" where "p" is "the sun is rising" means "it is true that the sun is rising", alternately "The statement 'The sun is rising' is true". The "implication" symbol "⊃" is commonly read "if p then q", or "p implies q" (cf PM:7). Embedded in this notion of "implication" are two "primitive ideas", "the Contradictory Function" (symbolized by NOT, "~") and "the Logical Sum or Disjunction" (symbolized by OR, "⋁"); these appear as "primitive propositions" ❋1.7 and ❋1.71 in PM (PM:97). With these two "primitive propositions" Russell defines "p ⊃ q" to have the formal logical equivalence "NOT-p OR q" symbolized by "~p ⋁ q": "Inference. The process of inference is as follows: a proposition "p" is asserted, and a proposition "p implies q" is asserted, and then as a sequel the proposition "q" is asserted. The trust in inference is the belief that if the two former assertions are not in error, the final assertion is not in error. Accordingly, whenever, in symbols, where p and q have of course special determination " "⊦p" and "⊦(p ⊃ q)" " have occurred, then "⊦q" will occur if it is desired to put it on record. The process of the inference cannot be reduced to symbols. Its sole record is the occurrence of "⊦q". ... An inference is the dropping of a true premiss; it is the dissolution of an implication". 
In other words, in a long "string" of inferences, after each inference we can detach the "consequent" "⊦q" from the symbol string "⊦p, ⊦(p⊃q)" and not carry these symbols forward in an ever-lengthening string of symbols. The three traditional "laws" (principles) of thought: Russell goes on to assert other principles, of which the above logical principle is "only one". He asserts that "some of these must be granted before any argument or proof becomes possible. When some of them have been granted, others can be proved." Of these various "laws" he asserts that "for no very good reason, three of these principles have been singled out by tradition under the name of 'Laws of Thought'. And these he lists as follows: "(1) The law of identity: 'Whatever is, is.' (2) The law of contradiction: 'Nothing can both be and not be.' (3) The law of excluded middle: 'Everything must either be or not be.'" Rationale: Russell opines that "the name 'laws of thought' is ... misleading, for what is important is not the fact that we think in accordance with these laws, but the fact that things behave in accordance with them; in other words, the fact that when we think in accordance with them we think truly." But he rates this a "large question" and expands it in two following chapters where he begins with an investigation of the notion of "a priori" (innate, built-in) knowledge, and ultimately arrives at his acceptance of the Platonic "world of universals". In his investigation he comes back now and then to the three traditional laws of thought, singling out the law of contradiction in particular: "The conclusion that the law of contradiction is a law of thought is nevertheless erroneous ... [rather], the law of contradiction is about things, and not merely about thoughts ... a fact concerning the things in the world." His argument begins with the statement that the three traditional laws of thought are "samples of self-evident principles". 
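Russell's reading of "p implies q" as NOT-p OR q, the detachment step of inference, and the three traditional laws can all be checked mechanically over the two truth values. The Python sketch below is an illustration; the helper name `implies` belongs to the sketch, not to PM's notation.

```python
from itertools import product

# Material implication: "p ⊃ q" defined as NOT-p OR q, per PM.
def implies(p, q):
    return (not p) or q

for p, q in product((False, True), repeat=2):
    # Modus ponens as a "rule" of detachment: from p and p ⊃ q, infer q.
    if p and implies(p, q):
        assert q
    # The three traditional "Laws of Thought" as tautologies:
    assert p == p               # identity: "whatever is, is"
    assert not (p and not p)    # contradiction: "nothing can both be and not be"
    assert p or not p           # excluded middle: "everything must either be or not be"
```

The detachment step is exactly what Russell describes: once p and p ⊃ q are both asserted, q may be asserted and the premisses dropped.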
For Russell the matter of "self-evident" merely introduces the larger question of how we derive our knowledge of the world. He cites the "historic controversy ... between the two schools called respectively 'empiricists' [Locke, Berkeley, and Hume] and 'rationalists' [Descartes and Leibniz]" (these philosophers are his examples). Russell asserts that the rationalists "maintained that, in addition to what we know by experience, there are certain 'innate ideas' and 'innate principles', which we know independently of experience"; to eliminate the possibility of babies having innate knowledge of the "laws of thought", Russell renames this sort of knowledge a priori. And while Russell agrees with the empiricists that "Nothing can be known to exist except by the help of experience", he also agrees with the rationalists that some knowledge is a priori, specifically "the propositions of logic and pure mathematics, as well as the fundamental propositions of ethics". This question of how such a priori knowledge can exist directs Russell to an investigation into the philosophy of Immanuel Kant, which after careful consideration he rejects as follows: "... there is one main objection which seems fatal to any attempt to deal with the problem of a priori knowledge by his method. The thing to be accounted for is our certainty that the facts must always conform to logic and arithmetic. ... Thus Kant's solution unduly limits the scope of a priori propositions, in addition to failing in the attempt at explaining their certainty". His objections to Kant then lead Russell to accept the 'theory of ideas' of Plato, "in my opinion ... one of the most successful attempts hitherto made"; he asserts that "... we must examine our knowledge of universals ... where we shall find that [this consideration] solves the problem of a priori knowledge."
Principia Mathematica (Part I: 1910 first edition, 1927 2nd edition) Unfortunately, Russell's "Problems" does not offer an example of a "minimum set" of principles that would apply to human reasoning, both inductive and deductive. But PM does at least provide an example set (but not the minimum; see Post below) that is sufficient for deductive reasoning by means of the propositional calculus (as opposed to reasoning by means of the more-complicated predicate calculus)—a total of 8 principles at the start of "Part I: Mathematical Logic". Each of the formulas ❋1.2 to ❋1.6 is a tautology (true no matter what the truth-value of p, q, r ... is). What is missing in PM's treatment is a formal rule of substitution; in his 1921 PhD thesis Emil Post fixes this deficiency (see Post below). In what follows the formulas are written in a more modern format than that used in PM; the names are those given in PM.
❋1.1 Anything implied by a true elementary proposition is true.
❋1.2 Principle of Tautology: (p ⋁ p) ⊃ p
❋1.3 Principle of [logical] Addition: q ⊃ (p ⋁ q)
❋1.4 Principle of Permutation: (p ⋁ q) ⊃ (q ⋁ p)
❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) [redundant]
❋1.6 Principle of [logical] Summation: (q ⊃ r) ⊃ ((p ⋁ q) ⊃ (p ⋁ r))
❋1.7 [logical NOT]: If p is an elementary proposition, ~p is an elementary proposition.
❋1.71 [logical inclusive OR]: If p and q are elementary propositions, (p ⋁ q) is an elementary proposition.
Russell sums up these principles with "This completes the list of primitive propositions required for the theory of deduction as applied to elementary propositions" (PM:97). Starting from these eight tautologies and a tacit use of the "rule" of substitution, PM then derives over a hundred different formulas, among which are the Law of Excluded Middle ❋2.11 and the Law of Contradiction ❋3.24 (this latter requiring a definition of logical AND symbolized by the modern ⋀: (p ⋀ q) =def ~(~p ⋁ ~q); PM uses the "dot" symbol ▪ for logical AND).
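That each of ❋1.2 through ❋1.6 is a tautology can be confirmed by exhausting the truth values. The PM star-names below are from the text; the lambda encodings are this sketch's reading of the formulas in modern notation.

```python
from itertools import product

def imp(p, q):
    # Material implication: "p ⊃ q" as NOT-p OR q.
    return (not p) or q

axioms = {
    "❋1.2 Tautology":   lambda p, q, r: imp(p or p, p),
    "❋1.3 Addition":    lambda p, q, r: imp(q, p or q),
    "❋1.4 Permutation": lambda p, q, r: imp(p or q, q or p),
    "❋1.5 Associative": lambda p, q, r: imp(p or (q or r), q or (p or r)),
    "❋1.6 Summation":   lambda p, q, r: imp(imp(q, r), imp(p or q, p or r)),
}
# Every axiom is true under all 8 assignments to p, q, r.
for name, f in axioms.items():
    assert all(f(p, q, r) for p, q, r in product((False, True), repeat=3)), name

# The derived laws hold as tautologies too (excluded middle, contradiction):
assert all((p or not p) and not (p and not p) for p in (False, True))
```

A truth-table check of this kind establishes that the axioms are tautologies; it does not by itself establish which axioms are derivable from which, a question settled only by exhibiting derivations (as PM does) or independence proofs (as Bernays did for ❋1.5).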
Ladd-Franklin (1914): "principle of exclusion" and the "principle of exhaustion" At about the same time (1912) that Russell and Whitehead were finishing the last volume of their Principia Mathematica and Russell was publishing his "The Problems of Philosophy", at least two logicians (Louis Couturat, Christine Ladd-Franklin) were asserting that the two "laws" (principles) of "contradiction" and "excluded middle" are necessary to specify "contradictories"; Ladd-Franklin renamed these the principles of exclusion and exhaustion. The following appears as a footnote on page 23 of Couturat 1914: "As Mrs. LADD-FRANKLIN has truly remarked (BALDWIN, Dictionary of Philosophy and Psychology, article "Laws of Thought"), the principle of contradiction is not sufficient to define contradictories; the principle of excluded middle must be added which equally deserves the name of principle of contradiction. This is why Mrs. LADD-FRANKLIN proposes to call them respectively the principle of exclusion and the principle of exhaustion, inasmuch as, according to the first, two contradictory terms are exclusive (the one of the other); and, according to the second, they are exhaustive (of the universe of discourse)." In other words, the creation of "contradictories" represents a dichotomy, i.e. the "splitting" of a universe of discourse into two classes (collections) that have the following two properties: they are (i) mutually exclusive and (ii) (collectively) exhaustive. In other words, no one thing (drawn from the universe of discourse) can simultaneously be a member of both classes (law of non-contradiction), but [and] every single thing (in the universe of discourse) must be a member of one class or the other (law of excluded middle). Post (1921): The propositional calculus is consistent and complete As part of his PhD thesis "Introduction to a general theory of elementary propositions" Emil Post proved "the system of elementary propositions of Principia [PM]" i.e.
its "propositional calculus" described by PM's first 8 "primitive propositions" to be consistent. The definition of "consistent" is this: that by means of the deductive "system" at hand (its stated axioms, laws, rules) it is impossible to derive (display) both a formula S and its contradictory ~S (i.e. its logical negation) (Nagel and Newman 1958:50). To demonstrate this formally, Post had to add a primitive proposition to the 8 primitive propositions of PM, a "rule" that specified the notion of "substitution" that was missing in the original PM of 1910. Given PM's tiny set of "primitive propositions" and the proof of their consistency, Post then proves that this system ("propositional calculus" of PM) is complete, meaning every possible truth table can be generated in the "system": "...every truth system has a representation in the system of Principia while every complete system, that is one having all possible truth tables, is equivalent to it. ... We thus see that complete systems are equivalent to the system of Principia not only in the truth table development but also postulationally. As other systems are in a sense degenerate forms of complete systems we can conclude that no new logical systems are introduced." A minimum set of axioms? The matter of their independence Then there is the matter of "independence" of the axioms. In his commentary before Post 1921, van Heijenoort states that Paul Bernays solved the matter in 1918 (but published in 1926) – the formula ❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) can be proved with the other four. As to what system of "primitive-propositions" is the minimum, van Heijenoort states that the matter was "investigated by Zylinski (1925), Post himself (1941), and Wernick (1942)" but van Heijenoort does not answer the question. 
Model theory versus proof theory: Post's proof Kleene (1967:33) observes that "logic" can be "founded" in two ways, first as a "model theory", or second by a formal "proof" or "axiomatic theory"; "the two formulations, that of model theory and that of proof theory, give equivalent results" (Kleene 1967:33). This foundational choice, and the equivalence of the two approaches, also applies to predicate logic (Kleene 1967:318). In his introduction to Post 1921, van Heijenoort observes that both the "truth-table and the axiomatic approaches are clearly presented". This matter of a proof of consistency both ways (by a model theory, by axiomatic proof theory) comes up in the more-congenial version of Post's consistency proof that can be found in Nagel and Newman 1958 in their chapter V "An Example of a Successful Absolute Proof of Consistency". In the main body of the text they use a model to achieve their consistency proof (they also state that the system is complete but do not offer a proof) (Nagel & Newman 1958:45–56). But their text promises the reader a proof that is axiomatic rather than relying on a model, and in the Appendix they deliver this proof based on the notions of a division of formulas into two classes K1 and K2 that are mutually exclusive and exhaustive (Nagel & Newman 1958:109–113). Gödel (1930): The first-order predicate calculus is complete The (restricted) "first-order predicate calculus" is the "system of logic" that adds to the propositional logic (cf Post, above) the notion of "subject-predicate" i.e. the subject x is drawn from a domain (universe) of discourse and the predicate is a logical function f(x): x as subject and f(x) as predicate (Kleene 1967:74). Although Gödel's proof involves the same notion of "completeness" as does the proof of Post, Gödel's proof is far more difficult; what follows is a discussion of the axiom set.
Completeness Kurt Gödel in his 1930 doctoral dissertation "The completeness of the axioms of the functional calculus of logic" proved that in this "calculus" (i.e. restricted predicate logic with or without equality) every formula is "either refutable or satisfiable", or what amounts to the same thing: every valid formula is provable, and therefore the logic is complete. Here is Gödel's definition of whether or not the "restricted functional calculus" is "complete": "... whether it actually suffices for the derivation of every logico-mathematical proposition, or whether, perhaps, it is conceivable that there are true propositions (which may be provable by means of other principles) that cannot be derived in the system under consideration." The first-order predicate calculus This particular predicate calculus is "restricted to the first order". To the propositional calculus it adds two special symbols that symbolize the generalizations "for all" and "there exists (at least one)" that extend over the domain of discourse. The calculus requires only the first notion "for all", but typically includes both: (1) the notion "for all x" or "for every x" is variously symbolized in the literature as (x), ∀x, Πx, etc., and (2) the notion of "there exists (at least one x)" variously symbolized as Ex, ∃x. The restriction is that the generalization "for all" applies only to the variables (objects x, y, z etc. drawn from the domain of discourse) and not to functions; in other words the calculus will permit ∀xf(x) ("for all creatures x, x is a bird") but not ∀f∀x(f(x)) [but if "equality" is added to the calculus it will permit ∀f:f(x); see below under Tarski]. Example: Let the predicate "function" f(x) be "x is a mammal", and the subject-domain (or universe of discourse) (cf Kleene 1967:84) be the category "bats": The formula ∀xf(x) yields the truth value "truth" (read: "For all instances x of objects 'bats', 'x is a mammal'" is a truth, i.e.
"All bats are mammals"); But if the instances of x are drawn from a domain "winged creatures" then ∀xf(x) yields the truth value "false" (i.e. "For all instances x of 'winged creatures', 'x is a mammal'" has a truth value of "falsity"; "Flying insects are mammals" is false); However over the broad domain of discourse "all winged creatures" (e.g. "birds" + "flying insects" + "flying squirrels" + "bats") we can assert ∃xf(x) (read: "There exists at least one winged creature that is a mammal'"; it yields a truth value of "truth" because the objects x can come from the category "bats" and perhaps "flying squirrels" (depending on how we define "winged"). But the formula yields "falsity" when the domain of discourse is restricted to "flying insects" or "birds" or both "insects" and "birds". Kleene remarks that "the predicate calculus (without or with equality) fully accomplishes (for first order theories) what has been conceived to be the role of logic" (Kleene 1967:322). A new axiom: Aristotle's dictum – "the maxim of all and none" This first half of this axiom – "the maxim of all" will appear as the first of two additional axioms in Gödel's axiom set. The "dictum of Aristotle" (dictum de omni et nullo) is sometimes called "the maxim of all and none" but is really two "maxims" that assert: "What is true of all (members of the domain) is true of some (members of the domain)", and "What is not true of all (members of the domain) is true of none (of the members of the domain)". 
The "dictum" appears in Boole 1854 a couple places: "It may be a question whether that formula of reasoning, which is called the dictum of Aristotle, de Omni et nullo, expresses a primary law of human reasoning or not; but it is no question that it expresses a general truth in Logic" (1854:4) But later he seems to argue against it: "[Some principles of] general principle of an axiomatic nature, such as the "dictum of Aristotle:" Whatsoever is affirmed or denied of the genus may in the same sense be affirmed or denied of any species included under that genus. ... either state directly, but in an abstract form, the argument which they are supposed to elucidate, and, so stating that argument, affirm its validity; or involve in their expression technical terms which, after definition, conduct us again to the same point, viz. the abstract statement of the supposed allowable forms of inference." But the first half of this "dictum" (dictum de omni) is taken up by Russell and Whitehead in PM, and by Hilbert in his version (1927) of the "first order predicate logic"; his (system) includes a principle that Hilbert calls "Aristotle's dictum" (x)f(x) → f(y) This axiom also appears in the modern axiom set offered by Kleene (Kleene 1967:387), as his "∀-schema", one of two axioms (he calls them "postulates") required for the predicate calculus; the other being the "∃-schema" f(y) ⊃ ∃xf(x) that reasons from the particular f(y) to the existence of at least one subject x that satisfies the predicate f(x); both of these requires adherence to a defined domain (universe) of discourse. Gödel's restricted predicate calculus To supplement the four (down from five; see Post) axioms of the propositional calculus, Gödel 1930 adds the dictum de omni as the first of two additional axioms. Both this "dictum" and the second axiom, he claims in a footnote, derive from Principia Mathematica. Indeed, PM includes both as ❋10.1 ⊦ ∀xf(x) ⊃ f(y) ["I.e. 
what is true in all cases is true in any one case" ("Aristotle's dictum", rewritten in more-modern symbols)] ❋10.2 ⊦∀x(p ⋁ f(x)) ⊃ (p ⋁ ∀xf(x)) [rewritten in more-modern symbols] The latter asserts that the logical sum (i.e. ⋁, OR) of a simple proposition p and a predicate ∀xf(x) implies the logical sum of each separately. But PM derives both of these from six primitive propositions of ❋9, which in the second edition of PM is discarded and replaced with four new "Pp" (primitive principles) of ❋8 (see in particular ❋8.2), and Hilbert derives the first from his "logical ε-axiom" in his 1927 and does not mention the second. How Hilbert and Gödel came to adopt these two as axioms is unclear. Also required are two more "rules" of detachment ("modus ponens") applicable to predicates. Tarski (1946): Leibniz's law Alfred Tarski in his 1946 (2nd edition) "Introduction to Logic and to the Methodology of the Deductive Sciences" cites a number of what he deems "universal laws" of the sentential calculus, three "rules" of inference, and one fundamental law of identity (from which he derives four more laws). The traditional "laws of thought" are included in his long listing of "laws" and "rules". His treatment is, as the title of his book suggests, limited to the "Methodology of the Deductive Sciences". Rationale: In his introduction (2nd edition) he observes that what began with an application of logic to mathematics has been widened to "the whole of human knowledge": "[I want to present] a clear idea of that powerful trend of contemporary thought which is concentrated about modern logic. This trend arose originally from the somewhat limited task of stabilizing the foundations of mathematics. In its present phase, however, it has much wider aims. For it seeks to create a unified conceptual apparatus which would supply a common basis for the whole of human knowledge."
Law of identity (Leibniz's law, equality)

To add the notion of "equality" to the "propositional calculus" (this new notion not to be confused with logical equivalence, symbolized by ↔, ⇄, "if and only if (iff)", "biconditional", etc.), Tarski (cf. pp. 54–57) symbolizes what he calls "Leibniz's law" with the symbol "=". This extends the domain (universe) of discourse and the types of functions to numbers and mathematical formulas (Kleene 1967:148ff, Tarski 1946:54ff). In a nutshell: given that "x has every property that y has", we can write "x = y", and this formula will have a truth value of "truth" or "falsity". Tarski states Leibniz's law as follows:

I. Leibniz's Law: x = y, if, and only if, x has every property which y has, and y has every property which x has.

He then derives some other "laws" from this law:

II. Law of Reflexivity: Everything is equal to itself: x = x. [Proven at PM ❋13.15]
III. Law of Symmetry: If x = y, then y = x. [Proven at PM ❋13.16]
IV. Law of Transitivity: If x = y and y = z, then x = z. [Proven at PM ❋13.17]
V. If x = z and y = z, then x = y. [Proven at PM ❋13.172]

Principia Mathematica defines the notion of equality as follows (in modern symbols); note that the generalization "for all" extends over predicate-functions f( ):

❋13.01. x = y =def ∀f:(f(x) → f(y)) ("This definition states that x and y are to be called identical when every predicate function satisfied by x is satisfied by y")

Hilbert 1927:467 adds only two axioms of equality: the first is x = x; the second is (x = y) → (f(x) → f(y)); the "for all f" is missing (or implied). Gödel 1930 defines equality similarly to PM ❋13.01. Kleene 1967 adopts the two from Hilbert 1927 plus two more (Kleene 1967:387).

George Spencer-Brown (1969): Laws of Form

George Spencer-Brown in his 1969 "Laws of Form" (LoF) begins by taking as given that "we cannot make an indication without drawing a distinction". This, therefore, presupposes the law of excluded middle.
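Tarski's Leibniz's law and the derived laws II–V quoted earlier can be checked mechanically once "every property" is restricted to a finite list of predicates. The sketch below is only an illustration under that severe restriction (all names are invented); the law as Tarski states it quantifies over all properties, not a finite list:

```python
# Sketch: "Leibniz equality" relative to a finite list of properties.
# With only finitely many properties, distinct objects can count as
# "equal" -- which shows why the law quantifies over *all* properties.

properties = [
    lambda n: n % 2 == 0,   # "is even"
    lambda n: n > 10,       # "is greater than ten"
    lambda n: n < 0,        # "is negative"
]

def leibniz_equal(x, y):
    # x = y iff x has every listed property y has, and vice versa
    return all(f(x) == f(y) for f in properties)

items = [2, 4, 7, 12]

for x in items:
    assert leibniz_equal(x, x)                       # II. Reflexivity
    for y in items:
        if leibniz_equal(x, y):
            assert leibniz_equal(y, x)               # III. Symmetry
        for z in items:
            if leibniz_equal(x, y) and leibniz_equal(y, z):
                assert leibniz_equal(x, z)           # IV. Transitivity
            if leibniz_equal(x, z) and leibniz_equal(y, z):
                assert leibniz_equal(x, y)           # V.

# Relative to this short property list, 2 and 4 are indiscernible:
assert leibniz_equal(2, 4)
```

The final assertion illustrates the limitation: 2 and 4 agree on all three listed properties and so come out "equal", whereas genuine identity in the sense of ❋13.01 requires agreement on every predicate function.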
He then goes on to define two axioms, which describe how distinctions (a "boundary") and indications (a "call") work:

Axiom 1. The law of calling: The value of a call made again is the value of the call.
Axiom 2. The law of crossing: The value of a (boundary) crossing made again is not the value of the crossing.

These axioms bear a resemblance to the "law of identity" and the "law of non-contradiction", respectively. However, the law of identity is proven as a theorem (Theorem 4.5 in "Laws of Form") within the framework of LoF. In general, LoF can be reinterpreted as first-order logic, propositional logic, or second-order logic by assigning specific interpretations to the symbols and values of LoF.

Contemporary developments

All of the above "systems of logic" are considered to be "classical", meaning that propositions and predicate expressions are two-valued, with either the truth value "truth" or "falsity" but not both (Kleene 1967:8 and 83). While intuitionistic logic falls into the "classical" category, it objects to extending the "for all" operator to the Law of Excluded Middle; it allows instances of the "Law", but not its generalization to an infinite domain of discourse.

Intuitionistic logic

Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.

Paraconsistent logic

Paraconsistent logic refers to so-called contradiction-tolerant logical systems in which a contradiction does not necessarily result in trivialism. In other words, the principle of explosion is not valid in such logics. Some (namely the dialetheists) argue that the law of non-contradiction is denied by dialetheic logic.
They are motivated by certain paradoxes which seem to imply a limit of the law of non-contradiction, namely the liar paradox. In order to avoid a trivial logical system and still allow certain contradictions to be true, dialetheists will employ a paraconsistent logic of some kind.

Three-valued logic

See the main article Three-valued logic.

Modal propositional calculi

These "calculi" (cf Kleene 1967:49) include the symbols □A, meaning "A is necessary", and ◊A, meaning "A is possible". Kleene states that: "These notions enter in domains of thinking where there are understood to be two different kinds of "truth", one more universal or compelling than the other ... A zoologist might declare that it is impossible that salamanders or any other living creatures can survive fire; but possible (though untrue) that unicorns exist, and possible (though improbable) that abominable snowmen exist."

Fuzzy logic

Fuzzy logic is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact.

See also

Algebra of concepts

References

Aristotle, "The Categories", Harold P. Cooke (trans.), pp. 1–109 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
Aristotle, "On Interpretation", Harold P. Cooke (trans.), pp. 111–179 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
George Boole, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan, 1854. Reprinted with corrections, Dover Publications, New York, NY, 1958.
Louis Couturat (Lydia Gillingham Robinson, trans.), 1914, The Algebra of Logic, The Open Court Publishing Company, Chicago and London.
Kurt Gödel, 1944, "Russell's mathematical logic", in Kurt Gödel: Collected Works Volume II, Oxford University Press, New York, NY.
Sir William Hamilton, 9th Baronet (Henry L. Mansel and John Veitch, eds.), 1860, Lectures on Metaphysics and Logic, in Two Volumes. Vol. II. Logic, Boston: Gould and Lincoln.
Stephen Cole Kleene, 1967, Mathematical Logic, reprinted 2002, Dover Publications, Inc., Mineola, NY (pbk.).
Ernest Nagel, James R. Newman, 1958, Gödel's Proof, New York University Press, LCCN: 58-5610.
Bertrand Russell, The Problems of Philosophy (1912), Oxford University Press, New York, 1997.
Arthur Schopenhauer, The World as Will and Representation, Volume 2, Dover Publications, Mineola, New York, 1966.
Alfred Tarski, 1946 (second edition), republished 1995, Introduction to Logic and to the Methodology of Deductive Sciences (Olaf Helmer, trans.), Dover Publications, Inc., New York (pbk.).
Jean van Heijenoort, 1967, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA (pbk.). Includes:
Emil Post, 1921, "Introduction to a general theory of elementary propositions", with commentary by van Heijenoort, pp. 264ff.
David Hilbert, 1927, "The foundations of mathematics", with commentary by van Heijenoort, pp. 464ff.
Kurt Gödel, 1930a, "The completeness of the axioms of the functional calculus of logic", with commentary by van Heijenoort, pp. 592ff.
Alfred North Whitehead, Bertrand Russell, Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56 (2nd edition), Cambridge University Press, 1962.

External links

James Danaher, "The Laws of Thought", The Philosopher, Volume LXXXXII No. 1
Peter Suber, "Non-Contradiction and Excluded Middle", Earlham College
Verificationism
Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is meaningful only if it is either empirically verifiable (can be confirmed through the senses) or a tautology (true by virtue of its own meaning or its own logical form). Verificationism rejects statements of metaphysics, theology, ethics and aesthetics as meaningless in conveying truth value or factual content, though they may be meaningful in influencing emotions or behavior. Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge. The verifiability criterion underwent various revisions throughout the 1920s to 1950s. However, by the 1960s, it was deemed to be irreparably untenable. Its abandonment would eventually precipitate the collapse of the broader logical positivist movement.

Origins

The roots of verificationism may be traced to at least the 19th century, in philosophical principles that aim to ground scientific theory in verifiable experience, such as C. S. Peirce's pragmatism and the work of conventionalist Pierre Duhem, who fostered instrumentalism. Verificationism, as a principle, would be conceived in the 1920s by the logical positivists of the Vienna Circle, who sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as empirical science. The movement established grounding in the empiricism of David Hume, Auguste Comte and Ernst Mach, and the positivism of the latter two, borrowing perspectives from Immanuel Kant and defining their exemplar of science in Einstein's general theory of relativity. Ludwig Wittgenstein's Tractatus, published in 1921, established the theoretical foundations for the verifiability criterion of meaning.
Building upon Gottlob Frege's work, the analytic–synthetic distinction was also reformulated, reducing logic and mathematics to semantical conventions. This would render logical truths (being unverifiable by the senses) tenable under verificationism, as tautologies.

Revisions

Logical positivists within the Vienna Circle recognized quickly that the verifiability criterion was too stringent. Specifically, universal generalizations were noted to be empirically unverifiable, rendering vital domains of science and reason, including scientific hypothesis, meaningless under verificationism, absent revisions to its criterion of meaning. Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to redefine universal generalizations as tautological rules, thereby reconciling them with the existing criterion, Hahn argued that the criterion itself should be weakened to accommodate non-conclusive verification. Neurath, within the liberal wing, proposed the adoption of coherentism, though it was challenged by Schlick's foundationalism. However, his physicalism would eventually be adopted over Mach's phenomenalism by most members of the Vienna Circle. In 1936, Carnap sought a switch from verification to confirmation. Carnap's confirmability criterion (confirmationism) would not require conclusive verification (thus accommodating universal generalizations) but would allow partial testability to establish degrees of confirmation on a probabilistic basis. Carnap never succeeded in finalising his thesis, despite employing abundant logical and mathematical tools for the purpose: in all of his formulations, a universal law's degree of confirmation was zero. In Language, Truth and Logic, published that year, A.
J. Ayer distinguished between strong and weak verification. This system espoused conclusive verification, yet allowed for probabilistic inclusion where verifiability is inconclusive. He also distinguished theoretical from practical verifiability, proposing that statements that are verifiable in principle should be meaningful, even if unverifiable in practice.

Criticisms

Philosopher Karl Popper, a graduate of the University of Vienna, though not a member within the ranks of the Vienna Circle, was among the foremost critics of verificationism. He identified three fundamental deficiencies in verifiability as a criterion of meaning:

Verificationism rejects universal generalizations, such as "all swans are white," as meaningless. Popper argues that while universal statements cannot be verified, they can be proven false, a foundation on which he was to propose his criterion of falsifiability.

Verificationism allows existential statements, such as "unicorns exist", to be classified as scientifically meaningful, despite the absence of any definitive method to show that they are false (one could possibly find a unicorn somewhere not yet examined).

Verificationism is meaningless by virtue of its own criterion, because it cannot itself be empirically verified. The concept is thus self-defeating.

Popper regarded scientific hypotheses as never completely verifiable, nor confirmable under Carnap's thesis. He also considered metaphysical, ethical and aesthetic statements to be often rich in meaning and important in the origination of scientific theories. Other philosophers also voiced their own criticisms of verificationism: In his 1951 article "Two Dogmas of Empiricism", Willard Van Orman Quine found no suitable explanation of the concept of analyticity, arguing that the available accounts ultimately reduce to circular reasoning. This served to uproot the analytic/synthetic division pivotal to verificationism.
Carl Hempel (1950, 1951) demonstrated that the verifiability criterion was not justifiable, in that it was too strong to accommodate key proceedings within science, such as general laws and limits in infinite sequences. In 1958, Norwood Hanson explained that even direct observations are never truly neutral, in that they are laden with theory, i.e. influenced by a system of presuppositions that act as an interpretative framework for those observations. This served to destabilize the foundations of empiricism by challenging the infallibility and objectivity of empirical observation. Thomas Kuhn's landmark book of 1962, The Structure of Scientific Revolutions, which discussed paradigm shifts in fundamental physics, critically undermined confidence in scientific foundationalism, a theory commonly, if erroneously, attributed to verificationism.

Falsifiability

In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived were intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (as commonly misunderstood), but as a criterion to demarcate scientific statements from non-scientific statements. Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism they would be disqualified immediately as meaningless. In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Appealing to the general theory of relativity and its predicted effects on gravitational lensing, it was evident to Popper that Einstein's theories carried significantly greater predictive risk than Freud's of being falsified by observation.
Though Freud found ample confirmation of his theories in observations, Popper would note that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He would therefore conclude that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science. Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science, it would receive acclamatory adoption among scientists. Logical positivists too adopted the criterion, even as their movement ran its course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.

Legacy

In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes". Logical positivism's fall heralded postpositivism, in which Popper's view of human knowledge as hypothetical, continually growing and open to change ascended, and verificationism became mostly maligned in academic circles. In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s, was asked what he saw as its main defects, and answered that "nearly all of it was false". However, he soon said that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to ones of language and meaning. In 1977, Ayer had noted: In the late 20th and early 21st centuries, the general concept of verification criteria, in forms that differed from those of the logical positivists, was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.
See also

Epistemic theories of truth
Newton's flaming laser sword
Semantic anti-realism (epistemology)
Wissenschaft
Wissenschaft ("knowledgeship") is a German-language term that embraces scholarship, research, study, higher education, and academia. Wissenschaft translates exactly into many other languages, e.g. vetenskap in Swedish or nauka in Polish, but there is no exact translation in modern English. The common translation "science" can be misleading, depending on the context, because Wissenschaft equally includes the humanities (Geisteswissenschaft), and sciences and humanities are mutually exclusive categories in modern English. Wissenschaft includes humanities such as history, anthropology, or the arts (the study of literature, visual arts, or music) at the same level as sciences such as chemistry or psychology. Wissenschaft incorporates scientific and non-scientific inquiry, learning, knowledge, and scholarship, and does not necessarily imply empirical research.

History

Before Immanuel Kant published his Critique of Judgment in 1790, "schöne Wissenschaft" was highly regarded. "Schöne Wissenschaft" included poetry, rhetoric, and other subjects that were meant to promote an understanding of truth, beauty, and goodness. Kant argued that aesthetic judgments were not an area of systematic knowledge, and therefore were outside the realm of Wissenschaft.

Compared to the term "science"

Although Wissenschaft and science were roughly comparable words in previous centuries, the word science in English "has narrowed its meaning incomparably, whereas Wissenschaft...has retained its broad meaning". In modern English, the word science refers to systematically acquired, objective knowledge that is about a particular subject (the workings of the natural world, including the people in it) and produced through a particular methodology (the scientific method), in a progressive, iterative process that builds on previous knowledge. Wissenschaft, by contrast, encompasses both humanities and sciences, and both knowledge of objects as well as truths, such as what it means to be good.
The difficulties of being precise about knowledge are one reason why English is not considered well-suited for discussions about epistemology, and terms from other languages, notably Latin and German, are commonly used. Some 19th-century Americans visiting German universities interpreted Wissenschaft as meaning "pure science," untainted by social purposes and opposed to the liberal arts. Some contemporary scientists and philosophers interpret Wissenschaft as meaning any true knowledge or successful method, including philosophical, mathematical, and logical knowledge and methods.

See also

Phrases employing this term include the following:
Wissenschaft des Judentums, the "scholarship of Judaism," a 19th-century scholarly movement
Die fröhliche Wissenschaft, the title of a book written by Friedrich Nietzsche and first published in 1882
Bildwissenschaft, an academic discipline in the German-speaking world associated with visual studies and art history

External links

Philosophy is Science
Teleology
Teleology (from Greek telos, "end" or "purpose", and logos, "explanation" or "reason") or finality is a branch of causality giving the reason or an explanation for something as a function of its end, its purpose, or its goal, as opposed to as a function of its cause. James Wood, in his Nuttall Encyclopaedia, explained the meaning of teleology as "the doctrine of final causes, particularly the argument for the being and character of God from the being and character of His works; that the end reveals His purpose from the beginning, the end being regarded as the thought of God at the beginning, or the universe viewed as the realisation of Him and His eternal purpose." A purpose that is imposed by human use, such as the purpose of a fork to hold food, is called extrinsic. Natural teleology, common in classical philosophy, though controversial today, contends that natural entities also have intrinsic purposes, regardless of human use or opinion. For instance, Aristotle claimed that an acorn's intrinsic telos is to become a fully grown oak tree. Though ancient atomists rejected the notion of natural teleology, teleological accounts of non-personal or non-human nature were explored and often endorsed in ancient and medieval philosophies, but fell into disfavor during the modern era (1600–1900).

History

In Western philosophy, the term and concept of teleology originated in the writings of Plato and Aristotle. Aristotle's "four causes" give special place to the telos or "final cause" of each thing. In this, he followed Plato in seeing purpose in both human and nonhuman nature.

Etymology

The word teleology combines Greek telos ("end" or "purpose") and -logia ("study of"). The German philosopher Christian Wolff coined the term, as teleologia (Latin), in his work Philosophia rationalis, sive logica (1728).

Platonic

In Plato's dialogue Phaedo, Socrates argues that true explanations for any given physical phenomenon must be teleological.
He bemoans those who fail to distinguish between a thing's necessary and sufficient causes, which he identifies respectively as material and final causes: Socrates here argues that while the materials that compose a body are necessary conditions for its moving or acting in a certain way, they nevertheless cannot be the sufficient condition for its moving or acting as it does. For example, if Socrates is sitting in an Athenian prison, the elasticity of his tendons is what allows him to be sitting, and so a physical description of his tendons can be listed as necessary conditions or auxiliary causes of his act of sitting. However, these are only necessary conditions of Socrates' sitting. To give a physical description of Socrates' body is to say that Socrates is sitting, but it does not give any idea why it came to be that he was sitting in the first place. To say why he was sitting, rather than not sitting, it is necessary to explain what it is about his sitting that is good, for all things brought about (i.e., all products of actions) are brought about because the actor saw some good in them. Thus, to give an explanation of something is to determine what about it is good. Its goodness is its actual cause: its purpose, telos or "reason for which".

Aristotelian

Aristotle argued that Democritus was wrong to attempt to reduce all things to mere necessity, because doing so neglects the aim, order, and "final cause" which brings about these necessary conditions: In Physics, using hylomorphic theory, Aristotle rejects Plato's assumption that the universe was created by an intelligent designer using eternal forms as his model.
For Aristotle, natural ends are produced by "natures" (principles of change internal to living things), and natures, Aristotle argued, do not deliberate: These Platonic and Aristotelian arguments ran counter to those presented earlier by Democritus and later by Lucretius, both of whom were supporters of what is now often called accidentalism:

Modern philosophy

The chief instance, and the largest polemic morass, of the teleological viewpoint in modern cosmology and ontology is the teleological argument that posits an intelligent designer as a god.

Postmodern philosophy

Teleological-based "grand narratives" are renounced by the postmodern tradition, where teleology may be viewed as reductive, exclusionary, and harmful to those whose stories are diminished or overlooked. Against this postmodern position, Alasdair MacIntyre has argued that a narrative understanding of oneself, of one's capacity as an independent reasoner, one's dependence on others and on the social practices and traditions in which one participates, all tend towards an ultimate good of liberation. Social practices may themselves be understood as teleologically oriented to internal goods; for example, practices of philosophical and scientific inquiry are teleologically ordered to the elaboration of a true understanding of their objects. MacIntyre's After Virtue (1981) famously dismissed the naturalistic teleology of Aristotle's "metaphysical biology", but he has cautiously moved from that book's account of a sociological teleology toward an exploration of what remains valid in a more traditional teleological naturalism.

Ethics

Teleology significantly informs the study of ethics, such as in:

Business ethics: People in business commonly think in terms of purposeful action, as in, for example, management by objectives.
Teleological analysis of business ethics leads to consideration of the full range of stakeholders in any business decision, including the management, the staff, the customers, the shareholders, the country, humanity and the environment.

Medical ethics: Teleology provides a moral basis for the professional ethics of medicine, as physicians are generally concerned with outcomes and must therefore know the telos of a given treatment paradigm.

Consequentialism

The broad spectrum of consequentialist ethics (of which utilitarianism is a well-known example) focuses on the result or consequences, with such principles as John Stuart Mill's "principle of utility": "the greatest good for the greatest number". This principle is thus teleological, though in a broader sense than is elsewhere understood in philosophy. In the classical notion, teleology is grounded in the inherent nature of things themselves, whereas in consequentialism, teleology is imposed on nature from outside by the human will. Consequentialist theories justify inherently what most people would call evil acts by their desirable outcomes, if the good of the outcome outweighs the bad of the act. So, for example, a consequentialist theory would say it was acceptable to kill one person in order to save two or more other people. These theories may be summarized by the maxim "the end justifies the means."

Deontology

Consequentialism stands in contrast to the more classical notions of deontological ethics, of which examples include Immanuel Kant's categorical imperative and Aristotle's virtue ethics, although formulations of virtue ethics are also often consequentialist in derivation. In deontological ethics, the goodness or badness of individual acts is primary, and a larger, more desirable goal is insufficient to justify bad acts committed on the way to that goal, even if the bad acts are relatively minor and the goal is major (like telling a small lie to prevent a war and save millions of lives).
In requiring all constituent acts to be good, deontological ethics is much more rigid than consequentialism, which varies by circumstance. Practical ethics are usually a mix of the two. For example, Mill also relies on deontic maxims to guide practical behavior, but they must be justifiable by the principle of utility.

Economics

A teleology of human aims played a crucial role in the work of economist Ludwig von Mises, especially in the development of his science of praxeology. Mises believed that an individual's action is teleological because it is governed by the existence of their chosen ends. In other words, individuals select what they believe to be the most appropriate means to achieve a sought-after goal or end. Mises also stressed that, with respect to human action, teleology is not independent of causality: "No action can be devised and ventured upon without definite ideas about the relation of cause and effect, teleology presupposes causality." Assuming reason and action to be predominantly influenced by ideological credence, Mises derived his portrayal of human motivation from Epicurean teachings, insofar as he assumes "atomistic individualism, teleology, and libertarianism, and defines man as an egoist who seeks a maximum of happiness" (i.e. the ultimate pursuit of pleasure over pain). "Man strives for," Mises remarks, "but never attains the perfect state of happiness described by Epicurus." Furthermore, expanding upon the Epicurean groundwork, Mises formalized his conception of pleasure and pain by assigning each a specific meaning, allowing him to extrapolate his conception of attainable happiness to a critique of liberal versus socialist ideological societies. It is there, in his application of Epicurean belief to political theory, that Mises flouts Marxist theory, considering labor to be one of many of man's "pains", a consideration which positioned labor as a violation of his original Epicurean assumption of man's manifest hedonistic pursuit.
From here he further postulates a critical distinction between introversive labor and extroversive labor, further divaricating from basic Marxist theory, in which Marx hails labor as man's "species-essence", or his "species-activity".

Science

In modern science, explanations that rely on teleology are often, but not always, avoided, either because they are unnecessary or because whether they are true or false is thought to be beyond the ability of human perception and understanding to judge. But using teleology as an explanatory style, in particular within evolutionary biology, is still controversial. Since the Novum Organum of Francis Bacon, teleological explanations in physical science have tended to be deliberately avoided in favor of a focus on material and efficient explanations, although some recent accounts of quantum phenomena make use of teleology. Final and formal causation came to be viewed as false or too subjective. Nonetheless, some disciplines, in particular within evolutionary biology, continue to use language that appears teleological in describing natural tendencies towards certain end conditions. Some suggest, however, that these arguments ought to be, and practicably can be, rephrased in non-teleological forms; others hold that teleological language cannot always be easily expunged from descriptions in the life sciences, at least within the bounds of practical pedagogy. Contemporary philosophers and scientists still debate whether teleological axioms are useful or accurate in proposing modern philosophies and scientific theories. An example of the reintroduction of teleology into modern language is the notion of an attractor. Another instance is when Thomas Nagel (2012), though not a biologist, proposed a non-Darwinian account of evolution that incorporates impersonal and natural teleological laws to explain the existence of life, consciousness, rationality, and objective value.
Regardless, the accuracy can also be considered independently from the usefulness: it is a common experience in pedagogy that a minimum of apparent teleology can be useful in thinking about and explaining Darwinian evolution even if there is no true teleology driving evolution. Thus it is easier to say that evolution "gave" wolves sharp canine teeth because those teeth "serve the purpose of" predation regardless of whether there is an underlying non-teleological reality in which evolution is not an actor with intentions. In other words, because human cognition and learning often rely on the narrative structure of stories – with actors, goals, and immediate (proximate) rather than ultimate (distal) causation (see also proximate and ultimate causation) – some minimal level of teleology might be recognized as useful or at least tolerable for practical purposes even by people who reject its cosmological accuracy. Its accuracy is upheld by Barrow and Tipler (1986), who cite such teleologists as Max Planck and Norbert Wiener as significant for scientific endeavor. Biology Apparent teleology is a recurring issue in evolutionary biology, much to the consternation of some writers. Statements implying that nature has goals, for example where a species is said to do something "in order to" achieve survival, appear teleological, and therefore invalid. Usually, it is possible to rewrite such sentences to avoid the apparent teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology even if that is not the intention. John Reiss argues that evolutionary biology can be purged of such teleology by rejecting the analogy of natural selection as a watchmaker. Other arguments against this analogy have also been promoted by writers such as Richard Dawkins.
Some authors, like James Lennox, have argued that Darwin was a teleologist, while others, such as Michael Ghiselin, describe this claim as a myth promoted by misinterpretations of his discussions and emphasize the distinction between using teleological metaphors and being teleological. The biologist and philosopher Francisco Ayala has argued that all statements about processes can be trivially translated into teleological statements, and vice versa, but that teleological statements are more explanatory and cannot be disposed of. Karen Neander has argued that the modern concept of biological 'function' is dependent upon selection. So, for example, it is not possible to say that anything that simply winks into existence without going through a process of selection has functions. We decide whether an appendage has a function by analysing the process of selection that led to it. Therefore, any talk of functions must be posterior to natural selection and function cannot be defined in the manner advocated by Reiss and Dawkins. Ernst Mayr states that "adaptedness ... is an a posteriori result rather than an a priori goal-seeking". Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand. For example, Simon Hugh Piper Maddrell writes that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection". Likewise, J. B. S. Haldane says, "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public." Cybernetics Cybernetics is the study of the communication and control of regulatory feedback both in living beings and machines, and in combinations of the two.
Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow had conceived of feedback mechanisms as lending a teleology to machinery. Wiener coined the term cybernetics to denote the study of "teleological mechanisms". In the cybernetic classification presented by Rosenblueth, Wiener, and Bigelow, teleology is feedback-controlled purpose. The classification system underlying cybernetics has been criticized by Frank Honywill George and Les Johnson, who cite the need for purposeful behavior to be externally observable in order to establish and validate goal-seeking behavior. In this view, the purpose of observing and observed systems is respectively distinguished by the system's subjective autonomy and objective control. See also References Notes Citations Further reading Barrow, John D., and Frank J. Tipler. 1986. The Anthropic Cosmological Principle. Espinoza, Miguel. "La finalité, le temps et les principes de la physique". Gotthelf, Allan. 1987. "Aristotle's Conception of Final Causality". In Philosophical Issues in Aristotle's Biology, edited by A. Gotthelf and J. G. Lennox. Cambridge: Cambridge University Press. Horkheimer, Max, and Theodor Adorno. Dialectic of Enlightenment. Johnson, Monte Ransome. 2005. Aristotle on Teleology. New York: Oxford University Press. Knight, Kelvin. 2007. Aristotelian Philosophy: Ethics and Politics from Aristotle to MacIntyre. New York: Polity Press. Lukács, Georg. History and Class Consciousness. MacIntyre, Alasdair. 2006. "First Principles, Final Ends, and Contemporary Philosophical Issues". The Tasks of Philosophy: Selected Essays 1, edited by A. MacIntyre. Cambridge: Cambridge University Press. Makin, Stephen. 2006. Metaphysics Book Theta, by Aristotle, with an introduction and commentary by S. Makin. New York: Oxford University Press. Marcuse, Herbert. Hegel's Ontology and the Theory of Historicity. Nissen, Lowell. 1997. Teleological Language in the Life Sciences. Rowman & Littlefield.
Dualism in cosmology
Dualism or dualistic cosmology is the moral or spiritual belief that two fundamental concepts exist, which often oppose each other. It is an umbrella term that covers a diversity of views from various religions, including both traditional religions and scriptural religions. Moral dualism is the belief in the great complement of, or conflict between, the benevolent and the malevolent. It simply implies that there are two moral opposites at work, independent of any interpretation of what might be "moral" and independent of how these may be represented. Moral opposites might, for example, exist in a worldview that has one god, more than one god, or none. By contrast, duotheism, bitheism or ditheism implies (at least) two gods. While bitheism implies harmony, ditheism implies rivalry and opposition, such as between good and evil, or light and dark, or summer and winter. For example, a ditheistic system could be one in which one god is a creator and the other a destroyer. In theology, dualism can also refer to the relationship between the deity and creation or the deity and the universe (see theistic dualism). That form of dualism is a belief shared in certain traditions of Christianity and Hinduism. Alternatively, in ontological dualism, the world is divided into two overarching categories. Within Chinese culture and philosophy, the opposition and combination of the universe's two basic principles are expressed as yin and yang and are a traditional foundational doctrine of Taoism, Confucianism and some Chinese Buddhist schools. Many myths and creation motifs with dualistic cosmologies have been described in ethnographic and anthropological literature. The motifs conceive the world as being created, organized, or influenced by two demiurges, culture heroes, or other mythological beings, who compete with each other or have a complementary function in creating, arranging or influencing the world. There is a huge diversity of such cosmologies.
In some cases, such as among the Chukchi, the beings collaborate rather than compete, and they contribute to the creation in a coequal way. In many other instances the two beings are not of the same importance or power (sometimes, one of them is even characterized as gullible). Sometimes they can be contrasted as good versus evil. They are often believed to be twins or at least brothers. Dualistic motifs in mythologies can be observed in all inhabited continents. Zolotarjov concludes that they cannot be explained by diffusion or borrowing but are rather of convergent origin. They are related to a dualistic organization of society (moieties); in some cultures, the social organization may have ceased to exist, but mythology preserves the memory in more and more disguised ways. Moral dualism Moral dualism is the belief in the great complement or conflict between the benevolent and the malevolent. Like ditheism/bitheism (see below), moral dualism does not imply the absence of monist or monotheistic principles. Moral dualism simply implies that there are two moral opposites at work, independent of any interpretation of what might be "moral" and—unlike ditheism/bitheism—independent of how these may be represented. For example, Mazdaism (Mazdean Zoroastrianism) is both dualistic and monotheistic (but not monist by definition) since in that philosophy God—the Creator—is purely good, and the antithesis—which is also uncreated—is an absolute one. Mandaeism is monotheistic and Gnostic; in its cosmology, the World of Light, which is good, is contrasted with the World of Darkness or underworld (alma d-hšuka), which is evil. Zurvanism (Zurvanite Zoroastrianism) and Manichaeism are representative of dualistic and monist philosophies since each has a supreme and transcendental First Principle from which the two equal-but-opposite entities then emanate. This is also true for the suppressed Christian gnostic religions, such as Bogomils, Catharism, and so on.
More complex forms of monist dualism also exist, for instance in Hermeticism, where Nous "thought"—that is described to have created man—brings forth both good and evil, dependent on interpretation, whether it receives prompting from the God or from the Demon. Duality with pluralism is considered a logical fallacy. History Moral dualism began as a theological belief. Dualism was first seen implicitly in Egyptian religious beliefs by the contrast of the gods Set (disorder, death) and Osiris (order, life). The first explicit conception of dualism came from the Ancient Persian religion of Zoroastrianism around the mid-fifth century BC. Zoroastrianism is a monotheistic religion that believes that Ahura Mazda is the eternal creator of all good things. Any violations of Ahura Mazda's order arise from druj, which is everything uncreated. From this comes a significant choice for humans to make. Either they fully participate in human life for Ahura Mazda or they do not and give druj power. Personal dualism is even more distinct in the beliefs of later religions. The religious dualism of Christianity between good and evil is not a perfect dualism as God (good) will inevitably destroy Satan (evil). Early Christian dualism is largely based on Platonic Dualism (See: Neoplatonism and Christianity). There is also a personal dualism in Christianity with a soul-body distinction based on the idea of an immaterial Christian soul. Duotheism, bitheism, ditheism When used with regards to multiple gods, dualism may refer to duotheism, bitheism, or ditheism. Although ditheism/bitheism imply moral dualism, they are not equivalent: ditheism/bitheism implies (at least) two gods, while moral dualism does not necessarily imply theism (theos = god) at all. 
Both bitheism and ditheism imply a belief in two equally powerful gods with complementary or antonymous properties; however, while bitheism implies harmony, ditheism implies rivalry and opposition, such as between good and evil, bright and dark, or summer and winter. For example, a ditheistic system would be one in which one god is creative, the other is destructive (cf. theodicy). In the original conception of Zoroastrianism, for example, Ahura Mazda was the spirit of ultimate good, while Ahriman (Angra Mainyu) was the spirit of ultimate evil. In a bitheistic system, by contrast, where the two deities are not in conflict or opposition, one could be male and the other female (cf. duotheism). One well-known example of a bitheistic or duotheistic theology based on gender polarity is found in the neopagan religion of Wicca. In Wicca, dualism is represented in the belief in a god and a goddess as a dual partnership in ruling the universe. This is centered on the worship of a divine couple, the Moon Goddess and the Horned God, who are regarded as lovers. However, there is also a ditheistic theme within traditional Wicca, as the Horned God has dual aspects of bright and dark (relating to day/night and summer/winter), expressed as the Oak King and the Holly King, who in Wiccan myth and ritual are said to engage in battle twice a year for the hand of the Goddess, resulting in the changing seasons. (Within Wicca, bright and dark do not correspond to notions of "good" and "evil" but are aspects of the natural world, much like yin and yang in Taoism.) Radical and mitigated dualism Radical Dualism, or absolute Dualism, posits two co-equal divine forces. Manichaeism conceives of two previously coexistent realms of light and darkness which become embroiled in conflict, owing to the chaotic actions of the latter.
Subsequently, certain elements of the light became entrapped within darkness; the purpose of material creation is to enact the slow process of extraction of these individual elements, at the end of which the kingdom of light will prevail over darkness. Manichaeism likely inherits this dualistic mythology from Zoroastrianism, in which the eternal spirit Ahura Mazda is opposed by his antithesis, Angra Mainyu; the two are engaged in a cosmic struggle, the conclusion of which will likewise see Ahura Mazda triumphant. The 'Hymn of the Pearl' included the belief that the material world corresponds to some sort of malevolent intoxication brought about by the powers of darkness to keep elements of the light trapped inside it in a state of drunken distraction. Mitigated Dualism is where one of the two principles is in some way inferior to the other. Such classical Gnostic movements as the Sethians conceived of the material world as being created by a lesser divinity than the true God that was the object of their devotion. The spiritual world is conceived of as being radically different from the material world, co-extensive with the true God, and the true home of certain enlightened members of humanity; thus, these systems were expressive of a feeling of acute alienation within the world, and their resultant aim was to allow the soul to escape the constraints presented by the physical realm. However, bitheistic and ditheistic principles are not always so easily contrastable, for instance in a system where one god is the representative of summer and drought and the other of winter and rain/fertility (cf. the mythology of Persephone). Marcionism, an early Christian sect, held that the Old and New Testaments were the work of two opposing gods: both were First Principles, but of different religions. Theistic dualism In theology, dualism can refer to the relationship between God and creation or God and the universe.
This form of dualism is a belief shared in certain traditions of Christianity and Hinduism. Zoroastrianism Zoroastrianism or "Mazdayasna" is one of the world's oldest continuously practiced religions, based on the teachings of the Iranian-speaking prophet Zoroaster. It has a dualistic cosmology of good and evil and an eschatology which predicts the ultimate conquest of evil by good. Zoroastrianism exalts an uncreated and benevolent deity of wisdom known as Ahura Mazda as its supreme being. Manichaeism Manichaeism was a major religion founded in the 3rd century AD by the Parthian prophet Mani, in the Sasanian Empire. Manichaeism taught an elaborate dualistic cosmology describing the struggle between a good, spiritual world of light, and an evil, material world of darkness. Through an ongoing process that takes place in human history, light is gradually removed from the world of matter and returned to the world of light, whence it came. Its beliefs were based on local Mesopotamian religious movements and Gnosticism. Christianity The dualism between God and Creation has existed as a central belief in multiple historical sects and traditions of Christianity, including Marcionism, Catharism, Paulicianism, and other forms of Gnostic Christianity. Christian dualism refers to the belief that God and creation are distinct, but interrelated through an indivisible bond. However, Gnosticism is a diverse, syncretistic religious movement consisting of various belief systems generally united in a belief in a distinction between a supreme, transcendent God and a blind, evil demiurge responsible for creating the material universe, thereby trapping the divine spark within matter. Gnosticism is not limited to Christianity, and also incorporates beliefs from other Abrahamic traditions, such as early Jewish sects. In sects like the Cathars and the Paulicians, this is a dualism between the material world, created by an evil god, and a moral god.
Historians divide Christian dualism into absolute dualism, which held that the good and evil gods were equally powerful, and mitigated dualism, which held that material evil was subordinate to the spiritual good. The belief, by Christian theologians who adhere to a libertarian or compatibilist view of free will, that free will separates humankind from God has also been characterized as a form of dualism. The theologian Leroy Stephens Rouner compares the dualism of Christianity with the dualism that exists in Zoroastrianism and the Samkhya tradition of Hinduism. The theological use of the word dualism dates back to 1700, in a book that describes the dualism between good and evil. The tolerance of dualism ranges widely among the different Christian traditions. As a monotheistic religion, the conflict between dualism and monism has existed in Christianity since its inception. The 1912 Catholic Encyclopedia describes that, in the Catholic Church, "the dualistic hypothesis of an eternal world existing side by side with God was of course rejected" by the thirteenth century, but mind–body dualism was not. The problem of evil is difficult to reconcile with absolute monism, and has prompted some Christian sects to veer towards dualism. Gnostic forms of Christianity were more dualistic, and some Gnostic traditions posited that the Devil was separate from God as an independent deity. The Christian dualists of the Byzantine Empire, the Paulicians, were seen as Manichean heretics by Byzantine theologians. This tradition of Christian dualism, founded by Constantine-Silvanus, argued that the universe was created through evil and separate from a moral God. Cathars The Cathars, a Christian sect in southern France, believed that there was a dualism between two gods, one representing good and the other representing evil. 
Whether or not the Cathari possessed direct historical influence from ancient Gnosticism is a matter of dispute, as the basic conceptions of Gnostic cosmology are to be found in Cathar beliefs (most distinctly in their notion of a lesser creator god), though unlike the second century Gnostics, they did not apparently place any special relevance upon knowledge (gnosis) as an effective salvific force. In any case, the Roman Catholic Church denounced the Cathars as heretics, and sought to crush the movement in the 13th century. The Albigensian Crusade was initiated by Pope Innocent III in 1208 to remove the Cathars from Languedoc in France, where they were known as Albigensians. The Inquisition, which began in 1233 under Pope Gregory IX, also targeted the Cathars. In Hinduism The Dvaita Vedanta school of Indian philosophy espouses a dualism between God and the universe by theorizing the existence of two separate realities. The first and the more important reality is that of Shiva or Shakti or Vishnu or Brahman. Shiva or Shakti or Vishnu is the supreme Self, God, the absolute truth of the universe, the independent reality. The second reality is that of the dependent but equally real universe that exists with its own separate essence. Everything that is composed of the second reality, such as the individual soul (Jiva), matter, etc., exists with its own separate reality. The distinguishing factor of this philosophy as opposed to Advaita Vedanta (the monistic conclusion of the Vedas) is that God takes on a personal role and is seen as a real eternal entity that governs and controls the universe. Because the existence of individuals is grounded in the divine, they are depicted as reflections, images or even shadows of the divine, but never in any way identical with the divine. Salvation therefore is described as the realization that all finite reality is essentially dependent on the Supreme.
Ontological dualism Alternatively, dualism can mean the tendency of humans to perceive and understand the world as being divided into two overarching categories. In this sense, it is dualistic when one perceives a tree as a thing separate from everything surrounding it. This form of ontological dualism exists in Taoism and Confucianism and is a foundational theory within Traditional Chinese medicine; these beliefs divide the universe into the complementary oppositions of yin and yang. In traditions such as classical Hinduism (Samkhya, Yoga, Vaisheshika and the later Vedanta schools, which accepted the theory of Gunas), Chinese Pure Land and Zen Buddhism or Islamic Sufism, a key to enlightenment is "transcending" this sort of dualistic thinking, without merely substituting dualism with monism or pluralism. In Chinese philosophy The opposition and combination of the universe's two basic principles of yin and yang is a large part of Chinese philosophy, and is an important feature of Taoism, both as a philosophy and as a religion, although the concept developed much earlier. Some argue that yin and yang were originally an earth and sky god, respectively. Some of the common associations with yang and yin, respectively, are: male and female, light and dark, active and passive, motion and stillness. Some scholars believe that the two ideas may have originally referred to two opposite sides of a mountain, facing towards and away from the sun. The yin and yang symbol actually has very little to do with Western dualism; instead it represents the philosophy of balance, where two opposites co-exist in harmony and are able to transmute into each other. In the yin-yang symbol there is a dot of yin in yang and a dot of yang in yin. In Taoism, this symbolizes the inter-connectedness of the opposite forces as different aspects of Tao, the First Principle. Contrast is needed to create a distinguishable reality, without which we would experience nothingness.
Therefore, the independent principles of yin and yang are actually dependent on one another for each other's distinguishable existence. The complementary dualistic concept seen in yin and yang represents the reciprocal interaction throughout nature, related to a feedback loop, where opposing forces do not exchange in opposition but instead exchange reciprocally to promote stabilization similar to homeostasis. An underlying principle in Taoism states that within every independent entity lies a part of its opposite. Within sickness lies health and vice versa. This is because all opposites are manifestations of the single Tao, and are therefore not independent from one another, but rather a variation of the same unifying force throughout all of nature. In other religions Samoyed peoples In a Nenets myth, Num and Nga collaborate and compete with each other, creating land; there are also other myths about competing-collaborating demiurges. Comparative studies of Kets and neighboring peoples Dualistic myths were also investigated in research that tried to compare the mythologies of Siberian peoples and settle the problem of their origins. Vyacheslav Ivanov and Vladimir Toporov compared the mythology of the Ket people with those of speakers of Uralic languages, assuming in these studies that there are modelling semiotic systems in the compared mythologies; they have also made typological comparisons. Among the possibly Uralic mythological analogies, those of the Ob-Ugric and Samoyedic peoples are mentioned. Some other discussed analogies (similar folklore motifs, and purely typological considerations, certain binary pairs in symbolics) may be related to a dualistic organization of society—some of such dualistic features can be found at these compared peoples.
It must be admitted that, for Kets, neither dualistic organization of society nor cosmological dualism has been researched thoroughly: if such features existed at all, they have either weakened or remained largely undiscovered; although there are some reports on division into two exogamous patrilinear moieties, folklore on conflicts of mythological figures, and also on cooperation of two beings in creating the land: the diving of the waterfowl. If we include dualistic cosmologies meant in a broad sense, not restricted to certain concrete motifs, then we find that they are much more widespread: they exist not only among some Siberian peoples, but there are examples in each inhabited continent. Chukchi A Chukchi myth and its variations report the creation of the world; in some variations, it is achieved by the collaboration of several beings (birds, collaborating in a coequal way; or the creator and the raven, collaborating in a coequal way; or the creator alone, using the birds only as assistants). Fuegians All three Fuegian tribes had dualistic myths about culture heroes. The Yámana have dualistic myths about the two brothers. They act as culture heroes, and sometimes stand in an antagonistic relation with each other, introducing opposite laws. Their figures can be compared to the Kwanyip-brothers of the Selk'nam. In general, the presence of dualistic myths in two compared cultures does not imply relatedness or diffusion necessarily. See also Didache – The Two Ways Duality Mind-body dualism Cosmotheism Evil twin Gnosticism Pantheism Nondualism Table of Opposites Trinity Yanantin (complementary dualism in Native South American culture) Footnotes Bibliography External links "Duality" entry in the UCLA Encyclopedia of Egyptology "Dualism in Philosophy and Religion" in the Dictionary of the History of Ideas
False dilemma
A false dilemma, also referred to as false dichotomy or false binary, is an informal fallacy based on a premise that erroneously limits what options are available. The source of the fallacy lies not in an invalid form of inference but in a false premise. This premise has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true. This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives, presenting the viewer with only two absolute choices when, in fact, there could be many. False dilemmas often have the form of treating two contraries, which may both be false, as contradictories, of which one is necessarily true. Various inferential schemes are associated with false dilemmas, for example, the constructive dilemma, the destructive dilemma or the disjunctive syllogism. False dilemmas are usually discussed in terms of deductive arguments, but they can also occur as defeasible arguments. The human liability to commit false dilemmas may be due to the tendency to simplify reality by ordering it through either-or-statements, which is to some extent already built into human language. This may also be connected to the tendency to insist on clear distinction while denying the vagueness of many common expressions. Definition A false dilemma is an informal fallacy based on a premise that erroneously limits what options are available. In its most simple form, called the fallacy of bifurcation, all but two alternatives are excluded. A fallacy is an argument, i.e. a series of premises together with a conclusion, that is unsound, i.e. not both valid and true. Fallacies are usually divided into formal and informal fallacies. Formal fallacies are unsound because of their structure, while informal fallacies are unsound because of their content. The problematic content in the case of the false dilemma has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true. 
This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives. Sometimes a distinction is made between a false dilemma and a false dichotomy. On this view, the term "false dichotomy" refers to the false disjunctive claim while the term "false dilemma" refers not just to this claim but to the argument based on this claim. Types Disjunction with contraries In its most common form, a false dilemma presents the alternatives as contradictories, while in truth they are merely contraries. Two propositions are contradictories if it has to be the case that one is true and the other is false. Two propositions are contraries if at most one of them can be true; this leaves open the option that both of them might be false, which is not possible in the case of contradictories. Contradictories follow the law of the excluded middle but contraries do not. For example, the sentence "the exact number of marbles in the urn is either 10 or not 10" presents two contradictory alternatives. The sentence "the exact number of marbles in the urn is either 10 or 11" presents two contrary alternatives: the urn could also contain 2 marbles or 17 marbles or... A common form of using contraries in false dilemmas is to force a choice between extremes on the agent: someone is either good or bad, rich or poor, normal or abnormal. Such cases ignore that there is a continuous spectrum between the extremes that is excluded from the choice. While false dilemmas involving contraries, i.e. exclusive options, are a very common form, this is just a special case: there are also arguments with non-exclusive disjunctions that are false dilemmas. For example, a choice between security and freedom does not involve contraries since these two terms are compatible with each other. Logical forms In logic, there are two main types of inferences known as dilemmas: the constructive dilemma and the destructive dilemma.
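The contrast between contradictories and contraries can be checked mechanically. The following sketch (in Python, not part of the original article; the ranges and predicates are illustrative choices) encodes the marble-urn example from the text: "10 or not 10" behaves as a pair of contradictories, while "10 or 11" behaves as a pair of contraries, since both disjuncts can be false at once.

```python
# Contradictories vs. contraries, using the text's marble-urn example.
# For each possible marble count n, evaluate both disjuncts of each claim.
counts = range(0, 20)  # illustrative range of possible counts

# "The number is either 10 or not 10": contradictories.
contradictory = [(n == 10, n != 10) for n in counts]

# "The number is either 10 or 11": contraries.
contrary = [(n == 10, n == 11) for n in counts]

# Contradictories: exactly one disjunct is true in every case
# (the law of the excluded middle holds).
assert all(a != b for a, b in contradictory)

# Contraries: never both true, but both can be false
# (e.g. the urn contains 2 marbles) -- the excluded middle fails.
assert all(not (a and b) for a, b in contrary)
assert any(not a and not b for a, b in contrary)
```

Treating the contrary pair as if it were contradictory is exactly what the false dilemma does: it silently discards the cases where both disjuncts are false.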
In their most simple form, they can be expressed in the following way: simple constructive: (P → Q), (R → Q), (P ∨ R); therefore Q. simple destructive: (P → Q), (P → R), (¬Q ∨ ¬R); therefore ¬P. The source of the fallacy is found in the disjunctive claim in the third premise, i.e. (P ∨ R) and (¬Q ∨ ¬R) respectively. The following is an example of a false dilemma with the simple constructive form: (1) "If you tell the truth, you force your friend into a social tragedy; and therefore, are an immoral person". (2) "If you lie, you are an immoral person (since it is immoral to lie)". (3) "Either you tell the truth, or you lie". Therefore "[y]ou are an immoral person (whatever choice you make in the given situation)". This example constitutes a false dilemma because there are other choices besides telling the truth and lying, like keeping silent. A false dilemma can also occur in the form of a disjunctive syllogism: disjunctive syllogism: (P ∨ Q), ¬P; therefore Q. In this form, the first premise, (P ∨ Q), is responsible for the fallacious inference. Lewis's trilemma is a famous example of this type of argument involving three disjuncts: "Jesus was either a liar, a lunatic, or Lord". By denying that Jesus was a liar or a lunatic, one is forced to draw the conclusion that he was God. But this leaves out various other alternatives, for example, that Jesus was a prophet. Deductive and defeasible arguments False dilemmas are usually discussed in terms of deductive arguments. But they can also occur as defeasible arguments. A valid argument is deductive if the truth of its premises ensures the truth of its conclusion. For a valid defeasible argument, on the other hand, it is possible for all its premises to be true and the conclusion to be false. The premises merely offer a certain degree of support for the conclusion but do not ensure it. In the case of a defeasible false dilemma, the support provided for the conclusion is overestimated since various alternatives are not considered in the disjunctive premise.
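The point that the fallacy lies in the premise rather than the inference can be verified by brute force. The sketch below (in Python, not part of the original article) enumerates all truth assignments for the standard simple constructive dilemma, with premises (P → Q), (R → Q), (P ∨ R) and conclusion Q: the inference itself is deductively valid, but when the disjunctive premise is false, as when an unlisted alternative such as keeping silent is taken, the premises never all hold and nothing supports the conclusion.

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# 1. Validity: no assignment of truth values makes all three premises
#    true while the conclusion Q is false.
for p, q, r in product([True, False], repeat=3):
    if implies(p, q) and implies(r, q) and (p or r):
        assert q  # the conclusion follows in every such case

# 2. The weak point: with P and R both false (a third option was taken),
#    the disjunctive premise (P or R) fails outright.
print(False or False)  # the third premise, evaluated at P = R = False -> False
```

This matches the text's diagnosis: a false dilemma is unsound not because the inference is invalid but because the disjunctive premise excludes viable alternatives.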
Explanation and avoidance Part of understanding fallacies involves going beyond logic to empirical psychology in order to explain why there is a tendency to commit or fall for the fallacy in question. In the case of the false dilemma, the tendency to simplify reality by ordering it through either-or statements may play an important role. This tendency is to some extent built into human language, which is full of pairs of opposites. This type of simplification is sometimes necessary to make decisions when there is not enough time to get a more detailed perspective. In order to avoid false dilemmas, the agent should become aware of additional options besides the prearranged alternatives. Critical thinking and creativity may be necessary to see through the false dichotomy and to discover new alternatives. Relation to distinctions and vagueness Some philosophers and scholars believe that "unless a distinction can be made rigorous and precise it isn't really a distinction". An exception is analytic philosopher John Searle, who called it an incorrect assumption that produces false dichotomies. Searle insists that "it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases." Similarly, when two options are presented, they are often, although not always, two extreme points on some spectrum of possibilities; this may lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be. Furthermore, the options in false dichotomies typically are presented as being collectively exhaustive, in which case the fallacy may be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic. 
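The mention of fuzzy logic can be made concrete with a minimal, hypothetical sketch (the income thresholds and function name are purely illustrative, not from the source): instead of the binary "rich or poor", each case gets a degree of membership in each category, dissolving the false dichotomy between the two extremes.

```python
# Hypothetical fuzzy-membership sketch: "rich" as a degree in [0, 1]
# rather than a yes/no category (thresholds are invented for illustration).

def membership_rich(income, low=20_000, high=200_000):
    """Degree to which an income counts as 'rich', via a linear ramp."""
    if income <= low:
        return 0.0
    if income >= high:
        return 1.0
    return (income - low) / (high - low)

for income in (10_000, 60_000, 110_000, 250_000):
    r = membership_rich(income)
    print(f"{income:>7}: rich={r:.2f}, poor={1 - r:.2f}")
```

On this picture someone can be 0.50 rich and 0.50 poor at once, which is exactly the middle ground that the "rich or poor" framing excludes.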
This issue arises from real dichotomies in nature; the most prevalent example is the occurrence of an event. It either happened or it did not happen. This ontology sets a logical construct that cannot be reasonably applied to epistemology. Examples False choice The presentation of a false choice often reflects a deliberate attempt to eliminate several options that may occupy the middle ground on an issue. A common argument against noise pollution laws involves a false choice. It might be argued that in New York City noise should not be regulated, because if it were, a number of businesses would be required to close. This argument assumes that, for example, a bar must be shut down to prevent disturbing levels of noise emanating from it after midnight. This ignores the fact that the law could require the bar to lower its noise levels, or install soundproofing structural elements to keep the noise from transmitting excessively onto others' properties. Black-and-white thinking In psychology, a phenomenon related to the false dilemma is "black-and-white thinking" or "thinking in black and white". There are people who routinely engage in black-and-white thinking, an example of which is someone who categorizes other people as all good or all bad. Similar concepts Various terms are used to refer to false dilemmas. Some of the following terms are equivalent to the term "false dilemma", some refer to special forms of false dilemmas and others refer to closely related concepts. 
bifurcation fallacy black-or-white fallacy denying a conjunct (similar to a false dichotomy) double bind either/or fallacy fallacy of exhaustive hypotheses fallacy of the excluded middle fallacy of the false alternative false binary false choice false dichotomy invalid disjunction no middle ground See also Bivalence Choice architecture Degrees of truth Dichotomy Distinction without a difference Euthyphro dilemma Fallacy of the single cause Half-truth Hobson's choice Law of excluded middle Lewis' trilemma Loaded question Love–hate relationship Many-valued logic Morton's fork Mutually exclusive Nolan Chart Nondualism None of the above Obscurantism Pascal's Wager Perspectivism Political systems One-party system Two-party system Rogerian argument Show election Slippery slope Sorites paradox Splitting (psychology) Straw man Thinking outside the box Unreasonable You're either with us, or against us Zero-sum thinking References External links The Black-or-White Fallacy entry in The Fallacy Files Barriers to critical thinking Deception Dilemmas Error Ignorance Informal fallacies Propaganda
Elitism
Elitism is the notion that individuals who form an elite — a select group with desirable qualities such as intellect, wealth, power, physical attractiveness, notability, special skills, experience, lineage — are more likely to be constructive to society and deserve greater influence or authority. The term elitism may be used to describe a situation in which power is concentrated in the hands of a limited number of people. Beliefs that are in opposition to elitism include egalitarianism, anti-intellectualism (against powerful institutions perceived to be controlled by elites), populism, and the political theory of pluralism. Elite theory is the sociological or political science analysis of elite influence in society: elite theorists regard pluralism as a utopian ideal. Elitism is closely related to social class and what sociologists term "social stratification". In modern Western societies, social stratification is typically defined in terms of three distinct social classes: the upper class, the middle class, and the lower class. Some synonyms for "elite" might be "upper-class" or "aristocratic", indicating that the individual in question has a relatively large degree of control over a society's means of production. This includes those who gain this position due to socioeconomic means and not personal achievement. However, these terms are misleading when discussing elitism as a political theory, because they are often associated with negative "class" connotations and fail to appreciate a more unbiased exploration of the philosophy. Characteristics Attributes that identify an elite vary; personal achievement may not be essential. Elite status can be based on personal achievement, such as degrees from top-rate universities or impressive internships and job offers, as well as on lineage or passed-on fame from parents or grandparents. 
As a term, "elite" usually describes a person or group of people who are members of the uppermost class of society, and wealth can contribute to that class determination. Personal attributes commonly purported by elitist theorists to be characteristic of the elite include: rigorous study of, or great accomplishment within, a particular field; a long track record of competence in a demanding field; an extensive history of dedication and effort in service to a specific discipline (e.g., medicine or law) or a high degree of accomplishment, training or wisdom within a given field; a high degree of physical discipline. Elitists tend to favor social systems such as technocracy, combined with meritocracy and/or plutocracy, as opposed to political egalitarianism and populism. Elitists believe only a few "movers and shakers" truly change society, rather than the majority of people who only vote and elect the elites into power. See also Caste Classism Collective narcissism Exclusivism Global elite International Debutante Ball Ivory tower Narcissism Oligarchy Rankism Right-wing populism Sectarianism Self-righteousness Snobbery Social Darwinism Social Evolution Supremacism References External links Deresiewicz, William (June 2008). The Disadvantages of an Elite Education. "Our best universities have forgotten that the reason they exist is to make minds, not careers." The American Scholar.   Review of William Deresiewicz's book Excellent Sheep (April 2015), Foreign Affairs Social groups Political science Ideologies Oligarchy Social theories Prejudices Elite theory Psychological attitude
Didactic method
A didactic method (from διδάσκειν didáskein, "to teach") is a teaching method that follows a consistent scientific approach or educational style to present information to students. The didactic method of instruction is often contrasted with dialectics and the Socratic method; the term can also be used to refer to a specific didactic method, as for instance constructivist didactics. Overview Didactics is a theory of teaching, and in a wider sense, a theory and practical application of teaching and learning. In demarcation from "mathetics" (the science of learning), didactics refers only to the science of teaching. This theory might be contrasted with open learning, also known as experiential learning, in which people can learn by themselves, in an unstructured manner (or in an unusually structured manner) as in experiential education, on topics of interest. It can also be contrasted with autodidactic learning, in which one instructs oneself, often from existing books or curricula. The theory of didactic learning methods focuses on the baseline knowledge students possess and seeks to improve upon and convey this information. It also refers to the foundation or starting point in a lesson plan, where the overall goal is knowledge. A teacher or educator functions in this role as an authoritative figure, but also as both a guide and a resource for students. Didactics or the didactic method have different connotations in continental Europe and English-speaking countries. Didacticism was indeed the cultural origin of the didactic method but refers within its narrow context usually pejoratively to the use of language to a doctrinal end. The interpretation of these opposing views is theorised to be the result of a differential cultural development in the 19th century, when Great Britain and its former colonies went through a renewal and increased cultural distancing from continental Europe. 
It was particularly the later appearance of Romanticism and Aestheticism in the Anglo-Saxon world which offered these negative and limiting views of the didactic method. On the other hand, in continental Europe those moralising aspects of didactics were removed earlier by cultural representatives of the Age of Enlightenment, such as Voltaire, Rousseau, and later specifically related to teaching by Johann Heinrich Pestalozzi. The consequences of these cultural differences then created two main didactic traditions: the Anglo-Saxon tradition of curriculum studies on one side and the Continental and North European tradition of didactics on the other. Still today, the science of didactics carries much less weight in much of the English-speaking world. With the advent of globalisation at the beginning of the 20th century, however, the arguments for such relative philosophical aspects in the methods of teaching started to diminish somewhat. It is therefore possible to categorise didactics and pedagogy as a general analytic theory on three levels: a theoretical or research level (denoting a field of study); a practical level (summaries of curricular activities); a discursive level (implying a frame of reference for professional dialogs). Nature of didactics and difference with pedagogy The discipline of didactics is interested in both theoretical knowledge and practical activities related to teaching, learning and their conditions. It is concerned with the content of teaching (the "what"), the method of teaching (the "how") and the historical, cultural and social justifications of curricular choices (the "why"). It focuses on the individual learner, their cognitive characteristics and functioning when they learn a given content and become a knowing subject. The perspective of educational reality in didactics is drawn extensively from cognitive psychology and the theory of teaching, and sometimes from social psychology. 
Didactics is descriptive and diachronic ("what is" and "what was"), as opposed to pedagogy, the other discipline related to educational theorizing, which is normative or prescriptive and synchronic ("what should or ought to be") in nature. Didactics can be said to provide the descriptive foundation for pedagogy, which is more concerned with educational goal-setting and with the learner's becoming a social subject and their future role in society. In continental Europe, as opposed to English-speaking research cultures, pedagogy and didactics are distinct areas of study. Didactics is a knowledge-based discipline concerned with the descriptive and rational study of all teaching-related activities before, during and after the teaching of content in the classroom, which includes the "planning, control and regulation of the teaching context" and its objective is to analyze how teaching leads to learning. On the other hand, pedagogy is a practice-oriented discipline concerned with the normative study of the applied aspects of teaching in real teaching contexts, i.e., inside the classroom. Pedagogy draws from didactic research and can be seen as an applied component of didactics. Didactic transposition In France, didactics refers to the science that takes the teaching of disciplined knowledge as its object of study. In other words, didactics is concerned with the teaching of specific disciplines to students. One of the central concepts studied in didactics of a specific discipline in France is the concept of "didactic transposition" (La transposition didactique in French). French philosopher and sociologist Michel Verret introduced this concept in 1975, which was borrowed and elaborated further in the 1980s by the French didactician of Mathematics Yves Chevallard. Although Chevallard initially presented this concept regarding the didactics of mathematics, it has since been generalized for other disciplines as well. Didactic transposition is composed of multiple steps. 
The first step, called the "external transposition" (transposition externe), is about how the "scholarly knowledge" (savoir savant) produced by the scholars, scientists or specialists of a certain discipline in a research context, i.e., at universities and other academic institutions is transformed into "knowledge to teach" (savoir à enseigner) by precisely selecting, rearranging and defining the knowledge which will be taught (the official curriculum for each discipline) and how it will be taught, so that it becomes an object of learning accessible to the learner. This external didactic transposition is a socio-political construction made possible by different actors working within various educational institutions: education specialists, political authorities, teachers and their associations define the issues of teaching and choose what should be taught under which form. Chevallard called this socio-political context of institutional organization the “noosphere”, which defines the limits, redefines and reorganizes the knowledge in socially, historically or culturally determined contexts. The second step, called the "internal transposition" (transposition interne) is about how the knowledge to teach is transformed into "taught knowledge" (savoir enseigné), which is the knowledge actually taught through the day-to-day concrete practices of a teacher in a teaching context, e.g. in a classroom, and which depends on their students and the constraints imposed on them (time, exams, conformity to prevailing school rules, etc.). In the third and final step, the taught knowledge is transformed into "acquired knowledge" (savoir acquis), which is the knowledge as it is actually acquired by students in a learning context. The acquired knowledge can be used as a feedback to the didactic system. Didactic research has to account for all the aforementioned steps of didactic transposition. 
Didactic triangle The teacher is given the knowledge or content to be taught to students in what is called a teaching situation. The teaching or didactic situation is represented by a triangle with three vertices: the knowledge or content to be taught, the teacher, and the student. This is called the "didactic triangle". In this triangle, the teacher-content side is concerned with didactic elaboration, the student-content side is about didactic appropriation, and the teacher-student side is about didactic interaction. Didactic teaching The didactic method provides students with the required theoretical knowledge. It is an effective method used to teach students who are unable to organize their work and depend on the teachers for instructions. It is also used to teach basic skills of reading and writing. The teacher or the literate is the source of knowledge and the knowledge is transmitted to the students through the didactic method. Didactic teaching materials: The Montessori school had preplanned teaching (didactic) materials designed to develop practical, sensory, and formal skills: lacing and buttoning frames, weights, and packets to be identified by their sound or smell. Because they direct learning in the prepared environment, Montessori educators are called directresses rather than teachers. In Brazil, a government program called PNLD (National Program of the Didactic Book) has existed for more than 80 years. This program seeks to provide basic education schools with didactic and pedagogical works, expanding access to the book and democratizing access to sources of information and culture. Textbooks, in many cases, are the only sources of information that poor children and young people have access to in a poor country like Brazil. 
These books are also a valuable support to teachers, offering modern learning methodologies and updated concepts and content in the most diverse disciplines. Functions of didactic method cognitive function: to understand and learn basic concepts; formative-educative function: to develop skills, behavior, abilities, etc.; instrumental function: to achieve educational objectives; normative function: helps to achieve productive learning, attain required results, etc. Method of teaching In the didactic method of teaching, the teacher gives instructions to the students and the students are mostly passive listeners. It is a teacher-centered method of teaching and is content-oriented. Neither the content nor the knowledge of the teacher is questioned. The process of teaching involves the teacher, who gives instructions, commands, delivers content, and provides necessary information. The pupil activity involves listening and memorization of the content. In the modern education system, the lecture method, which is one of the most commonly used methods, is a form of didactic teaching. Limitations Though the didactic method has been given importance in several schools, it does not satisfy the needs and interests of all students. It can be tedious for students to sit through long lectures. There is minimal interaction between the students and the teachers. Learning, which also involves motivating students to develop an interest in the subject, may not be well served by this teaching method. It may be a monologue process, and the experience of the students may not have a significant role in learning. See also Guy Brousseau References External links Didactics Pedagogy
Epistemological idealism
Epistemological idealism is a subjectivist position in epistemology that holds that what one knows about an object exists only in one's mind. It is opposed to epistemological realism. Overview Epistemological idealism suggests that everything we experience and know is of a mental nature—sense data in philosophical jargon. Although it is sometimes employed to argue in favor of metaphysical idealism, in principle epistemological idealism makes no claim about whether sense data are grounded in reality. As such, it is a container for both indirect realism and idealism. This is the version of epistemological idealism which interested Ludwig Boltzmann; it had roots in the positivism of Ernst Mach and Gustav Kirchhoff plus a number of aspects of the Kantianism or neo-Kantianism of Hermann von Helmholtz and Heinrich Hertz. A contemporary representative of epistemological idealism is Brand Blanshard. References Subjectivism Idealism Epistemological theories
Paradigm shift
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events. Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962). Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, with the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm. History The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars. Original usage In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science in four stages: Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. 
Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has hitherto been made. Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls "extraordinary research", which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – "the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals" – as even more important to science than paradigm shifts. Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons for why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." 
Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (e.g., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents for competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for problems – things that neither the proposed paradigm nor the dominant paradigm is capable of solving currently. Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. Textbooks are written, obscuring the revolutionary process. Features Paradigm shifts and progress A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different. Incommensurability These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. 
However, the philosopher Donald Davidson published the highly regarded essay "On the Very Idea of a Conceptual Scheme" in 1974 arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour. Gradualism vs. sudden change Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science. Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Examples Natural sciences Some of the "classical cases" of Kuhnian paradigm shifts in science are: 1543 – The transition in cosmology from a Ptolemaic cosmology to a Copernican one. 1543 – The acceptance of the work of Andreas Vesalius, whose work De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen. 
1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics. 1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution. The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory. 1826 – The discovery of hyperbolic geometry. 1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, which was in direct contrast to the popular geological theory, at the time, catastrophism, but also utilized geological proof to determine that the age of the Earth was older than 6,000 years, which was previously held to be true. 1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection. 1880 – The germ theory of disease began overtaking Galen's miasma theory. 1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales. 1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime. 1919 – The transition between the worldview of Newtonian gravity and general relativity. 1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis. 1952 – Chemists Stanley Miller and Harold Urey perform an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life. 1964 – The discovery of cosmic microwave background radiation leads to the big bang theory being accepted over the steady state theory in cosmology. 
1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes. 1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed the early version of the currently accepted theory of planetary formation. 1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics. 1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability. Social sciences In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences. The movement known as the cognitive revolution moved away from behaviourist approaches to psychology toward the acceptance of cognition as central to studying human behavior. Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his other colleagues, combatted and debunked the claims being made by scholars at the time, given that scientific racism and eugenics were dominant in many universities and institutions that were dedicated to studying humans and society. Eventually anthropology would apply a holistic approach, utilizing four subcategories to study humans: archaeology, cultural, evolutionary, and linguistic anthropology. 
At the turn of the 20th century, sociologists, along with other social scientists developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences. First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE and is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family. The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because of how it greatly increased the time depth the archaeologists could reliably date objects from. Similarly the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists. The emergence of three-phase traffic theory created by Boris Kerner in vehicular traffic science as an alternative theory to classical (standard) traffic flow theories. Applied sciences More recently, paradigm shifts are also recognisable in applied sciences: In medicine, the transition from "clinical judgment" to evidence-based medicine. In Artificial Intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed from 2010. 
Other uses

The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems, or organizations, replacing the former way of thinking or organizing with a radically different one: M. L. Handa, a professor of sociology in education at O.I.S.E., University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic components of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education. The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics. Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped usher in the environmentalist movement, which gained great prominence in the years immediately following the distribution of those images. Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology.
He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View. In the latter part of the 1990s, "paradigm shift" emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind the Gaffe, author Larry Trask advises readers to refrain from using it and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless. The concept of technological paradigms has been advanced, particularly by Giovanni Dosi.

Criticism

In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory: that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive.
Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally are not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts where many of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have eventually turned out to be little more than scares.

External links

MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens.
Argument
An argument is a series of sentences, statements, or propositions, some of which are called premises and one of which is the conclusion. The purpose of an argument is to give reasons for one's conclusion via justification, explanation, and/or persuasion. Arguments are intended to determine or show the degree of truth or acceptability of another statement called a conclusion. The process of crafting or delivering arguments, argumentation, can be studied from three main perspectives: the logical, the dialectical, and the rhetorical. In logic, an argument is usually expressed not in natural language but in a symbolic formal language, and it can be defined as any group of propositions of which one is claimed to follow from the others through deductively valid inferences that preserve truth from the premises to the conclusion. This logical perspective on argument is relevant for scientific fields such as mathematics and computer science. Logic is the study of the forms of reasoning in arguments and the development of standards and criteria to evaluate arguments. Deductive arguments can be valid, and valid arguments can be sound: in a valid argument, the premises necessitate the conclusion even if one or more of the premises (or the conclusion) happens to be false; in a sound argument, true premises necessitate a true conclusion. Inductive arguments, by contrast, can have different degrees of logical strength: the stronger or more cogent the argument, the greater the probability that the conclusion is true; the weaker the argument, the lower that probability. The standards for evaluating non-deductive arguments may rest on different or additional criteria than truth—for example, the persuasiveness of so-called "indispensability claims" in transcendental arguments, the quality of hypotheses in retroduction, or even the disclosure of new possibilities for thinking and acting.
In dialectics, and also in a more colloquial sense, an argument can be conceived as a social and verbal means of trying to resolve, or at least contend with, a conflict or difference of opinion that has arisen or exists between two or more parties. For the rhetorical perspective, the argument is constitutively linked with its context, in particular with the time and place in which it is located. From this perspective, the argument is evaluated not just by two parties (as in a dialectical approach) but also by an audience. In both dialectic and rhetoric, arguments are expressed not in a formal but in a natural language. Since classical antiquity, philosophers and rhetoricians have developed lists of argument types in which premises and conclusions are connected in informal and defeasible ways.

Etymology

The Latin root arguere (to make bright, enlighten, make known, prove, etc.) is from Proto-Indo-European argu-yo-, suffixed form of arg- (to shine; white).

Formal and informal

Informal arguments, as studied in informal logic, are presented in ordinary language and are intended for everyday discourse. Formal arguments are studied in formal logic (historically called symbolic logic, more commonly referred to as mathematical logic today) and are expressed in a formal language. Informal logic emphasizes the study of argumentation; formal logic emphasizes implication and inference. Informal arguments are sometimes implicit: the rational structure—the relationship of claims, premises, warrants, relations of implication, and conclusion—is not always spelled out and immediately visible and must be made explicit by analysis.

Standard logical account of argument types

There are several kinds of arguments in logic, the best known of which are "deductive" and "inductive". An argument has one or more premises but only one conclusion. Each premise and the conclusion are truth bearers or "truth-candidates", each capable of being either true or false (but not both).
These truth values bear on the terminology used with arguments.

Deductive arguments

A deductive argument asserts that the truth of the conclusion is a logical consequence of the premises: if the premises are true, the conclusion must be true. It would be self-contradictory to assert the premises and deny the conclusion, because the negation of the conclusion is contradictory to the truth of the premises. Based on the premises, the conclusion follows necessarily (with certainty). Given the premises that A=B and B=C, the conclusion that A=C follows necessarily. Deductive arguments are sometimes referred to as "truth-preserving" arguments. For example, consider the argument that because bats can fly (premise=true), and all flying creatures are birds (premise=false), therefore bats are birds (conclusion=false). If we assume the premises are true, the conclusion follows necessarily, and it is a valid argument.

Validity

In terms of validity, deductive arguments may be either valid or invalid. An argument is valid if and only if (iff) there is no possible world in which the premises are true and the conclusion false; validity concerns not the actual truth of the premises and conclusion but how they relate to each other. An argument is formally valid if and only if the denial of the conclusion is incompatible with accepting all the premises. In formal logic, the validity of an argument depends not on the actual truth or falsity of its premises and conclusion, but on whether the argument has a valid logical form. The validity of an argument is not a guarantee of the truth of its conclusion. A valid argument may have false premises that render it inconclusive: the conclusion of a valid argument with one or more false premises may be true or false. Logic seeks to discover the forms that make arguments valid. A form of argument is valid if and only if the conclusion is true under all interpretations of that argument in which the premises are true.
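For propositional forms, this definition of validity can be checked mechanically: enumerate every interpretation (assignment of truth values) and look for one that makes all premises true and the conclusion false. The following is a minimal sketch (not from the article; the function and variable names are illustrative), encoding premises and conclusion as Python functions of an interpretation:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff no interpretation makes every
    premise true while making the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        interpretation = dict(zip(variables, values))
        if all(p(interpretation) for p in premises) and not conclusion(interpretation):
            return False  # found a counter-interpretation
    return True

# Disjunctive syllogism: "Either we are all doomed (D) or we are all
# saved (S); we are not all saved; therefore, we are all doomed."
premises = [lambda e: e["D"] or e["S"], lambda e: not e["S"]]
conclusion = lambda e: e["D"]
print(is_valid(premises, conclusion, ["D", "S"]))  # True: the form is valid

# Affirming the consequent: "If D then S; S; therefore D."
premises = [lambda e: (not e["D"]) or e["S"], lambda e: e["S"]]
conclusion = lambda e: e["D"]
print(is_valid(premises, conclusion, ["D", "S"]))  # False: invalid form
```

Note that validity is decided here without ever asking whether the premises are actually true, which mirrors the point above: validity is a matter of form, not of fact.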
Since the validity of an argument depends on its form, an argument can be shown invalid by showing that its form is invalid. This can be done by giving a counterexample of the same form of argument, with premises that are true under a given interpretation but a conclusion that is false under that interpretation. In informal logic this is called a counter argument. The form of an argument can be shown by the use of symbols. For each argument form, there is a corresponding statement form, called a corresponding conditional, and an argument form is valid if and only if its corresponding conditional is a logical truth. A statement form which is logically true is also said to be a valid statement form. A statement form is a logical truth if it is true under all interpretations. A statement form can be shown to be a logical truth either (a) by showing that it is a tautology or (b) by means of a proof procedure. The corresponding conditional of a valid argument is a necessary truth (true in all possible worlds), and so the conclusion necessarily follows from the premises, or follows of logical necessity. The conclusion of a valid argument is not necessarily true; that depends on whether the premises are true. If the conclusion, itself, is a necessary truth, it is true without regard to the premises. Some examples:

All Greeks are human and all humans are mortal; therefore, all Greeks are mortal. Valid argument; if the premises are true, the conclusion must be true.

Some Greeks are logicians and some logicians are tiresome; therefore, some Greeks are tiresome. Invalid argument: the tiresome logicians might all be Romans (for example).

Either we are all doomed or we are all saved; we are not all saved; therefore, we are all doomed. Valid argument; the premises entail the conclusion. (This does not mean the conclusion has to be true; it is only true if the premises are true, which they may not be!)

Some men are hawkers. Some hawkers are rich. Therefore, some men are rich.
Invalid argument. This can be seen more easily by giving a counter-example with the same argument form: Some people are herbivores. Some herbivores are zebras. Therefore, some people are zebras. Invalid argument, as it is possible that the premises be true and the conclusion false. In the second-to-last case above (Some men are hawkers ...), the counter-example follows the same logical form as the previous argument (Premise 1: "Some X are Y." Premise 2: "Some Y are Z." Conclusion: "Some X are Z.") in order to demonstrate that whatever hawkers may be, they may or may not be rich, in consideration of the premises as such. (See also: Existential import). The forms of argument that render deductions valid are well established; however, some invalid arguments can also be persuasive, depending on their construction (inductive arguments, for example). (See also: Formal fallacy and Informal fallacy).

Soundness

An argument is sound when it is valid and its premises are true; the conclusion of a sound argument is therefore true.

Inductive arguments

An inductive argument asserts that the truth of the conclusion is supported by the probability of the premises. For example, given that the military budget of the United States is the largest in the world (premise=true), it is probable that it will remain so for the next 10 years (conclusion=true). Arguments that involve predictions are inductive, since the future is uncertain. An inductive argument is said to be strong or weak. If the premises of an inductive argument are assumed true, is it probable that the conclusion is also true? If yes, the argument is strong. If no, it is weak. A strong argument is said to be cogent if it has all true premises. Otherwise, the argument is uncogent. The military budget argument example is a strong, cogent argument. Non-deductive logic is reasoning using arguments in which the premises support the conclusion but do not entail it.
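The counter-example method for the form "Some X are Y; Some Y are Z; therefore Some X are Z" can itself be automated: search small set-theoretic interpretations for a countermodel, i.e. one in which both premises are true and the conclusion false. A single countermodel suffices to show the form invalid. A minimal sketch (illustrative names, not from the article):

```python
from itertools import product

def some(a, b):
    """'Some A are B': at least one individual lies in both sets."""
    return bool(a & b)

# All subsets of a two-element domain serve as candidate extensions
# for the terms X, Y, Z.
subsets = [frozenset(s) for s in [set(), {0}, {1}, {0, 1}]]

for X, Y, Z in product(subsets, repeat=3):
    if some(X, Y) and some(Y, Z) and not some(X, Z):
        # e.g. X={0}, Y={0,1}, Z={1}: some X are Y and some Y are Z,
        # yet no X is Z -- so the argument form is invalid.
        print("countermodel:", set(X), set(Y), set(Z))
        break
```

This is exactly the zebra counter-example in miniature: the "middle term" Y overlaps X and Z in different places, so nothing forces X and Z to overlap.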
Forms of non-deductive logic include the statistical syllogism, which argues from generalizations true for the most part, and induction, a form of reasoning that makes generalizations based on individual instances. An inductive argument is said to be cogent if and only if the truth of the argument's premises would render the truth of the conclusion probable (i.e., the argument is strong), and the argument's premises are, in fact, true. Cogency can be considered inductive logic's analogue to deductive logic's "soundness". Despite its name, mathematical induction is not a form of inductive reasoning. The lack of deductive validity is known as the problem of induction.

Defeasible arguments and argumentation schemes

In modern argumentation theories, arguments are regarded as defeasible passages from premises to a conclusion. Defeasibility means that when additional information (new evidence or contrary arguments) is provided, the premises may no longer lead to the conclusion (non-monotonic reasoning). This type of reasoning is referred to as defeasible reasoning. For instance, consider the famous Tweety example: Tweety is a bird. Birds generally fly. Therefore, Tweety (probably) flies. This argument is reasonable and the premises support the conclusion unless additional information indicating that the case is an exception comes in. If Tweety is a penguin, the inference is no longer justified by the premise. Defeasible arguments are based on generalizations that hold only in the majority of cases, but are subject to exceptions and defaults. In order to represent and assess defeasible reasoning, it is necessary to combine the logical rules (governing the acceptance of a conclusion based on the acceptance of its premises) with rules of material inference, governing how a premise can support a given conclusion (whether it is reasonable or not to draw a specific conclusion from a specific description of a state of affairs).
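The non-monotonic character of the Tweety example can be sketched as a default rule with an exception: adding a new fact can retract a previously drawn conclusion, which deductive (monotonic) logic never allows. A toy sketch, assuming a simple set-of-facts representation (not any standard defeasible-logic library):

```python
def flies(facts):
    """Default rule: birds generally fly, unless the exceptional
    fact 'penguin' is also known.  Non-monotonic: adding information
    can withdraw the conclusion."""
    return "bird" in facts and "penguin" not in facts

print(flies({"bird"}))             # True: Tweety (probably) flies
print(flies({"bird", "penguin"}))  # False: the new fact defeats the default
```

Note the contrast with a valid deduction, where enlarging the premise set can never turn a derivable conclusion into a non-derivable one.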
Argumentation schemes have been developed to describe and assess the acceptability or the fallaciousness of defeasible arguments. Argumentation schemes are stereotypical patterns of inference, combining semantic-ontological relations with types of reasoning and logical axioms and representing the abstract structure of the most common types of natural arguments. A typical example is the argument from expert opinion, which has two premises and a conclusion. Each scheme may be associated with a set of critical questions, namely criteria for assessing dialectically the reasonableness and acceptability of an argument. The matching critical questions are the standard ways of casting the argument into doubt.

By analogy

Argument by analogy may be thought of as argument from the particular to the particular. An argument by analogy may use a particular truth in a premise to argue towards a similar particular truth in the conclusion. For example, if A. Plato was mortal, and B. Socrates was like Plato in other respects, then asserting that C. Socrates was mortal is an example of argument by analogy, because the reasoning employed in it proceeds from a particular truth in a premise (Plato was mortal) to a similar particular truth in the conclusion, namely that Socrates was mortal.

Other kinds

Other kinds of arguments may have different or additional standards of validity or justification. For example, philosopher Charles Taylor said that so-called transcendental arguments are made up of a "chain of indispensability claims" that attempt to show why something is necessarily true based on its connection to our experience, while Nikolas Kompridis has suggested that there are two types of "fallible" arguments: one based on truth claims, and the other based on the time-responsive disclosure of possibility (world disclosure). Kompridis said that the French philosopher Michel Foucault was a prominent advocate of this latter form of philosophical argument.
World-disclosing

World-disclosing arguments are a group of philosophical arguments that, according to Nikolas Kompridis, employ a disclosive approach to reveal features of a wider ontological or cultural-linguistic understanding—a "world", in a specifically ontological sense—in order to clarify or transform the background of meaning (tacit knowledge) and what Kompridis has called the "logical space" on which an argument implicitly depends.

Explanations

While arguments attempt to show that something was, is, will be, or should be the case, explanations try to show why or how something is or will be. If Fred and Joe address the issue of whether or not Fred's cat has fleas, Joe may state: "Fred, your cat has fleas. Observe, the cat is scratching right now." Joe has made an argument that the cat has fleas. However, if Joe asks Fred, "Why is your cat scratching itself?" the explanation, "... because it has fleas." provides understanding. Both the above argument and explanation require knowing the generalities that a) fleas often cause itching, and b) one often scratches to relieve itching. The difference is in the intent: an argument attempts to settle whether or not some claim is true, while an explanation attempts to provide understanding of the event. Note that by subsuming the specific event (of Fred's cat scratching) as an instance of the general rule that "animals scratch themselves when they have fleas", Joe will no longer wonder why Fred's cat is scratching itself. Arguments address problems of belief; explanations address problems of understanding. In the argument above, the statement "Fred's cat has fleas" is up for debate (i.e. is a claim), but in the explanation, the statement "Fred's cat has fleas" is assumed to be true (unquestioned at this time) and just needs explaining. Arguments and explanations largely resemble each other in rhetorical use. This is the cause of much difficulty in thinking critically about claims.
There are several reasons for this difficulty. People often are not themselves clear on whether they are arguing for or explaining something. The same types of words and phrases are used in presenting explanations and arguments. The terms 'explain' or 'explanation,' et cetera are frequently used in arguments. Explanations are often used within arguments and presented so as to serve as arguments. Likewise, "... arguments are essential to the process of justifying the validity of any explanation as there are often multiple explanations for any given phenomenon." Explanations and arguments are often studied in the field of information systems to help explain user acceptance of knowledge-based systems. Certain argument types may fit better with personality traits to enhance acceptance by individuals.

Fallacies and non-arguments

Fallacies are types of argument or expressions which are held to be of an invalid form or contain errors in reasoning. One type of fallacy occurs when a word frequently used to indicate a conclusion is used as a transition (conjunctive adverb) between independent clauses. In English, the words therefore, so, because and hence typically separate the premises from the conclusion of an argument. Thus: Socrates is a man, all men are mortal, therefore Socrates is mortal is an argument, because the assertion Socrates is mortal follows from the preceding statements. However, I was thirsty and therefore I drank is not an argument, despite its appearance. It is not being claimed that I drank is logically entailed by I was thirsty. The therefore in this sentence indicates for that reason, not it follows that.

Elliptical or enthymematic arguments

Often an argument is invalid or weak because there is a missing premise, the supply of which would make it valid or strong. This is referred to as an elliptical or enthymematic argument.
Speakers and writers will often leave out a necessary premise in their reasoning if it is widely accepted and the writer does not wish to state the blindingly obvious. Example: All metals expand when heated, therefore iron will expand when heated. The missing premise is: Iron is a metal. On the other hand, a seemingly valid argument may be found to lack a premise—a "hidden assumption"—which, if highlighted, can show a fault in reasoning. Example: A witness reasoned: Nobody came out the front door except the milkman; therefore the murderer must have left by the back door. The hidden assumptions are: (1) the milkman was not the murderer; (2) the murderer has left; (3) by a door, and (4) not by, e.g., a window or through a hole in the roof; and (5) there are no other doors than the front or back door.

Argument mining

The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs. Such argumentative structures include the premise, conclusions, the argument scheme and the relationship between the main and subsidiary argument, or the main and counter-argument within discourse.

See also

Abductive reasoning
Argument map
Bayes' theorem
Belief bias
Boolean logic
Cosmological argument
Evidence-based policy
Logical reasoning
Practical arguments
Semantic argument

References

Robert Audi, Epistemology, Routledge, 1998. Particularly relevant is Chapter 6, which explores the relationship between knowledge, inference and argument.
J. L. Austin, How to Do Things With Words, Oxford University Press, 1976.
H. P. Grice, "Logic and Conversation", in The Logic of Grammar, Dickenson, 1975.
Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005.
R. A. DeMillo, R. J. Lipton and A. J. Perlis, "Social Processes and Proofs of Theorems and Programs", Communications of the ACM, Vol. 22, No. 5, 1979.
A classic article on the social process of acceptance of proofs in mathematics.
Yu. Manin, A Course in Mathematical Logic, Springer Verlag, 1977. A mathematical view of logic. This book is different from most books on mathematical logic in that it emphasizes the mathematics of logic, as opposed to the formal structure of logic.
Ch. Perelman and L. Olbrechts-Tyteca, The New Rhetoric, Notre Dame, 1970. This classic was originally published in French in 1958.
Henri Poincaré, Science and Hypothesis, Dover Publications, 1952.
Frans van Eemeren and Rob Grootendorst, Speech Acts in Argumentative Discussions, Foris Publications, 1984.
K. R. Popper, Objective Knowledge: An Evolutionary Approach, Oxford: Clarendon Press, 1972.
L. S. Stebbing, A Modern Introduction to Logic, Methuen and Co., 1948. An account of logic that covers the classic topics of logic and argument while carefully considering modern developments in logic.
Douglas N. Walton, Informal Logic: A Handbook for Critical Argumentation, Cambridge, 1998.
Walton, Douglas; Christopher Reed; Fabrizio Macagno, Argumentation Schemes, New York: Cambridge University Press, 2008.
Carlos Chesñevar, Ana Maguitman and Ronald Loui, "Logical Models of Argument", ACM Computing Surveys, vol. 32, no. 4, pp. 337–383, 2000.
T. Edward Damer, Attacking Faulty Reasoning, 5th Edition, Wadsworth, 2005.
Charles Arthur Willard, A Theory of Argumentation, 1989.
Charles Arthur Willard, Argumentation and the Social Grounds of Knowledge, 1982.

Further reading

Salmon, Wesley C. Logic. New Jersey: Prentice-Hall (1963). Library of Congress Catalog Card no. 63–10528.
Aristotle, Prior and Posterior Analytics. Ed. and trans. John Warrington. London: Dent (1964).
Mates, Benson. Elementary Logic. New York: OUP (1972). Library of Congress Catalog Card no. 74–166004.
Mendelson, Elliot. Introduction to Mathematical Logic. New York: Van Nostrand Reinhold Company (1964).
Frege, Gottlob. The Foundations of Arithmetic.
Evanston, IL: Northwestern University Press (1980).
Martin, Brian. The Controversy Manual (Sparsnäs, Sweden: Irene Publishing, 2014).
Determinism
Determinism is the philosophical view that all events in the universe, including human decisions and actions, are causally inevitable. Deterministic theories throughout the history of philosophy have developed from diverse and sometimes overlapping motives and considerations. Like eternalism, determinism focuses on particular events rather than the future as a concept. The opposite of determinism is indeterminism, the view that events are not deterministically caused but rather occur due to chance. Determinism is often contrasted with free will, although some philosophers claim that the two are compatible. Historically, debates about determinism have involved many philosophical positions and given rise to multiple varieties or interpretations of determinism. One topic of debate concerns the scope of determined systems. Some philosophers have maintained that the entire universe is a single determinate system, while others identify more limited determinate systems. Another common debate topic is whether determinism and free will can coexist; compatibilism and incompatibilism represent the opposing sides of this debate. Determinism should not be confused with the self-determination of human actions by reasons, motives, and desires. Determinism concerns the interactions that affect cognitive processes in people's lives: the causes and the results of what people do, which it takes to be always bound together. It assumes that if an observer had sufficient information about an object or human being, such an observer could predict every consequent move of that object or human being. Determinism rarely requires that perfect prediction be practically possible.

Varieties

Determinism may commonly refer to any of the following viewpoints.
Causal

Causal determinism, sometimes synonymous with historical determinism (a sort of path dependence), is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature." However, it is a broad enough term to consider that:

"One's deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. In other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way."

Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events and the origin of the universe may not be specified. Causal determinists believe that there is nothing in the universe that has no cause or is self-caused. Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. In the case of nomological determinism, these conditions are considered events also, implying that the future is determined completely by preceding events—a combination of prior states of the universe and the laws of nature. These conditions can also be considered metaphysical in origin (such as in the case of theological determinism).

Nomological

Nomological determinism is the most common form of causal determinism and is generally synonymous with physical determinism. This is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws and that every occurrence inevitably results from prior events. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. Although sometimes called scientific determinism, that term is a misnomer for nomological determinism.
Necessitarianism

Necessitarianism is a metaphysical principle that denies all mere possibility and maintains that there is only one possible way for the world to exist. Leucippus claimed there are no uncaused events and that everything occurs for a reason and by necessity.

Predeterminism

Predeterminism is the idea that all events are determined in advance. The concept is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain. Predeterminism can be categorized as a specific type of determinism when it is used to mean pre-established causal determinism. It can also be used interchangeably with causal determinism—in the context of its capacity to determine future events. However, predeterminism is often considered independent of causal determinism.

Biological

The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism, sometimes called genetic determinism. Biological determinism is the idea that all human behaviors, beliefs, and desires are fixed by human genetic nature. Friedrich Nietzsche explained that human beings are "determined" by their bodies and are subject to their passions, impulses, and instincts.

Fatalism

Fatalism is normally distinguished from determinism as a form of teleological determinism. Fatalism is the idea that everything is fated to happen, so that humans have no control over their future. Fate has arbitrary power, and does not necessarily follow any causal or deterministic laws. Types of fatalism include hard theological determinism and the idea of predestination, where there is a God who determines all that humans will do.
This may be accomplished through either foreknowledge of their actions, achieved through omniscience, or by predetermining their actions.

Theological

Theological determinism is a form of determinism that holds that all events that happen are either preordained (i.e., predestined) to happen by a monotheistic deity, or are destined to occur given its omniscience. Two forms of theological determinism exist, referred to as strong and weak theological determinism. Strong theological determinism is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity." Weak theological determinism is based on the concept of divine foreknowledge—"because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed." There exist slight variations on this categorization, however. Some claim either that theological determinism requires predestination of all events and outcomes by the divinity—i.e., they do not classify the weaker version as theological determinism unless libertarian free will is assumed to be denied as a consequence—or that the weaker version does not constitute theological determinism at all. With respect to free will, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions", a more minimal criterion designed to encapsulate all forms of theological determinism. Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God. Some have asserted that Augustine of Hippo introduced theological determinism into Christianity in 412 CE, whereas all prior Christian authors supported free will against Stoic and Gnostic determinism.
However, there are many Biblical passages that seem to support the idea of some kind of theological determinism.

Adequate

Adequate determinism is the idea that, because of quantum decoherence, quantum indeterminacy can be ignored for most macroscopic events. Random quantum events "average out" in the limit of large numbers of particles (where the laws of quantum mechanics asymptotically approach the laws of classical mechanics). Stephen Hawking explains a similar idea: he says that the microscopic world of quantum mechanics is one of determined probabilities. That is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate (albeit still not perfectly certain) at larger scales. Something as large as an animal cell, then, would be "adequately determined" (even in light of quantum indeterminacy).

Many-worlds

The many-worlds interpretation accepts the linear causal sets of sequential events with adequate consistency, yet also suggests constant forking of causal chains, creating "multiple universes" to account for multiple outcomes from single events. On this view, the causal set of events leading to the present is valid, yet appears as a singular linear time stream within a much broader, unseen conic probability field of other outcomes that "split off" from the locally observed timeline. Under this model, causal sets are still "consistent" yet not exclusive to singular iterated outcomes. The interpretation sidesteps the exclusive retrospective causal chain problem of "could not have done otherwise" by suggesting that "the other outcome does exist" in a set of parallel universe time streams that split off when the action occurred. This theory is sometimes described with the example of agent-based choices, but more involved models argue that recursive causal splitting occurs with all wave functions at play. This model is highly contested, with multiple objections from the scientific community.
Philosophical varieties

Nature/nurture controversy

Although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the debate on nature and nurture. They suggest that a single factor entirely determines behavior. As scientific understanding has grown, however, the strongest versions of these theories have been widely rejected as a single-cause fallacy. In other words, the modern deterministic theories attempt to explain how the interaction of both nature and nurture is entirely predictable. The concept of heritability has been helpful in making this distinction. Biological determinism, sometimes called genetic determinism, is the idea that all human behaviors, beliefs, and desires are fixed by human genetic nature. Behaviorism involves the idea that all behavior can be traced to specific causes—either environmental or reflexive. John B. Watson and B. F. Skinner developed this nurture-focused determinism. Cultural materialism contends that the physical world impacts and sets constraints on human behavior. Cultural determinism, along with social determinism, is the nurture-focused theory that the culture in which we are raised determines who we are. Environmental determinism, also known as climatic or geographical determinism, proposes that the physical environment, rather than social conditions, determines culture. Supporters of environmental determinism often also support behavioral determinism. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated.

Determinism and prediction

Other "deterministic" theories actually seek only to highlight the importance of a particular factor in predicting the future. These theories often use the factor as a sort of guide or constraint on the future.
They need not suppose that complete knowledge of that one factor would allow the making of perfect predictions. Psychological determinism can mean that humans must act according to reason, but it can also be synonymous with some sort of psychological egoism. The latter is the view that humans will always act according to their perceived best interest. Linguistic determinism proposes that language determines (or at least limits) the things that humans can think and say and thus know. The Sapir–Whorf hypothesis argues that individuals experience the world based on the grammatical structures they habitually use. Economic determinism attributes primacy to economic structure over politics in the development of human history. It is associated with the dialectical materialism of Karl Marx. Technological determinism is the theory that a society's technology drives the development of its social structure and cultural values.

Structural

Structural determinism is the philosophical view that actions, events, and processes are predicated on and determined by structural factors. Given any particular structure or set of estimable components, the concept emphasizes rational and predictable outcomes. Chilean biologists Humberto Maturana and Francisco Varela popularized the notion, writing that a living system's general order is maintained via a circular process of ongoing self-referral, and thus its organization and structure define the changes it undergoes. According to the authors, a system can undergo changes of state (alteration of structure without loss of identity) or disintegrations (alteration of structure with loss of identity). Such changes or disintegrations are not determined by the elements of the disturbing agent, as each disturbance will only trigger responses in the respective system, which in turn are determined by each system's own structure.
On an individualistic level, what this means is that human beings as free and independent entities are triggered to react by external stimuli or changes in circumstance. However, their own internal state and existing physical and mental capacities determine their responses to those triggers. On a much broader societal level, structural determinists believe that larger issues in the society—especially those pertaining to minorities and subjugated communities—are predominantly assessed through existing structural conditions, making change of prevailing conditions difficult, and sometimes outright impossible. For example, the concept has been applied to the politics of race in the United States of America and other Western countries such as the United Kingdom and Australia, with structural determinists blaming structural factors for the prevalence of racism in these countries. Additionally, Marxists have conceptualized the writings of Karl Marx within the context of structural determinism as well. For example, Louis Althusser, a structural Marxist, argues that the state, in its political, economic, and legal structures, reproduces the discourse of capitalism, in turn allowing for the burgeoning of capitalistic structures. Proponents of the notion highlight the usefulness of structural determinism to study complicated issues related to race and gender, as it highlights often gilded structural conditions that block meaningful change. Critics call it too rigid, reductionist and inflexible. Additionally, they criticize the notion for overemphasizing deterministic forces such as structure over the role of human agency and the ability of the people to act. These critics argue that politicians, academics, and social activists have the capability to bring about significant change despite stringent structural conditions.

With free will

Philosophers have debated both the truth of determinism and the truth of free will.
This creates the four possible positions in the figure. Compatibilism refers to the view that free will is, in some sense, compatible with determinism. The three incompatibilist positions deny this possibility: hard incompatibilists hold that free will is incompatible with both determinism and indeterminism, libertarians that determinism does not hold and free will might exist, and hard determinists that determinism does hold and free will does not exist. The Dutch philosopher Baruch Spinoza was a determinist thinker, and argued that human freedom can be achieved through knowledge of the causes that determine desire and affections. He defined human servitude as the state of bondage of anyone who is aware of their own desires but ignorant of the causes that determined them. However, the free or virtuous person becomes capable, through reason and knowledge, of being genuinely free, even as they are being "determined". For the Dutch philosopher, acting out of one's own internal necessity is genuine freedom, while being driven by exterior determinations is akin to bondage. Spinoza's thoughts on human servitude and liberty are respectively detailed in the fourth and fifth parts of his work Ethics. The standard argument against free will, according to philosopher J. J. C. Smart, focuses on the implications of determinism for free will. He suggests free will is denied whether determinism is true or not. He says that if determinism is true, all actions are predicted and no one is assumed to be free; however, if determinism is false, all actions are presumed to be random and as such no one seems free because they have no part in controlling what happens.

With the soul

Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.
A number of positions can be delineated:

- Immaterial souls are all that exist (idealism).
- Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free will, interactionist dualism).
- Immaterial souls exist but are part of a deterministic framework.
- Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism).
- Immaterial souls do not exist – there is no mind–body dichotomy, and there is a materialistic explanation for intuitions to the contrary.

With ethics and morality

Another topic of debate is the implication that determinism has on morality. Philosopher and incompatibilist Peter van Inwagen introduced this thesis with arguments that free will is required for moral judgments, as follows:

- The moral judgment that X should not have been done implies that something else should have been done instead.
- That something else should have been done instead implies that there was something else to do.
- That there was something else to do implies that something else could have been done.
- That something else could have been done implies that there is free will.
- If there is no free will to have done other than X, we cannot make the moral judgment that X should not have been done.

History

Determinism was developed by the Greek philosophers during the 7th and 6th centuries BCE by the Pre-Socratic philosophers Heraclitus and Leucippus, later Aristotle, and mainly by the Stoics. Some of the main philosophers who have dealt with this issue are Marcus Aurelius, Omar Khayyam, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, Ralph Waldo Emerson and, more recently, John Searle, Ted Honderich, and Daniel Dennett. Mecca Chiesa notes that the probabilistic or selectionistic determinism of B. F.
Skinner comprised a wholly separate conception of determinism that was not mechanistic at all. Mechanistic determinism assumes that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not.

Western tradition

In the West, some elements of determinism have been expressed in Greece from the 6th century BCE by the Presocratics Heraclitus and Leucippus. The first notions of determinism appear to originate with the Stoics, as part of their theory of universal causal determinism. The resulting philosophical debates, which involved the confluence of elements of Aristotelian ethics with Stoic psychology, led in the 1st–3rd centuries CE, in the works of Alexander of Aphrodisias, to the first recorded Western debate over determinism and freedom, an issue that is known in theology as the paradox of free will. The writings of Epictetus as well as middle Platonist and early Christian thought were instrumental in this development. Jewish philosopher Moses Maimonides said of the deterministic implications of an omniscient god: "Does God know or does He not know that a certain individual will be good or bad? If thou sayest 'He knows', then it necessarily follows that [that] man is compelled to act as God knew beforehand he would act, otherwise God's knowledge would be imperfect."

Newtonian mechanics

Determinism in the West is often associated with Newtonian mechanics/physics, which depicts the physical matter of the universe as operating according to a set of fixed laws. The "billiard ball" hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace's demon).
In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results. Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events; for example, if an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one's measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one's observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been successful. But it fails as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, "uncertainty" was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves. Newtonian mechanics, as well as any following physical theories, are results of observations and experiments, and so they describe "how it all works" within a tolerance. However, earlier Western scientists believed that if any logical connections are found between an observed cause and effect, there must also be some absolute natural laws behind them. Belief in perfect natural laws driving everything, rather than in laws that merely describe what we should expect, led to a search for a set of simple universal laws that rule the world.
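The Newtonian picture described above can be illustrated with a minimal numerical sketch (the function name, step size, and initial values below are illustrative choices, not drawn from the source): a projectile's future is computed entirely from its initial state and a fixed law of motion, so rerunning the computation from identical initial conditions always reproduces the identical outcome.

```python
# Illustrative sketch of Newtonian determinism: a projectile under
# constant gravity, integrated with a simple forward-Euler scheme.
# The entire trajectory is fixed by the initial state plus the law of
# motion; two runs from the same initial state cannot differ.

def simulate(x0, y0, vx0, vy0, g=9.81, dt=0.001):
    """Integrate a projectile's motion until it falls back to y < 0."""
    x, y, vx, vy = x0, y0, vx0, vy0
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
    return x, y  # landing point (slightly past y = 0 due to the step size)

run1 = simulate(0.0, 0.0, 10.0, 10.0)
run2 = simulate(0.0, 0.0, 10.0, 10.0)
assert run1 == run2  # identical initial conditions -> identical future
```

The analytic range for these initial values is roughly 2·vx·vy/g ≈ 20.4 m, which the numerical result approaches as the step size shrinks; the philosophical point is only that nothing stochastic enters anywhere in the computation.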
This movement significantly encouraged deterministic views in Western philosophy, as well as the related theological views of classical pantheism.

Eastern tradition

Throughout history, the belief that the entire universe is a deterministic system subject to the will of fate or destiny has been articulated in both Eastern and Western religions, philosophy, music, and literature. The ancient Arabs that inhabited the Arabian Peninsula before the advent of Islam professed a widespread belief in fatalism (ḳadar), alongside a fearful consideration of the sky and the stars as divine beings, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives around their interpretations of astral configurations and phenomena. In the I Ching and philosophical Taoism, the ebb and flow of favorable and unfavorable conditions suggests the path of least resistance is effortless (see: Wu wei). In the philosophical schools of the Indian subcontinent, the concept of karma deals with similar philosophical issues to the Western concept of determinism. Karma is understood as a spiritual mechanism which causes the eternal cycle of birth, death, and rebirth (saṃsāra). Karma, either positive or negative, accumulates according to an individual's actions throughout their life, and at their death determines the nature of their next life in the cycle of saṃsāra. Most major religions originating in India hold this belief to some degree, most notably Hinduism, Jainism, Sikhism, and Buddhism. The views on the interaction of karma and free will are numerous, and diverge from each other. For example, in Sikhism, God's grace, gained through worship, can erase one's karmic debts, a belief which reconciles the principle of karma with a monotheistic god one must freely choose to worship.
Jains believe in compatibilism, in which the cycle of saṃsāra is a completely mechanistic process, occurring without any divine intervention. The Jains hold an atomic view of reality, in which particles of karma form the fundamental microscopic building material of the universe.

Ājīvika

In ancient India, the Ājīvika school of philosophy founded by Makkhali Gosāla (around 500 BCE), otherwise referred to as "Ājīvikism" in Western scholarship, upheld the Niyati ("Fate") doctrine of absolute fatalism or determinism, which negates the existence of free will and karma, and is therefore considered one of the nāstika or "heterodox" schools of Indian philosophy. The oldest descriptions of the Ājīvika fatalists and their founder Gosāla can be found in both the Buddhist and Jaina scriptures of ancient India. The predetermined fate of all sentient beings and the impossibility of achieving liberation (mokṣa) from the eternal cycle of birth, death, and rebirth (saṃsāra) was the major distinctive philosophical and metaphysical doctrine of this heterodox school of Indian philosophy, counted among the other Śramaṇa movements that emerged in India during the Second Urbanization (600–200 BCE).

Buddhism

Buddhist philosophy contains several concepts which some scholars describe as deterministic to various levels. However, the direct analysis of Buddhist metaphysics through the lens of determinism is difficult, due to the differences between European and Buddhist traditions of thought.
One concept which is argued to support a hard determinism is the doctrine of dependent origination (pratītyasamutpāda) in the early Buddhist texts, which states that all phenomena (dharma) are necessarily caused by some other phenomenon, on which they can be said to depend, like links in a massive, never-ending chain; the basic principle is that all things (dharmas, phenomena, principles) arise in dependence upon other things, which means that they are fundamentally "empty" or devoid of any intrinsic, eternal essence and are therefore impermanent. In traditional Buddhist philosophy, this concept is used to explain the functioning of the eternal cycle of birth, death, and rebirth (saṃsāra); all thoughts and actions exert a karmic force that attaches to the individual's consciousness, which will manifest through reincarnation and result in future lives. In other words, righteous or unrighteous actions in one life will necessarily cause good or bad responses in another future life or more lives. The early Buddhist texts and later Tibetan Buddhist scriptures associate dependent arising with the fundamental Buddhist doctrines of emptiness (śūnyatā) and non-self (anattā). Another Buddhist concept which many scholars perceive to be deterministic is the doctrine of non-self (anattā). In Buddhism, attaining enlightenment involves realizing that neither humans nor any other sentient beings possess a fundamental core of permanent being, identity, or personality which can be called the "soul", and that all sentient beings (including humans) are instead made of several constantly changing factors which bind them to the eternal cycle of birth, death, and rebirth (saṃsāra). Sentient beings are composed of the five aggregates of existence (skandha): matter, sensation, perception, mental formations, and consciousness.
In the Saṃyutta Nikāya of the Pāli Canon, the historical Buddha is recorded as saying that "just as the word 'chariot' exists on the basis of the aggregation of parts, even so the concept of 'being' exists when the five aggregates are available." The early Buddhist texts outline different ways in which dependent origination is a middle way between different sets of "extreme" views (such as "monist" and "pluralist" ontologies or materialist and dualist views of mind-body relation). In the Kaccānagotta Sutta of the Pāli Canon (SN 12.15, parallel at SA 301), the historical Buddha stated that "this world mostly relies on the dual notions of existence and non-existence" and then explains the right view as follows: Some Western scholars argue that the concept of non-self necessarily disproves the ideas of free will and moral responsibility. If there is no autonomous self, in this view, and all events are necessarily and unchangeably caused by others, then no type of autonomy can be said to exist, moral or otherwise. However, other scholars disagree, claiming that the Buddhist conception of the universe allows for a form of compatibilism. Buddhism perceives reality occurring on two different levels: the ultimate reality, which can only be truly understood by the enlightened ones, and the illusory or false reality of the material world, which is considered to be "real" or "true" by those who are ignorant about the nature of metaphysical reality; i.e., those who still haven't achieved enlightenment. Therefore, Buddhism perceives free will as a notion belonging to the illusory belief in the unchanging self or personhood that pertains to the false reality of the material world, while concepts like non-self and dependent origination belong to the ultimate reality; the transition between the two can be truly understood, Buddhists claim, by one who has attained enlightenment. 
Modern scientific perspective

Generative processes

Although it was once thought by scientists that any indeterminism in quantum mechanics occurred at too small a scale to influence biological or neurological systems, there is indication that nervous systems are influenced by quantum indeterminism due to chaos theory. It is unclear what implications this has for the problem of free will, given the various possible reactions to the problem in the first place. Many biologists do not grant determinism: Christof Koch, for instance, argues against it, and in favour of libertarian free will, by making arguments based on generative processes (emergence). Other proponents of emergentist or generative philosophy, cognitive sciences, and evolutionary psychology argue that a certain form of determinism (not necessarily causal) is true. They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of the emerging behaviour from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist. As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards' face values) is hidden from either player and no random events (such as dice rolling) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still produce an extremely large number of unpredictable moves. When chess is simplified to 7 or fewer pieces, however, endgame tables are available that dictate which moves to play to achieve a perfect game. This implies that, given a less complex environment (with the original 32 pieces reduced to 7 or fewer), a perfectly predictable game of chess is possible.
In this scenario, the winning player can announce that a checkmate will happen within a given number of moves, assuming a perfect defense by the losing player, or fewer moves if the defending player chooses sub-optimal moves as the game progresses into its inevitable, predicted conclusion. By this analogy, it is suggested, the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate nearly infinite and practically unpredictable behavioural responses. In theory, if all these events could be accounted for, and there were a known way to evaluate them, the seemingly unpredictable behaviour would become predictable. Another hands-on example of generative processes is John Horton Conway's playable Game of Life. Nassim Taleb is wary of such models, and coined the term "ludic fallacy".

Compatibility with the existence of science

Certain philosophers of science argue that, while causal determinism (in which everything including the brain/mind is subject to the laws of causality) is compatible with minds capable of science, fatalism and predestination are not. These philosophers make the distinction that causal determinism means that each step is determined by the step before, and therefore allows sensory input from observational data to determine what conclusions the brain reaches, while fatalism, in which the steps between do not connect an initial cause to the results, would make it impossible for observational data to correct false hypotheses. This is often combined with the argument that if the brain had fixed views and arguments were mere after-constructs with no causal effect on the conclusions, science would have been impossible and the use of arguments would have been a meaningless waste of energy with no persuasive effect on brains with fixed views.

Mathematical models

Many mathematical models of physical systems are deterministic.
This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the consideration of a stochastic model even though the underlying system is governed by deterministic equations.

Quantum and classical mechanics

Day-to-day physics

Since the beginning of the 20th century, quantum mechanics—the physics of the extremely small—has revealed previously concealed aspects of events. Before that, Newtonian physics—the physics of everyday life—dominated. Taken in isolation (rather than as an approximation to quantum mechanics), Newtonian physics depicts a universe in which objects move in perfectly determined ways. At the scale where humans exist and interact with the universe, Newtonian mechanics remains useful, and makes relatively accurate predictions (e.g. calculating the trajectory of a bullet). But whereas in theory absolute knowledge of the forces accelerating a bullet would produce an absolutely accurate prediction of its path, modern quantum mechanics casts reasonable doubt on this main thesis of determinism.

Quantum realm

Quantum physics works differently in many ways from Newtonian physics. Physicist Aaron D. O'Connell explains that understanding the universe, at such small scales as atoms, requires a different logic than day-to-day life does. O'Connell does not deny that it is all interconnected: the scale of human existence ultimately does emerge from the quantum scale. O'Connell argues that we must simply use different models and constructs when dealing with the quantum world.
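The sensitive dependence on initial conditions mentioned under mathematical models above can be made concrete with the logistic map, a standard textbook example of a deterministic system that looks random (the particular parameter and starting values below are illustrative choices, not drawn from the source).

```python
# Illustrative sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n)
# is fully deterministic, yet at r = 4 two trajectories that start a
# mere 1e-10 apart diverge completely within a few dozen iterations --
# the "sensitive dependence on initial conditions" that can make a
# deterministic model appear to behave non-deterministically.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

# Early on the two trajectories are indistinguishable...
assert abs(a[1] - b[1]) < 1e-8
# ...but over 60 steps they separate by an amount of order one.
assert max(abs(x - y) for x, y in zip(a, b)) > 0.3
```

Each run of this program gives exactly the same numbers (it is deterministic in the strict sense); the practical unpredictability comes entirely from finite precision in specifying the initial condition.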
Quantum mechanics is the product of a careful application of the scientific method, logic and empiricism. The Heisenberg uncertainty principle is frequently confused with the observer effect. The uncertainty principle actually describes how precisely we may measure the position and momentum of a particle at the same time—if we increase the accuracy in measuring one quantity, we are forced to lose accuracy in measuring the other. "These uncertainty relations give us that measure of freedom from the limitations of classical concepts which is necessary for a consistent description of atomic processes." This is where statistical mechanics comes into play, and where physicists begin to require rather unintuitive mental models: a particle's path simply cannot be exactly specified in its full quantum description. "Path" is a classical, practical attribute in everyday life, but one that quantum particles do not meaningfully possess. The probabilities discovered in quantum mechanics do nevertheless arise from measurement (of the perceived path of the particle). As Stephen Hawking explains, the result is not traditional determinism, but rather determined probabilities. In some cases, a quantum particle may indeed trace an exact path, and the probability of finding the particle in that path is one (certain to be true). In fact, as far as prediction goes, the quantum development is at least as predictable as the classical motion, but the key is that it describes wave functions that cannot be easily expressed in ordinary language. As far as the thesis of determinism is concerned, these probabilities, at least, are quite determined. These findings from quantum mechanics have found many applications, and allow people to build transistors and lasers. Put another way: personal computers, Blu-ray players and the Internet all work because humankind discovered the determined probabilities of the quantum world.
On the topic of predictable probabilities, the double-slit experiments are a popular example. Photons are fired one-by-one through a double-slit apparatus at a distant screen. They do not arrive at any single point, nor even at the two points lined up with the slits (the way it might be expected of bullets fired by a fixed gun at a distant target). Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions with the target can be calculated reliably. In that sense the behavior of light in this apparatus is predictable, but there is no way to predict where in the resulting interference pattern any individual photon will make its contribution (although, there may be ways to use weak measurement to acquire more information without violating the uncertainty principle). Some (including Albert Einstein) have argued that the inability to predict any more than probabilities is simply due to ignorance. The idea is that, beyond the conditions and laws that can be observed or deduced, there are also hidden factors or "hidden variables" that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically determinative way. In actuality, they proceed in an absolutely deterministic way. John S. Bell criticized Einstein's work in his famous Bell's theorem, which, under a strict set of assumptions, demonstrates that quantum mechanics makes statistical predictions that would be violated if local hidden variables really existed. A number of experiments have tried to verify such predictions, and so far they do not appear to be violated.
Current experiments continue to verify the result, including the 2015 "Loophole Free Test" that plugged all known sources of error and the 2017 "Cosmic Bell Test" experiment that used cosmic data streaming from different directions toward the Earth, precluding the possibility that the sources of data could have had prior interactions. Bell's theorem has been criticized from the perspective of its strict set of assumptions. A foundational assumption of quantum mechanics is the principle of locality. To abandon this assumption would require the construction of a non-local hidden variable theory. Therefore, it is possible to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics. Bohm's interpretation, though, violates special relativity, and it is highly controversial whether or not it can be reconciled without giving up on determinism. Another foundational assumption of quantum mechanics is that of free will, which has been argued to be foundational to the scientific method as a whole. Bell acknowledged that abandoning this assumption would allow for the maintenance of both determinism and locality. This perspective is known as superdeterminism, and is defended by some physicists such as Sabine Hossenfelder and Tim Palmer. More advanced variations on these arguments include quantum contextuality, by Bell, Simon B. Kochen and Ernst Specker, which argues that hidden variable theories cannot be "sensible", meaning that the values of the hidden variables inherently depend on the devices used to measure them. This debate is relevant because there are possibly specific situations in which the arrival of an electron at a screen at a certain point and time would trigger one event, whereas its arrival at another point would trigger an entirely different event (e.g. see Schrödinger's cat—a thought experiment used as part of a deeper debate).
In his 1939 address "The Relation between Mathematics and Physics", Paul Dirac points out that purely deterministic classical mechanics cannot explain the cosmological origins of the universe; today the early universe is modeled quantum mechanically. Thus, quantum physics casts reasonable doubt on the traditional determinism of classical, Newtonian physics in so far as reality does not seem to be absolutely determined. This was the subject of the famous Bohr–Einstein debates between Einstein and Niels Bohr and there is still no consensus. Adequate determinism (see Varieties, above) is the reason that Stephen Hawking called libertarian free will "just an illusion".
Philosophical fiction
Philosophical fiction is any fiction that devotes a significant portion of its content to the sort of questions addressed by philosophy. It might explore any facet of the human condition, including the function and role of society, the nature and motivation of human acts, the purpose of life, ethics or morals, the role of art in human lives, the role of experience or reason in the development of knowledge, whether there exists free will, or any other topic of philosophical interest. Philosophical fiction includes the novel of ideas, which can also fall under the genre of science fiction, utopian and dystopian fiction, and bildungsroman. There is no universally accepted definition of philosophical fiction, but a sampling of notable works can help to outline its history. For example, a Platonic dialogue could be considered philosophical fiction. Some modern philosophers have written novels, plays, or short fiction in order to demonstrate or introduce their ideas. Common examples include Voltaire, Fyodor Dostoevsky, Thomas Mann, Hermann Hesse, Albert Camus, Jean-Paul Sartre, Simone de Beauvoir and Ayn Rand. Authors who admire certain philosophers may incorporate their ideas into the principal themes or central narratives of novels. Some examples include The Moviegoer (Kierkegaard), Thus Spake Zarathustra (Nietzsche), Wittgenstein's Mistress (David Markson), and Speedboat (post-structuralism).

See also

List of philosophical fiction authors
Philosophy and literature
Sci Phi Journal, online magazine dedicated to publishing science and philosophical fiction
Literary fiction
Nondualism
Nondualism includes a number of philosophical and spiritual traditions that emphasize the absence of fundamental duality or separation in existence. This viewpoint questions the boundaries conventionally imposed between self and other, mind and body, observer and observed, and other dichotomies that shape our perception of reality. As a field of study, nondualism delves into the concept of nonduality and the state of nondual awareness, encompassing a diverse array of interpretations, not limited to a particular cultural or religious context; instead, nondualism emerges as a central teaching across various belief systems, inviting individuals to examine reality beyond the confines of dualistic thinking. What sets nondualism apart is its inclination towards direct experience as a path to understanding. While intellectual comprehension has its place, nondualism emphasizes the transformative power of firsthand encounters with the underlying unity of existence. Through practices like meditation and self-inquiry, practitioners aim to bypass the limitations of conceptual understanding and directly apprehend the interconnectedness that transcends superficial distinctions. This experiential aspect of nondualism challenges the limitations of language and rational thought, aiming for a more immediate, intuitive form of knowledge. Nondualism is distinct from monism, another philosophical concept that deals with the nature of reality. While both philosophies challenge the conventional understanding of dualism, they approach it differently. Nondualism emphasizes unity amid diversity. In contrast, monism posits that reality is ultimately grounded in a singular substance or principle, reducing the multiplicity of existence to a singular foundation. The distinction lies in their approach to the relationship between the many and the one. Each nondual tradition presents unique interpretations of nonduality. 
Advaita Vedanta, a school of thought within Hindu philosophy, focuses on the realization of the unity between the individual self (Ātman) and the ultimate reality (Brahman). In Zen Buddhism, the emphasis is on the direct experience of interconnectedness that goes beyond conventional thought constructs. Dzogchen, found in Tibetan Buddhism, highlights the recognition of an innate nature free from dualistic limitations. Taoism embodies nondualism by emphasizing the harmony and interconnectedness of all phenomena, transcending dualistic distinctions, towards a pure state of awareness free of conceptualizations.

Etymology

"Dual" comes from Latin "duo", two, prefixed with "non-" meaning "not"; "non-dual" means "not-two". When referring to nonduality, Hinduism generally uses the Sanskrit term Advaita, while Buddhism uses Advaya (Tibetan: gNis-med, Chinese: pu-erh, Japanese: fu-ni). "Advaita" (अद्वैत) is from Sanskrit roots a, not; dvaita, dual. As Advaita, it means "not-two" or "one without a second", and is usually translated as "nondualism", "nonduality" and "nondual". The term "nondualism" and the term "advaita" from which it originates are polyvalent terms. "Advaya" (अद्वय) is also a Sanskrit word that means "identity, unique, not two, without a second", and typically refers to the two truths doctrine of Mahayana Buddhism, especially Madhyamaka. The English term "nondual" was informed by early translations of the Upanishads in Western languages other than English from 1775. These terms have entered the English language from literal English renderings of "advaita" subsequent to the first wave of English translations of the Upanishads. These translations commenced with the work of Müller (1823–1900), in the monumental Sacred Books of the East (1879). He rendered "advaita" as "Monism", as have many recent scholars. However, some scholars state that "advaita" is not really monism.

Definitions

Nonduality is a fuzzy concept, for which many definitions can be found.
According to David Loy, since there are similar ideas and terms in a wide variety of spiritualities and religions, ancient and modern, no single definition for the English word "nonduality" can suffice, and perhaps it is best to speak of various "nondualities" or theories of nonduality. Loy sees non-dualism as a common thread in Taoism, Mahayana Buddhism, and Advaita Vedanta, and distinguishes "Five Flavors Of Nonduality":

Nondual awareness, the nondifference of subject and object, or nonduality between subject and object. It is the notion that the observer and the 'things' observed cannot be strictly separated, but form, in the final analysis, a whole.
The nonplurality of the world. Although the phenomenal world appears as a plurality of "things", in reality they are "of a single cloth".
The negation of dualistic thinking in pairs of opposites. The Yin-Yang symbol of Taoism symbolises the transcendence of this dualistic way of thinking.
The identity of phenomena and the Absolute, the "nonduality of duality and nonduality", or the nonduality of relative and ultimate truth as found in Madhyamaka Buddhism and the two truths doctrine.
Mysticism, a mystical unity between God and Human.

In his book Nonduality, which focuses on nondual awareness, Loy discusses three of them, namely thinking without dualistic concepts, the interconnectedness of everything that exists, and the non-difference of subject and object. According to Loy, all three claims are found in Mahayana Buddhism, Advaita Vedanta, and Taoism, arguing that "the nondual experience 'behind' these contradictory systems is the same, and that the differences between them may be seen as due primarily to the nature of language." Indian ideas of nondual awareness developed as proto-Samkhya speculations in ascetic milieus in the 1st millennium BCE, with the notion of Purusha, the witness-conscious or 'pure consciousness'.
Proto-samkhya ideas can be found in the earliest Upanishads, but are not restricted to the Vedic tradition. Brahmanical and non-Brahmanical (Buddhist, Jain) ascetic traditions of the first millennium BCE developed in close interaction, utilizing proto-Samkhya enumerations (lists) that analyzed experience in the context of meditative practices providing liberating insight into the nature of experience. The first millennium CE saw a movement towards postulating an underlying "basis of unity", both in the Buddhist Madhyamaka and Yogacara schools, and in Advaita Vedanta, collapsing phenomenal reality into a "single substrate or underlying principle".

Nondual awareness

According to Hanley, Nakamura and Garland, nondual awareness is central to contemplative wisdom traditions, "a state of consciousness that rests in the background of all conscious experiencing – a background field of awareness that is unified, immutable, and empty of mental content, yet retains a quality of cognizant bliss [...] This field of awareness is thought to be ever present, yet typically unrecognized, obscured by discursive thought, emotion, and perception." According to Josipovic, "consciousness-as-such is a non-conceptual nondual awareness, whose essential property is non-representational reflexivity. This property makes consciousness-as-such phenomenologically, cognitively and neurobiologically a unique kind, different from and irreducible to any contents, functions and states." It is the pure consciousness or witness-consciousness of the Purusha of Samkhya and the Atman of Advaita Vedanta, which is aware of prakriti, the entanglements of the muddled mind and cognitive apparatus.

Appearance in various religious traditions

Different theories and concepts which can be linked to nonduality and nondual awareness are taught in a wide variety of religious traditions, including some western religions and philosophies. While their metaphysical systems differ, they may refer to a similar experience.
These include:

Early Indian asceticism (pre-Buddhist and pre-Hindu), as documented in the Upanishads, which contain proto-Samkhya speculations and form the basis for Vedanta
Buddhism:
"Shūnyavāda (emptiness view) of the Mādhyamaka school", which holds that there is a non-dual relationship (that is, there is no true separation) between conventional truth and ultimate truth, as well as between samsara and nirvana.
"Vijnānavāda (consciousness view) of the Yogācāra school", which holds that there is no ultimate perceptual and conceptual division between a subject and its objects, or a cognizer and that which is cognized. It also argues against mind-body dualism, holding that there is only consciousness. In other words, consciousness is the fundamental and only reality.
Tathagatagarbha-thought, which holds that all beings have the inborn potential to become Buddhas.
Vajrayana-buddhism, including Tibetan Buddhist traditions of Dzogchen and Mahamudra.
East Asian Mahāyāna Buddhist traditions like Pure Land and Zen.
Hinduism:
The Advaita Vedanta of Shankara, which teaches that the Atman is pure consciousness, and that a single pure consciousness, svayam prakāśa, is the only reality, and that the world is unreal (Maya).
Non-dual forms of Hindu Tantra, including Kashmir Shaivism and the goddess-centered Shaktism. Their view is similar to Advaita, but they teach that the world is not unreal, but is the real manifestation of consciousness.
Taoism, which teaches the idea of a single subtle universal force or cosmic creative power called Tao (literally "way").
Abrahamic traditions:
Christian mystics who promote a "nondual experience", such as Meister Eckhart and Julian of Norwich. The focus of this Christian nondualism is on bringing the worshiper closer to God and realizing a "oneness" with the Divine.
Sufism
Jewish Kabbalah
Western traditions:
Western philosophers like Hegel, Spinoza and Schopenhauer, who defended different forms of philosophical monism or Idealism.
Origins

Nondual reality: Nasadiya Sukta

According to Signe Cohen, the notion of the highest truth lying beyond all dualistic constructs of reality finds its origins in ancient Indian philosophical thought. One of the earliest articulations of this concept is evident in the renowned Nasadiya ("Non-Being") hymn of the Ṛigveda, which contemplates a primordial state of undifferentiated existence, devoid of both being and non-being. Concurrently, several Upanishads, including the Īśā, imply a similar quest for an undifferentiated oneness as the ultimate objective of human spiritual pursuit. According to the Īśā Upanishad, this goal transcends both the processes of becoming (saṃbhūti) and non-becoming (asaṃbhūti). The Isha Upanishad (second half of the first millennium BCE) employs a series of paradoxes to describe the supreme entity. The divine being is depicted as immovable, yet swifter than the human mind, surpassing even the fastest runners. It exists both far and near, within and outside. The term "eka" is used to convey that this entity transcends all dichotomies, encompassing wisdom and ignorance, existence and non-existence, and creation and destruction. It emphasizes that not only is the divine entity beyond dualities, but human seekers of immortality must also transcend their dualistic perception of the world.

Nondual awareness: Samkhya and yoga

Samkhya is a dualistic āstika school of Indian philosophy, regarding human experience as being constituted by two independent realities: puruṣa ('consciousness') and prakṛti, which comprises cognition, mind and emotions. Samkhya is strongly related to the Yoga school of Hinduism, for which it forms the theoretical foundation, and it was influential on other schools of Indian philosophy.

Origins and development

While samkhya-like speculations can be found in the Rig Veda and some of the older Upanishads, Samkhya may have non-Vedic origins, and developed in ascetic milieus. Proto-samkhya ideas developed from the 8th/7th c.
BCE onwards, as evidenced in the middle Upanishads, the Buddhacarita, the Bhagavad Gita, and the Moksadharma-section of the Mahabharata. It was related to the early ascetic traditions and meditation, spiritual practices, and religious cosmology, and methods of reasoning that result in liberating knowledge (vidya, jnana, viveka) that end the cycle of dukkha and rebirth, allowing for "a great variety of philosophical formulations". Pre-karika systematic Samkhya existed around the beginning of the first millennium CE. The defining method of Samkhya was established with the Samkhyakarika (4th c. CE).

Philosophy

Purusha is a complex concept whose meaning evolved in Vedic and Upanishadic times. Depending on source and historical timeline, it means the cosmic being or self, consciousness, and universal principle. In early Vedas, Purusha was a cosmic being whose sacrifice by the gods created all life. This was one of many creation myths discussed in the Vedas. In the Upanishads, the Purusha concept refers to the abstract essence of the Self, Spirit and the Universal Principle that is eternal, indestructible, without form and all-pervasive. In the Sankhya philosophy, purusha is the plural immobile male (spiritual) cosmic principle, pure consciousness. It is absolute, independent, free, imperceptible, unknowable through other agencies, above any experience by mind or senses and beyond any words or explanations. It remains pure, "nonattributive consciousness". Puruṣa is neither produced nor does it produce. No appellations can qualify purusha, nor can it be substantialized or objectified. It "cannot be reduced, can't be 'settled'". Any designation of purusha comes from prakriti, and is a limitation. Unmanifest prakriti is infinite, inactive, and unconscious, and consists of an equilibrium of the three guṇas ('qualities, innate tendencies'), namely sattva, rajas, and tamas.
When prakṛti comes into contact with Purusha this equilibrium is disturbed, and Prakriti becomes manifest, evolving twenty-three tattvas, namely intellect (buddhi, mahat), ego (ahamkara) and mind (manas); the five sensory capacities; the five action capacities; and the five "subtle elements" or "modes of sensory content" (tanmatras), from which the five "gross elements" or "forms of perceptual objects" emerge, giving rise to the manifestation of sensory experience and cognition. Jiva ('a living being') is that state in which purusha is bonded to prakriti. Human experience is an interplay of purusha-prakriti, purusha being conscious of the various combinations of cognitive activities. The end of the bondage of Purusha to prakriti is called liberation or kaivalya by the Samkhya school, and can be attained by insight and self-restraint.

Upanishads

The Upanishads contain proto-Samkhya speculations. Yajnavalkya's exposition on the Self in the Brihadaranyaka Upanishad, and the dialogue between Uddalaka Aruni and his son Svetaketu in the Chandogya Upanishad, represent a more developed notion of the essence of man (Atman) as "pure subjectivity - i.e., the knower who is himself unknowable, the seer who cannot be seen", and as "pure consciousness", discovered by means of speculations, or enumerations. According to Larson, "it seems quite likely that both the monistic trends in Indian thought and the dualistic samkhya could have developed out of these ancient speculations." According to Larson, the enumeration of tattvas in Samkhya is also found in the Taittiriya Upanishad, the Aitareya Upanishad and the Yajnavalkya–Maitri dialogue in the Brihadaranyaka Upanishad. The Katha Upanishad in verses 3.10–13 and 6.7–11 describes a concept of puruṣa, and other concepts also found in later Samkhya. The Katha Upanishad, dated to be from about the middle of the 1st millennium BCE, in verses 2.6.6 through 2.6.13 recommends a path to Self-knowledge akin to Samkhya, and calls this path Yoga.
Buddhism

There are different Buddhist views which resonate with the concepts and experiences of primordial awareness and non-duality or "not two" (advaya). The Buddha does not use the term advaya in the earliest Buddhist texts, but it does appear in some of the Mahayana sutras, such as the Vimalakīrti. The Buddha taught meditative inquiry (dhyana) and nondiscursive attention (samadhi), equivalents of which can be found in Upanishadic thought. He rejected the metaphysical doctrines of the Upanishads, particularly ideas which are often associated with Hindu nonduality, such as the doctrine that "this cosmos is the self" and "everything is a Oneness" (cf. SN 12.48 and MN 22). Because of this, Buddhist views of nonduality are particularly different from Hindu conceptions, which tend towards idealistic monism.

Indian Buddhism

Nirvana, luminous mind, and Buddha-nature

Nirvana

In archaic Buddhism, Nirvana may have been a kind of transformed and transcendent consciousness or discernment (viññana) that has "stopped" (nirodhena). According to Harvey this nirvanic consciousness is said to be "objectless", "infinite" (anantam), "unsupported" (appatiṭṭhita) and "non-manifestive" (anidassana) as well as "beyond time and spatial location". Stanislaw Schayer, a Polish scholar, argued in the 1930s that the Nikayas preserve elements of an archaic form of Buddhism which is close to Brahmanical beliefs, and survived in the Mahayana tradition. Schayer's view, possibly referring to texts where "'consciousness' (vinnana) seems to be the ultimate reality or substratum" as well as to luminous mind, saw nirvana as an immortal, deathless sphere, a transmundane reality or state. A similar view is also defended by C. Lindtner, who argues that in precanonical Buddhism nirvana is an actual existent. The original and early Buddhist concepts of nirvana may have been similar to those found in competing Śramaṇa (strivers/ascetics) traditions such as Jainism and Upanishadic Vedism.
Similar ideas were proposed by Edward Conze and M. Falk, citing sources which speak of an eternal and "invisible infinite consciousness, which shines everywhere" as pointing to the view that nirvana is a kind of Absolute, and arguing that the nirvanic element, as an "essence" or pure consciousness, is immanent within samsara, an "abode" or "place" of prajña, which is gained by the enlightened. In the Theravada tradition, nibbāna is regarded as an uncompounded or unconditioned (asankhata) dhamma (phenomenon, event) which is "transmundane", and which is beyond our normal dualistic conceptions. In Theravada Abhidhamma texts like the Vibhanga, nibbana or the asankhata-dhatu (unconditioned element) is defined thus:

Luminous mind

Another influential concept in Indian Buddhism is the idea of luminous mind, which became associated with Buddha-nature. In the Early Buddhist Texts there are various mentions of luminosity or radiance which refer to the development of the mind in meditation. In the Saṅgīti-sutta for example, it relates to the attainment of samadhi, where the perception of light (āloka sañña) leads to a mind endowed with luminescence (sappabhāsa). According to Analayo, the Upakkilesa-sutta and its parallels mention that the presence of defilements "results in a loss of whatever inner light or luminescence (obhāsa) had been experienced during meditation". The Pali Dhātuvibhaṅga-sutta uses the metaphor of refining gold to describe equanimity reached through meditation, which is said to be "pure, bright, soft, workable, and luminous". The Pali Anguttara Nikaya (A.I.8-10) states:

The term is given no direct doctrinal explanation in the Pali discourses, but later Buddhist schools explained it using various concepts developed by them. The Theravada school identifies the "luminous mind" with the bhavanga, a concept first proposed in the Theravāda Abhidhamma. The later schools of the Mahayana identify it with both the Mahayana concepts of bodhicitta and tathagatagarbha.
The notion is of central importance in the philosophy and practice of Dzogchen.

Buddha nature

Buddha nature or tathagata-garbha (literally "Buddha womb") is that which allows sentient beings to become Buddhas. Various Mahayana texts such as the Tathāgatagarbha sūtras focus on this idea and over time it became a very influential doctrine in Indian Buddhism, as well as in East Asian and Tibetan Buddhism. The Buddha nature teachings may be regarded as a form of nondualism. According to Sally B. King, all beings are said to be or possess tathagata-garbha, which is nondual Thusness or Dharmakaya. This reality, states King, transcends the "duality of self and not-self", the "duality of form and emptiness" and the "two poles of being and non-being". There are various interpretations and views on Buddha-nature, and the concept became very influential in India, China and Tibet, where it also became a source of much debate. In later Indian Yogācāra, a new sub-school developed which adopted the doctrine of tathagata-garbha into the Yogācāra system. The influence of this hybrid school can be seen in texts like the Lankavatara Sutra and the Ratnagotravibhaga. This synthesis of Yogācāra and tathagata-garbha became very influential in later Buddhist traditions, such as Indian Vajrayana, Chinese Buddhism and Tibetan Buddhism.

Advaya

According to Kameshwar Nath Mishra, one connotation of advaya in Indic Sanskrit Buddhist texts is that it refers to the middle way between two opposite extremes (such as eternalism and annihilationism), and thus it is "not two". One of these Sanskrit Mahayana sutras, the Vimalakīrti Nirdeśa Sūtra, contains a chapter on the "Dharma gate of non-duality" (advaya dharma dvara pravesa), which is said to be entered once one understands how numerous pairs of opposite extremes are to be rejected as forms of grasping.
These extremes which must be avoided in order to understand ultimate reality are described by various characters in the text, and include: birth and extinction, 'I' and 'Mine', perception and non-perception, defilement and purity, good and not-good, created and uncreated, worldly and unworldly, samsara and nirvana, enlightenment and ignorance, form and emptiness, and so on. The final character to attempt to describe ultimate reality is the bodhisattva Manjushri, who states: "It is in all beings wordless, speechless, shows no signs, is not possible of cognizance, and is above all questioning and answering." Vimalakīrti responds to this statement by remaining completely silent, thereby expressing that the nature of ultimate reality is ineffable (anabhilāpyatva) and inconceivable (acintyatā), beyond verbal designation (prapañca) or thought constructs (vikalpa). The Laṅkāvatāra Sūtra, a text associated with Yogācāra Buddhism, also uses the term "advaya" extensively. In the Mahayana Buddhist philosophy of Madhyamaka, the two truths, or ways of understanding reality, are said to be advaya (not two). As explained by the Indian philosopher Nagarjuna, there is a non-dual relationship, that is, there is no absolute separation, between conventional and ultimate truth, as well as between samsara and nirvana. The concept of nonduality is also important in the other major Indian Mahayana tradition, the Yogacara school, where it is seen as the absence of duality between the perceiving subject (or "grasper") and the object (or "grasped"). It is also seen as an explanation of emptiness and as an explanation of the content of the awakened mind which sees through the illusion of subject-object duality. However, in this conception of non-dualism, there are still a multiplicity of individual mind streams (citta santana) and thus Yogacara does not teach an idealistic monism.
These basic ideas have continued to influence Mahayana Buddhist doctrinal interpretations of Buddhist traditions such as Dzogchen, Mahamudra, Zen, Huayan and Tiantai, as well as concepts such as Buddha-nature, luminous mind, Indra's net, rigpa and shentong.

Madhyamaka

Madhyamaka, also known as Śūnyavāda (the emptiness teaching), refers primarily to a Mahāyāna Buddhist school of philosophy founded by Nāgārjuna. In Madhyamaka, Advaya refers to the fact that the two truths are not separate or different, as well as to the non-dual relationship of saṃsāra (the round of rebirth and suffering) and nirvāṇa (cessation of suffering, liberation). According to Murti, in Madhyamaka, "Advaya" is an epistemological theory, unlike the metaphysical view of Hindu Advaita. Madhyamaka advaya is closely related to the classical Buddhist understanding that all things are impermanent (anicca) and devoid of "self" (anatta) or "essenceless" (niḥsvabhāva), and that this emptiness does not constitute an "absolute" reality in itself. In Madhyamaka, the two "truths" (satya) refer to conventional (saṃvṛti) and ultimate (paramārtha) truth. The ultimate truth is "emptiness", or the non-existence of inherently existing "things", and the "emptiness of emptiness": emptiness does not in itself constitute an absolute reality. Conventionally, "things" exist, but ultimately, they are "empty" of any existence on their own, as described in Nagarjuna's magnum opus, the Mūlamadhyamakakārikā (MMK):

As Jay Garfield notes, for Nagarjuna, to understand the two truths as totally different from each other is to reify and confuse the purpose of this doctrine, since it would either destroy conventional realities such as the Buddha's teachings and the empirical reality of the world (making Madhyamaka a form of nihilism) or deny the dependent origination of phenomena (by positing eternal essences). Thus the non-dual doctrine of the middle way lies beyond these two extremes.
"Emptiness" is a consequence of pratītyasamutpāda (dependent arising), the teaching that no dharma ("thing", "phenomena") has an existence of its own, but always comes into existence in dependence on other dharmas. According to Madhyamaka all phenomena are empty of "substance" or "essence" because they are dependently co-arisen. Likewise it is because they are dependently co-arisen that they have no intrinsic, independent reality of their own. Madhyamaka also rejects the existence of absolute realities or beings such as Brahman or Self. In the highest sense, "ultimate reality" is not an ontological Absolute reality that lies beneath an unreal world, nor is it the non-duality of a personal self (atman) and an absolute Self (cf. Purusha). Instead, it is the knowledge which is based on a deconstruction of such reifications and Conceptual proliferations. It also means that there is no "transcendental ground", and that "ultimate reality" has no existence of its own, but is the negation of such a transcendental reality, and the impossibility of any statement on such an ultimately existing transcendental reality: it is no more than a fabrication of the mind. However, according to Nagarjuna, even the very schema of ultimate and conventional, samsara and nirvana, is not a final reality, and he thus famously deconstructs even these teachings as being empty and not different from each other in the MMK where he writes: According to Nancy McCagney, what this refers to is that the two truths depend on each other; without emptiness, conventional reality cannot work, and vice versa. It does not mean that samsara and nirvana are the same, or that they are one single thing, as in Advaita Vedanta, but rather that they are both empty, open, without limits, and merely exist for the conventional purpose of teaching the Buddha Dharma. 
Referring to this passage of the MMK, Jay Garfield writes that to distinguish between samsara and nirvana would be to suppose that each had a nature and that they were different natures. But each is empty, and so there can be no inherent difference. Moreover, since nirvana is by definition the cessation of delusion and of grasping and, hence, of the reification of self and other and of confusing imputed phenomena for inherently real phenomena, it is by definition the recognition of the ultimate nature of things. But if, as Nagarjuna argued in Chapter XXIV, this is simply to see conventional things as empty, not to see some separate emptiness behind them, then nirvana must be ontologically grounded in the conventional. To be in samsara is to see things as they appear to deluded consciousness and to interact with them accordingly. To be in nirvana, then, is to see those things as they are – as merely empty, dependent, impermanent, and nonsubstantial, not to be somewhere else, seeing something else. However, the actual Sanskrit term "advaya" does not appear in the MMK, and appears in only one work by Nagarjuna, the Bodhicittavivarana. The later Madhyamikas, states Yuichi Kajiyama, developed the advaya definition as a means to nirvikalpa-samadhi by suggesting that "things arise neither from their own selves nor from other things, and that when subject and object are unreal, the mind, being not different, cannot be true either; thereby one must abandon attachment to cognition of nonduality as well, and understand the lack of intrinsic nature of everything". Thus, the Buddhist nondualism or advaya concept became a means to realizing absolute emptiness. Yogācāra tradition In the Mahayana tradition of Yogācāra (Skt., "yoga practice"), advaya (Tibetan: gnyis med) refers to overcoming the conceptual and perceptual dichotomies of cognizer and cognized, or subject and object.
The concept of advaya in Yogācāra is an epistemological stance on the nature of experience and knowledge, as well as a phenomenological exposition of yogic cognitive transformation. Early Buddhist schools such as Sarvastivada and Sautrāntika, which thrived through the early centuries of the common era, postulated a dualism (dvaya) between the mental activity of grasping (grāhaka, "cognition", "subjectivity") and that which is grasped (grāhya, "cognitum", intentional object). Yogacara postulates that this dualistic relationship is a false illusion or superimposition (samaropa). Yogācāra also taught the doctrine which held that only mental cognitions really exist (vijñapti-mātra), instead of the mind-body dualism of other Indian Buddhist schools. This is another sense in which reality can be said to be non-dual, because it is "consciousness-only". There are several interpretations of this main theory, which has been widely translated as representation-only, ideation-only, impressions-only and perception-only (Siderits, Mark, Buddhism as Philosophy, 2017, p. 146). Some scholars see it as a kind of subjective or epistemic idealism (similar to Kant's theory), while others argue that it is closer to a kind of phenomenology or representationalism. According to Mark Siderits, the main idea of this doctrine is that we are only ever aware of mental images or impressions which manifest themselves as external objects, but "there is actually no such thing outside the mind." For Alex Wayman, this doctrine means that "the mind has only a report or representation of what the sense organ had sensed." Jay Garfield and Paul Williams both see the doctrine as a kind of idealism in which only mentality exists. However, even the idealistic interpretation of Yogācāra is not an absolute monistic idealism like Advaita Vedanta or Hegelianism, since in Yogācāra even consciousness "enjoys no transcendent status" and is just a conventional reality.
Indeed, according to Jonathan Gold, for Yogācāra the ultimate truth is not consciousness, but an ineffable and inconceivable "thusness" or "thatness" (tathatā). Also, Yogācāra affirms the existence of individual mindstreams, and thus Kochumuttom also calls it a realistic pluralism. The Yogācārins defined three basic modes by which we perceive our world. These are referred to in Yogācāra as the three natures (trisvabhāva) of experience. They are:
Parikalpita (literally, "fully conceptualized"): "imaginary nature", wherein things are incorrectly comprehended based on conceptual and linguistic construction, attachment and the subject-object duality. It is thus equivalent to samsara.
Paratantra (literally, "other dependent"): "dependent nature", by which one comprehends the dependently originated nature of things, their causal relatedness or flow of conditionality. It is the basis which gets erroneously conceptualized.
Pariniṣpanna (literally, "fully accomplished"): "absolute nature", through which one comprehends things as they are in themselves, that is, empty of subject-object duality, and which is thus a type of non-dual cognition. This experience of "thatness" (tathatā) is uninfluenced by any conceptualization at all.
To move from the duality of the Parikalpita to the non-dual consciousness of the Pariniṣpanna, Yogācāra teaches that there must be a transformation of consciousness, which is called the "revolution of the basis" (parāvṛtty-āśraya). According to Dan Lusthaus, this transformation which characterizes awakening is a "radical psycho-cognitive change" and a removal of false "interpretive projections" on reality (such as ideas of a self, external objects, etc.). The Mahāyānasūtrālamkāra, a Yogācāra text, also associates this transformation with the concept of non-abiding nirvana and the non-duality of samsara and nirvana.
Regarding this state of Buddhahood, it states: Its operation is nondual (advaya vrtti) because of its abiding neither in samsara nor in nirvana (samsaranirvana-apratisthitatvat), through its being both conditioned and unconditioned (samskrta-asamskrtatvena). This refers to the Yogācāra teaching that even though a Buddha has entered nirvana, they do not "abide" in some quiescent state separate from the world, but continue to give rise to extensive activity on behalf of others. This is also called the non-duality between the compounded (samskrta, referring to samsaric existence) and the uncompounded (asamskrta, referring to nirvana). It is also described as a "not turning back" from both samsara and nirvana. For the later thinker Dignaga, non-dual knowledge or advayajñāna is also a synonym for prajñaparamita (transcendent wisdom), which liberates one from samsara. Tantric Buddhism Buddhist Tantra, also known as Vajrayana, Mantrayana or Esoteric Buddhism, drew upon all these previous Indian Buddhist ideas and nondual philosophies to develop innovative new traditions of Buddhist practice and new religious texts called the Buddhist tantras (from the 6th century onwards). Tantric Buddhism was influential in China and is the main form of Buddhism in the Himalayan regions, especially Tibetan Buddhism. The concept of advaya has various meanings in Buddhist Tantra. According to the Tantric commentator Lilavajra, Buddhist Tantra's "utmost secret and aim" is Buddha nature. This is seen as a "non-dual, self-originated Wisdom (jnana), an effortless fount of good qualities". In Buddhist Tantra, there is no strict separation between the sacred (nirvana) and the profane (samsara), and all beings are seen as containing an immanent seed of awakening or Buddhahood. The Buddhist Tantras also teach that there is a non-dual relationship between emptiness and compassion (karuna); this unity is called bodhicitta. They also teach a "nondual pristine wisdom of bliss and emptiness".
Advaya is also said to be the co-existence of Prajña (wisdom) and Upaya (skill in means). These nondualities are also related to the idea of yuganaddha, or "union", in the Tantras. This is said to be the "indivisible merging of innate great bliss (the means) and clear light (emptiness)", as well as the merging of relative and ultimate truths and of the knower and the known, during Tantric practice. Buddhist Tantras also promote certain practices which are antinomian, such as sexual rites or the consumption of disgusting or repulsive substances (the "five ambrosias": feces, urine, blood, semen, and marrow). These are said to allow one to cultivate nondual perception of the pure and impure (and similar conceptual dualities), and thus to demonstrate one's attainment of nondual gnosis (advaya jñana). Indian Buddhist Tantra also views humans as a microcosmos which mirrors the macrocosmos. Its aim is to gain access to the awakened energy or consciousness of Buddhahood, which is nondual, through various practices. East-Asian Buddhism Chinese Chinese Buddhism was influenced by the philosophical strains of Indian Buddhist nondualism such as the Madhyamaka doctrines of emptiness and the two truths, as well as Yogacara and tathagata-garbha. For example, Chinese Madhyamaka philosophers like Jizang discussed the nonduality of the two truths. Chinese Yogacara also upheld the Indian Yogacara views on nondualism. One influential text in Chinese Buddhism which synthesizes Tathagata-garbha and Yogacara views is the Awakening of Faith in the Mahayana, which may be a Chinese composition. In Chinese Buddhism, the polarity of absolute and relative realities is also expressed as "essence-function". This was a result of an ontological interpretation of the two truths, as well as of influences from native Taoist and Confucian metaphysics. In this theory, the absolute is essence and the relative is function. They cannot be seen as separate realities, but interpenetrate each other.
This interpretation of the two truths as two ontological realities would go on to influence later forms of East Asian metaphysics. As Chinese Buddhism continued to develop in new innovative directions, it gave rise to new traditions like Tiantai and Chan (Zen), which also upheld their own unique teachings on non-duality. The Tiantai school, for example, taught a threefold truth instead of the classic "two truths" of Indian Madhyamaka. Its "third truth" was seen as the nondual union of the two truths which transcends both. Tiantai metaphysics is an immanent holism, which sees every phenomenon, moment or event as conditioned and manifested by the whole of reality. Every instant of experience is a reflection of every other, and hence suffering and nirvana, good and bad, Buddhahood and evildoing, are all "inherently entailed" within each other. Each moment of consciousness is simply the Absolute itself, infinitely immanent and self-reflecting. Two doctrines of the Huayan (Flower Garland) school, which flourished in China during the Tang period, are considered nondual by some scholars. King writes that the Fourfold Dharmadhatu and the doctrine of the mutual containment and interpenetration of all phenomena (dharmas), or "perfect interfusion" (yuanrong, 圓融), are classic nondual doctrines. This can be described as the idea that all phenomena "are representations of the wisdom of Buddha without exception" and that "they exist in a state of mutual dependence, interfusion and balance without any contradiction or conflict." According to this theory, any phenomenon exists only as part of the total nexus of reality; its existence depends on the total network of all other things, which are all equally connected to each other and contained in each other. Another Huayan metaphor used to express this view, called Indra's net, is also considered nondual by some. Zen The Buddha-nature and Yogacara philosophies have had a strong influence on Chán and Zen.
The teachings of Zen are expressed by a set of polarities: Buddha-nature and sunyata; absolute and relative; sudden and gradual enlightenment. The Lankavatara Sutra, a popular sutra in Zen, endorses the Buddha-nature and emphasizes purity of mind, which can be attained in gradations. The Diamond Sutra, another popular sutra, emphasizes sunyata, which "must be realized totally or not at all". The Prajnaparamita Sutras emphasize the non-duality of form and emptiness: form is emptiness, emptiness is form, as the Heart Sutra says. According to Chinul, Zen points not to mere emptiness, but to suchness, or the dharmadhatu. The idea that the ultimate reality is present in the daily world of relative reality fitted into the Chinese culture, which emphasized the mundane world and society. But this does not explain how the absolute is present in the relative world. This question is answered in such schemata as the Five Ranks of Tozan and the Oxherding Pictures. The continuous pondering of the break-through kōan (shokan) or hua tou ("word head") leads to kensho, an initial insight into "seeing the (Buddha-)nature". According to Victor Sogen Hori, a central theme of many koans is the "identity of opposites", pointing to the original nonduality. Hori describes kensho, when attained through koan study, as the absence of subject-object duality. The aim of the so-called break-through koan is to see the "nonduality of subject and object", in which "subject and object are no longer separate and distinct". Zen Buddhist training does not end with kenshō; practice is to be continued to deepen the insight and to express it in daily life, to fully manifest the nonduality of absolute and relative. To deepen the initial insight of kensho, shikantaza and kōan study are necessary.
This trajectory of initial insight followed by a gradual deepening and ripening is expressed by Linji Yixuan in his Three Mysterious Gates, the Four Ways of Knowing of Hakuin, the Five Ranks, and the Ten Ox-Herding Pictures, which detail the steps on the Path. Korean The polarity of absolute and relative is also expressed as "essence-function". The absolute is essence, the relative is function. They cannot be seen as separate realities, but interpenetrate each other. The distinction does not "exclude any other frameworks such as neng-so or 'subject-object' constructions", though the two "are completely different from each other in terms of their way of thinking". In Korean Buddhism, essence-function is also expressed as "body" and "the body's functions". A metaphor for essence-function is "a lamp and its light", a phrase from the Platform Sutra, where essence is lamp and function is light. Tibetan Buddhism Advaya: Gelugpa school Prasangika Madhyamaka The Gelugpa school, following Tsongkhapa, adheres to the advaya Prasaṅgika Mādhyamaka view, which states that all phenomena are sunyata, empty of self-nature, and that this "emptiness" is itself only a qualification, not a concretely existing "absolute" reality. Shentong In Tibetan Buddhism, the essentialist position is represented by shentong, while the nominalist, or non-essentialist, position is represented by rangtong. Shentong is a philosophical sub-school found in Tibetan Buddhism. Its adherents generally hold that the nature of mind (svasaṃvedana), the substratum of the mindstream, is "empty" of "other", i.e., empty of all qualities other than an inherently existing, ineffable nature. Shentong has often been incorrectly associated with the Cittamātra (Yogacara) position, but is in fact also Madhyamaka, and is present primarily as the main philosophical theory of the Jonang school, although it is also taught by the Sakya and Kagyu schools.
According to Shentongpa (proponents of shentong), the emptiness of ultimate reality should not be characterized in the same way as the emptiness of apparent phenomena, because it is prabhāśvara-saṃtāna, or "luminous mindstream", endowed with limitless Buddha qualities. It is empty of all that is false, not empty of the limitless Buddha qualities that are its innate nature. The contrasting Prasaṅgika view, that all phenomena are sunyata, empty of self-nature, and that this "emptiness" is not a concretely existing "absolute" reality, is labeled rangtong, "empty of self-nature". The shentong view is related to the Ratnagotravibhāga sutra and the Yogacara-Madhyamaka synthesis of Śāntarakṣita. The truth of sunyata is acknowledged, but not considered to be the highest truth, which is the empty nature of mind. Insight into sunyata is preparatory for the recognition of the nature of mind. Dzogchen Dzogchen is concerned with the "natural state" and emphasizes direct experience. The state of nondual awareness is called rigpa. This primordial nature is clear light, unproduced and unchanging, free from all defilements. Through meditation, the Dzogchen practitioner experiences that thoughts have no substance. Mental phenomena arise and fall in the mind, but fundamentally they are empty. The practitioner then considers where the mind itself resides. Through careful examination one realizes that the mind is emptiness. Karma Lingpa (1326–1386) revealed "Self-Liberation through Seeing with Naked Awareness" (rigpa ngo-sprod), which is attributed to Padmasambhava. The text gives an introduction, or pointing-out instruction (ngo-spro), into rigpa, the state of presence and awareness. In this text, Karma Lingpa writes about the unity of various terms for nonduality. Garab Dorje's three statements Garab Dorje (c. 665) epitomized the Dzogchen teaching in three principles, known as "Striking the Vital Point in Three Statements" (Tsik Sum Né Dek), said to be his last words.
These three statements are believed to convey the heart of his teachings and serve as a concise and profound encapsulation of Dzogchen's view, its practice of contemplation, and the role of conduct. They give, in short, the development a student has to undergo. Garab Dorje's three statements were integrated into the Nyingthig traditions, the most popular of which is the Longchen Nyingthig by Jigme Lingpa (1730–1798). The statements are:
Introducing directly the face of rigpa itself (ngo rang tok tu tré). Dudjom Rinpoche states this refers to: "Introducing directly the face of the naked mind as the rigpa itself, the innate primordial wisdom."
Deciding upon one thing and one thing only (tak chik tok tu ché). Dudjom states: "Because all phenomena, whatever manifests, whether saṃsāra or nirvāṇa, are none other than the rigpa’s own play, there is complete and direct decision that there is nothing other than the abiding of the continual flow of rigpa."
Confidence directly in the liberation of rising thoughts (deng drol tok tu cha). Dudjom comments: "In the recognition of namtok [arising thoughts], whatever arises, whether gross or subtle, there is direct confidence in the simultaneity of the arising and dissolution in the expanse of dharmakāya, which is the unity of rigpa and śūnyatā."
Hinduism Vedanta Several schools of Vedanta are informed by Samkhya, the earliest Indian school of dualism, but teach a form of nondualism. The best-known is Advaita Vedanta, but other nondual Vedanta schools also have a significant influence and following, such as Vishishtadvaita Vedanta and Dvaitadvaita, both of which are bhedabheda. "Advaita" refers to Atman-Brahman as the single universal existence beyond the plurality of the world, recognized as pure awareness or the witness-consciousness, as in Vedanta, Shaktism and Shaivism.
Although the term is best known from the Advaita Vedanta school of Adi Shankara, "advaita" is used in treatises by numerous medieval era Indian scholars, as well as modern schools and teachers. The Hindu concept of Advaita refers to the idea that all of the universe is one essential reality, and that all facets and aspects of the universe are ultimately an expression or appearance of that one reality. According to Dasgupta and Mohanta, non-dualism developed in various strands of Indian thought, both Vedic and Buddhist, from the Upanishadic period onward. The oldest traces of nondualism in Indian thought may be found in the Chandogya Upanishad, which pre-dates the earliest Buddhism. Pre-sectarian Buddhism may also have been responding to the teachings of the Chandogya Upanishad, rejecting some of its Atman-Brahman related metaphysics. Advaita appears in different shades in various schools of Hinduism, such as in Advaita Vedanta, Vishishtadvaita Vedanta (Vaishnavism), Suddhadvaita Vedanta (Vaishnavism), non-dual Shaivism and Shaktism. In the Advaita Vedanta of Adi Shankara, advaita implies that all of reality is one with Brahman, that the Atman (self) and Brahman (ultimate unchanging reality) are one. The advaita ideas of some Hindu traditions contrast with the schools that defend dualism or Dvaita, such as that of Madhvacharya, who stated that the experienced reality and God are two (dual) and distinct. Advaita Vedanta The nonduality of Advaita Vedanta is the identity of Brahman and the Atman. As in Samkhya, Atman is awareness, the witness-consciousness. Advaita has become a broad current in Indian culture and religions, influencing subsequent traditions like Kashmir Shaivism. The oldest surviving manuscript on Advaita Vedanta is by Gauḍapāda (6th century CE), who has traditionally been regarded as the teacher of Govinda bhagavatpāda and the grandteacher of Adi Shankara.
Advaita is best known from the Advaita Vedanta tradition of Adi Shankara (788–820 CE), who states that Brahman, the single unified eternal truth, is pure Being, Consciousness and Bliss (Sat-cit-ananda). Advaita, states Murti, is the knowledge of Brahman and self-consciousness (Vijnana) without differences. The goal of Vedanta is to know the "truly real" and thus become one with it. According to Advaita Vedanta, Brahman is the highest Reality. The universe, according to Advaita philosophy, does not simply come from Brahman; it is Brahman. Brahman is the single binding unity behind the diversity in all that exists in the universe. Brahman is also that which is the cause of all changes. Brahman is the "creative principle which lies realized in the whole world". The nondualism of Advaita relies on the Hindu concept of Ātman, a Sanskrit word that means "essence" or "real self" of the individual; it is also often rendered as "soul". Ātman is the first principle, the true self of an individual beyond identification with phenomena, the essence of an individual. Atman is the Universal Principle, one eternal undifferentiated self-luminous consciousness, asserts the Advaita Vedanta school of Hinduism. Advaita Vedanta philosophy considers Atman as self-existent awareness, limitless, non-dual and the same as Brahman. The Advaita school asserts that there is a "soul, self" within each living entity which is fully identical with Brahman. This identity holds that there is One Awareness that connects and exists in all living beings, regardless of their shapes or forms; there is no distinction, no superior, no inferior, no separate devotee soul (Atman), no separate God soul (Brahman). The Oneness unifies all beings; there is the divine in every being, and all existence is a single Reality, state the Advaita Vedantins. The nondualism concept of Advaita Vedanta asserts that each soul is non-different from the infinite Brahman.
Three levels of reality Advaita Vedanta adopts sublation as the criterion to postulate three levels of ontological reality:
Pāramārthika (paramartha, absolute): the Reality that is metaphysically true and ontologically accurate. It is the state of experiencing that "which is absolutely real and into which both other reality levels can be resolved". This experience cannot be sublated (exceeded) by any other experience.
Vyāvahārika (vyavahara), or samvriti-satya: the empirical or pragmatic reality. It is ever-changing over time, thus empirically true at a given time and context but not metaphysically true. It is "our world of experience, the phenomenal world that we handle every day when we are awake". It is the level in which both jiva (living creatures or individual souls) and Iswara are true; here, the material world is also true.
Prāthibhāsika (pratibhasika, apparent reality, unreality): "reality based on imagination alone". It is the level of experience in which the mind constructs its own reality. A well-known example is the perception of a rope in the dark as being a snake.
Similarities and differences with Buddhism Scholars state that Advaita Vedanta was influenced by Mahayana Buddhism, given the common terminology and methodology and some common doctrines. Eliot Deutsch and Rohit Dalvi state: Advaita Vedanta is related to Buddhist philosophy, which promotes ideas like the two truths doctrine and the doctrine that there is only consciousness (vijñapti-mātra). It is possible that the Advaita philosopher Gaudapada was influenced by Buddhist ideas. Shankara harmonised Gaudapada's ideas with the Upanishadic texts, and developed a very influential school of orthodox Hinduism. The Buddhist term vijñapti-mātra is often used interchangeably with the term citta-mātra, but they have different meanings. The standard translation of both terms is "consciousness-only" or "mind-only". Advaita Vedanta has been called "idealistic monism" by scholars, but some disagree with this label.
Another concept found in both Madhyamaka Buddhism and Advaita Vedanta is Ajativada ("ajāta"), which Gaudapada adopted from Nagarjuna's philosophy. Gaudapada "wove [both doctrines] into a philosophy of the Mandukaya Upanisad, which was further developed by Shankara". Michael Comans states there is a fundamental difference between Buddhist thought and that of Gaudapada, in that Buddhism has as its philosophical basis the doctrine of Dependent Origination, according to which "everything is without an essential nature (nihsvabhāva), and everything is empty of essential nature (svabhava-shunya)", while Gaudapada does not rely on this principle at all. Gaudapada's Ajativada is an outcome of reasoning applied to an unchanging nondual reality, according to which "there exists a Reality (sat) that is unborn (aja)" and has essential nature (svabhava), and this is the "eternal, fearless, undecaying Self (Atman) and Brahman". Thus, Gaudapada differs from Buddhist scholars such as Nagarjuna, states Comans, by accepting the premises and relying on the fundamental teaching of the Upanishads. Among other things, the Vedanta school of Hinduism holds the premise "Atman exists, as self-evident truth", a concept it uses in its theory of nondualism. Buddhism, in contrast, holds the premise "Atman does not exist (or, An-atman) as self-evident". Mahadevan suggests that Gaudapada adopted Buddhist terminology and adapted its doctrines to his Vedantic goals, much like early Buddhism adopted Upanishadic terminology and adapted its doctrines to Buddhist goals; both used pre-existing concepts and ideas to convey new meanings. Dasgupta and Mohanta note that Buddhism and Shankara's Advaita Vedanta are not opposing systems, but "different phases of development of the same non-dualistic metaphysics from the Upanishadic period to the time of Sankara".
Vishishtadvaita Vedanta Vishishtadvaita Vedanta is another main school of Vedanta and teaches the nonduality of the qualified whole, in which Brahman alone exists, but is characterized by multiplicity. It can be described as "qualified monism", "qualified non-dualism", or "attributive monism". According to this school, the world is real, yet underlying all the differences is an all-embracing unity, of which all "things" are an "attribute". Ramanuja, the main proponent of Vishishtadvaita philosophy, contends that the Prasthanatrayi ("The three courses") – namely the Upanishads, the Bhagavad Gita, and the Brahma Sutras – are to be interpreted in a way that shows this unity in diversity, for any other way would violate their consistency. Vedanta Desika defines Vishishtadvaita using the statement: Asesha Chit-Achit Prakaaram Brahmaikameva Tatvam – "Brahman, as qualified by the sentient and insentient modes (or attributes), is the only reality." Neo-Vedanta Neo-Vedanta, also called "neo-Hinduism", is a modern interpretation of Hinduism which developed in response to western colonialism and orientalism, and aims to present Hinduism as a "homogenized ideal of Hinduism" with Advaita Vedanta as its central doctrine. Unitarian Universalism had a strong impact on Ram Mohan Roy and the Brahmo Samaj, and subsequently on Swami Vivekananda. Vivekananda was one of the main representatives of Neo-Vedanta, a modern interpretation of Hinduism in line with western esoteric traditions, especially Transcendentalism, New Thought and Theosophy. His reinterpretation was, and is, very successful, creating a new understanding and appreciation of Hinduism within and outside India, and was the principal reason for the enthusiastic reception of yoga, transcendental meditation and other forms of Indian spiritual self-improvement in the West.
Narendranath Datta (Swami Vivekananda) became a member of a Freemasonry lodge "at some point before 1884" and, in his twenties, of the Sadharan Brahmo Samaj, a breakaway faction of the Brahmo Samaj led by Keshab Chandra Sen and Debendranath Tagore. Ram Mohan Roy (1772–1833), the founder of the Brahmo Samaj, had a strong sympathy for the Unitarians, who were closely connected to the Transcendentalists, who in turn were interested in and influenced by Indian religions early on. It was in this cultic milieu that Narendra became acquainted with Western esotericism. Debendranath Tagore brought this "neo-Hinduism" closer in line with western esotericism, a development which was furthered by Keshab Chandra Sen, who was also influenced by Transcendentalism, which emphasised personal religious experience over mere reasoning and theology. Sen's influence brought Vivekananda fully into contact with western esotericism, and it was also via Sen that he met Ramakrishna. Vivekananda's acquaintance with western esotericism made him very successful in western esoteric circles, beginning with his speech in 1893 at the Parliament of Religions. Vivekananda adapted traditional Hindu ideas and religiosity to suit the needs and understandings of his western audiences, who were especially attracted by and familiar with western esoteric traditions and movements like Transcendentalism and New Thought. In 1897 he founded the Ramakrishna Mission, which was instrumental in the spread of Neo-Vedanta in the west, and attracted people like Alan Watts. Aldous Huxley, author of The Perennial Philosophy, was associated with another neo-Vedanta organisation, the Vedanta Society of Southern California, founded and headed by Swami Prabhavananda. Together with Gerald Heard, Christopher Isherwood, and other followers, he was initiated by the Swami and was taught meditation and spiritual practices.
Neo-Vedanta, as represented by Vivekananda and Radhakrishnan, is indebted to Advaita Vedanta, but also reflects Advaya-philosophy. A main influence on neo-Advaita was Ramakrishna, himself a bhakta and tantrika, and the guru of Vivekananda. According to Michael Taft, Ramakrishna reconciled the dualism of formlessness and form, regarding the Supreme Being to be both Personal and Impersonal, active and inactive. Radhakrishnan acknowledged the reality and diversity of the world of experience, which he saw as grounded in and supported by the absolute or Brahman. According to Anil Sooklal, Vivekananda's neo-Advaita "reconciles Dvaita or dualism and Advaita or non-dualism". Radhakrishnan also reinterpreted Shankara's notion of maya. According to Radhakrishnan, maya is not a strict absolute idealism, but "a subjective misperception of the world as ultimately real". According to Sarma, standing in the tradition of Nisargadatta Maharaj, Advaitavāda means "spiritual non-dualism or absolutism", in which opposites are manifestations of the Absolute, which itself is immanent and transcendent. Neo-Vedanta was well-received among Theosophists, Christian Science, and the New Thought movement; Christian Science in turn influenced the self-study teaching A Course in Miracles. Kashmir Shaivism Advaita is also a central concept in various schools of Shaivism, such as Kashmir Shaivism and Shiva Advaita, which is generally known as Veerashaivism. Kashmir Shaivism is a school of Śaivism, described by Abhinavagupta as "paradvaita", meaning "the supreme and absolute non-dualism". It is categorized by various scholars as monistic idealism (absolute idealism, theistic monism, realistic idealism, transcendental physicalism or concrete monism). Kashmir Shaivism is based on a strong monistic interpretation of the Bhairava Tantras and their subcategory the Kaula Tantras, which were tantras written by the Kapalikas. There was additionally a revelation of the Siva Sutras to Vasugupta.
Kashmir Shaivism claimed to supersede the dualistic Shaiva Siddhanta. Somananda, the first theologian of monistic Shaivism, was the teacher of Utpaladeva, who was the grand-teacher of Abhinavagupta, who in turn was the teacher of Ksemaraja. The philosophy of Kashmir Shaivism can be seen in contrast to Shankara's Advaita. Advaita Vedanta holds that Brahman is inactive (niṣkriya) and the phenomenal world is a false appearance (māyā) of Brahman, just as a snake seen in semi-darkness is a false appearance of the rope lying there. In Kashmir Shaivism, all things are a manifestation of the Universal Consciousness, Chit or Brahman. Kashmir Shaivism sees the phenomenal world (Śakti) as real: it exists, and has its being in Consciousness (Chit). Kashmir Shaivism was influenced by, and took over doctrines from, several orthodox and heterodox Indian religious and philosophical traditions. These include Vedanta, Samkhya, Patanjali's Yoga and Nyaya, and various Buddhist schools, including Yogacara and Madhyamika, but also Tantra and the Nath-tradition. Contemporary Indian traditions Primal awareness is also part of other Indian traditions, which are less strongly, or not at all, organised in monastic and institutional organisations. Although often called "Advaita Vedanta", these traditions have their origins in vernacular movements and "householder" traditions, and have close ties to the Nath, Nayanars and Sant Mat traditions. Natha Sampradaya and Inchegeri Sampradaya The Natha Sampradaya, with Nath yogis such as Gorakhnath, introduced Sahaja, the concept of a spontaneous spirituality. According to Ken Wilber, this state reflects nonduality. The Nath-tradition has been influential in the west through the Inchagiri Sampradaya, a lineage of Hindu Navnath and Lingayat teachers from Maharashtra which is well known due to the popularity of Nisargadatta Maharaj.
Neo-Advaita Neo-Advaita is a new religious movement based on a modern Western interpretation of Advaita Vedanta, especially the teachings of Ramana Maharshi. According to Arthur Versluis, neo-Advaita is part of a larger religious current which he calls immediatism. Neo-Advaita has been criticized for this immediatism and its lack of preparatory practices. Notable neo-Advaita teachers are H. W. L. Poonja and his students Gangaji, Andrew Cohen, and Eckhart Tolle. Other eastern religions Sikhism Many newer, contemporary Sikhs have suggested that human souls and the monotheistic God are two different realities (dualism), distinguishing Sikhism from the monistic and various shades of nondualistic philosophies of other Indian religions. However, some Sikh scholars have attempted a nondualistic exegesis of Sikh scriptures, such as Bhai Vir Singh during the neocolonial reformist movement. According to Mandair, Singh interprets the Sikh scriptures as teaching nonduality. Sikh scholar Bhai Mani Singh is quoted as saying that Sikhism has all the essence of Vedanta philosophy. Historically, the Sikh symbol of Ik Oankaar has had a monistic meaning, and has often been reduced to simply meaning "There is but One God", a reading which is incorrect. Older exegesis of Sikh scripture, such as the Faridkot Teeka, has always described Sikh metaphysics as a non-dual, panentheistic universe. Taoism Taoism's wu wei (Chinese wu, not; wei, doing) is a term with various translations and interpretations designed to distinguish it from passivity. Commonly understood as "effortless action", this concept intersects with the core notions of nondualism. Wu wei encourages individuals to flow with the natural rhythms of existence, moving beyond dualistic perspectives and embracing a harmonious unity with the universe.
This holistic approach to life, characterized by spontaneous and unforced action, aligns with the essence of nondualism, emphasizing interconnectedness, oneness, and the dissolution of dualistic boundaries. By seamlessly integrating effortless action in both physical deeds and mental states, wu wei embodies the nondual philosophy's essence. The concept of Yin and Yang, often mistakenly conceived of as a symbol of dualism, is actually meant to convey the notion that all apparent opposites are complementary parts of a non-dual whole. Western traditions A modern strand of thought sees "nondual consciousness" as a universal psychological state, which is a common stratum and of the same essence in different spiritual traditions. It is derived from Neo-Vedanta and neo-Advaita, but has historical roots in neo-Platonism, Western esotericism, and Perennialism. The idea of nondual consciousness as "the central essence" is a universalistic and perennialist idea, which is part of a modern mutual exchange and synthesis of ideas between western spiritual and esoteric traditions and Asian religious revival and reform movements. Central elements in the western traditions are Neo-Platonism, which had a strong influence on Christian contemplation or mysticism, and its accompanying apophatic theology. Medieval Abrahamic religions Christian contemplation and mysticism In Christian mysticism, contemplative prayer and apophatic theology are central elements. In contemplative prayer, the mind is focused by constant repetition of a phrase or word. Saint John Cassian recommended use of the phrase "O God, make speed to save me: O Lord, make haste to help me". Another formula for repetition is the name of Jesus or the Jesus Prayer, which has been called "the mantra of the Orthodox Church", although the term "Jesus Prayer" is not found in the Fathers of the Church. The author of The Cloud of Unknowing recommended use of a monosyllabic word, such as "God" or "Love".
Apophatic theology is derived from Neo-Platonism via Pseudo-Dionysius the Areopagite. In this approach, the notion of God is stripped of all positive qualifications, leaving a "darkness" or "unground"; it had a strong influence on western mysticism. A notable example is Meister Eckhart, who also attracted attention from Zen Buddhists like D. T. Suzuki in modern times, due to the similarities between Buddhist thought and Neo-Platonism. The Cloud of Unknowing – an anonymous work of Christian mysticism written in Middle English in the latter half of the 14th century – advocates a mystic relationship with God. The text describes a spiritual union with God through the heart. The author of the text advocates centering prayer, a form of inner silence. According to the text, God cannot be known through knowledge or from intellection. It is only by emptying the mind of all created images and thoughts that we can arrive at an experience of God. Continuing on this line of thought, God is completely unknowable by the mind. God is not known through the intellect but through intense contemplation, motivated by love, and stripped of all thought. Thomism, though not non-dual in the ordinary sense, considers the unity of God so absolute that even the duality of subject and predicate, to describe him, can be true only by analogy. In Thomist thought, even the Tetragrammaton is only an approximate name, since "I am" involves a predicate whose own essence is its subject. The former nun and contemplative Bernadette Roberts is considered a nondualist by Jerry Katz. The hypostatic union is an incomplete form of non-duality applied to a tertiary entity, neglecting the subjective self. Jewish Hasidism and Kabbalism According to Jay Michaelson, nonduality begins to appear in the medieval Jewish textual tradition, which peaked in Hasidism. One of the most striking contributions of the Kabbalah, which became a central idea in Chasidic thought, was a highly innovative reading of the monotheistic idea.
The belief in one God is no longer perceived as the mere rejection of other deities or intermediaries, but as a denial of any existence outside of God. Western philosophy Baruch Spinoza's formulation of pantheism in the 17th century constitutes a seminal European manifestation of nondualism. His philosophical work, expounded especially in the Ethics, posits the radical idea that fuses divinity with the material world, suggesting that God and the universe are not separate entities but different facets of a single underlying substance. In his worldview, the finite and the infinite are harmoniously interwoven, challenging René Descartes' dualistic perspective. One of Friedrich Nietzsche's philosophical insights also resonates with nondualism. Nietzsche wrote that "We cease to think when we refuse to do so under the constraint of language." This idea is explored in his essay On Truth and Lies in a Nonmoral Sense. His scrutiny of conventional thought and language urges a departure from linguistic boundaries. This perspective aligns with the nondual notion of transcending dualistic concepts and engaging with reality in a more immediate, intuitive manner. Nondual consciousness as common essence According to the common-core thesis, different descriptions can mask quite similar if not identical experiences. An influential contemporary proponent of Perennialism was Aldous Huxley, who was influenced by Vivekananda's Neo-Vedanta and Universalism, and popularized the notion of a common mystical core in his book The Perennial Philosophy. Elias Amidon describes this common core as an "indescribable, but definitely recognizable, reality that is the ground of all being", a reality signified by "many names" from "spiritual traditions throughout the world". According to Renard, these names are based on an experience or intuition of "the Real".
According to Renard, nondualism as common essence prefers the term "nondualism", instead of monism, because this understanding is "nonconceptual", "not graspable in an idea". Even to call this "ground of reality", "One", or "Oneness" is attributing a characteristic to that ground of reality. The only thing that can be said is that it is "not two" or "non-dual". According to Renard, Alan Watts has been one of the main contributors to the popularisation of the non-monistic understanding of "nondualism". Perennial philosophy The Perennial philosophy has its roots in the Renaissance interest in neo-Platonism and its idea of The One, from which all existence emanates. Marsilio Ficino (1433–1499) sought to integrate Hermeticism with Greek and Jewish-Christian thought, discerning a Prisca theologia which could be found in all ages. Giovanni Pico della Mirandola (1463–1494) suggested that truth could be found in many, rather than just two, traditions. He proposed a harmony between the thought of Plato and Aristotle, and saw aspects of the Prisca theologia in Averroes, the Koran, the Cabala and other sources. Agostino Steuco (1497–1548) coined the term philosophia perennis. Orientalism The western world has been exposed to Indian religions since the late 18th century. The first western translation of a Sanskrit text was made in 1785 and marked a growing interest in Indian culture and languages. The first translation of the Upanishads, which discuss dualism and nondualism, appeared in two parts in 1801 and 1802 and influenced Arthur Schopenhauer, who called them "the consolation of my life". Early translations also appeared in other European languages. Scholarly debates Religious experience According to Victor Sogen Hori, the study of religious experience can be traced back to William James, who first used the term "religious experience" in his book The Varieties of Religious Experience. The origins of the use of this term can be dated further back.
In the 18th, 19th, and 20th centuries, several historical figures put forth very influential views that religion and its beliefs can be grounded in experience itself. While Kant held that moral experience justified religious beliefs, John Wesley, in addition to stressing individual moral exertion, thought that the religious experiences in the Methodist movement (paralleling the Romantic Movement) were foundational to religious commitment as a way of life. Wayne Proudfoot traces the roots of the notion of "religious experience" to the German theologian Friedrich Schleiermacher (1768–1834), who argued that religion is based on a feeling of the infinite. The notion of "religious experience" was used by Schleiermacher and Albert Ritschl to defend religion against the growing scientific and secular critique, and to defend the view that human (moral and religious) experience justifies religious beliefs. Such religious empiricism would later be seen as highly problematic and was – during the period between the world wars – famously rejected by Karl Barth. In the 20th century, religious as well as moral experience as justification for religious beliefs still held sway. Some influential modern scholars holding this liberal theological view are Charles Raven and the Oxford physicist/theologian Charles Coulson. The notion of "religious experience" was adopted by many scholars of religion, of whom William James was the most influential. Criticism The notion of "experience" has been criticised. Robert Sharf points out that "experience" is a typical Western term, which has found its way into Asian religiosity via western influences. Insight is not the "experience" of some transcendental reality, but is a cognitive event, the (intuitive) understanding or "grasping" of some specific understanding of reality, as in kensho or anubhava. "Pure experience" does not exist; all experience is mediated by intellectual and cognitive activity.
A pure consciousness without concepts, reached by cleaning the doors of perception, would be an overwhelming chaos of sensory input without coherence. Rejection of the common core thesis The "common-core thesis" is criticised by "diversity theorists" such as S. T. Katz and W. Proudfoot. The idea of a common essence has also been questioned by Yandell, who discerns various "religious experiences" and their corresponding doctrinal settings, which differ in structure and phenomenological content, and in the "evidential value" they present. Yandell discerns five sorts:
Numinous experiences – Monotheism (Jewish, Christian, Vedantic)
Nirvanic experiences – Buddhism, "according to which one sees that the self is but a bundle of fleeting states"
Kevala experiences – Jainism, "according to which one sees the self as an indestructible subject of experience"
Moksha experiences – Hinduism, Brahman "either as a cosmic person, or, quite differently, as qualityless"
Nature mystical experiences
The specific teachings and practices of a specific tradition may determine what "experience" someone has, which means that this "experience" is not the proof of the teaching, but a result of the teaching. The notion of what exactly constitutes "liberating insight" varies between the various traditions, and even within the traditions. Bronkhorst, for example, notices that the conception of what exactly "liberating insight" is in Buddhism was developed over time. Whereas originally it may not have been specified, later on the Four Truths served as such, to be superseded by pratityasamutpada, and still later, in the Hinayana schools, by the doctrine of the non-existence of a substantial self or person. And Schmithausen notices that still other descriptions of this "liberating insight" exist in the Buddhist canon.
Phenomenology Nondual awareness, also called pure consciousness or awareness, contentless consciousness, consciousness-as-such, and Minimal Phenomenal Experience, is a topic of phenomenological research. As described in Samkhya-Yoga and other systems of meditation, and referred to as, for example, Turya and Atman, pure awareness manifests in advanced states of meditation. Pure consciousness is distinguished from the workings of the mind, and "consists in nothing but the being seen of what is seen". Gamma & Metzinger (2021) present twelve factors in their phenomenological analysis of pure awareness experienced by meditators, including luminosity; emptiness and non-egoic self-awareness; and witness-consciousness.
Animalism (philosophy)
In the philosophical subdiscipline of ontology, animalism is a theory of personal identity that asserts that humans are animals. The concept of animalism is advocated by philosophers Eric T. Olson, Peter van Inwagen, Paul Snowdon, Stephan Blatti, David Hershenov and David Wiggins. The view stands in contrast to positions such as John Locke's psychological criterion for personal identity or various forms of mind–body dualism, such as Richard Swinburne's account. Thinking-animal argument A common argument for animalism is known as the thinking-animal argument. It asserts the following: where a given person is, there is also a Homo sapiens animal occupying the same space; that Homo sapiens animal is thinking; the person occupying that space is the thinking being; therefore, the human person is also a human animal. Use of term in ethics A less common, but perhaps increasing, use of the term animalism is to refer to the ethical view that all or most animals are worthy of moral consideration. It may be similar, though not necessarily identical, to sentientism.
Pre-Socratic philosophy
Pre-Socratic philosophy, also known as Early Greek Philosophy, is ancient Greek philosophy before Socrates. Pre-Socratic philosophers were mostly interested in cosmology, the beginning and the substance of the universe, but the inquiries of these early philosophers spanned the workings of the natural world as well as human society, ethics, and religion. They sought explanations based on natural law rather than the actions of gods. Their works and writings have been almost entirely lost. Knowledge of their views comes from testimonia, i.e. later authors' discussions of the work of the pre-Socratics. Philosophy found fertile ground in the ancient Greek world because of the close ties with neighboring civilizations and the rise of autonomous civil entities, poleis. Pre-Socratic philosophy began in the 6th century BC with the three Milesians: Thales, Anaximander, and Anaximenes. They all attributed the arche (a word that could take the meaning of "origin", "substance" or "principle") of the world to, respectively, water, apeiron (the unlimited), and air. Another three pre-Socratic philosophers came from nearby Ionian towns: Xenophanes, Heraclitus, and Pythagoras. Xenophanes is known for his critique of the anthropomorphism of gods. Heraclitus, who was notoriously difficult to understand, is known for his maxim on impermanence, ta panta rhei, and for attributing fire to be the arche of the world. Pythagoras created a cult-like following that advocated that the universe was made up of numbers. The Eleatic school (Parmenides, Zeno of Elea, and Melissus) followed in the 5th century BC. Parmenides claimed that only one thing exists and nothing can change. Zeno and Melissus mainly defended Parmenides' opinion. Anaxagoras and Empedocles offered a pluralistic account of how the universe was created. Leucippus and Democritus are known for their atomism, and their views that only void and matter exist. The Sophists advanced philosophical relativism.
The Pre-Socratics have had significant impact on several concepts of Western philosophy, such as naturalism and rationalism, and paved the way for scientific methodology. Terminology Pre-Socratic is a term adopted in the 19th century to refer to this group of philosophers. It was first used by the German philosopher J. A. Eberhard as "vorsokratische Philosophie" in the late 18th century. In earlier literature they were referred to as physikoi ("physicists", after physis, "nature"), and their activity, as physiologoi (physical or natural philosophers), with this usage arising with Aristotle to differentiate them from theologoi (theologians) and mythologoi (storytellers and bards who conveyed Greek mythology), who attributed natural phenomena to the gods. The term was coined to highlight a fundamental change in philosophical inquiries between the philosophers who lived before Socrates, who were interested in the structure of nature and cosmos (i.e., the universe, with the implication that the universe had order to it), and Socrates and his successors, who were mostly interested in ethics and politics. The term comes with drawbacks, as several of the pre-Socratics were highly interested in ethics and how to live the best life. Further, the term implies that the pre-Socratics are less significant than Socrates, or even that they were merely a stage (implying teleology) to classical era philosophy. The term is also chronologically inaccurate, as the last of the pre-Socratics were contemporaries of Socrates. According to James Warren, the distinction between the pre-Socratic philosophers and philosophers of the classical era is demarcated not so much by Socrates, but by geography and what texts survived. The shift from the pre-Socratic to the classical periods involves a shift from philosophers being dispersed throughout the Greek-speaking world to their being concentrated in Athens.
Further, starting in the classical period we have complete surviving texts, whereas in the pre-Socratic era we have only fragments. Scholar André Laks distinguishes two traditions of separating the pre-Socratics from the Socratics, dating back to the classical era and running through current times. The first tradition is the Socratic-Ciceronian, which uses the content of their philosophical inquiries to divide the two groups: the pre-Socratics were interested in nature whereas Socrates focused on human affairs. The other tradition, the Platonic-Aristotelian, emphasizes method as the distinction between the two groups, as Socrates moved to a more epistemological approach of studying various concepts. Because of the drawbacks of the term pre-Socratic, Early Greek Philosophy is also used, most commonly in Anglo-Saxon literature. André Laks and Glenn W. Most have especially popularized this shift in describing the era as "Early Greek Philosophy" over "Pre-Socratic Philosophy" through their comprehensive, nine-volume Loeb editions of Early Greek Philosophy. In their first volume, they distinguish their systematic approach from that of Hermann Diels, beginning with the choice of "Early Greek Philosophy" over "pre-Socratic philosophy", most notably because Socrates is contemporary with, and sometimes even prior to, philosophers traditionally considered "pre-Socratic" (e.g., the Atomists). Sources Very few fragments of the works of the pre-Socratic philosophers have survived. The knowledge we have of the pre-Socratics derives from the accounts of later writers such as Plato, Aristotle, Plutarch, Diogenes Laërtius, Stobaeus, and Simplicius, and some early Christian theologians, especially Clement of Alexandria and Hippolytus of Rome. Many of the works are titled Peri Physeos, or On Nature, a title probably attributed later by other authors. These accounts, known as testimonia (testimonies), often come from biased writers.
Consequently, it is sometimes difficult to determine the actual line of argument some pre-Socratics used in supporting their views. Adding more difficulty to their interpretation is the obscure language they used. Plato paraphrased the pre-Socratics and showed no interest in accurately representing their views. Aristotle was more accurate, but saw them under the scope of his philosophy. Theophrastus, Aristotle's successor, wrote an encyclopedic book, Opinions of the Physicists, that was the standard work about the pre-Socratics in ancient times. It is now lost, but Simplicius relied on it heavily in his accounts. In 1903, the German professors H. Diels and W. Kranz published Die Fragmente der Vorsokratiker (The Fragments of the pre-Socratics), which collected all of the known fragments. Scholars now use this book to reference the fragments using a coding scheme called Diels–Kranz numbering. The first two characters of the scheme are "DK" for Diels and Kranz. Next is a number representing a specific philosopher. After that is a code regarding whether the fragment is a testimonium, coded as "A", or "B" if it is a direct quote from the philosopher. Last is a number assigned to the fragment, which may include a decimal to reflect specific lines of a fragment. For example, "DK59B12.3" identifies line 3 of Anaxagoras fragment 12. A similar way of referring to quotes is the system prefixed with "LM" by André Laks and Glenn W. Most, who edited Early Greek Philosophy in 2016. Collectively, these fragments are called doxography (from the Latin doxographus, ultimately from the Greek word for "opinion", doxa). Historical background Philosophy emerged in ancient Greece in the 6th century BC. The pre-Socratic era lasted about two centuries, during which the expanding Persian Achaemenid Empire was stretching to the west, while the Greeks were advancing in trade and sea routes, reaching Cyprus and Syria. The first pre-Socratics lived in Ionia, on the western coast of Anatolia.
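The Diels–Kranz scheme described above is regular enough to be decomposed mechanically. The following sketch illustrates the structure of a DK reference; the `parse_dk` helper is purely hypothetical, not part of any standard scholarly tool:

```python
import re

# Pattern for a Diels-Kranz reference such as "DK59B12.3":
# "DK", philosopher number (59 = Anaxagoras), fragment type
# ("A" = testimonium, "B" = direct quote), fragment number,
# and an optional line number after a decimal point.
DK_PATTERN = re.compile(
    r"^DK(?P<philosopher>\d+)(?P<kind>[AB])(?P<fragment>\d+)(?:\.(?P<line>\d+))?$"
)

def parse_dk(ref: str) -> dict:
    """Split a DK reference into its four components."""
    m = DK_PATTERN.match(ref)
    if m is None:
        raise ValueError(f"not a DK reference: {ref!r}")
    return {
        "philosopher": int(m.group("philosopher")),
        "kind": "testimonium" if m.group("kind") == "A" else "quotation",
        "fragment": int(m.group("fragment")),
        "line": int(m.group("line")) if m.group("line") else None,
    }

# The article's example: line 3 of Anaxagoras fragment 12.
print(parse_dk("DK59B12.3"))
```

The optional `.line` group mirrors the decimal suffix the scheme uses for specific lines; references without it (e.g. "DK59A1") simply leave the line unset.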
The Persians conquered the towns of Ionia around 540 BC, and Persian tyrants then ruled them. The Greeks revolted in 499 BC, but ultimately were defeated in 494 BC. Slowly but steadily, Athens became the philosophical center of Greece by the middle of the fifth century. Athens was entering its Classical Era, with philosophers such as Socrates, Plato, and Aristotle, but the impact of the pre-Socratics continued. Several factors contributed to the birth of pre-Socratic philosophy in Ancient Greece. Ionian towns, especially Miletus, had close trade relations with Egypt and Mesopotamia, cultures with observations about the natural world that differed from those of the Greeks. Apart from technical skills and cultural influences, of paramount significance was that the Greeks acquired the alphabet around 800 BC. Another factor was the ease and frequency of intra-Greek travel, which led to the blending and comparison of ideas. During the sixth century BC, various philosophers and other thinkers moved easily around Greece, especially visiting pan-Hellenic festivals. While long-distance communication was difficult during ancient times, persons, philosophers, and books moved through other parts of the Greek peninsula, the Aegean islands, and Magna Graecia, a coastal area in Southern Italy. The democratic political system of independent poleis also contributed to the rise of philosophy. Most Greek towns were not ruled by autocrats or priests, allowing citizens to question freely a wide range of issues. Various poleis flourished and became wealthy, especially Miletus, which was a centre of trade and production during the early phases of pre-Socratic philosophy. Trade of grain, oil, wine, and other commodities among the poleis and their colonies meant these towns were not isolated but embedded – and economically dependent – on a complex and changeable web of trade routes. Greek mythology also influenced the birth of philosophy.
The philosophers' ideas were, to a certain extent, answers to questions that were subtly present in the work of Homer and Hesiod. The pre-Socratics arose from a world dominated by myths, sacred places, and local deities. The work of epic poets such as Homer, Hesiod and others reflected this environment. They are considered predecessors of the pre-Socratics since they seek to address the origin of the world and to organize traditional folklore and legends systematically. Greek popular religion contained many features of the religions of neighboring civilizations, such as the Egyptians, Mesopotamians, and Hittites. The first pre-Socratic philosophers also traveled extensively to other lands, meaning that pre-Socratic thought had roots abroad as well as domestically. Homer, in his two epic poems, not only personifies gods and other natural phenomena, such as the Night, but he hints at some views on the origin and the nature of the world that came under scrutiny by the pre-Socratics. In his epic poem Theogony (literally meaning the birth of gods), Hesiod (c. 700 BC) describes the origin of the gods, and apart from the solid mythical structure, one can notice an attempt towards organizing beliefs using some form of rationalization; an example would be that Night gives birth to Death, Sleep and Dreams. Transmigration of the soul, a belief of the Orphics, a religious cult originating from Thrace, had affected the thought of the 5th century BC, but the overall influence of their cosmology on philosophy is disputed. Pherecydes, a poet, magician, and contemporary of Thales, in his book describes a particular cosmogony, asserting that three gods pre-existed – a step towards rationality.
The pre-Socratic philosophers shared the intuition that there was a single explanation that could account for both the plurality and the singularity of the whole – and that explanation would not be the direct actions of the gods. The pre-Socratic philosophers rejected traditional mythological explanations of the phenomena they saw around them in favor of more rational explanations, initiating analytic and critical thought. Their efforts were directed at the investigation of the ultimate basis and essential nature of the external world. Many sought the material principle (arche) of things, and the method of their origin and disappearance. They emphasized the rational unity of things and rejected supernatural explanations, seeking natural principles at work in the world and human society. The pre-Socratics saw the world as a cosmos, an ordered arrangement that could be understood via rational inquiry. In their effort to make sense of the cosmos they coined new terms and concepts such as rhythm, symmetry, analogy, deduction, reductionism, the mathematization of nature, and others. An important term that is met in the thought of several pre-Socratic philosophers is arche. Depending on the context, it can take various related meanings. It could mean the beginning or origin, with the undertone that there is an effect on the things to follow. It might also mean a principle or a cause (especially in the Aristotelian tradition). A common feature of the pre-Socratics is the absence of empiricism and experimentation to prove their theories. This may have been because of a lack of instruments, or because of a tendency to view the world as an undeconstructable unity, so that it would be impossible for an external eye to observe tiny fractions of nature under experimental control. According to Jonathan Barnes, a professor of ancient philosophy, pre-Socratic philosophy exhibits three significant features: it was internal, systematic and economical.
Internal means that they tried to explain the world with characteristics found within this world; systematic, that they tried to universalize their findings; and economical, that they tried to invoke only a few new terms. Based on these features they reached their most significant achievement: they changed the course of human thought from myth to philosophy and science. The pre-Socratics were not atheists; however, they minimized the extent of the gods' involvement in natural phenomena such as thunder, or eliminated the gods from the natural world entirely.

Pre-Socratic philosophy encompasses the first of the three phases of ancient Greek philosophy, which spanned around a thousand years. The pre-Socratic phase itself is divided into three parts. The first, mainly the Milesians, Xenophanes, and Heraclitus, consisted of rejecting traditional cosmogony and attempting to explain nature based on empirical observations and interpretations. The second – that of the Eleatics – resisted the idea that change or motion can happen; based on their radical monism, they believed that only one substance exists and forms the Kosmos, everything else being merely a transformation of it. In the third, the post-Eleatics (mainly Empedocles, Anaxagoras, and Democritus) opposed most Eleatic teaching and returned to the naturalism of the Milesians.

The pre-Socratics were succeeded by the second phase of ancient philosophy, in which the philosophical movements of Platonism, Cynicism, Cyrenaicism, Aristotelianism, Pyrrhonism, Epicureanism, Academic skepticism, and Stoicism rose to prominence, lasting until 100 BC. In the third phase, philosophers studied their predecessors.

Pre-Socratic philosophers

Milesian beginning: Thales, Anaximander and Anaximenes

The Milesian school was located in Miletus, Ionia, in the 6th century BC.
It consisted of Thales, Anaximander, and Anaximenes, who most probably had a teacher–pupil relationship. They were mainly occupied with the origin and substance of the world; each of them attributed the Whole to a single arche (beginning or principle), starting the tradition of naturalistic monism.

Thales

Thales (c. 624–546 BC) is considered the father of philosophy. None of his writings have survived. He is considered the first western philosopher since he was the first to use reason, to use proof, and to generalize. He created the word cosmos, the first word to describe the universe. He contributed to geometry and predicted the eclipse of 585 BC. Thales may have been of Phoenician ancestry. Miletus was a meeting point and trade centre of the great civilizations of the time, and Thales visited the neighbouring civilizations: Egypt, Mesopotamia, Crete, and Phoenicia. In Egypt, geometry was advanced as a means of dividing agricultural fields; Thales, though, advanced geometry with abstract deductive reasoning, reaching universal generalizations. Proclus, a later Athenian philosopher, attributed the theorem now known as Thales's theorem to Thales. He is also known for being the first to claim that the base angles of isosceles triangles are equal, and that a diameter bisects the circle. Thales visited Sardis, as many Greeks did then, where astronomical records were kept, and used astronomical observations for practical matters (the olive harvest). Thales was widely considered a genius in ancient times and was revered as one of the Seven Sages of Greece.

Most importantly, what marks Thales as the first philosopher is his posing of the fundamental philosophical question about the origin and the substance of the world, while providing an answer based on empirical evidence and reasoning. He attributed the origin of the world to an element instead of a divine being. Our knowledge of Thales' claim derives from Aristotle.
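The theorem Proclus attributes to Thales admits a compact modern statement and proof sketch. The notation below is, of course, a modern reconstruction rather than anything found in the ancient sources; note that the proof uses only the isosceles-triangle result also credited to Thales in antiquity.

```latex
% Thales's theorem: if AC is a diameter of a circle with centre O
% and B is any other point on the circle, the angle at B is right.
%
% Proof sketch. Let \angle BAC = \alpha and \angle BCA = \beta.
% Since OA = OB = OC (all radii), triangles OAB and OBC are
% isosceles, so their base angles are equal:
\angle OBA = \alpha, \qquad \angle OBC = \beta,
\qquad\text{hence}\qquad \angle ABC = \alpha + \beta .
% The angles of triangle ABC sum to two right angles:
\alpha + \beta + (\alpha + \beta) = 180^{\circ}
\;\Longrightarrow\; \angle ABC = \alpha + \beta = 90^{\circ}.
```

The derivation shows why the two claims are naturally linked in the tradition: the right angle in the semicircle falls out directly from the equality of base angles in isosceles triangles.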
Aristotle, while discussing the opinions of previous philosophers, tells us that "Thales, the founder of this type of philosophy, says the principle (arche) is water." What he meant by arche is a matter of interpretation (it might be the origin, the element, or an ontological matrix), but regardless of the various interpretations, he conceived the world as One thing instead of a collection of various items and speculated on the binding or original elements. Another important aspect of Thales' philosophy is his claim that everything is full of gods. What he meant by that is again a matter of interpretation, ranging from a theistic to an atheistic reading; but the most plausible explanation, suggested by Aristotle, is that Thales was advocating a theory of hylozoism: that the universe, the sum of all things that exist, is divine and alive. Lastly, another notable claim by Thales is that the earth "rests on water" – perhaps a conclusion drawn after observing fish fossils on land.

Anaximander

Anaximander (610–546 BC), also from Miletus, was 25 years younger than Thales. He was a member of the elite of Miletus, wealthy and a statesman. He showed interest in many fields, including mathematics and geography. He drew the first map of the world, conceived the earth as a cylinder floating freely in space, and made instruments to mark time, something like a sundial. In response to Thales, he postulated as the first principle an undefined, unlimited substance without qualities (apeiron), out of which the primary opposites, hot and cold, moist and dry, became differentiated. His answer was an attempt to explain observable changes by attributing them to a single source that transforms into various elements. Like Thales, he provided a naturalistic explanation for phenomena previously given supernatural ones. He is also known for speculating on the origin of mankind. He proclaimed that the earth does not rest on any other structure but lies unsupported in the middle of the universe.
Further, he developed a rudimentary evolutionary explanation for biodiversity in which constant universal powers affected the lives of animals. According to Giorgio de Santillana, a philosophy professor at the Massachusetts Institute of Technology, Anaximander's conception of a universe governed by laws shaped the philosophical thinking of centuries to come and was as important as the discovery of fire or Einstein's breakthroughs in science.

Anaximenes

Little is known of the life of Anaximenes (585–525 BC). He was a younger contemporary and friend of Anaximander, and the two worked together on various intellectual projects. He also wrote a book on nature in prose. Anaximenes took as his principle aēr (air), conceiving it as being modified, via thickening and thinning, into the other classical elements: fire, wind, clouds, water, and earth. While his theory resembled Anaximander's, in that both posited a single source of the universe, Anaximenes suggested sophisticated mechanisms by which air is transformed into the other elements, mainly through changes of density. Since the classical era he has been considered the father of naturalistic explanations. Anaximenes expanded Anaximander's attempt to find a unitary cause explaining natural phenomena both living and nonliving, without, according to James Warren, having to "reduce living things in some way to mere locations of material change".

Xenophanes

Xenophanes was born in Colophon, an Ionian town near Miletus. He was a well-traveled poet whose primary interests were theology and epistemology. Concerning theology, he pointed out that we do not know whether there is one god or many gods, or, in the latter case, whether there is a hierarchy among them. To critique the anthropomorphic representation of the gods by his contemporary Greeks, he pointed out that different nations depicted their gods as looking like themselves.
He famously said that if oxen, horses, or lions could draw, they would draw their gods as oxen, horses, or lions. This critique was not limited to the looks of the gods but extended to their behaviour. Greek mythology, mostly shaped by the poets Homer and Hesiod, attributed moral failures such as jealousy and adultery to the gods; Xenophanes opposed this, holding that the gods must be morally superior to humans. Xenophanes, however, never claimed the gods were omnipotent, omnibenevolent, or omniscient.

Xenophanes also offered naturalistic explanations for phenomena such as the sun, the rainbow, and St. Elmo's fire. Traditionally these were attributed to divine intervention, but according to Xenophanes they were actually effects of clouds. These explanations indicate empiricism in his thought and might constitute a kind of proto-scientism. From Aristotle onward, scholars overlooked his cosmology and naturalism (perhaps because of Xenophanes' lack of teleology), but current literature suggests otherwise.

Concerning epistemology, Xenophanes questioned the validity of human knowledge. Humans usually tend to assert that their beliefs are real and represent truth. While Xenophanes was a pessimist about the capability of humans to reach knowledge, he also believed in gradual progress through critical thinking. Xenophanes tried to find naturalistic explanations for meteorological and cosmological phenomena. Ancient philosophy historian Alexander Mourelatos notes that Xenophanes used a pattern of thought that is still in use in modern metaphysics. Xenophanes, by reducing meteorological phenomena to clouds, created an argument of the form "X in reality is Y", for example B32: "What they call Iris [the rainbow] that too is in reality a cloud: one that appears to the eye as purple, red, and green." The same pattern is still used today, as in "lightning is massive electrical discharge" or "items such as tables are a cloud of micro-particles".
Mourelatos comments that this type of cloud analogy remains present in scientific language and "...is the modern philosopher's favourite subject for illustrations of inter-theoretic identity". According to Aristotle and Diogenes Laertius, Xenophanes was Parmenides' teacher; but it is a matter of debate in the current literature whether Xenophanes should also be considered an Eleatic.

Heraclitus

The hallmark of Heraclitus' philosophy is flux. In fragment DK B30, Heraclitus writes: "This world-order [Kosmos], the same of all, no god nor man did create, but it ever was and is and will be: everliving fire, kindling in measures and being quenched in measures." Heraclitus posited that all things in nature are in a state of perpetual flux. Like previous monist philosophers, Heraclitus claimed that the arche of the world was fire, which was itself subject to change – making him a materialist monist. From fire all things originate and to it all things return in a process of eternal cycles. Fire becomes water and earth, and vice versa. These everlasting modifications explain his view that the cosmos was and is and will be.

The idea of continual flux also appears in the "river fragments". There, Heraclitus claims we cannot step into the same river twice, a position summarized by the slogan ta panta rhei (everything flows). One fragment reads: "Into the same rivers we both step and do not step; we both are and are not" (DK 22 B49a). Heraclitus seemingly suggests that not only is the river constantly changing, but so are we, even hinting at existential questions about humankind.

Another key concept of Heraclitus is that opposites somehow mirror each other, a doctrine called the unity of opposites. Two fragments relating to this concept state, "As the same thing in us is living and dead, waking and sleeping, young and old.
For these things having changed around are those, and those in turn having changed around are these" (B88) and "Cold things warm up, the hot cools off, wet becomes dry, dry becomes wet" (B126). Heraclitus' doctrine of the unity of opposites suggests that the unity of the world and of its various parts is kept through the tension produced by the opposites. Furthermore, each polar substance contains its opposite, in a continual circular exchange and motion that results in the stability of the cosmos. Another of Heraclitus' famous axioms highlights this doctrine (B53): "War is father of all and king of all; and some he manifested as gods, some as men; some he made slaves, some free", where war means the creative tension that brings things into existence.

A fundamental idea in Heraclitus is logos, an ancient Greek word with a variety of meanings; Heraclitus might have used a different meaning of the word with each usage in his book. Logos seems to be a universal law that unites the cosmos, according to one fragment: "Listening not to me but to the logos, it is wise to agree (homologein) that all things are one" (DK 22 B50). While logos is everywhere, very few people are familiar with it. B19 reads: [hoi polloi] "...do not know how to listen [to Logos] or how to speak [the truth]". Heraclitus' thought on logos influenced the Stoics, who referred to him to support their belief that a rational law governs the universe.

Pythagoreanism

Pythagoras (c. 582–496 BC) was born on Samos, a small island near Miletus. He moved to Croton at about age 30, where he established his school and acquired political influence. Some decades later he had to flee Croton and relocate to Metapontum. Pythagoras was famous for studying numbers and the geometrical relations of numbers. A large following of Pythagoreans adopted and extended his doctrine.
They advanced his ideas, reaching the claim that everything consists of numbers: the universe is made of numbers, and everything is a reflection of analogies and geometrical relations. Numbers, music and philosophy, all interlinked, could comfort the beauty-seeking human soul, and hence the Pythagoreans espoused the study of mathematics. Pythagoreanism perceived the world as a perfect harmony, dependent on number, and aimed at inducing humankind likewise to lead a harmonious life, including ritual and dietary recommendations. Their way of life was ascetic, restraining them from various pleasures and foods. They were vegetarians and placed enormous value on friendship. Politically, Pythagoras was an advocate of a form of aristocracy, a position which later Pythagoreans rejected, but generally, they were reactionary and notably repressed women. Other pre-Socratic philosophers mocked Pythagoras for his belief in reincarnation.

Notable Pythagoreans included Philolaus (470–380 BC), Alcmaeon of Croton, Archytas (428–347 BC) and Ecphantus. The most notable was Alcmaeon, a medical and philosophical writer. Alcmaeon noticed that most organs in the body come in pairs and suggested that human health depends on harmony between opposites (hot/cold, dry/wet), while illness is due to an imbalance between them. He was the first to think of the brain as the center of sense perception and thinking. Philolaus advanced cosmology by displacing the Earth from the centre of the cosmos: in his system the Earth, along with the Sun and the planets, revolves around a central fire.

Pythagoreanism influenced later currents such as Neoplatonism, which in turn shaped Christian thought, and its pedagogical methods were adapted by Plato. Furthermore, there seems to be a continuity in some aspects of Plato's philosophy: as Carl A. Huffman notes, Plato had a tendency to invoke mathematics in explaining natural phenomena, and he also believed in the immortality, even divinity, of the human soul.
The Eleatics: Parmenides, Zeno and Melissus

The Eleatic school is named after Elea, an ancient Greek town on the southern Italian Peninsula. Parmenides is considered the founder of the school; other eminent Eleatics include Zeno of Elea and Melissus of Samos. According to Aristotle and Diogenes Laertius, Xenophanes was Parmenides' teacher, and it is debated whether Xenophanes should also be considered an Eleatic.

Parmenides was born in Elea to a wealthy family around 515 BC. He was interested in many fields, such as biology and astronomy, and was the first to deduce that the earth is spherical. He was also involved in his town's political life. Parmenides' contributions were paramount not only to ancient philosophy but to all of western metaphysics and ontology. Parmenides wrote a hard-to-interpret poem, named On Nature or On What-is, that substantially influenced later Greek philosophy. Only about 150 lines of this poem survive. It tells the story of a young man (kouros in ancient Greek) dedicated to finding the truth, who is carried by a goddess on a long journey to the heavens. The poem consists of three parts: the proem (i.e., preface), the Way of Truth and the Way of Opinion. Very little of the Way of Opinion survives; in that part, Parmenides must have been dealing with cosmology, judging from other authors' references. The Way of Truth was then, and is still today, considered of much greater importance. In the Way of Truth, the goddess criticizes the logic of people who do not distinguish the real from the non-existent ("What-is" and "What-is-Not"). In this poem Parmenides unfolds his philosophy: that all things are One, and therefore nothing can be changed or altered; hence, all the things that we take to be true, even ourselves, are false representations. What-is, according to Parmenides, is a physical sphere that is unborn, unchanged, and infinite. This is a monist vision of the world, far more radical than that of Heraclitus.
The goddess teaches the kouros to use his reasoning to understand whether various claims are true or false, discarding the senses as fallacious. Other fundamental issues raised by Parmenides' poem are the doctrine that nothing comes from nothing and the unity of being and thinking, as quoted in DK fragment 3: To gar auto noein estin te kai einai (For to think and to be is one and the same).

Zeno and Melissus continued Parmenides' thought on cosmology. Zeno is mostly known for his paradoxes: arguments that derive absurd conclusions from the assumptions of plurality and motion, serving as indirect proofs that Parmenides' monism was valid and that pluralism was invalid. The most common theme of those paradoxes involved traveling a distance; since that distance comprises infinitely many points, the traveler could never accomplish it. His most famous is the Achilles paradox, which is mentioned by Aristotle: "The second is called the 'Achilles' and says that the slowest runner will never be caught by the fastest, because it is necessary for the pursuer first to arrive at the point from which the pursued set off, so it is necessary that the slower will always be a little ahead." (Aristotle Phys. 239b14–18 [DK 29 A26])

Melissus defended and advanced Parmenides' theory using prose, without invoking divinity or mythical figures. He tried to explain why humans think various non-existent objects exist. The Eleatics' focus on Being through means of logic initiated the philosophical discipline of ontology. Other philosophers influenced by the Eleatics (such as the Sophists, Plato, and Aristotle) further advanced logic, argumentation, mathematics and especially elenchos (proof); the Sophists even placed Being under the scrutiny of elenchos. Because of the Eleatics, reasoning acquired a formal method.

The Pluralists: Anaxagoras and Empedocles

The Pluralist school marked a return to Milesian natural philosophy, though much more refined because of Eleatic criticism.
Anaxagoras was born in Ionia but was the first major philosopher to emigrate to Athens. He soon became associated with the Athenian statesman Pericles and, probably because of this association, was accused of impiety by a political opponent of Pericles, since Anaxagoras held that the sun was not a divinity but merely a huge burning stone. Pericles helped Anaxagoras flee Athens and return to Ionia. Anaxagoras was also a major influence on Socrates.

Anaxagoras is known for his "theory of everything". He claimed that "in everything there is a share of everything", though interpretations differ as to what he meant. Anaxagoras was trying to stay true to the Eleatic principle of the everlasting (What-is) while also explaining the diversity of the natural world. He accepted Parmenides' doctrine that everything that exists (What-is) has existed forever, but, contrary to the Eleatics, he added the ideas of panspermia and nous. All things are composites of some basic elements, such as air, water, and others, although it is not clear exactly what these elements are; every object is a mixture containing a portion of each of them. The one special element is nous, i.e., mind, which is present only in living things: things that have nous are alive. Nous was also considered a building block of the cosmos. Anaxagoras writes: "In everything there is a portion (moira) of everything except mind (nous), but there are some things in which mind too is present." Nous was not just an element of things; somehow it was the cause of setting the universe into motion. Anaxagoras advanced Milesian thought on epistemology, striving to establish an explanation that could be valid for all natural phenomena.
Influenced by the Eleatics, he also furthered the exploration of metatheoretical questions such as the nature of knowledge.

Empedocles was born in Akragas, a town in Sicily. According to Diogenes Laertius, Empedocles wrote two books in the form of poems: Peri Physeos (On Nature) and the Katharmoi (Purifications). Some contemporary scholars argue these books might be one; all agree that interpreting Empedocles is difficult. On cosmological issues, Empedocles takes from the Eleatic school the idea that the universe is unborn, has always been, and always will be. Like Anaxagoras, he is a pluralist: he posits four "roots" (i.e., classical elements) that, by intermixing, create all things around us. These roots are fire, air, earth, and water. Crucially, he adds two more components, the immaterial forces of Love and Strife. These two forces are opposites and, by acting upon the material of the four roots, either unite them in harmony or tear them apart, the resulting mixtures being all the things that exist. Empedocles uses an analogy to show how this is possible: just as a painter uses a few basic colors to create a painting, so it happens with the four roots. It is not quite clear whether Love and Strife co-operate or follow a greater plan, but they are in a continual cycle that generates life. According to Empedocles' Purifications, the beings apart from the four roots and Love and Strife are mortals, gods, and daemons. Like Pythagoras, Empedocles believed in the transmigration of souls and was a vegetarian.

Atomists: Leucippus and Democritus

Leucippus and Democritus both lived in Abdera, in Thrace. They are most famous for their atomic cosmology, even though their thought included many other fields of philosophy, such as ethics, mathematics, aesthetics, politics, and even embryology. The atomic theory of Leucippus and Democritus was a response to the Eleatic school, which held that motion is not possible because everything is occupied by What-is.
Democritus and Leucippus inverted the Eleatic axiom, claiming that since motion exists, What-is-not must also exist; hence, void exists. Democritus and Leucippus were skeptics regarding the reliability of our senses, but they were confident that motion exists. Atoms, according to Democritus and Leucippus, had some characteristics of the Eleatic What-is: they were homogeneous and indivisible. These characteristics allowed answers to Zeno's paradoxes. Atoms move within the void, interact with each other, and form the plurality of the world we live in, in a purely mechanical manner. One conclusion of the Atomists was determinism – the philosophical view that all events are determined completely by previously existing causes. As Leucippus said (DK 67 B2), "Nothing comes to be at random but everything is by reason and out of necessity."

Democritus concluded that since everything is atoms and void, several of our senses are not real but conventional. Color, for example, is not a property of atoms; hence our perception of color is a convention. As Democritus said (DK 68 B9), "By convention sweet, by convention bitter, by convention hot, by convention cold, by convention colour; in reality atoms and void." This can be interpreted in two ways. According to James Warren, there is an eliminativist interpretation, on which Democritus means that color is not real, and a relativist interpretation, on which Democritus means that color and taste are not real in themselves but are perceived as such by our senses through sensory interaction.

Sophists

The sophists were a philosophical and educational movement that flourished in ancient Greece before Socrates. They attacked traditional thinking, from the gods to morality, paving the way for further advances in philosophy and other disciplines such as drama, social sciences, mathematics, and history. Plato disparaged the sophists, causing long-lasting harm to their reputation.
Plato thought philosophy should be reserved for those who had the appropriate intellect to understand it, whereas the sophists would teach anyone who would pay tuition. The sophists taught rhetoric and how to address issues from multiple viewpoints. Since the sophists and their pupils were persuasive speakers at court or in public, they were accused of moral and epistemological relativism, which indeed some sophists appeared to advocate. Prominent sophists include Protagoras, Gorgias, Hippias, Thrasymachus, Prodicus, Callicles, Antiphon, and Critias.

Protagoras is mostly known for two of his quotes. One is that "[humans are] the measure of all things, of things that are that they are, of things that are not that they are not", which is commonly interpreted as affirming philosophical relativism, but can also be interpreted as claiming that knowledge is relevant only to humankind – that moral rightness and other forms of knowledge are relevant to and limited by the human mind and perception. The other is: "Concerning the gods, I cannot ascertain whether they exist or whether they do not, or what form they have; for there are many obstacles to knowing, including the obscurity of the question and the brevity of human life."

Gorgias wrote a book named On Nature, in which he attacked the Eleatics' concepts of What-is and What-is-not. He claimed it is absurd to hold that nonexistence exists, and that What-is is impossible, since it would have to be either generated or unlimited, and neither is tenable. There is an ongoing debate among modern scholars as to whether he was a serious thinker, a precursor of extreme relativism and skepticism, or merely a charlatan.

Antiphon set natural law against the law of the city: one need not obey the city's laws as long as one will not get caught. One could argue that Antiphon was a careful hedonist, rejecting dangerous pleasures.
Philolaus of Croton and Diogenes of Apollonia

Philolaus of Croton and Diogenes of Apollonia from Thrace (born 460 BC) are considered the last generation of pre-Socratics. Rather than advancing a cosmological perspective on how our universe is constructed, they are mostly noted for advancing abstract thinking and argumentation. Pythagoreanism, Anaxagoras and Empedocles influenced Philolaus. He attempted to explain both the variety and the unity of the cosmos. Addressing the need to explain how the various masses of the universe interact with one another, he coined the term Harmonia, a binding force that allows mass to take shape. The structure of the cosmos consisted of apeira (unlimiteds) and perainonta (limiters). Diogenes of Apollonia returned to Milesian monism, but with rather more elegant reasoning. As he says in DK 64 B2, "It seems to me, overall, that all things are alterations of the same thing and are the same thing." He explains that things, even when changing shape, remain ontologically the same.

Topics

Knowledge

The mythologoi, Homer and Hesiod, along with other poets, centuries before the pre-Socratics, thought that true knowledge was exclusive to the divine. But starting with Xenophanes, the pre-Socratics moved towards a more natural approach to knowledge. The pre-Socratics sought a method to understand the cosmos while being aware that there is a limit to human knowledge. While Pythagoras and Empedocles linked their self-proclaimed wisdom to their divinely inspired status, they tried to teach or urge mortals to seek the truth about the natural realm – Pythagoras by means of mathematics and geometry, and Empedocles by exposure to experiences. Xenophanes thought that human knowledge was merely opinion that cannot be validated or proven to be true. According to James Warren, Xenophanes thereby set the outline of the nature of knowledge.
Later, Heraclitus and Parmenides stressed the capability of humans to understand how things stand in nature through direct observation, inquiry, and reflection.

Theology

Pre-Socratic thought contributed to the demythologization of Greek popular religion. It helped shift the course of ancient Greek philosophy and religion away from the realm of divinity, and even paved the way for teleological explanations. The pre-Socratics attacked the traditional representations of the gods that Homer and Hesiod had established and put Greek popular religion under scrutiny, initiating the schism between natural philosophy and theology. The pre-Socratic philosophers did not hold atheistic beliefs, though it should be kept in mind that being an atheist in those days carried social and legal dangers. Despite that, arguments rejecting deities were not barred from the public sphere, as can be seen in Protagoras's statement on the gods: "About the gods I am able to know neither that they exist nor that they do not exist."

Theological thought begins with the Milesian philosophers. It is evident in Anaximander's idea of the apeiron steering everything, which also had other abilities usually attributed to Zeus. Later, Xenophanes developed a critique of the anthropomorphism of the gods. Xenophanes set three preconditions for God: he had to be all-good, immortal, and unlike humans in appearance – a view that had a major impact on western religious thought. The theological thought of Heraclitus and Parmenides is not entirely certain, but it is generally accepted that they believed in some kind of divinity. The Pythagoreans and Empedocles believed in the transmigration of souls. Anaxagoras asserted that cosmic intelligence (nous) gives life to things.
Diogenes of Apollonia expanded this line of thinking and might have constructed the first teleological argument: "it would not be possible without Intellection for it so to be divided up that it has the measures of all things – of winter and summer and night and day and rains and winds and fair weather. The other things, too, if one wishes to consider them, one would find disposed of in the best possible way." While some pre-Socratics were trying to find alternatives to divinity, others were laying the foundation for explaining the universe in terms of teleology and intelligent design by a divine force.

Medicine

Prior to the pre-Socratics, health and illness were thought to be governed by the gods. Pre-Socratic philosophy and medicine advanced in parallel, with medicine a part of philosophy and vice versa. It was Hippocrates (often hailed as the father of medicine) who separated – though not completely – the two domains. Physicians incorporated pre-Socratic philosophical ideas about the nature of the world into their theoretical framework, blurring the border between the two domains. An example is the study of epilepsy, which in popular religion was thought to be a divine intervention in human life, but which Hippocrates' school attributed to nature, just as Milesian rationalism demythologized other natural phenomena such as earthquakes. The systematic study of anatomy, physiology, and illness led to the discovery of cause–effect relations and to a more sophisticated terminology and understanding of diseases that ultimately yielded rational science.

Cosmology

The pre-Socratics were the first to attempt reductive explanations for a plethora of natural phenomena. First, they were preoccupied with the mystery of cosmic matter: what was the basic substance of the universe? Anaximander suggested apeiron (the limitless), which hints, as Aristotle analyzed, that it has no beginning and no end, either in time or in space.
Anaximenes placed aêr (air) as the primary principle, probably after realizing the importance of air to life and/or the need to explain various observable changes. Heraclitus, also seeking to address the issue of the ever-changing world, placed fire as the primary principle of the universe, which transforms into water and earth to produce the universe. This ever-transforming nature is summarized in Heraclitus' axiom panta rhei (everything is in a state of flux). Parmenides suggested two everlasting primary building blocks, night and day, which together form the universe. Empedocles increased the building blocks to four and named them roots, while also adding Love and Strife to serve as the driving force for the roots to mingle. Anaxagoras extended the plurality of Empedocles even further, claiming that everything is in everything: myriads of substances mix among each other, except one, Nous (mind), which orchestrates everything—though he did not attribute divine characteristics to Nous. Leucippus and Democritus asserted that the universe consists of atoms and void, while the motion of atoms is responsible for the changes we observe. Rationalism, observation and the beginning of scientific thought The pre-Socratic intellectual revolution is widely considered to have been the first step towards the liberation of the human mind from the mythical world, initiating a march towards reason and scientific thought that yielded modern Western philosophy and science. The pre-Socratics sought to understand the various aspects of nature by means of rationalism and observation, offering explanations that could be deemed scientific, giving birth to what became Western rationalism. Thales was the first to seek a unitary arche of the world. Whether arche meant the beginning, the origin, the main principle or the basic element is unclear, but his was the first attempt to reduce the explanations of the universe to a single cause, based on reason and not guided by any sort of divinity. 
Anaximander offered the principle of sufficient reason, a revolutionary argument that would also yield the principle that nothing comes out of nothing. Most of the pre-Socratics seemed indifferent to the concept of teleology, especially the Atomists, who fiercely rejected the idea. According to them, the various phenomena were the consequence of the motion of atoms without any purpose. Xenophanes also advanced a critique of anthropomorphic religion by highlighting in a rational way the inconsistency of depictions of the gods in Greek popular religion. Undoubtedly, the pre-Socratics paved the way towards science, but whether what they did could constitute science is a matter of debate. Thales had offered the first account of a reduction and a generalization, a significant milestone towards scientific thought. Other pre-Socratics also sought to answer the question of arche, offering various answers, but the first step towards scientific thought had already been taken. Philosopher Karl Popper, in his seminal essay "Back to the Presocratics" (1958), traces the roots of modern science (and the West) to the early Greek philosophers. He writes: "There can be little doubt that the Greek tradition of philosophical criticism had its main source in Ionia ... It thus leads the tradition which created the rational or scientific attitude, and with it our Western civilization, the only civilization, which is based upon science (though, of course, not upon science alone)." Elsewhere in the same study, Popper dismisses the question of which label they should carry as purely semantic: "There is the most perfect possible continuity of thought between [the Presocratics'] theories and the later developments in physics. Whether they are called philosophers, or pre-scientists, or scientists, matters very little." Other scholars did not share the same view. F. M. Cornford considered the Ionians dogmatic speculators, due to their lack of empiricism. 
Reception and legacy Antiquity The pre-Socratics had a direct influence on classical antiquity in many ways. The philosophic thought produced by the pre-Socratics heavily influenced later philosophers, historians, and playwrights. One line of influence was the Socrato-Ciceronian tradition, while the other was the Platonic-Aristotelian. Socrates, Xenophon, and Cicero were highly influenced by the physiologoi (naturalists), as they were named in ancient times. The naturalists impressed the young Socrates, who was interested in the quest for the substance of the cosmos, but his interest waned as he became steadily more focused on epistemology, virtue, and ethics rather than the natural world. According to Xenophon, the reason was that Socrates believed humans incapable of comprehending the cosmos. Plato, in the Phaedo, claims that Socrates was uneasy with the materialistic approach of the pre-Socratics, particularly Anaxagoras. Cicero set out his views on the pre-Socratics in his Tusculanae Disputationes, distinguishing the theoretical nature of pre-Socratic thought from that of previous "sages" who were interested in more practical issues. Xenophon, like Cicero, saw the difference between the pre-Socratics and Socrates as lying in the latter's interest in human affairs (ta anthropina). The pre-Socratics deeply influenced both Plato and Aristotle. Aristotle discussed the pre-Socratics in the first book of Metaphysics, as an introduction to his own philosophy and the quest for arche. He was the first to state that philosophy starts with Thales. It is not clear whether Thales spoke of water as arche, or whether that was a retrospective interpretation by Aristotle, who examined his predecessors through the lens of his own views. More crucially, Aristotle criticized the pre-Socratics for not identifying a purpose as a final cause, a fundamental idea in Aristotelian metaphysics. Plato also attacked pre-Socratic materialism. 
Later, during the Hellenistic era, philosophers of various currents focused on the study of nature and advanced pre-Socratic ideas. The Stoics incorporated features from Anaxagoras and Heraclitus, such as nous and fire respectively. The Epicureans saw Democritus' atomism as their predecessor, while the Sceptics were linked to Xenophanes. Modern era The pre-Socratics, along with the rest of ancient Greece, invented the central concepts of Western civilization: freedom, democracy, individual autonomy and rationalism. Francis Bacon, a 16th-century philosopher known for advancing the scientific method, was probably the first philosopher of the modern era to use pre-Socratic axioms extensively in his texts. He criticized the pre-Socratic theory of knowledge of Xenophanes and others, claiming that their deductive reasoning could not yield meaningful results—an opinion contemporary philosophy of science rejects. Bacon's fondness for the pre-Socratics, especially Democritus' atomist theory, might have been because of his anti-Aristotelianism. Friedrich Nietzsche admired the pre-Socratics deeply, calling them "tyrants of the spirit" to mark his preference for them over Socrates and his successors. Nietzsche also weaponized pre-Socratic anti-teleology, coupled with the materialism exemplified by Democritus, for his attack on Christianity and its morals. Nietzsche saw the pre-Socratics as the first ancestors of contemporary science—linking Empedocles to Darwinism and Heraclitus to the physicist Helmholtz. According to his narrative, limned in many of his books, the pre-Socratic era was the glorious era of Greece, while the so-called Golden Age that followed was an age of decay. Nietzsche incorporated the pre-Socratics into his Apollonian and Dionysian dialectics, with them representing the creative Dionysian aspect of the duo. 
Martin Heidegger found the roots of his phenomenology and later thinking of Things and the Fourfold in the pre-Socratics, considering Anaximander, Parmenides, and Heraclitus as the original thinkers on being, which he identified in their work as physis [φύσις] (emergence, contrasted against κρύπτεσθαι, kryptesthai, in Heraclitus' Fragment 123) or aletheia [αλήθεια] (truth as unconcealment). References Cited sources Further reading Cornford, F. M. 1991. From Religion to Philosophy: A Study in the Origins of Western Speculation. Princeton, NJ: Princeton University Press. Graham, D. W. 2010. The Texts of Early Greek Philosophy: The Complete Fragments and Selected Testimonies of the Major Presocratics. 2 vols. Cambridge, UK: Cambridge University Press. Furley, D. J., and R. E. Allen, eds. 1970. Studies in Presocratic Philosophy. Vol. 1, The Beginnings of Philosophy. London: Routledge & Kegan Paul. Jaeger, W. 1947. The Theology of the Early Greek Philosophers. Oxford: Oxford University Press. Lloyd, G. E. R. 1970. Early Greek Science: Thales to Aristotle. New York: Norton. Luchte, James. 2011. Early Greek Thought: Before the Dawn. New York: Continuum International Publishing Group. Robb, K., ed. 1983. Language and Thought in Early Greek Philosophy. La Salle, IL: Hegeler Institute. Stamatellos, G. 2012. Introduction to Presocratics: A Thematic Approach to Early Greek Philosophy with Key Readings. Wiley-Blackwell. Vlastos, G. 1995. Studies in Greek Philosophy. Vol. 1, The Presocratics. Edited by D. W. Graham. Princeton, NJ: Princeton University Press. Vassallo, Ch. 2021. The Presocratics at Herculaneum: A Study of Early Greek Philosophy in the Epicurean Tradition. Berlin-Boston: De Gruyter.
Regress argument (epistemology)
In epistemology, the regress argument is the argument that any proposition requires a justification. However, any justification itself requires support. This means that any proposition whatsoever can be endlessly (infinitely) questioned, resulting in infinite regress. It is a problem in epistemology and in any general situation where a statement has to be justified. The argument is also known as diallelus (Latin) or diallelon, from Greek di' allelon "through or by means of one another", and as the epistemic regress problem. It is an element of the Münchhausen trilemma. Structure Assuming that knowledge is justified true belief, then: Suppose that P is some piece of knowledge. Then P is a justified true belief. The only thing that can justify P is another statement – let's call it P1; so P1 justifies P. But if P1 is to be a satisfactory justification for P, then we must know that P1 is true. But for P1 to be known, it must also be a justified true belief. That justification will be another statement – let's call it P2; so P2 justifies P1. But if P2 is to be a satisfactory justification for P1, then we must know that P2 is true. But for P2 to count as knowledge, it must itself be a justified true belief. That justification will in turn be another statement – let's call it P3; so P3 justifies P2. And so on, ad infinitum. Responses Throughout history many responses to this problem have been generated. The major counter-arguments are: some statements do not need justification; the chain of reasoning loops back on itself; the sequence never finishes; and belief cannot be justified as beyond doubt. Foundationalism Perhaps the chain begins with a belief that is justified, but which is not justified by another belief. Such beliefs are called basic beliefs. In this solution, which is called foundationalism, all beliefs are justified by basic beliefs. 
Foundationalism seeks to escape the regress argument by claiming that there are some beliefs for which it is improper to ask for a justification. (See also a priori.) Foundationalism is the belief that a chain of justification begins with a belief that is justified, but which is not justified by another belief. Thus, a belief is justified if and only if: it is a basic/foundational belief, or it is justified by a basic belief, or it is justified by a chain of beliefs that is ultimately justified by a basic belief or beliefs. Foundationalism can be compared to a building. Ordinary individual beliefs occupy the upper stories of the building; basic, or foundational, beliefs are down in the basement, in the foundation of the building, holding everything else up. In a similar way, individual beliefs, say about economics or ethics, rest on more basic beliefs, say about the nature of human beings; and those rest on still more basic beliefs, say about the mind; and in the end the entire system rests on a set of basic beliefs which are not justified by other beliefs. Coherentism Alternatively, the chain of reasoning may loop around on itself, forming a circle. In this case, the justification of any statement is used, perhaps after a long chain of reasoning, in justifying itself, and the argument is circular. This is a version of coherentism. Coherentism is the belief that an idea is justified if and only if it is part of a coherent system of mutually supporting beliefs (i.e., beliefs that support each other). In effect, Coherentism denies that justification can only take the form of a chain. Coherentism replaces the chain with a holistic web. The most common objection to naïve Coherentism is that it relies on the idea that circular justification is acceptable. In this view, P ultimately supports P, begging the question. Coherentists reply that it is not just P that is supporting P, but P along with the totality of the other statements in the whole system of belief. 
A second objection is that since Coherentism accepts any belief that is part of a coherent system of beliefs, P can cohere with P1 and P2 without P, P1, or P2 being true. Coherentists might reply that it is very unlikely that the whole system would be both untrue and consistent, and that if some part of the system were untrue, it would almost certainly be inconsistent with some other part of the system. A third objection is that some beliefs arise from experience and not from other beliefs. An example is that one is looking into a room which is totally dark. The lights turn on momentarily and one sees a white canopy bed in the room. The belief that there is a white canopy bed in this room is based entirely on experience and not on any other belief. Of course other possibilities exist, such as that the white canopy bed is entirely an illusion or that one is hallucinating, but the belief remains well-justified. Coherentists might respond that the belief which supports the belief that there is a white canopy bed in this room is that one saw the bed, however briefly. This appears to be an immediate qualifier which does not depend on other beliefs, and thus seems to prove that Coherentism is not true because beliefs can be justified by concepts other than beliefs. But others have argued that the experience of seeing the bed is indeed dependent on other beliefs, about what a bed, a canopy, and so on actually look like. Another objection is that the rule demanding "coherence" in a system of ideas seems to be an unjustified belief. Infinitism Infinitism argues that the chain can go on forever. Critics argue that this means there is never adequate justification for any statement in the chain. Skepticism Skeptics reject the three above responses and argue that beliefs cannot be justified as beyond doubt. Note that many skeptics do not deny that things may appear in a certain way. However, such sense impressions cannot, in the skeptical view, be used to find beliefs that cannot be doubted. 
Also, skeptics do not deny that, for example, many laws of nature give the appearance of working, or that doing certain things gives the appearance of producing pleasure/pain, or even that reason and logic seem to be useful tools. Skepticism is in this view valuable, since it encourages continued investigation. Synthesized approaches Common sense The method of common sense espoused by such philosophers as Thomas Reid and G. E. Moore points out that whenever we investigate anything at all, whenever we start thinking about some subject, we have to make assumptions. When one tries to support one's assumptions with reasons, one must make yet more assumptions. Since it is inevitable that we will make some assumptions, why not assume those things that are most obvious: the matters of common sense that no one ever seriously doubts. "Common sense" here does not mean old adages like "Chicken soup is good for colds" but statements about the background in which our experiences occur. Examples would be "Human beings typically have two eyes, two ears, two hands, two feet", or "The world has a ground and a sky", or "Plants and animals come in a wide variety of sizes and colors", or "I am conscious and alive right now". These are all the absolutely most obvious sorts of claims that one could possibly make; and, said Reid and Moore, these are the claims that make up common sense. This view can be seen either as a version of foundationalism, with common-sense statements taking the role of basic statements, or as a version of Coherentism. In this case, common-sense statements are statements that are so crucial to keeping the account coherent that they are all but impossible to deny. If the method of common sense is correct, then philosophers may take the principles of common sense for granted. They do not need criteria in order to judge whether a proposition is true or not. They can also take some justifications for granted, according to common sense. 
They can get around Sextus' problem of the criterion because there is no infinite regress or circle of reasoning, since the principles of common sense ground the entire chain of reasoning. Critical philosophy Another escape from the diallelus is critical philosophy, which denies that beliefs should ever be justified at all. Rather, the job of philosophers is to subject all beliefs (including beliefs about truth criteria) to criticism, attempting to discredit them rather than justifying them. Then, these philosophers say, it is rational to act on those beliefs that have best withstood criticism, whether or not they meet any specific criterion of truth. Karl Popper expanded on this idea to include a quantitative measurement he called verisimilitude, or truth-likeness. He showed that even if one could never justify a particular claim, one can compare the verisimilitude of two competing claims by criticism to judge which is superior to the other. Pragmatism The pragmatist philosopher William James suggests that, ultimately, everyone settles at some level of explanation based on one's personal preferences that fit the particular individual's psychological needs. People select whatever level of explanation fits their needs, and things other than logic and reason determine those needs. In The Sentiment of Rationality, James compares the philosopher, who insists on a high degree of justification, with the boor, who accepts or rejects ideals without much thought. See also Justification (epistemology) References
Naturalism (philosophy)
In philosophy, naturalism is the idea that only natural laws and forces (as opposed to supernatural ones) operate in the universe. In its primary sense, it is also known as ontological naturalism, metaphysical naturalism, pure naturalism, philosophical naturalism and antisupernaturalism. "Ontological" refers to ontology, the philosophical study of what exists. Philosophers often treat naturalism as equivalent to materialism, but there are important distinctions between the philosophies. For example, philosopher Paul Kurtz argues that nature is best accounted for by reference to material principles. These principles include mass, energy, and other physical and chemical properties accepted by the scientific community. Further, this sense of naturalism holds that spirits, deities, and ghosts are not real and that there is no "purpose" in nature. This stronger formulation of naturalism is commonly referred to as metaphysical naturalism. On the other hand, the more moderate view that naturalism should be assumed in one's working methods as the current paradigm, without any further consideration of whether naturalism is true in the robust metaphysical sense, is called methodological naturalism. With the exception of pantheists – who believe that nature is identical with divinity while not recognizing a distinct personal anthropomorphic god – theists challenge the idea that nature contains all of reality. According to some theists, natural laws may be viewed as secondary causes of God(s). In the 20th century, Willard Van Orman Quine, George Santayana, and other philosophers argued that the success of naturalism in science meant that scientific methods should also be used in philosophy. According to this view, science and philosophy are not always distinct from one another, but instead form a continuum. History of naturalism Ancient and medieval philosophy Naturalism is most notably a Western phenomenon, but an equivalent idea has long existed in the East. 
Naturalism was the foundation of two out of six orthodox schools and one heterodox school of Hinduism. Samkhya, one of the oldest schools of Indian philosophy, puts nature (Prakriti) as the primary cause of the universe, without assuming the existence of a personal God or Ishvara. The Carvaka, Nyaya, and Vaisheshika schools originated in the 7th, 6th, and 2nd century BCE, respectively. Similarly, though unnamed and never articulated into a coherent system, one tradition within Confucian philosophy embraced a form of naturalism dating to Wang Chong in the 1st century, if not earlier, but it arose independently and had little influence on the development of modern naturalist philosophy or on Eastern or Western culture. Western metaphysical naturalism originated in ancient Greek philosophy. The earliest pre-Socratic philosophers, especially the Milesians (Thales, Anaximander, and Anaximenes) and the atomists (Leucippus and Democritus), were labeled by their peers and successors "the physikoi" (from the Greek φυσικός or physikos, meaning "natural philosopher", borrowing on the word φύσις or physis, meaning "nature") because they investigated natural causes, often excluding any role for gods in the creation or operation of the world. This eventually led to fully developed systems such as Epicureanism, which sought to explain everything that exists as the product of atoms falling and swerving in a void. Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses. With the rise and dominance of Christianity in the West and the later spread of Islam, metaphysical naturalism was generally abandoned by intellectuals. Thus, there is little evidence for it in medieval philosophy. 
Modern philosophy It was not until the early modern era of philosophy and the Age of Enlightenment that naturalists like Benedict Spinoza (who put forward a theory of psychophysical parallelism), David Hume, and the proponents of French materialism (notably Denis Diderot, Julien La Mettrie, and Baron d'Holbach) started to emerge again in the 17th and 18th centuries. In this period, some metaphysical naturalists adhered to a distinct doctrine, materialism, which became the dominant category of metaphysical naturalism widely defended until the end of the 19th century. Thomas Hobbes was a proponent of naturalism in ethics who acknowledged normative truths and properties. Immanuel Kant rejected (reductionist) materialist positions in metaphysics, but he was not hostile to naturalism. His transcendental philosophy is considered to be a form of liberal naturalism. In late modern philosophy, Naturphilosophie, a form of natural philosophy, was developed by Friedrich Wilhelm Joseph von Schelling and Georg Wilhelm Friedrich Hegel as an attempt to comprehend nature in its totality and to outline its general theoretical structure. A version of naturalism that arose after Hegel was Ludwig Feuerbach's anthropological materialism, which influenced Karl Marx and Friedrich Engels's historical materialism, Engels's "materialist dialectic" philosophy of nature (Dialectics of Nature), and their follower Georgi Plekhanov's dialectical materialism. Another notable school of late modern philosophy advocating naturalism was German materialism: members included Ludwig Büchner, Jacob Moleschott, and Carl Vogt. The current usage of the term naturalism "derives from debates in America in the first half of the 20th century. The self-proclaimed 'naturalists' from that period included John Dewey, Ernest Nagel, Sidney Hook, and Roy Wood Sellars." Contemporary philosophy A politicized version of naturalism that has arisen in contemporary philosophy is Ayn Rand's Objectivism. 
Objectivism is an expression of capitalist ethical idealism within a naturalistic framework. An example of a more progressive naturalistic philosophy is secular humanism. Currently, metaphysical naturalism is more widely embraced than in previous centuries, especially but not exclusively in the natural sciences and the Anglo-American, analytic philosophical communities. While the vast majority of the population of the world remains firmly committed to non-naturalistic worldviews, contemporary defenders of naturalism and/or naturalistic theses and doctrines today include Kai Nielsen, J. J. C. Smart, David Malet Armstrong, David Papineau, Paul Kurtz, Brian Leiter, Daniel Dennett, Michael Devitt, Fred Dretske, Paul and Patricia Churchland, Mario Bunge, Jonathan Schaffer, Hilary Kornblith, Leonard Olson, Quentin Smith, Paul Draper and Michael Martin, among many other academic philosophers. According to David Papineau, contemporary naturalism is a consequence of the build-up of scientific evidence during the twentieth century for the "causal closure of the physical", the doctrine that all physical effects can be accounted for by physical causes. In contemporary continental philosophy, Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume which can strengthen classical materialist ideas. This speculative approach to philosophical naturalism has been further developed by other contemporary thinkers including Ray Brassier and Drew M. Dalton. Etymology The term "methodological naturalism" is much more recent, though. According to Ronald Numbers, it was coined in 1983 by Paul de Vries, a Wheaton College philosopher. 
De Vries distinguished between what he called "methodological naturalism", a disciplinary method that says nothing about God's existence, and "metaphysical naturalism", which "denies the existence of a transcendent God". The term "methodological naturalism" had been used in 1937 by Edgar S. Brightman in an article in The Philosophical Review as a contrast to "naturalism" in general, but there the idea was not really developed to its more recent distinctions. Description According to Steven Schafersman, naturalism is a philosophy that maintains that: "Nature encompasses all that exists throughout space and time; Nature (the universe or cosmos) consists only of natural elements, that is, of spatio-temporal physical substance – mass–energy. Non-physical or quasi-physical substance, such as information, ideas, values, logic, mathematics, intellect, and other emergent phenomena, either supervene upon the physical or can be reduced to a physical account; Nature operates by the laws of physics and, in principle, can be explained and understood by science and philosophy; The supernatural does not exist, i.e., only nature is real. Naturalism is therefore a metaphysical philosophy opposed primarily by supernaturalism". Or, as Carl Sagan succinctly put it: "The Cosmos is all that is or ever was or ever will be." In addition, Arthur C. Danto states that naturalism, in recent usage, is a species of philosophical monism according to which whatever exists or happens is natural in the sense of being susceptible to explanation through methods which, although paradigmatically exemplified in the natural sciences, are continuous from domain to domain of objects and events. Hence, naturalism is polemically defined as repudiating the view that there exists or could exist any entities which lie, in principle, beyond the scope of scientific explanation. 
Arthur Newell Strahler states: "The naturalistic view is that the particular universe we observe came into existence and has operated through all time and in all its parts without the impetus or guidance of any supernatural agency." "The great majority of contemporary philosophers urge that reality is exhausted by nature, containing nothing 'supernatural', and that the scientific method should be used to investigate all areas of reality, including the 'human spirit'." Philosophers widely regard naturalism as a "positive" term, and "few active philosophers nowadays are happy to announce themselves as 'non-naturalists'". "Philosophers concerned with religion tend to be less enthusiastic about 'naturalism'" and, despite an "inevitable" divergence due to its popularity, if more narrowly construed (to the chagrin of John McDowell, David Chalmers and Jennifer Hornsby, for example), those not so disqualified remain nonetheless content "to set the bar for 'naturalism' higher." Alvin Plantinga stated that naturalism "is presumed to not be a religion. However, in one very important respect it resembles religion by performing the cognitive function of a religion. There is a set of deep human questions to which a religion typically provides an answer. In like manner naturalism gives a set of answers to these questions". Providing assumptions required for science According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which science deals. These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. 
Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method: That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption, which adds meaning to our sensations and feelings, rather than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else." That this objective reality is governed by natural laws. "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible." 
That reality can be discovered by means of systematic observation and experimentation. Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding." That nature has uniformity of laws and that most if not all things in nature must have at least a natural cause. Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ...
It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different." That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results. That experimenters won't be significantly biased by their presumptions. That random sampling is representative of the entire population. A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that the investigator is guaranteed to choose a sample that represents the population, ensuring statistically valid conclusions. Methodological naturalism Methodological naturalism, the second sense of the term "naturalism" (see above), is "the adoption or assumption of philosophical naturalism … with or without fully accepting or believing it." Robert T. Pennock used the term to clarify that the scientific method confines itself to natural explanations without assuming the existence or non-existence of the supernatural. "We may therefore be agnostic about the ultimate truth of [philosophical] naturalism, but nevertheless adopt it and investigate nature as if nature is all that there is." According to Ronald Numbers, the term "methodological naturalism" was coined in 1983 by Paul de Vries, a Wheaton College philosopher. Both Schafersman and Strahler assert that it is illogical to try to decouple the two senses of naturalism. "While science as a process only requires methodological naturalism, the practice or adoption of methodological naturalism entails a logical and moral belief in philosophical naturalism, so they are not logically decoupled." This "[philosophical] naturalistic view is espoused by science as its fundamental assumption."
But Eugenie Scott finds it imperative to do so for the expediency of deprogramming the religious. "Scientists can defuse some of the opposition to evolution by first recognizing that the vast majority of Americans are believers, and that most Americans want to retain their faith." Scott apparently believes that "individuals can retain religious beliefs and still accept evolution through methodological naturalism. Scientists should therefore avoid mentioning metaphysical naturalism and use methodological naturalism instead." "Even someone who may disagree with my logic … often understands the strategic reasons for separating methodological from philosophical naturalism—if we want more Americans to understand evolution." Scott's approach has found success, as illustrated in Ecklund's study, where some religious scientists reported that their religious beliefs affect the way they think about the implications – often moral – of their work, but not the way they practice science within methodological naturalism. Papineau notes that "philosophers concerned with religion tend to be less enthusiastic about" metaphysical naturalism, and that those not so disqualified remain content "to set the bar for 'naturalism' higher." In contrast to Schafersman, Strahler, and Scott, Robert T. Pennock, an expert witness at the Kitzmiller v. Dover Area School District trial and cited by the Judge in his Memorandum Opinion, described "methodological naturalism", stating that it is not based on dogmatic metaphysical naturalism. Pennock further states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. In addition he says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural."
"Because the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models." "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." The position that the study of the function of nature is also the study of the origin of nature contrasts with that of opponents who hold that the functioning of the cosmos is unrelated to how it originated. While these opponents are open to supernatural fiat in the cosmos's invention and coming into existence, they do not appeal to the supernatural when scientifically explaining how it functions. They agree that allowing "science to appeal to untestable supernatural powers to explain how nature functions would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament." Views on methodological naturalism W. V. O. Quine W. V. O. Quine describes naturalism as the position that there is no higher tribunal for truth than natural science itself. In his view, there is no better method than the scientific method for judging the claims of science, and there is neither any need nor any place for a "first philosophy", such as (abstract) metaphysics or epistemology, that could stand behind and justify science or the scientific method. Therefore, philosophy should feel free to make use of the findings of scientists in its own pursuit, while also feeling free to offer criticism when those claims are ungrounded, confused, or inconsistent. In Quine's view, philosophy is "continuous with" science, and both are empirical. Naturalism is not a dogmatic belief that the modern view of science is entirely correct.
Instead, it simply holds that science is the best way to explore the processes of the universe and that those processes are what modern science is striving to understand. Karl Popper Karl Popper equated naturalism with the inductive theory of science. He rejected it based on his general critique of induction (see problem of induction), yet acknowledged its utility as a means of inventing conjectures. Popper instead proposed that science should adopt a methodology based on falsifiability for demarcation, because no number of experiments can ever prove a theory, but a single experiment can contradict one. Popper holds that scientific theories are characterized by falsifiability. Alvin Plantinga Alvin Plantinga, Professor Emeritus of Philosophy at Notre Dame, and a Christian, has become a well-known critic of naturalism. He suggests, in his evolutionary argument against naturalism, that the probability that evolution has produced humans with reliable true beliefs is low or inscrutable, unless the evolution of humans was guided (for example, by God). According to David Kahan of the University of Glasgow, in order to understand how beliefs are warranted, a justification must be found in the context of supernatural theism, as in Plantinga's epistemology. (See also supernormal stimuli). Plantinga argues that together, naturalism and evolution provide an insurmountable "defeater for the belief that our cognitive faculties are reliable", i.e., a skeptical argument along the lines of Descartes' evil demon or brain in a vat. The argument is controversial and has been criticized as seriously flawed, for example, by Elliott Sober. Robert T. Pennock Robert T. Pennock states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do.
He says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." As the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models. "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." Science does not deal with meanings; the closed system of scientific reasoning cannot be used to define itself. Allowing science to appeal to untestable supernatural powers would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and "would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament." Naturalism of this sort says nothing about the existence or nonexistence of the supernatural, which by this definition is beyond natural testing. As a practical consideration, the rejection of supernatural explanations would merely be pragmatic; thus it would nonetheless be possible for an ontological supernaturalist to espouse and practice methodological naturalism. For example, scientists may believe in God while practicing methodological naturalism in their scientific work. This position does not preclude knowledge that is somehow connected to the supernatural. Generally, however, anything that one can examine and explain scientifically would not be supernatural, simply by definition. Criticism Colin Murray Turbayne The Australian philosopher Colin Murray Turbayne puts forth an objection to naturalism based upon linguistic grounds. His objections refer to several of the concepts which form the a priori foundation for naturalism in general. In particular, Turbayne calls attention to the concepts of "substance" and "substratum", which in his view convey little if any meaning.
He asserts that along with several "physicalist" constructs, these concepts have been mistakenly incorporated through the use of deductive reasoning into the hypotheses underlying materialism in the modern world. In addition, he argues that they are more properly characterized as purely metaphorical in nature rather than literal descriptions of an independent objective truth. Specifically, he identifies the "mechanistic" metaphors utilized by Isaac Newton and the mind-body dualism which was embraced by René Descartes as being particularly problematic. Turbayne argues that over time humanity has become victimized by mistaking such metaphorical constructs for literal truths, which now form the basis for considerable obfuscation and confusion within the realms of metaphysics and epistemology. He concludes by observing that humanity can readily adopt more useful models of the natural world only after first acknowledging the manner in which such purely metaphorical constructs have taken on the guise of literal truth within much of the modern world. See also Atheism Clockwork universe Daoism Deism Dysteleology Empiricism Hylomorphism Legal naturalism Liberal naturalism Materialism Monism Naturalist computationalism Naturalistic fallacy Naturalistic pantheism Philosophy of nature Physicalism Platonized naturalism Poetic naturalism Religious naturalism Scientism Sociological naturalism Supernaturalism Transcendental naturalism Vaisheshika References Further reading Mario De Caro and David Macarthur (eds) Naturalism in Question. Cambridge, Mass: Harvard University Press, 2004. Mario De Caro and David Macarthur (eds) Naturalism and Normativity. New York: Columbia University Press, 2010. Friedrich Albert Lange, The History of Materialism, London: Kegan Paul, Trench, Trubner & Co Ltd, 1925. David Macarthur, "Quinean Naturalism in Question," Philo. vol 11, no. 1 (2008).
Sander Verhaegh, Working from Within: The Nature and Development of Quine's Naturalism. New York: Oxford University Press, 2018. External links Nature Metaphysical theories Metatheory of science
Feminist philosophy
Feminist philosophy is an approach to philosophy from a feminist perspective and also the employment of philosophical methods to feminist topics and questions. Feminist philosophy involves both reinterpreting philosophical texts and methods in order to supplement the feminist movement and attempting to criticise or re-evaluate the ideas of traditional philosophy from within a feminist framework. Main features Feminist philosophy is united by a central concern with gender. It also typically involves some form of commitment to justice for women, whatever form that may take. Aside from these uniting features, feminist philosophy is a diverse field covering a wide range of topics from a variety of approaches. Broadening further, feminist philosophy examines how race, sexuality, socioeconomic class, and other factors of identity impact gender inequalities. Feminist philosophers, as philosophers, are found in both the analytic and continental traditions, and a myriad of different viewpoints are taken on philosophical issues within those traditions. Feminist philosophers, as feminists, can also belong to many different varieties of feminism. Feminist philosophy can be understood to have three main functions: Drawing on philosophical methodologies and theories to articulate and theorize about feminist concerns and perspectives. This can include providing a philosophical analysis of concepts regarding identity (such as race, socio-economic status, gender, sexuality, ability, and religion) and concepts that are very widely used and theorised within feminist theory more broadly. Feminist philosophy has also been an important source for arguments for gender equality. Investigating sexism and androcentrism within the philosophical tradition.
This can involve critiquing texts and theories that are typically classified as part of the philosophical canon, especially by focusing on their presentation of women and women's experiences or the exclusion of women from the philosophical tradition. Another significant trend is the rediscovery of the work of many female philosophers whose contributions have not been recognised. Contributing to philosophy with new approaches to existing questions as well as new questions and fields of research in light of their critical inquiries into the philosophical tradition and reflecting their concern with gender. Feminist philosophy existed before the twentieth century but became labelled as such in relation to the discourse of second-wave feminism of the 1960s and 1970s. Many theories during the second wave focused primarily on gender equality in the workplace and education. An important project of feminist philosophy that emerged from the third-wave feminism movement has been to incorporate the diversity of experiences of women from different racial groups and socioeconomic classes, as well as of women around the globe. Subfields Feminist philosophers work within a broad range of subfields, including: Feminist epistemology, which challenges traditional philosophical ideas of knowledge and rationality as objective, universal, or value-neutral. Feminist epistemologists often argue for the importance of perspective, social situation, and values in generating knowledge, including in the sciences. Feminist ethics often argues that the emphasis on objectivity, rationality, and universality in traditional moral thought excludes women's ethical realities. One of the most notable developments is the ethics of care, which values empathy, responsibility, and non-violence in the development of moral systems. 
Care ethics also involve a greater recognition of interpersonal connections and relations of care and dependency, and feminist ethics uses this to critique how the ethics of justice is often rooted in patriarchal understandings of morality. Some feminist ethicists have shown concern about how values ascribed to an ethics of care are often associated with femaleness, and how such a connection can bolster ideas about moral development as essentially gendered. Feminist phenomenology investigates how both cognitive faculties (e.g., thinking, interpreting, remembering, knowing) and the construction of normativity within social orders combine to shape an individual's reality. Phenomenology in feminist philosophy is often applied to develop improved conceptions of gendered embodied experience, of intersubjectivity and relational life, and of community, society, and political phenomena. Feminist phenomenology goes beyond other representation-focused discourses by centering personal and embodied experiences, as well as recognizing how experience often operates outside of language and so can be difficult to articulate. Reflection upon time as a construct is a more recent development in feminist phenomenology; recent works have begun investigating temporality's place in the field, and how a more complex understanding of temporality can further illuminate realities of gendered experience and existence. Feminist aesthetics, which concerns the role of gender and sexuality in art and aesthetic theorising, deals with issues related to the subjectivity of creators, the reproduction of gendered norms in art, the role of art in enculturation, and the representation of women in art, both as subjects and creators. An understanding of "women" and "artists" as mutually exclusive identities has been reproduced since at least the era of romanticism, and this division has made interventions by feminist aesthetics necessary to challenge the patriarchal and masculine state of aesthetics.
Feminist metaphysics focuses largely on the ontology of gender and sex and the nature of social construction. Feminist historians of philosophy also examine sex biases inherent in traditional metaphysical theories. One of the main points at which this field diverges from classical metaphysics is in its attempts to ground social constructs in understandings of the "fundamental" and "natural", around which metaphysics is built. Feminist metaphysics attempts to balance the relationship between social constructs and reality by recognizing how the distinction between what is perceived as "real" and what is "socially constructed" creates a binary that fails to acknowledge the interplay between the two concepts. Similarly, this field works to challenge systems of classification that are deemed natural, and therefore unbiased, by revealing how such systems are affected by political and moral ideologies and biases. Some theorists have raised questions regarding whether certain fundamental aspects of metaphysics inherently oppose a feminist approach, and so the relationship between feminism and metaphysics remains somewhat precarious. Feminist philosophy of science, which is rooted in interdisciplinary academic feminism, works to show that neither the production of scientific knowledge nor the methodologies employed in such production is free of bias. Contrary to other perceptions of science, the feminist philosophy of science recognizes the practice of science as value-rich instead of value-free, suggesting that ideologies, such as those related to gender, are tied up within the models and practices that constitute what science is and what knowledge it produces. See also Analytical feminism Ethics of care Ethics of justice Feminist philosophy of science Hypatia transracialism controversy Nikidion Socialist feminism Women in philosophy References Further reading Fulfer, Katy & Ryman, Emma (2013). What is Feminist Phenomenology?
Gatens, M., Feminism and Philosophy: Perspectives on Difference and Equality (Indiana University Press, 1991) Lee, Emily S. (2011). The Epistemology of the Question of Authenticity, in Place of Strategic Essentialism. Hypatia, 26(2), 258–279. van Leeuwen, Anne (2012). Beauvoir, Irigaray, and the Possibility of Feminist Phenomenology. The Journal of Speculative Philosophy, 26(2), 474–484. Smith, David W. (2013). Phenomenology. In Stanford Encyclopedia of Philosophy. Stone, Alison (2007). An Introduction to Feminist Philosophy. Cambridge, UK: Polity. pp. 2–3. Feminist theory
Solipsism syndrome
Solipsism syndrome refers to a psychological state in which a person feels that reality is not external to their mind. Periods of extended isolation may predispose people to this condition. In particular, the syndrome has been identified as a potential concern for individuals living in outer space for extended periods of time. Overview The philosophical definition of solipsism is the idea that only one's mind is sure to exist. In a solipsistic position, a person believes that only their own mind or self is sure to exist. This is part of self-existence theory, or the view of the self. Individuals experiencing solipsism syndrome feel reality is not 'real' in the sense of being external to their own minds. The syndrome is characterized by feelings of loneliness, detachment and indifference to the outside world. Solipsism syndrome is not currently recognized as a psychiatric disorder by the American Psychiatric Association, though it shares similarities with depersonalization-derealization disorder, which is recognized. Solipsism syndrome is distinct from solipsism: the latter is a philosophical position holding that nothing exists or can be known to exist outside of one's own mind, rather than a psychological state. Advocates of this philosophy do not necessarily suffer from solipsism syndrome, and sufferers do not necessarily subscribe to solipsism as a school of intellectual thought. Periods of extended isolation may predispose people to solipsism syndrome. In particular, the syndrome has been identified as a potential challenge for astronauts and cosmonauts on long-term missions, and these concerns influence the design of artificial habitats. See also Attribution (psychology) Depersonalization-derealization disorder Depersonalization Derealization Existential crisis Self-disorder Solipsism References External links NASA's Space Settlements: A Design Study Appendix A Is Reality Real? Help for Solipsism Syndrome sufferers Mental states Psychopathological syndromes Solipsism
Conceptual framework
A conceptual framework is an analytical tool with several variations and contexts. It can be applied in different categories of work where an overall picture is needed. It is used to make conceptual distinctions and organize ideas. Strong conceptual frameworks capture something real and do this in a way that is easy to remember and apply. Examples Isaiah Berlin used the metaphor of a "fox" and a "hedgehog" to make conceptual distinctions in how important philosophers and authors view the world. Berlin describes hedgehogs as those who use a single idea or organizing principle to view the world (such as Dante Alighieri, Blaise Pascal, Fyodor Dostoyevsky, Plato, Henrik Ibsen and Georg Wilhelm Friedrich Hegel). Foxes, on the other hand, incorporate a type of pluralism and view the world through multiple, sometimes conflicting, lenses (examples include Johann Wolfgang von Goethe, James Joyce, William Shakespeare, Aristotle, Herodotus, Molière, and Honoré de Balzac). Economists use the conceptual framework of supply and demand to distinguish between the behavior and incentive systems of firms and consumers. Like many other conceptual frameworks, supply and demand can be presented through visual or graphical representations (see demand curve). Both political science and economics use principal agent theory as a conceptual framework. The politics-administration dichotomy is a long-standing conceptual framework used in public administration. All three of these cases are examples of a macro level conceptual framework. Overview The use of the term conceptual framework crosses both scale (large and small theories) and contexts (social science, marketing, applied science, art etc.). The explicit definition of what a conceptual framework is and its application can therefore vary. Conceptual frameworks are beneficial as organizing devices in empirical research. 
One set of scholars has applied the notion of a conceptual framework to deductive, empirical research at the micro- or individual study level. They employ American football plays as a useful metaphor to clarify the meaning of conceptual framework (used in the context of a deductive empirical study). Likewise, conceptual frameworks are abstract representations, connected to the research project's goal, that direct the collection and analysis of data (on the plane of observation – the ground). Critically, a football play is a "plan of action" tied to a particular, timely purpose, usually summarized as long or short yardage. Shields and Rangarajan (2013) argue that it is this tie to "purpose" that makes American football plays such a good metaphor. They define a conceptual framework as "the way ideas are organized to achieve a research project's purpose". Like football plays, conceptual frameworks are connected to a research purpose or aim. Explanation is the most common type of research purpose employed in empirical research. The formal hypothesis of a scientific investigation is the framework associated with explanation. Explanatory research usually focuses on "why" or "what caused" a phenomenon. Formal hypotheses posit possible explanations (answers to the why question) that are tested by collecting data and assessing the evidence (usually quantitative, using statistical tests). For example, Kai Huang wanted to determine what factors contributed to residential fires in U.S. cities. Three factors were posited to influence residential fires. These factors (environment, population, and building characteristics) became the hypotheses or conceptual framework he used to achieve his purpose – explain factors that influenced home fires in U.S. cities.
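The framework-driven pattern described above (posit an explanation, collect data, assess the evidence statistically) can be sketched in a few lines of code. The sketch below is purely illustrative: the hypothesis, variable names, and numbers are invented, not Huang's actual data or tests. It checks a single hypothetical hypothesis (older housing stock is associated with more residential fires) by computing a Pearson correlation from first principles, a minimal stand-in for the formal statistical tests such a study would employ.

```python
# Hypothetical formal hypothesis (invented for illustration):
# H1: older housing stock is associated with more residential fires.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented observations: median building age (years) vs. fires per 10,000 homes.
building_age = [12, 25, 31, 40, 55, 63, 70]
fire_rate    = [ 3,  4,  6,  7,  9, 10, 13]

r = pearson_r(building_age, fire_rate)
print(round(r, 3))  # → 0.981
```

A strong positive correlation here counts as evidence consistent with H1, not proof of it; establishing a causal claim would require controls for the other posited factors (environment, population) and a proper significance test.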
Types Several types of conceptual frameworks have been identified, and line up with a research purpose in the following ways: Working hypothesis – exploration or exploratory research Pillar questions – exploration or exploratory research Descriptive categories – description or descriptive research Practical ideal type – analysis (gauging) Models of operations research – decision making Formal hypothesis – explanation and prediction Note that Shields and Rangarajan (2013) do not claim that the above are the only framework-purpose pairings. Nor do they claim the system is applicable to inductive forms of empirical research. Rather, the conceptual framework-research purpose pairings they propose are useful and provide new scholars a point of departure to develop their own research design. Frameworks have also been used to explain conflict theory and the balance necessary to reach what amounts to resolution. Within these conflict frameworks, visible and invisible variables function under concepts of relevance. Boundaries form and within these boundaries, tensions regarding laws and chaos (or freedom) are mitigated. These frameworks often function like cells, with sub-frameworks, stasis, evolution and revolution. Anomalies may exist without adequate "lenses" or "filters" to see them and may become visible only when the tools exist to define them. See also Analogy Inquiry Conceptual model Theory References Further reading Shields, Patricia and Rangarajan, Nandhini. (2013). A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. Stillwater, OK; New Forums Press Research Conceptual modelling
Critical realism (philosophy of the social sciences)
Critical realism is a philosophical approach to understanding science, and in particular social science, initially developed by Roy Bhaskar (1944–2014). It specifically opposes forms of empiricism and positivism by viewing science as concerned with identifying causal mechanisms. In the last decades of the twentieth century it also stood against various forms of postmodernism and poststructuralism by insisting on the reality of objective existence. In contrast to positivism's methodological foundation, and poststructuralism's epistemological foundation, critical realism insists that (social) science should be built from an explicit ontology. Critical realism is one of a range of types of philosophical realism, as well as forms of realism advocated within social science such as analytic realism and subtle realism. Contemporary critical realism Overview Bhaskar developed a general philosophy of science that he described as transcendental realism and a special philosophy of the human sciences that he called critical naturalism. The two terms were combined by other authors to form the umbrella term critical realism. Transcendental realism attempts to establish that in order for scientific investigation to take place, the object of that investigation must have real, manipulable, internal mechanisms that can be actualized to produce particular outcomes. This is what we do when we conduct experiments. This stands in contrast to empiricist scientists' claim that all scientists can do is observe the relationship between cause and effect and impose meaning. 
Whilst empiricism, and positivism more generally, locate causal relationships at the level of events, critical realism locates them at the level of the generative mechanism, arguing that causal relationships are irreducible to empirical constant conjunctions of David Hume's doctrine; in other words, a constant conjunctive relationship between events is neither sufficient nor even necessary to establish a causal relationship. The implication of this is that science should be understood as an ongoing process in which scientists improve the concepts they use to understand the mechanisms that they study. It should not, in contrast to the claim of empiricists, be about the identification of a coincidence between a postulated independent variable and dependent variable. Positivism and naive falsificationism are also rejected on the grounds that a mechanism may exist but either a) go unactivated, b) be activated, but not perceived, or c) be activated, but counteracted by other mechanisms, which results in its having unpredictable effects. Thus, non-realisation of a posited mechanism cannot (in contrast to the claim of some positivists) be taken to signify its non-existence. Falsificationism can be viewed at the statement level (naive falsificationism) or at the theorem level (more common in practice). In this way, the two approaches can be reconciled to some extent. Critical naturalism argues that the transcendental realist model of science is equally applicable to both the physical and the human worlds. However, it argues, when we study the human world we are studying something fundamentally different from the physical world and must, therefore, adapt our strategy to studying it. 
Critical naturalism, therefore, prescribes social scientific methods which seek to identify the mechanisms producing social events, but with a recognition that these are in a much greater state of flux than those of the physical world (as human structures change much more readily than those of, say, a leaf). In particular, we must understand that human agency is made possible by social structures that themselves require the reproduction of certain actions/pre-conditions. Further, the individuals that inhabit these social structures are capable of consciously reflecting upon, and changing, the actions that produce them—a practice that is in part facilitated by social scientific research. Critical realism has become an influential movement in British sociology and social science in general as a reaction to, and reconciliation of, postmodern critiques. Developments Since Bhaskar took the first big steps in popularising the theory of critical realism in the 1970s, it has become one of the major strands of social scientific method, rivalling positivism/empiricism and post-structuralism/relativism/interpretivism. After his development of critical realism, Bhaskar went on to develop a philosophical system he calls dialectical critical realism, which is most clearly outlined in his weighty book, Dialectic: The Pulse of Freedom. An accessible introduction to Bhaskar's writings was written by Andrew Collier. Andrew Sayer has written accessible texts on critical realism in social science. Danermark et al. have also produced an accessible account. Margaret Archer is associated with this school, as is the ecosocialist writer Peter Dickens. David Graeber relies on critical realism, which he understands as a form of 'Heraclitean' philosophy, emphasizing flux and change over stable essences, in his anthropological book on the concept of value, Toward an Anthropological Theory of Value: The False Coin of Our Own Dreams.
Recently, attention has turned to the challenge of implementing critical realism in applied social research, including its use in studying organizations. Other authors (Fletcher 2016, Parr 2015, Bunt 2018, Hoddy 2018) have discussed which specific research methodologies and methods are conducive (or not) to research guided by critical realism as a philosophy of science. Critical realist meta-theories At its core, critical realism offers a theory of being and existence (ontology), but it takes a more open position in relation to the theory of knowledge (epistemology). As a result, a wide range of approaches have developed that seek to offer a framework for social research. Because they are neither theories in specific disciplines nor theories relating to specific aspects of society, these approaches are generally known as 'meta-theories'. Critical realist meta-theories include: the transformational model of social activity, the morphogenetic approach, Cambridge social ontology, critical discourse analysis, cultural political economy, critical realist feminism, and critical realist Marxism. The morphogenetic approach The morphogenetic approach is a critical realist framework for analysing social change originally developed by Margaret Archer in her text Social Origins of Educational Systems and systematised in a trilogy of social theory texts, Culture and Agency (1988), Realist Social Theory (1995), and Being Human (2000). The approach was developed primarily as a critical realist response to the structure-agency problem in which "we are simultaneously free and constrained and we also have some awareness of it".
At the centre of Archer's answer to this problem is 'analytical dualism', which entails an analytical separation of structure and agency so that the interaction between them can be studied and modelled by researchers; on this basis, Archer rejects alternative approaches that 'conflate' structure and agency into the single concept of 'practice', primarily directing her critique at Giddens' structuration theory. Archer extends the notion of analytical dualism to the distinction between "the material and the ideational aspects of social life", identifying 'culture' as a third fundamental aspect of society, alongside structure and agency. Therefore, the analysis of social change depends on modelling structure (S), agency (A), and culture (C), so that "social life comes in a SAC – always and everywhere". These concepts form the basis for the 'morphogenetic cycle', which splits social change into three processes: [T1] conditioning → [T2-T3] interaction → [T4] elaboration. At T1, agents (as individuals and as groups) are conditioned by the social structure and cultural system; from T2 to T3, agents act, react, and interact; at T4, the social structure and cultural system are changed (morphogenesis) or maintained (morphostasis). The morphogenetic approach has been advanced by Douglas Porpora, whose Reconstructing Sociology sought to introduce morphogenetic critical realism into the mainstream of American sociology. Before becoming explicitly aligned with the morphogenetic approach and critical realism, Porpora published two papers on the nature of culture and social structure that later had a major influence on morphogenetic critical realism. Cambridge social ontology Cambridge social ontology is an approach to ontology that is primarily associated with the work of philosopher Tony Lawson. The approach is centred on the Cambridge Social Ontology Group and its weekly Realist Workshop hosted by the University of Cambridge and led by Lawson.
While the group subscribes to critical realism, it identifies its aims with the study of ontology more generally rather than with any necessary allegiance to critical realist philosophy. At the heart of the Cambridge approach is a theory of social positioning in which any social system creates roles (or 'places' or 'slots') that are occupied by individuals. Each of these roles is attached to a series of rights and obligations; for example, one of the rights of a university lecturer is the right to use a university library, and one of their obligations is to deliver lectures. These rights and obligations interlock to form social structures, so that the rights of an individual in one social position usually correspond with the obligations of an individual in another; for example, the rights of the lecturer might correspond to the obligations of a librarian. In some cases, it is not individuals that occupy these social positions but 'communities', which are defined as "an identifiable, restricted and relatively enduring coherent grouping of people who share some set of concerns". It is important to stress that these communities can exist at a wide range of scales, that they are not necessarily attached to a particular geographical space, and that they can overlap and nest in various complex ways. Therefore, individuals sit within social systems by occupying a role, and they sit within communities by sharing in the community's interests in some way. A final crucial concept of the Cambridge social ontology approach is the notion of 'collective practices': a collective practice is a way of proceeding that (implicitly) bears the status of being (collectively) accepted within a community. In other words, collective practices are common ways of acting in any given situation that are reinforced through conformity, such as the forming of queues to pay for goods in stores or the etiquette of a particular game or sport.
Critical discourse analysis Discourse analysis is the analysis of texts and other meaningful signs with the purpose of understanding and/or explaining social phenomena. Critical discourse analysis (CDA) is primarily concerned with analysing the relationship between discourse and social relations of power in any given context. In contrast to post-structuralist and postmodernist approaches to discourse analysis (such as the Essex school), CDA relies on philosophical distinctions between discourse and other aspects of reality, especially insisting on the relative independence of power relations, material existence and individual agency. While not all CDA explicitly subscribes to critical realism (see, for example, the work of Ruth Wodak or Teun van Dijk), a critical realist ontology provides philosophical underpinnings for the social distinctions inherent to its approach to analysis. The main proponent of a critical realist approach to CDA is Norman Fairclough, whose philosophical underpinnings shifted from a Foucauldian perspective in his 1992 book Discourse and Social Change to an explicitly critical realist approach in his 1999 collaboration with Lilie Chouliaraki, Discourse in Late Modernity. Fairclough has subsequently published work developing the critical realist foundations of his version of CDA, particularly in collaboration with his Lancaster University colleagues Andrew Sayer and Bob Jessop. Fairclough explains how the main concepts of transcendental realism underpin his approach to the analysis of texts. Firstly, there is a distinction between knowledge (the 'transitive dimension') and that which knowledge is about (the 'intransitive dimension'); this underpins the CDA distinction between discourse and other aspects of reality.
Secondly, there is the distinction between experienced events (the 'empirical'), events themselves (the 'actual'), and the underlying mechanisms that give rise to events (the 'real'); this underpins the distinction between the reading of a text (the empirical), the text itself (the actual) and the causal structures underpinning the text's social effects (the real). While these critical realist distinctions are not commonly used in the empirical application of Fairclough's CDA, they are fundamental to the underlying social theory that justifies its application. More recently, other theorists have further developed CDA's critical realist underpinnings by focusing on the distinction between structure and agency, the distinction between discourse and 'non-discourse', and the concept of social practices. Cultural political economy Long-term collaborators Ngai-Ling Sum and Bob Jessop initially developed 'cultural political economy' (CPE) in a forum of the journal New Political Economy, responding to the strict disciplinarity of existing approaches to political economy. CPE also has roots in Jessop's seminal collaboration with Norman Fairclough and Andrew Sayer, which outlined a critical realist approach to 'semiosis', the inter-subjective production of meaning. CPE is most extensively outlined in Sum and Jessop's 2013 book Cultural Political Economy, where critical realism and the strategic-relational approach are identified as the twin foundations of the approach. These foundations lead to a central distinction at the heart of CPE between the 'semiotic and structural aspects of social life'. The 'semiotic' entails (a) the process by which individuals come to understand, apprehend, and make sense of the natural and social world, and (b) the process by which people (individually and in groups) come to create meaning through communication and signification, especially (though not exclusively) through the formation and use of language.
The semiotic is held to be foundational to all social relations and causally efficacious, so that it is both a part of social relations and a causal force in its own right. For the 'structural' aspects of social life, Sum and Jessop adopt the phrase 'structuration' from Anthony Giddens, but reject his broader approach because of its atemporality and its conflation of agents and their actions. In CPE, as in all critical realist meta-theories, social structure is held to be socially constructed, embedded in semiosis, but also not reducible to those semiotic processes, having its own material existence in social institutions, the actions of individuals, and the physical world. Jessop explains that 'semiotic' and 'structural' aspects of social life change over time through three evolutionary mechanisms: (i) variation - there is constant variation in human practices and social arrangements, but especially at times of crisis; (ii) selection - some practices, semiotic constructions, and structural arrangements are selected, especially as the possible routes to recovery from a crisis; (iii) retention - from the selected arrangements and practices, those that prove to be effective are retained, especially when they help overcome a crisis. It is important to note that this process of variation-selection-retention is not a functionalist account in which society is continuously 'improving', because the process is shaped by the strategies of individual agents and social structures of (unequal) power. Critical realist Marxism A development of Bhaskar's critical realism lies at the ontological root of some contemporary streams of Marxist political and economic theory. These authors consider that the realist philosophy described by Bhaskar in A Realist Theory of Science is compatible with Marx's work in that it differentiates between an intransitive reality, which exists independently of human knowledge of it, and the socially produced world of science and empirical knowledge.
This dualist logic is present in the Marxian theory of ideology, according to which social reality may be very different from its empirically observable surface appearance. Notably, Alex Callinicos has argued for a 'critical realist' ontology in the philosophy of social science and explicitly acknowledges Bhaskar's influence (while also rejecting the latter's 'spiritualist turn' in his later work). The relationship between critical realist philosophy and Marxism has also been discussed in an article co-authored by Bhaskar and Callinicos and published in the Journal of Critical Realism. Disciplinary applications Economics Heterodox economists like Tony Lawson, Lars Pålsson Syll, Frederic Lee, or Geoffrey Hodgson have used the ideas of critical realism in economics, especially the dynamic idea of macro-micro interaction. According to critical realist economists, the central aim of economic theory is to provide explanations in terms of hidden generative structures. This position combines transcendental realism with a critique of mainstream economics. It argues that mainstream economics (i) relies excessively on deductivist methodology, (ii) embraces an uncritical enthusiasm for formalism, and (iii) believes in strong conditional predictions in economics despite repeated failures. The world that mainstream economists study is the empirical world. But according to critical realists this world is "out of phase" (Lawson) with the underlying ontology of economic regularities. The mainstream view thus captures only a limited reality, because empirical realists presume that the objects of inquiry are solely "empirical regularities"—that is, objects and events at the level of the experienced. The critical realist views the domain of real causal mechanisms as the appropriate object of economic science, whereas the positivist view is that reality is exhausted in empirical, i.e. experienced, reality.
Tony Lawson argues that economics ought to embrace a "social ontology" to include the underlying causes of economic phenomena. Ecological economics The British ecological economist Clive Spash holds the opinion that critical realism offers a thorough basis—as a philosophy of science—for the theoretical foundation of ecological economics. He therefore uses a critical realist lens for conducting research in (ecological) economics. Other scholars also base ecological economics on a critical realist foundation, such as Leigh Price of Rhodes University. Ecology, climate change and environmental sustainability Critical realism's implications for ecology, climate change and environmental sustainability were explored by Roy Bhaskar and others in their 2010 book Interdisciplinarity and Climate Change: Transforming Knowledge and Practice for Our Global Future. Nordic ecophilosophers such as Karl Georg Høyer, Sigmund Kvaløy Setreng and Trond Gansmo Jakobsen saw the value of critical realism as a basis for the approach to ecology popularized by the Norwegian philosopher Arne Næss, versions of which are sometimes called deep ecology. Roy Bhaskar, Petter Næss, and Karl Høyer collaborated on an edited volume entitled Ecophilosophy in a World of Crisis: Critical Realism and the Nordic Contributions. Zimbabwean-born ecophilosopher Leigh Price has used critical realism to develop a philosophy for ecology of her own, and she has argued for a common-sense approach to climate change and environmental management.
She has also used Bhaskar's critical realist ontology to arrive at a definition of ecological resilience as "the process by which the internal complexity of an ecosystem and its coherence as a whole – stemming from the relative 'richness' or 'modularity' of emergent structures and behaviours/growth/life-history of species – results in the inter-dependencies of its components or their binding as totalities such that the identity of the ecosystem tends to remain intact, despite intrinsic and/or extrinsic entropic forces". Other academics in this field who have worked with critical realism include Jenneth Parker, Research Director at the Schumacher Institute for Sustainable Systems, and Sarah Cornell, Associate Professor at the Stockholm Resilience Centre. International relations Since 2000, critical realist philosophy has also been increasingly influential in the field of international relations (IR) theory. In 2011, Iver B. Neumann said it was "almost all the rage" among those IR scholars who are concerned with questions of philosophy of science. Bob Jessop, Colin Wight, Milja Kurki, Jonathan Joseph and Hidemi Suganami have all published major works on the utility of beginning IR research from a critical realist social ontology—an ontology they all credit Roy Bhaskar with originating. Education Critical realism (CR) offers a framework that can be used to approach complex questions at the interface between educational theory and educational practice. Nevertheless, CR is not a theory but a philosophical approach intended to under-labour for social science research. As a meta-theory, it does not explain any social phenomenon. Instead, the processes and techniques of the discipline, in this case education, will provide the means for translating CR principles into a substantive study.
This means that for any study framed under a CR approach, there is a need to choose a social theory (that shares a realist ontology) that explains why things are the way they are rather than some other way. As in the different disciplines described above, in educational research under a CR approach, the overall aim is to explain educational phenomena in terms of the hidden generative mechanisms that make the events we observe happen. Rebecca Eynon of the Oxford Internet Institute believes that when investigating issues in the field of educational technology it is fundamental to address the real problems, which, she argues, relate to deeper and often imperceptible structural issues that constrain technology use. In the field of educational technology, particularly when exploring how technology is used or appropriated by teachers and students, an understanding of the social world as complex and multi-layered is helpful. Clive Lawson of the Cambridge Social Ontology Group has addressed the topic of technology from a CR perspective. His book Technology and Isolation (2017) sets out a persuasive 'ontology of technology' and applies this perspective to explain the causal powers of technology, which is highly relevant for educational purposes. His main argument is that technology has the power to enlarge human capabilities, but only if the technology/artefact is enrolled in the network of interdependencies in a particular system. He suggests a conception of technical activity "as that activity that harnesses the causal capacities and powers of material artefacts in order to extend human capabilities" (p. 109). David Scott has written extensively about CR and education. In his book Education, Epistemology and Critical Realism (2010), he argues for a need to pay greater attention to the meta-theories which underpin educational research. An important issue for educational research, Scott argues, is the relationship between structure and agency.
The work of Margaret Archer uses the morphogenetic cycle (explained in one of the sections above) as an analytical tool that allows the researcher to explore the interplay between structure and agency at any given moment in time. To do so, she employs analytical dualism, a methodological manoeuvre that separates structure from agency, solely for the sake of analysis, so that their interplay can be explored. This approach was utilised by Robert Archer in his book Education Policy and Realist Social Theory (2002). Health Critical realism has been used widely within health research in several different ways, including (i) informing methodological decisions, (ii) understanding the causes of health and illness, and (iii) informing ways of improving health—whether in healthcare programmes or public health promotion. In a similar pattern to that seen in other fields, researchers studying health and illness have used critical realism to orient their methodological decisions. Critical realism has been argued to represent a philosophical approach for the health sciences that is an alternative, and preferable, to the empirical emphasis within positivism and the relativist emphasis within constructivism. Comparable arguments are made in a range of fields such as the sociology of health and illness, mental health research, and nursing. In the view of Wiltshire, use of critical realism to orient methodological decisions helps to encourage interdisciplinary health research by disrupting long-standing qualitative-quantitative divides between disciplinary traditions. Critical realism has also been discussed in comparison to alternatives within health and rehabilitation science; in this area, DeForge and Shaw concluded that "critical realists tend to forefront ontological considerations and focus on the hidden, taken-for-granted structures from 'the domain of the real'."
One significant methodological implication within health research has been the introduction of evaluation frameworks that are underpinned by critical realist ideas. Evaluation research is important for healthcare research in particular because new health-related interventions and programmes need to be assessed for effectiveness. Clark and colleagues have summarised the contribution of critical realism in this domain. In a recent presentation, Alderson positions critical realism as a toolkit of practical ideas that helps researchers to extend and clarify their analyses. Research that has tried to better understand the causes of health and illness has also turned to critical realism. Scambler has applied sociology to the understanding of medicine, health and illness, where he presents the role of class relations and political power in reproducing and exacerbating health inequalities. Other research into the social determinants of health has drawn on critical realism in understanding, for example, healthcare inequalities, the rural determinants of health, and the non-determinant causal relationship between poor housing and illness. Others have found critical realism useful in seeking an appropriate social theory of health determination through the complex pathways and mechanisms that come to impact health and illness. Likewise, critical realism has been used to advance an account of the causes of mental ill-health. Critical realism has also been used in health research to inform ways of improving health—whether in healthcare programmes or public health promotion. Clark and colleagues argue that critical realism can help to understand and evaluate heart health programmes, noting that their approach "embraces measurement of objective effectiveness but also examines the mechanisms, organizational and contextual-related factors causing these outcomes."
It has also been used as an explanatory framework regarding health decisions, such as the use of home dialysis for patients with chronic kidney disease. Another useful example in the context of nursing practice argues that critical realism offers a philosophy that is a natural fit with human and health science enquiry, including nursing. At the level of public health, Connelly has strongly advocated for critical realist ideas, concluding that "for health promotion theory and practice to make a difference an engagement with critical realism is now long overdue." Critical realism is also applied in empirical studies, such as an ethnographic study in Nigeria arguing that understanding the underlying mechanisms associated with smoking in different societies will enable effective implementation of tobacco control policies that work in various settings.
See also
Critical realism (philosophy of perception)
References
Further reading
Alderson, P. (2013). Childhoods Real and Imagined: An Introduction to Childhood Studies and Critical Realism, Volume 1. London: Routledge.
Alderson, P. (2021). Critical Realism for Health and Illness Research: A Practical Introduction. Bristol: Policy Press.
Archer, M., Bhaskar, R., Collier, A., Lawson, T. and Norrie, A. (1998). Critical Realism: Essential Readings. London: Routledge.
Archer, R. (2002). Education Policy and Realist Social Theory. London: Routledge.
Bhaskar, R. (1975 [1997]). A Realist Theory of Science, 2nd edition. London: Verso.
Bhaskar, R. (1998). The Possibility of Naturalism: A Philosophical Critique of the Contemporary Human Sciences, 3rd edition. London: Routledge.
Bhaskar, R. (1993). Dialectic: The Pulse of Freedom. London: Verso.
Bhaskar, R., Danermark, B. and Price, L. (2017). Interdisciplinarity and Wellbeing: A Critical Realist General Theory of Interdisciplinarity. London: Routledge.
Bhaskar, R. (2016). Enlightened Common Sense: The Philosophy of Critical Realism, edited with a preface by M. Hartwig. London: Routledge.
Collier, A. (1994). Critical Realism: An Introduction to Roy Bhaskar's Philosophy. London: Verso.
Frauley, J. and Pearce, F. (eds.) (2007). Critical Realism and the Social Sciences. Toronto and Buffalo: University of Toronto Press.
Danermark, B., Ekström, M., Jakobsen, L. and Karlsson, J. Ch. (2002). Explaining Society: An Introduction to Critical Realism in the Social Sciences (Critical Realism: Interventions). Abingdon: Routledge.
Hartwig, M. (2007). Dictionary of Critical Realism. London: Routledge.
Lopez, J. and Potter, G. (2001). After Postmodernism: An Introduction to Critical Realism. London: The Athlone Press.
Maton, K. and Moore, R. (eds.) (2010). Social Realism, Knowledge and the Sociology of Education: Coalitions of the Mind. London: Continuum.
Næss, P. and Price, L. (eds.) (2016). Crisis System: A Critical Realist and Environmental Critique of Economics and the Economy. London: Routledge.
Price, L. and Lotz-Sisitka, H. (eds.) (2015). Critical Realism, Environmental Learning and Social-Ecological Change. London: Routledge.
Sayer, A. (1992). Method in Social Science: A Realist Approach. London: Routledge.
Sayer, A. (2000). Realism and Social Science. London: Sage.
External links
Bhaskar and American Critical Realism
Noumenon
In philosophy, a noumenon (from Ancient Greek νοούμενον; plural: noumena) is knowledge posited as an object that exists independently of human sense. The term noumenon is generally used in contrast with, or in relation to, the term phenomenon, which refers to any object of the senses. Immanuel Kant first developed the notion of the noumenon as part of his transcendental idealism, suggesting that while we know the noumenal world to exist because human sensibility is merely receptive, it is not itself sensible and must therefore remain otherwise unknowable to us. In Kantian philosophy, the noumenon is often associated with the unknowable "thing-in-itself". However, the nature of the relationship between the two is not made explicit in Kant's work, and remains a subject of debate among Kant scholars as a result. Etymology The Greek word νοούμενον (plural νοούμενα) is the neuter middle-passive present participle of νοεῖν (noeîn, 'to think'), which in turn originates from the word νοῦς (noûs, 'mind'), an Attic contracted form of νόος (nóos). A rough equivalent in English would be "that which is thought", or "the object of an act of thought". Historical predecessors The Indian Vedānta philosophy (specifically Advaita), the roots of which go back to the Vedic period, talks of the ātman (self) in terms similar to the noumenon. Regarding the equivalent concepts in Plato, Ted Honderich writes: "Platonic Ideas and Forms are noumena, and phenomena are things displaying themselves to the senses... This dichotomy is the most characteristic feature of Plato's dualism; that noumena and the noumenal world are objects of the highest knowledge, truths, and values is Plato's principal legacy to philosophy." Kantian noumena Overview As expressed in Kant's Critique of Pure Reason, human understanding is structured by "concepts of the understanding" or pure categories of understanding, found prior to experience in the mind and which make outer experiences possible as counterpart to the rational faculties of the mind.
By Kant's account, when one employs a concept to describe or categorize noumena (the objects of inquiry, investigation or analysis of the workings of the world), one is also employing a way of describing or categorizing phenomena (the observable manifestations of those objects of inquiry, investigation or analysis). Kant posited methods by which human understanding makes sense of and thus intuits phenomena that appear to the mind: the concepts of the transcendental aesthetic, as well as that of the transcendental analytic, transcendental logic and transcendental deduction. Taken together, Kant's "categories of understanding" are the principles of the human mind which necessarily are brought to bear in attempting to understand the world in which we exist (that is, to understand, or attempt to understand, "things in themselves"). In each instance the word "transcendental" refers to the process that the human mind must exercise to understand or grasp the form of, and order among, phenomena. Kant asserts that to "transcend" a direct observation or experience is to use reason and classifications to strive to correlate with the phenomena that are observed. Humans can make sense out of phenomena in these various ways, but in doing so can never know the "things-in-themselves", the actual objects and dynamics of the natural world in their noumenal dimension - this being the negative correlate to phenomena, and that which escapes the limits of human understanding. By Kant's Critique, our minds may attempt to correlate in useful ways, perhaps even closely accurate ways, with the structure and order of the various aspects of the universe, but cannot know these "things-in-themselves" (noumena) directly.
Rather, we must infer the extent to which human rational faculties can reach the object of "things-in-themselves" from our observations of the manifestations of those things that can be perceived via the physical senses (that is, of phenomena), and, by ordering these perceptions in the mind, infer the validity of our perceptions within the rational categories used to understand them in a rational system. This rational system (the transcendental analytic) comprises the categories of the understanding, free from empirical contingency. According to Kant, objects of which we are cognizant via the physical senses are merely representations of unknown somethings—what Kant refers to as the transcendental object—as interpreted through the a priori categories of the understanding. These unknown somethings are manifested within the noumenon—although we can never know how or why as our perceptions of these unknown somethings via our physical senses are bound by the limitations of the categories of the understanding and we are therefore never able to fully know the "thing-in-itself". Noumenon and the thing-in-itself Many accounts of Kant's philosophy treat "noumenon" and "thing-in-itself" as synonymous, and there is textual evidence for this relationship. However, Stephen Palmquist holds that "noumenon" and "thing-in-itself" are only loosely synonymous, inasmuch as they represent the same concept viewed from two different perspectives, and other scholars also argue that they are not identical. Schopenhauer criticised Kant for changing the meaning of "noumenon". However, this opinion is far from unanimous. Kant's writings show points of difference between noumena and things-in-themselves.
For instance, he regards things-in-themselves as existing:

...though we cannot know these objects as things in themselves, we must yet be in a position at least to think them as things in themselves; otherwise we should be landed in the absurd conclusion that there can be appearance without anything that appears.

He is much more doubtful about noumena:

But in that case a noumenon is not for our understanding a special [kind of] object, namely, an intelligible object; the [sort of] understanding to which it might belong is itself a problem. For we cannot in the least represent to ourselves the possibility of an understanding which should know its object, not discursively through categories, but intuitively in a non-sensible intuition.

A crucial difference between the noumenon and the thing-in-itself is that to call something a noumenon is to claim a kind of knowledge, whereas Kant insisted that the thing-in-itself is unknowable. Interpreters have debated whether the latter claim makes sense: it seems to imply that we know at least one thing about the thing-in-itself (i.e., that it is unknowable). But Stephen Palmquist explains that this is part of Kant's definition of the term, to the extent that anyone who claims to have found a way of making the thing-in-itself knowable must be adopting a non-Kantian position.

Positive and negative noumena

Kant also makes a distinction between positive and negative noumena:

If by 'noumenon' we mean a thing so far as it is not an object of our sensible intuition, and so abstract from our mode of intuiting it, this is a noumenon in the negative sense of the term. But if we understand by it an object of a non-sensible intuition, we thereby presuppose a special mode of intuition, namely, the intellectual, which is not that which we possess, and of which we cannot comprehend even the possibility. This would be 'noumenon' in the positive sense of the term.
The positive noumena, if they existed, would be immaterial entities that can only be apprehended by a special, non-sensory faculty: "intellectual intuition" (nicht sinnliche Anschauung). Kant doubts that we have such a faculty, because for him intellectual intuition would mean that thinking of an entity, and its being represented, would be the same. He argues that humans have no way to apprehend positive noumena:

Since, however, such a type of intuition, intellectual intuition, forms no part whatsoever of our faculty of knowledge, it follows that the employment of the categories can never extend further than to the objects of experience. Doubtless, indeed, there are intelligible entities corresponding to the sensible entities; there may also be intelligible entities to which our sensible faculty of intuition has no relation whatsoever; but our concepts of understanding, being mere forms of thought for our sensible intuition, could not in the least apply to them. That, therefore, which we entitle 'noumenon' must be understood as being such only in a negative sense.

The noumenon as a limiting concept

Even if noumena are unknowable, they are still needed as a limiting concept, Kant tells us. Without them, there would be only phenomena, and since we potentially have complete knowledge of our phenomena, we would in a sense know everything. In his own words:

Further, the concept of a noumenon is necessary, to prevent sensible intuition from being extended to things in themselves, and thus to limit the objective validity of sensible knowledge. What our understanding acquires through this concept of a noumenon, is a negative extension; that is to say, understanding is not limited through sensibility; on the contrary, it itself limits sensibility by applying the term noumena to things in themselves (things not regarded as appearances).
But in so doing it at the same time sets limits to itself, recognising that it cannot know these noumena through any of the categories, and that it must therefore think them only under the title of an unknown something.

Furthermore, for Kant, the existence of a noumenal world limits reason to what he perceives to be its proper bounds, making many questions of traditional metaphysics, such as the existence of God, the soul, and free will, unanswerable by reason. Kant derives this from his definition of knowledge as "the determination of given representations to an object". As there are no appearances of these entities in the phenomenal realm, Kant is able to make the claim that they cannot be known to a mind that works upon "such knowledge that has to do only with appearances". These questions are ultimately the "proper object of faith, but not of reason".

The dual-object and dual-aspect interpretations

Kantian scholars have long debated two contrasting interpretations of the thing-in-itself. One is the dual-object view, according to which the thing-in-itself is an entity distinct from the phenomena to which it gives rise. The other is the dual-aspect view, according to which the thing-in-itself and the thing-as-it-appears are two "sides" of the same thing. This view is supported by the textual fact that "Most occurrences of the phrase 'things-in-themselves' are shorthand for the phrase, 'things considered in themselves' (Dinge an sich selbst betrachtet)." Although we cannot see things apart from the way we do in fact perceive them via the physical senses, we can think them apart from our mode of sensibility (physical perception), thus making the thing-in-itself a kind of noumenon or object of thought.
Criticisms of Kant's noumenon

Pre-Kantian critique

Though the term noumenon did not come into common usage until Kant, the idea that undergirds it, that matter has an absolute existence which causes it to emanate certain phenomena, had historically been subjected to criticism. George Berkeley, who pre-dated Kant, asserted that matter, independent of a perceiving mind, is metaphysically impossible. Qualities associated with matter, such as shape, color, smell, texture, weight, temperature, and sound are all dependent on minds, which allow only for relative perception, not absolute perception. The complete absence of such minds (and, more importantly, of an omnipotent mind) would render those same qualities unobservable and even unimaginable. Berkeley called this philosophy immaterialism: essentially, there could be no such thing as matter without a mind.

Schopenhauer's critique

Schopenhauer claimed that Kant used the word noumenon incorrectly. He explained in his "Critique of the Kantian philosophy", which first appeared as an appendix to The World as Will and Representation:

The difference between abstract and intuitive cognition, which Kant entirely overlooks, was the very one that ancient philosophers indicated as φαινόμενα [phainomena] and νοούμενα [nooumena]; the opposition and incommensurability between these terms proved very productive in the philosophemes of the Eleatics, in Plato's doctrine of Ideas, in the dialectic of the Megarics, and later in the scholastics, in the conflict between nominalism and realism. This latter conflict was the late development of a seed already present in the opposed tendencies of Plato and Aristotle. But Kant, who completely and irresponsibly neglected the issue for which the terms φαινομένα and νοούμενα were already in use, then took possession of the terms as if they were stray and ownerless, and used them as designations of things in themselves and their appearances.
The noumenon's original meaning of "that which is thought" is not compatible with the "thing-in-itself", the latter being Kant's term for things as they exist apart from their existence as images in the mind of an observer. In a footnote to this passage, Schopenhauer provides the following passage from the Outlines of Pyrrhonism (Bk. I, ch. 13) of Sextus Empiricus to demonstrate the original distinction between phenomenon and noumenon according to ancient philosophers: νοούμενα φαινομένοις ἀντετίθη Ἀναξαγόρας ('Anaxagoras opposed what is thought to what appears.')

See also

Always already
Anatta
Condition of possibility
Essence–energies distinction
Haecceity
Hypokeimenon
Ineffability
Master argument by George Berkeley
Observation
Qualia
Schopenhauer's criticism of the Kantian philosophy
Transcendental idealism
Unobservable