In mathematics, a field of sets is a mathematical structure consisting of a pair (X, ℱ) of a set X and a family ℱ of subsets of X, called an algebra over X, that contains the empty set as an element and is closed under the operations of taking complements in X, finite unions, and finite intersections. Fields of sets should not be confused with fields in ring theory nor with fields in physics. Similarly, the term "algebra over X" is used here in the sense of a Boolean algebra and should not be confused with algebras over fields or rings in ring theory. Fields of sets play an essential role in the representation theory of Boolean algebras: every Boolean algebra can be represented as a field of sets.

== Definitions ==

A field of sets is a pair (X, ℱ) consisting of a set X and a family ℱ of subsets of X, called an algebra over X, that has the following properties:

1. Closed under complementation in X: X ∖ F ∈ ℱ for all F ∈ ℱ.
2. Contains the empty set as an element: ∅ ∈ ℱ. Assuming that (1) holds, condition (2) is equivalent to X ∈ ℱ.
3. Any/all of the following equivalent conditions hold:
   - Closed under binary unions: F ∪ G ∈ ℱ for all F, G ∈ ℱ.
   - Closed under binary intersections: F ∩ G ∈ ℱ for all F, G ∈ ℱ.
   - Closed under finite unions: F₁ ∪ ⋯ ∪ Fₙ ∈ ℱ for all integers n ≥ 1 and all F₁, …, Fₙ ∈ ℱ.
   - Closed under finite intersections: F₁ ∩ ⋯ ∩ Fₙ ∈ ℱ for all integers n ≥ 1 and all F₁, …, Fₙ ∈ ℱ.

In other words, ℱ forms a subalgebra of the power set Boolean algebra of X (with the same identity element X ∈ ℱ). Many authors refer to ℱ itself as a field of sets. Elements of X are called points, while elements of ℱ are called complexes and are said to be the admissible sets of X.

A field of sets (X, ℱ) is called a σ-field of sets, and the algebra ℱ is called a σ-algebra, if the following additional condition (4) is satisfied:

4. Any/both of the following equivalent conditions hold:
   - Closed under countable unions: F₁ ∪ F₂ ∪ ⋯ ∈ ℱ for all F₁, F₂, … ∈ ℱ.
   - Closed under countable intersections: F₁ ∩ F₂ ∩ ⋯ ∈ ℱ for all F₁, F₂, … ∈ ℱ.
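For finite examples the defining closure conditions can be checked mechanically. The following is a minimal sketch; the helper name `is_field_of_sets` is hypothetical, and of course the definition itself places no finiteness restriction on X or ℱ:

```python
def is_field_of_sets(X, F):
    """Check the defining properties of a field of sets (X, F) for a
    finite set X and a finite collection F of frozensets."""
    F = set(F)
    if frozenset() not in F:                  # (2) contains the empty set
        return False
    for A in F:
        if frozenset(X) - A not in F:         # (1) closed under complement
            return False
        for B in F:
            if A | B not in F:                # (3) closed under binary union
                return False
    # Closure under binary (hence finite) intersection then follows
    # from (1) and (3) via De Morgan's laws.
    return True

X = {1, 2, 3, 4}
F = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(X)]
print(is_field_of_sets(X, F))      # True
print(is_field_of_sets(X, F[:3]))  # False: the complement of the empty set is missing
```

Note that only complement and binary union need to be tested explicitly; closure under intersection is then automatic, mirroring the equivalences listed above.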
== Fields of sets in the representation theory of Boolean algebras ==

=== Stone representation ===

For an arbitrary set Y, its power set 2^Y (or, somewhat pedantically, the pair (Y, 2^Y) of this set and its power set) is a field of sets. If Y is finite (say, with n elements), then 2^Y is finite (with 2^n elements). Every finite field of sets (that is, (X, ℱ) with ℱ finite, while X may be infinite) admits a representation of the form (Y, 2^Y) with Y finite; this means a function f : X → Y that establishes a one-to-one correspondence between ℱ and 2^Y via inverse image: S = f⁻¹[B] = {x ∈ X : f(x) ∈ B}, where S ∈ ℱ and B ⊆ Y. One notable consequence: the number of complexes, if finite, is always of the form 2^n. To this end one chooses Y to be the set of all atoms of the given field of sets, and defines f by f(x) = A whenever a point x ∈ X belongs to a complex A ∈ ℱ that is an atom; the latter means that no nonempty subset of A different from A can be a complex. In other words: the atoms form a partition of X; Y is the corresponding quotient set; and f is the corresponding canonical surjection.
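The atom construction above is easy to carry out concretely. A minimal sketch for a finite field of sets (the helper names are hypothetical):

```python
def atoms(F):
    """Atoms of a finite field of sets: the minimal nonempty complexes.
    Assumes F is a collection of frozensets forming a field of sets."""
    nonempty = [A for A in F if A]
    return [A for A in nonempty if not any(B < A for B in nonempty)]

X = {1, 2, 3, 4}
F = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(X)]
ats = atoms(F)                                        # atoms partition X
f = {x: next(A for A in ats if x in A) for x in X}    # canonical surjection
assert len(F) == 2 ** len(ats)                        # |F| = 2^n, n = number of atoms
```

The map `f` sends each point to the unique atom containing it, and the inverse-image correspondence between ℱ and the power set of the atoms follows.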
Similarly, every finite Boolean algebra can be represented as a power set – the power set of its set of atoms; each element of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set representation can be constructed more generally for any complete atomic Boolean algebra. In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean algebra correspond to its ultrafilters and that an atom is below an element of a finite Boolean algebra if and only if that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters, similar to Dedekind cuts. Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top element. (The approach is equivalent as the ultrafilters of a Boolean algebra are precisely the pre-images of the top elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded as a generalization of the representation of finite Boolean algebras by truth tables. 
=== Separative and compact fields of sets: towards Stone duality ===

A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a complex containing one and not the other. A field of sets is called compact if and only if for every proper filter over X the intersection of all the complexes contained in the filter is non-empty. These definitions arise from considering the topology generated by the complexes of a field of sets. (It is just one of the notable topologies on the given set of points; it often happens that another topology is given, with quite different properties, in particular not zero-dimensional.) Given a field of sets (X, ℱ), the complexes form a base for a topology. We denote by T(X) the corresponding topological space, (X, 𝒯), where 𝒯 is the topology formed by taking arbitrary unions of complexes. Then:

- T(X) is always a zero-dimensional space.
- T(X) is a Hausdorff space if and only if the field of sets is separative.
- T(X) is a compact space, with the complexes ℱ as its compact open sets, if and only if the field of sets is compact.
- T(X) is a Boolean space, with the complexes ℱ as its clopen sets, if and only if the field of sets is both separative and compact (in which case it is described as being descriptive).

The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes of the Stone representation.
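For a finite field of sets the topology 𝒯 generated by the complexes can be enumerated outright. A brute-force sketch, feasible only because everything here is finite (the helper name is hypothetical):

```python
from itertools import combinations

def topology_from_complexes(F):
    """All arbitrary (here: finite) unions of complexes, i.e. the
    topology T(X) with the complexes as a base."""
    F = [frozenset(A) for A in F]
    opens = set()
    for r in range(len(F) + 1):
        for combo in combinations(F, r):
            opens.add(frozenset().union(*combo))   # union of the chosen complexes
    return opens

F = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 2, 3, 4})]
T = topology_from_complexes(F)
# Zero-dimensionality in miniature: every basic open set is clopen,
# since each open set's complement is again open.
assert all(frozenset({1, 2, 3, 4}) - U in T for U in T)
```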
The area of mathematics known as Stone duality is founded on the fact that the Stone representation of a Boolean algebra can be recovered purely from the corresponding Stone space, whence a duality exists between Boolean algebras and Boolean spaces.

== Fields of sets with additional structure ==

=== Sigma algebras and measure spaces ===

If an algebra over a set is closed under countable unions (hence also under countable intersections), it is called a sigma algebra and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called measurable sets. The Loomis–Sikorski theorem provides a Stone-type duality between countably complete Boolean algebras (which may be called abstract sigma algebras) and measurable spaces.

A measure space is a triple (X, ℱ, μ) where (X, ℱ) is a measurable space and μ is a measure defined on it. If μ is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample space. The points of a sample space are called sample points and represent potential outcomes, while the measurable sets (complexes) are called events and represent properties of outcomes to which we wish to assign probabilities. (Many use the term sample space simply for the underlying set of a probability space, particularly in the case where every subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability theory respectively.

In applications to physics we often deal with measure spaces and probability spaces derived from rich mathematical structures such as inner product spaces or topological groups which already have a topology associated with them; this should not be confused with the topology generated by taking arbitrary unions of complexes.
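In the finite case the triple (X, ℱ, μ) is completely concrete; here is a minimal illustrative probability space (sample space, events, measure) for two fair coin tosses, with the full power set as the σ-algebra:

```python
from fractions import Fraction

# Sample space: the four outcomes of two fair coin tosses.
omega = [(a, b) for a in "HT" for b in "HT"]

# Uniform probability measure on the power set of omega.
def P(event):
    return Fraction(len(event), len(omega))

# An event (complex): "the first toss is heads".
heads_first = {w for w in omega if w[0] == "H"}
assert P(heads_first) == Fraction(1, 2)
```

Countable-union closure is vacuous here, but the example shows the roles of sample points, events, and the measure in the definitions above.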
=== Topological fields of sets ===

A topological field of sets is a triple (X, 𝒯, ℱ) where (X, 𝒯) is a topological space and (X, ℱ) is a field of sets that is closed under the closure operator of 𝒯, or equivalently under the interior operator, i.e. the closure and the interior of every complex are also complexes. In other words, ℱ forms a subalgebra of the power set interior algebra on (X, 𝒯).

Topological fields of sets play a fundamental role in the representation theory of interior algebras and Heyting algebras. These two classes of algebraic structures provide the algebraic semantics for the modal logic S4 (a formal mathematical abstraction of epistemic logic) and intuitionistic logic respectively. Topological fields of sets representing these algebraic structures provide a related topological semantics for these logics.

Every interior algebra can be represented as a topological field of sets, with the underlying Boolean algebra of the interior algebra corresponding to the complexes of the topological field of sets and the interior and closure operators of the interior algebra corresponding to those of the topology. Every Heyting algebra can be represented by a topological field of sets, with the underlying lattice of the Heyting algebra corresponding to the lattice of complexes of the topological field of sets that are open in the topology. Moreover, the topological field of sets representing a Heyting algebra may be chosen so that the open complexes generate all the complexes as a Boolean algebra.
These related representations provide a well-defined mathematical apparatus for studying the relationship between truth modalities (possibly true vs. necessarily true, studied in modal logic) and notions of provability and refutability (studied in intuitionistic logic), and are thus deeply connected to the theory of modal companions of intermediate logics.

Given a topological space, the clopen sets trivially form a topological field of sets, as each clopen set is its own interior and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets; however, in general the topology of a topological field of sets can differ from the topology generated by taking arbitrary unions of complexes, and in general the complexes of a topological field of sets need not be open or closed in the topology.

==== Algebraic fields of sets and Stone fields ====

A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes. If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology.

Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. Given an interior algebra, we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes, and the construction produces a Stone field representing the interior algebra: the Stone representation.
(The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology, after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras; it should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra, which will be a finer topology.)

=== Preorder fields ===

A preorder field is a triple (X, ≤, ℱ) where (X, ≤) is a preordered set and (X, ℱ) is a field of sets.

Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all S ∈ ℱ:

Int(S) = {x ∈ X : there exists a y ∈ S with y ≤ x}

and

Cl(S) = {x ∈ X : there exists a y ∈ S with x ≤ y}

Similarly to topological fields of sets, preorder fields arise naturally in modal logic, where the points represent the possible worlds in the Kripke semantics of a theory in the modal logic S4, the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames, which are fields of sets with an additional accessibility relation providing representations of modal algebras.
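The two displayed operators are straightforward to transcribe. A sketch that follows the formulas verbatim; the toy preorder (the usual order on a few integers) and the function names are illustrative choices, not taken from the text, and the operators agree with the interior algebra's own operators only on the complexes of the representation:

```python
def Int(S, X, le):
    """Int(S) = {x in X : there exists y in S with y <= x}."""
    return {x for x in X if any(le(y, x) for y in S)}

def Cl(S, X, le):
    """Cl(S) = {x in X : there exists y in S with x <= y}."""
    return {x for x in X if any(le(x, y) for y in S)}

# Toy preordered set: {0, 1, 2} with the usual <=.
X = {0, 1, 2}
le = lambda a, b: a <= b
print(sorted(Int({1}, X, le)))  # [1, 2]
print(sorted(Cl({1}, X, le)))   # [0, 1]
```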
==== Algebraic and canonical preorder fields ====

A preorder field is called algebraic (or tight) if and only if it has a set of complexes 𝒜 which determines the preorder in the following manner: x ≤ y if and only if for every complex S ∈ 𝒜, x ∈ S implies y ∈ S. The preorder fields obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible worlds in which the sentences of the theory closed under necessity hold.

A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone representation.) While representation of modal algebras by general modal frames is possible for any normal modal algebra, it is only in the case of interior algebras (which correspond to the modal logic S4) that the general modal frame corresponds to a topological field of sets in this manner.

=== Complex algebras and fields of sets on relational structures ===

The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. For this we consider structures (X, (Rᵢ)_I, ℱ) where (X, (Rᵢ)_I) is a relational structure, i.e.
a set with an indexed family of relations defined on it, and (X, ℱ) is a field of sets.

The complex algebra (or algebra of complexes) determined by a field of sets (X, (Rᵢ)_I, ℱ) on a relational structure is the Boolean algebra with operators

C(X) = (ℱ, ∩, ∪, ′, ∅, X, (fᵢ)_I)

where for all i ∈ I, if Rᵢ is a relation of arity n + 1, then fᵢ is an operator of arity n and for all S₁, …, Sₙ ∈ ℱ

fᵢ(S₁, …, Sₙ) = {x ∈ X : there exist x₁ ∈ S₁, …, xₙ ∈ Sₙ such that Rᵢ(x₁, …, xₙ, x)}

This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations, as operators can be viewed as a special case of relations. If ℱ is the whole power set of X then C(X) is called a full complex algebra or power algebra.

Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field.

(Historically the term complex was first used in the case where the algebraic structure was a group; it has its origins in 19th-century group theory, where a subset of a group was called a complex.)
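The operator fᵢ determined by a relation is simple to compute in the finite case. A minimal sketch of the displayed formula (the helper name `complex_operator` is hypothetical):

```python
from itertools import product

def complex_operator(R, X):
    """Given an (n+1)-ary relation R as a set of tuples over X, return the
    n-ary operator f(S1, ..., Sn) = {x : exist x1 in S1, ..., xn in Sn
    with (x1, ..., xn, x) in R}."""
    def f(*S):
        return {x for x in X
                if any(xs + (x,) in R for xs in product(*S))}
    return f

X = {0, 1, 2}
R = {(0, 1), (1, 2)}          # a binary relation (arity 1 + 1), so f is unary
f = complex_operator(R, X)
assert f({0, 1}) == {1, 2}    # the R-images of 0 and 1
```

For a preorder taken as the single relation, the resulting unary operator is exactly the closure operator of the preorder field described above.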
== References ==

Goldblatt, R., "Algebraic Polymodal Logic: A Survey", Logic Journal of the IGPL, Volume 8, Issue 4, pp. 393–450, July 2000
Goldblatt, R., "Varieties of complex algebras", Annals of Pure and Applied Logic, 44, pp. 173–242, 1989
Johnstone, Peter T. (1982). Stone Spaces (3rd ed.). Cambridge: Cambridge University Press. ISBN 0-521-33779-8.
Naturman, C.A., Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics, 1991
Patrick Blackburn, Johan F.A.K. van Benthem, Frank Wolter (eds.), Handbook of Modal Logic, Volume 3 of Studies in Logic and Practical Reasoning, Elsevier, 2006

== External links ==

"Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Source: Wikipedia/Set_algebra
In mathematics, specifically in measure theory and functional analysis, the cylindrical σ-algebra or product σ-algebra is a type of σ-algebra which is often used when studying product measures or probability measures of random variables on Banach spaces. For a product space, the cylinder σ-algebra is the one generated by the cylinder sets. In the context of a Banach space X and its dual space X′ of continuous linear functionals, the cylindrical σ-algebra 𝔄(X, X′) is defined to be the coarsest σ-algebra (that is, the one with the fewest measurable sets) such that every continuous linear functional on X is a measurable function. In general, 𝔄(X, X′) is not the same as the Borel σ-algebra on X, which is the coarsest σ-algebra that contains all open subsets of X.

== Definition ==

Consider two topological vector spaces N and M with dual pairing ⟨·,·⟩ := ⟨·,·⟩_{N,M}. We can then define the so-called Borel cylinder sets

C_{f₁,…,fₘ,B} = {x ∈ N : (⟨x, f₁⟩, …, ⟨x, fₘ⟩) ∈ B}

for some f₁, …, fₘ ∈ M and B ∈ ℬ(ℝᵐ). The family of all these sets is denoted 𝔄_{f₁,…,fₙ}. Then

Cyl(N, M) = ⨂ₙ 𝔄_{f₁,…,fₙ}

is called the cylindrical algebra. Equivalently one can also look at the open cylinder sets and get the same algebra.
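In a finite-dimensional toy setting the defining membership condition of a cylinder set can be written down directly. A sketch only, modelling functionals as vectors with the dot product as pairing and B as a predicate standing in for a Borel set in ℝᵐ (the helper name is hypothetical):

```python
def in_cylinder(x, functionals, B):
    """Membership test for C_{f1,...,fm,B} = {x : (<x,f1>, ..., <x,fm>) in B}."""
    values = tuple(sum(xi * fi for xi, fi in zip(x, f)) for f in functionals)
    return B(values)

# Cylinder set over the first two coordinates of R^3, with base
# B = the open unit square in R^2.
f1, f2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
B = lambda v: all(0 < t < 1 for t in v)
assert in_cylinder((0.5, 0.5, 42.0), (f1, f2), B)       # third coordinate is free
assert not in_cylinder((2.0, 0.5, 42.0), (f1, f2), B)
```

The "cylinder" picture is visible here: membership constrains only the finitely many coordinates ⟨x, fᵢ⟩, leaving the remaining directions unconstrained.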
The cylindrical σ-algebra 𝔄(N, M) = σ(Cyl(N, M)) is the σ-algebra generated by the cylindrical algebra.

== Properties ==

Let X be a Hausdorff locally convex space which is also a hereditarily Lindelöf space; then 𝔄(X, X′) = ℬ(X).

== See also ==

Cylinder set
Cylinder set measure

== References ==

Ledoux, Michel; Talagrand, Michel (1991). Probability in Banach Spaces. Berlin: Springer-Verlag. pp. xii+480. ISBN 3-540-52013-9. MR 1102015. (See chapter 2.)
Lunardi, Alessandra; Miranda, Michele; Pallara, Diego (2016). Infinite Dimensional Analysis, Lecture Notes, 19th Internet Seminar, Dipartimento di Matematica e Informatica, Università degli Studi di Ferrara. (See chapter 2.)
Source: Wikipedia/Cylinder_σ-algebra
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann:

... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area.

There is also increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.

== Overview ==

A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise.
Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).

== Basics ==

=== Modelling as a substitute for direct measurement and experimentation ===

Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modelled estimates of outcomes. Within modelling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all known and observed entities and their relations that are not important for the task.
Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in informal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.

=== Simulation ===

A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.

=== Structure ===

Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities.
From a child's verbal description of a snowflake to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.

=== Systems ===

A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, where the state variables change continuously with respect to time.

=== Generating a model ===

Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different; that is to say, the differences between them comprise more than just a simple renaming of components. Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modellers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeller's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use. Building a model requires abstraction.
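The distinction between discrete and continuous system models, and between steady-state and dynamic simulation, can be illustrated with a minimal sketch: a continuous-state cooling law advanced in discrete time steps, settling toward its steady state. All parameter values here are hypothetical, chosen only for illustration:

```python
def simulate_cooling(T0, T_env, k, dt, steps):
    """Dynamic simulation (explicit Euler) of Newtonian cooling,
    dT/dt = -k * (T - T_env): a continuous state variable T stepped
    forward at discrete points in time."""
    T, history = T0, [T0]
    for _ in range(steps):
        T += -k * (T - T_env) * dt
        history.append(T)
    return history

traj = simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.5, steps=200)
# The dynamic trajectory converges to the steady-state value T = T_env.
assert abs(traj[-1] - 20.0) < 1e-2
```

A steady-state analysis would report only the equilibrium value T_env; the dynamic simulation additionally reports how the state evolves toward it over time.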
Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well). === Evaluating a model === A model is evaluated first and foremost by its consistency to empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include: Ability to explain past observations Ability to predict future observations Cost of use, especially in combination with other models Refutability, enabling estimation of the degree of confidence in the model Simplicity, or even aesthetic appeal People may attempt to quantify the evaluation of a model using a utility function. === Visualization === Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. 
=== Space mapping === Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model). == Types == == Applications == === Modelling and simulation === One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools. The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process. == See also == Abductive reasoning – Inference seeking the simplest and most likely explanation All models are wrong – Aphorism in statistics Data and information visualization – Visual representation of data Heuristic – Problem-solving method Inverse problem – Process of calculating the causal factors that produced a set of observations Scientific visualization – Interdisciplinary branch of science concerned with presenting scientific data visually Statistical model – Type of mathematical model == References == == Further reading == Nowadays there are some 40 magazines about scientific modelling, which offer all kinds of international forums. Since the 1960s there has been a strongly growing number of books and magazines about specific forms of scientific modelling. There is also a lot of discussion about scientific modelling in the philosophy-of-science literature.
A selection: Rainer Hegselmann, Ulrich Müller and Klaus Troitzsch (eds.) (1996). Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer. Paul Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press. Johannes Lenhard, Günter Küppers and Terry Shinn (Eds.) (2006) "Simulation: Pragmatic Constructions of Reality", Springer Berlin. Tom Ritchey (2012). "Outline for a Morphology of Modelling Methods: Contribution to a General Theory of Modelling". In: Acta Morphologica Generalis, Vol 1. No 1. pp. 1–20. William Silvert (2001). "Modelling as a Discipline". In: Int. J. General Systems. Vol. 30(3), pp. 261. Sergio Sismondo and Snait Gissis (eds.) (1999). Modeling and Simulation. Special Issue of Science in Context 12. Eric Winsberg (2018) "Philosophy and Climate Science" Cambridge: Cambridge University Press Eric Winsberg (2010) "Science in the Age of Computer Simulation" Chicago: University of Chicago Press Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World". In: Philosophy of Science 70: 105–125. Tomáš Helikar, Jim A Rogers (2009). "ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists". BioMed Central. == External links == Models. Entry in the Internet Encyclopedia of Philosophy Models in Science. Entry in the Stanford Encyclopedia of Philosophy The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77-100. Research in simulation and modelling of various physical systems Modelling Water Quality Information Center, U.S. Department of Agriculture Ecotoxicology & Models A Morphology of Modelling Methods. Acta Morphologica Generalis, Vol 1. No 1. pp. 
1–20.
The method of loci is a strategy for memory enhancement, which uses visualizations of familiar spatial environments in order to enhance the recall of information. The method of loci is also known as the memory journey, memory palace, journey method, memory spaces, or mind palace technique. This method is a mnemonic device adopted in ancient Roman and Greek rhetorical treatises (in the anonymous Rhetorica ad Herennium, Cicero's De Oratore, and Quintilian's Institutio Oratoria). Many memory contest champions report using this technique to recall faces, digits, and lists of words. It is the term most often found in specialised works on psychology, neurobiology, and memory, though it was used in the same general way at least as early as the first half of the nineteenth century in works on rhetoric, logic, and philosophy. John O'Keefe and Lynn Nadel refer to:... "the method of loci", an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject 'walks' through these loci in their imagination and commits an item to each one by forming an image between the item and any feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items. The efficacy of this technique has been well established (Ross and Lawrence 1968, Crovitz 1969, 1971, Briggs, Hawkins and Crovitz 1970, Lea 1975), as is the minimal interference seen with its use. The items to be remembered in this mnemonic system are mentally associated with specific physical locations. The method relies on memorized spatial relationships to establish order and recollect memorial content. 
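As a loose computational analogy (not a claim about how human memory itself works), the procedure O'Keefe and Nadel describe, depositing an item at each stop on a memorized route and then walking the route to retrieve the items in order, can be sketched as a mapping from loci to items. The route and its loci below are invented for illustration:

```python
# A memorized route with firmly established stop-points (loci),
# in the fixed order one "walks" them.
route = ["front door", "hallway mirror", "staircase", "kitchen sink", "dog's bed"]

def memorize(items):
    """Deposit one item (standing in for a vivid image) at each locus,
    in route order. Fails if there are more items than loci."""
    if len(items) > len(route):
        raise ValueError("not enough loci on this route")
    return {locus: item for locus, item in zip(route, items)}

def recall(palace):
    """'Walk' the route in order; each occupied locus yields its item,
    so recall order is fixed by the spatial layout, not by the items."""
    return [palace[locus] for locus in route if locus in palace]

palace = memorize(["seven", "of", "hearts"])
print(recall(palace))  # ['seven', 'of', 'hearts']
```

The point the sketch makes is the one in the text: the loci supply the ordering, so the rememberer never has to recall the sequence itself, only the fixed route.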
It is also known as the "Journey Method", used for storing lists of related items, or the "Roman Room" technique, which is most effective for storing unrelated information. == Contemporary usage == Many effective memorisers today resort to the "method of loci" to some degree. Contemporary memory competition, in particular the World Memory Championship, was initiated in 1991, and the first United States championship was held in 1997. Part of the competition requires committing to memory and recalling a sequence of digits, two-digit numbers, alphabetic letters, or playing cards. In a simple method of doing this, contestants, using various strategies well before competing, commit to long-term memory a unique vivid image associated with each item. They have also committed to long-term memory a familiar route with firmly established stop-points or loci. Then in the competition they need only deposit the image that they have associated with each item at the loci. To recall, they retrace the route, "stop" at each locus, and "observe" the image. They then translate this back to the associated item. For example, Ed Cooke, a Grand Master of Memory, describes to Josh Foer, in Foer's book Moonwalking with Einstein, how he uses the method of loci. First, he describes a very familiar location where he can clearly remember many different smaller locations, like the sink in his childhood home or his dog's bed. Cooke also advises that the more outlandish and vulgar the symbol used to memorize the material, the more likely it will stick. Memory champions elaborate on this by combining images. Eight-time World Memory Champion Dominic O'Brien uses this technique. The 2006 World Memory Champion, Clemens Mayer, used a 300-point-long journey through his house for his world record in "number half marathon", memorising 1040 random digits in a half-hour. An anonymous individual has used the method of loci to memorise pi to over 65,536 (2^16) digits.
The technique is taught as a metacognitive technique in learning-to-learn courses. It is generally applied to encoding the key ideas of a subject. Two approaches are: (1) link the key ideas of a subject and then deep-learn those key ideas in relation to each other; and (2) think through the key ideas of a subject in depth, re-arrange the ideas in relation to an argument, then link the ideas to loci in good order. The method of loci has also been shown to help those with depression remember positive, self-affirming memories. A study at the University of Maryland evaluated participants' ability to accurately recall two sets of familiar faces, once using a traditional desktop display and once using a head-mounted display. The study was designed to utilize the method of loci technique, with virtual environments resembling memory palaces. The study found an 8.8% recall improvement in favor of the head-mounted display, in part due to participants being able to employ their vestibular and proprioceptive sensations. == Method == The Rhetorica ad Herennium and most other sources recommend that the method of loci should be integrated with other forms of elaborative encoding (i.e., adding visual, auditory, or other details) to strengthen memory. However, due to the strength of spatial memory, simply mentally placing objects in real or imagined locations without further elaboration can be effective for simple associations. A variation of the "method of loci" involves creating imaginary locations (houses, palaces, roads, and cities) to which the same procedure is applied. It is accepted that there is a greater cost involved in the initial setup, but thereafter the performance is in line with the standard loci method. The purported advantage is to create towns and cities that each represent a topic or an area of study, thus offering an efficient filing of the information and an easy path for the regular review necessary for long-term memory storage.
Something that is likely a reference to the "method of loci" techniques survives to this day in the common English phrases "in the first place", "in the second place", and so forth. The technique is also used for second-language vocabulary learning, as polyglot Timothy Doner described in his 2014 TED talk. == Applicability of the term == The designation is not used with strict consistency. In some cases it refers broadly to what is otherwise known as the art of memory, the origins of which are related, according to tradition, in the story of Simonides of Ceos and the collapsing banquet hall. For example, after relating the story of how Simonides relied on remembered seating arrangements to call to mind the faces of recently deceased guests, Stephen M. Kosslyn remarks "[t]his insight led to the development of a technique the Greeks called the method of loci, which is a systematic way of improving one's memory by using imagery." Skoyles and Sagan indicate that "an ancient technique of memorization called Method of Loci, by which memories are referenced directly onto spatial maps" originated with the story of Simonides. Referring to mnemonic methods, Verlee Williams mentions, "One such strategy is the 'loci' method, which was developed by Simonides, a Greek poet of the fifth and sixth centuries BC." Loftus cites the foundation story of Simonides (more or less taken from Frances Yates) and describes some of the most basic aspects of the use of space in the art of memory. She states, "This particular mnemonic technique has come to be called the "method of loci". While place or position certainly figured prominently in ancient mnemonic techniques, no designation equivalent to "method of loci" was used exclusively to refer to mnemonic schemes relying upon space for organization. In other cases the designation is generally consistent, but more specific: "The Method of Loci is a Mnemonic Device involving the creation of a Visual Map of one's house." 
This term can be misleading: the ancient principles and techniques of the art of memory, hastily glossed in some of the works cited above, depended equally upon images and places. The designator "method of loci" does not convey the equal weight placed on both elements. Training in the art or arts of memory as a whole, as attested in classical antiquity, was far more inclusive and comprehensive in the treatment of this subject. == Spatial mnemonics and specific brain activation == Brain scans of "superior memorizers", 90% of whom use the method of loci technique, have shown that it involves activation of regions of the brain involved in spatial awareness, such as the medial parietal cortex, retrosplenial cortex, and the right posterior hippocampus. The medial parietal cortex is most associated with encoding and retrieving of information. Patients who have medial parietal cortex damage have trouble linking landmarks with certain locations; many of these patients are unable to give or follow directions and often get lost. The retrosplenial cortex is also linked to memory and navigation. In one study on the effects of selective granular retrosplenial cortex lesions in rats, the researcher found that damage to the retrosplenial cortex led to impaired spatial learning abilities. Rats with damage to this area failed to recall which areas of the maze they had already visited, rarely explored different arms of the maze, almost never recalled the maze in future trials, and took longer to reach the end of the maze, as compared to rats with a fully working retrosplenial cortex. In a classic study in cognitive neuroscience, O'Keefe and Nadel proposed "that the hippocampus is the core of a neural memory system providing an objective spatial framework within which the items and events of an organism's experience are located and interrelated." This theory has generated considerable debate and further experiment.
It has been noted that "[t]he hippocampus underpins our ability to navigate, to form and recollect memories, and to imagine future experiences. How activity across millions of hippocampal neurons supports these functions is a fundamental question in neuroscience, wherein the size, sparseness, and organization of the hippocampal neural code are debated." In a more recent study, memory champions during resting periods did not exhibit specific regional brain differences, but rather distributed functional brain network connectivity changes compared to control subjects. When volunteers trained in the use of the method of loci for six weeks, the training-induced changes in brain connectivity were similar to the brain network organization that distinguished memory champions from controls. == Fictional portrayals == Fictional portrayals of the method of loci extend as far back as ancient Greek myths. In the novels Hannibal (1999) and Hannibal Rising (2006), by Thomas Harris, a detailed description of Hannibal Lecter's memory palace is provided. We catch up to him as the swift slippers of his mind pass from the foyer into the Great Hall of the Seasons. The palace is built according to the rules discovered by Simonides of Ceos and elaborated by Cicero four hundred years later; it is airy, high-ceilinged, furnished with objects and tableaux that are vivid, striking, sometimes shocking and absurd, and often beautiful. The displays are well spaced and well lighted like those of a great museum. [...] On the floor before the painting is this tableau, life-sized in painted marble. A parade in Arlington National Cemetery led by Jesus, thirty-three, driving a '27 Model-T Ford truck, a "Tin Lizzie", with J. Edgar Hoover standing in the truck bed wearing a tutu and waving to an unseen crowd. Marching behind him is Clarice Starling carrying a .308 Enfield rifle at shoulder arms.
In the first episode of Bordertown (2016), detective Kari Sorjonen explains the memory palace concept, and, throughout the series, he marks rectangles with tape on his basement floor where he stands to imagine himself at various significant loci in a case, organized into memory palaces. The television series The Mentalist, which premiered in late 2008, mentions memory palaces on multiple occasions. The main character Patrick Jane claims to use a memory palace to memorise cards and gamble successfully. In the eleventh episode of season two, Jane teaches his colleague Wayne Rigsby how to construct a memory palace, explaining that they are good for memorising large chunks of information at a time. In "The Reunion Job", Episode 2 of Season 3 of the television show Leverage, the criminal team must "hack" the Roman Room of a tech giant, as he's created a memory palace out of his senior year in high school to remember his passwords. In the 2003 film Dreamcatcher, the character Jonesy has an elaborate memory palace which plays a major role in the plot and is shown several times in the film, depicted as a physical building that Jonesy walks through as a way to represent him accessing the memories. In the BBC television series Sherlock, which premiered in 2010, the title character uses mind palaces to remember various things throughout the show. In Hilary Mantel's 2009 novel Wolf Hall, the fictionalized version of Thomas Cromwell describes "memory palace" techniques and his use of them. In the 2017 medical drama The Good Doctor, series protagonist Shaun Murphy uses the Method of Loci to figure out various medical diagnoses. In the 2020 video game The Sinking City, the main character Charles Reed is a detective who keeps points of interest in a mind palace menu. In the 2020 video game Twin Mirror, the main character Sam Higgs uses the mind palace at various points in the game to relive memories and investigate.
In the 2023 video game Alan Wake II, FBI Agent Saga Anderson uses an adapted version, which she calls the "Mind Place", throughout the story to review cases and associated evidence. == See also == Catherine of Siena's "inner cell" Mental image Spatial memory == Citations == == General and cited references == Bolzoni, Lina (2001). The Gallery of Memory. University of Toronto Press. ISBN 978-0802043306. Bolzoni, Lina (2004). The Web of Images. Ashgate Publishers. ISBN 978-0754605515. Brown, Derren (2007). Tricks of the Mind. London: Transworld Publishers. Carruthers, Mary; Ziolkowski, Jan (2002). The Medieval Craft of Memory: An anthology of texts and pictures. University of Pennsylvania Press. ISBN 978-0812218817. Carruthers, Mary (1990). The Book of Memory. Cambridge University Press. ISBN 978-0521716314. Carruthers, Mary (1998). The Craft of Thought. Cambridge University Press. ISBN 978-0521795418. Dann, Jack (1995). The Memory Cathedral: A Secret History of Leonardo da Vinci. Bantam Books. ISBN 0553378570. Dresler, Martin, et al. (2017). "Mnemonic Training Reshapes Brain Networks to Support Superior Memory". Neuron, 8 March 2017. Dudai, Yadin (2002). Memory from A to Z. Oxford University Press. ISBN 978-0198520870. Foer, Joshua (2011). Moonwalking with Einstein: The Art and Science of Remembering Everything. New York: Penguin Press. ISBN 978-1-59420-229-2. Lyndon, Donlyn; Moore, Charles W. (1994). Chambers for a Memory Palace. Cambridge, Massachusetts: The MIT Press. ISBN 9780262621052. Rossi, Paolo (2000). Logic and the Art of Memory. University of Chicago Press. ISBN 978-0226728261. Small, Jocelyn P. (1997). Wax Tablets of the Mind. London: Routledge. ISBN 978-0415149839. Spence, Jonathan D. (1984). The Memory Palace of Matteo Ricci. New York: Viking Penguin. ISBN 978-0-14-008098-8. Yates, Frances A. (1966). The Art of Memory. Chicago: University of Chicago Press. ISBN 978-0226950013.
In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Informally, Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set, so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members, where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes. There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets a and b there is a new set {a, b} containing exactly a and b.
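Because ZFC's universe contains only pure sets, hereditarily finite sets can be modelled in code with nested frozensets: every element is itself a set, so there are no urelements. This is an illustrative sketch, with invented function and variable names, showing the pairing operation mentioned above:

```python
def pair(a: frozenset, b: frozenset) -> frozenset:
    """Axiom of pairing, concretely: from sets a and b, form {a, b}."""
    return frozenset({a, b})

empty = frozenset()            # the empty set, a pure set
one   = pair(empty, empty)     # {∅}: pairing with a = b gives a singleton
two   = pair(empty, one)       # {∅, {∅}}

print(one == frozenset({empty}))       # True
print(two == frozenset({empty, one}))  # True
```

Python's frozensets are hashable, so they can be nested arbitrarily deep, which is what makes them a workable stand-in for hereditarily finite pure sets (ordinary mutable sets cannot be elements of other sets).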
Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe (also known as the cumulative hierarchy). The metamathematics of Zermelo–Fraenkel set theory has been extensively studied. Landmark results in this area established the logical independence of the axiom of choice from the remaining Zermelo–Fraenkel axioms and of the continuum hypothesis from ZFC. The consistency of a theory such as ZFC cannot be proved within the theory itself, as shown by Gödel's second incompleteness theorem. == History == The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. However, the discovery of paradoxes in naive set theory, such as Russell's paradox, led to the desire for a more rigorous form of set theory that was free of these paradoxes. In 1908, Ernst Zermelo proposed the first axiomatic set theory, Zermelo set theory. However, as first pointed out by Abraham Fraenkel in a 1921 letter to Zermelo, this theory was incapable of proving the existence of certain sets and cardinal numbers whose existence was taken for granted by most set theorists of the time, notably the cardinal number aleph-omega (ℵ_ω) and the set {Z_0, P(Z_0), P(P(Z_0)), P(P(P(Z_0))), ...}, where Z_0 is any infinite set and P is the power set operation. Moreover, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear.
In 1922, Fraenkel and Thoralf Skolem independently proposed operationalizing a "definite" property as one that could be formulated as a well-formed formula in a first-order logic whose atomic formulas were limited to set membership and identity. They also independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity (first proposed by John von Neumann), to Zermelo set theory yields the theory denoted by ZF. Adding to ZF either the axiom of choice (AC) or a statement that is equivalent to it yields ZFC. == Formal language == Formally, ZFC is a one-sorted theory in first-order logic. The equality symbol can be treated as either a primitive logical symbol or a high-level abbreviation for having exactly the same elements. The former approach is the most common. The signature has a single predicate symbol, usually denoted ∈, which is a predicate symbol of arity 2 (a binary relation symbol). This symbol symbolizes the set membership relation. For example, the formula a ∈ b means that a is an element of the set b (also read as "a is a member of b"). There are different ways to formulate the formal language. Some authors may choose a different set of connectives or quantifiers. For example, the logical connective NAND alone can encode the other connectives, a property known as functional completeness. This section attempts to strike a balance between simplicity and intuitiveness.
The language's alphabet consists of: a countably infinite supply of variables used for representing sets; the logical connectives ¬, ∧, ∨; the quantifier symbols ∀ and ∃; the equality symbol =; the set membership symbol ∈; and brackets ( ). With this alphabet, the recursive rules for forming well-formed formulae (wff) are as follows. Let x and y be metavariables for any variables. These are the two ways to build atomic formulae (the simplest wffs): x = y and x ∈ y. Let ϕ and ψ be metavariables for any wff, and x be a metavariable for any variable. These are the valid wff constructions: ¬ϕ, (ϕ ∧ ψ), (ϕ ∨ ψ), ∀x ϕ, and ∃x ϕ. A well-formed formula can be thought of as a syntax tree. The leaf nodes are always atomic formulae. Nodes ∧ and ∨ have exactly two child nodes, while nodes ¬, ∀x and ∃x have exactly one. There are countably infinitely many wffs; however, each wff has a finite number of nodes. == Axioms == There are many equivalent formulations of the ZFC axioms. The following particular axiom set is from Kunen (1980). The axioms in order below are expressed in a mixture of first-order logic and high-level abbreviations. Axioms 1–8 form ZF, while axiom 9 turns ZF into ZFC. Following Kunen (1980), we use the equivalent well-ordering theorem in place of the axiom of choice for axiom 9. All formulations of ZFC imply that at least one set exists.
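The recursive wff formation rules just described can be made concrete with a small syntax checker. The tuple encoding below (strings for variables, tuples tagged 'in', '=', 'not', 'and', 'or', 'forall', 'exists' for nodes) is an arbitrary choice for illustration, not part of the formal theory:

```python
def is_wff(f) -> bool:
    """Check that f is built only by the wff formation rules:
    atomic: ('=', x, y) or ('in', x, y) with variable names x, y;
    compound: ('not', phi), ('and', phi, psi), ('or', phi, psi),
              ('forall', x, phi), ('exists', x, phi)."""
    if not isinstance(f, tuple) or not f:
        return False
    op, rest = f[0], f[1:]
    is_var = lambda v: isinstance(v, str)
    if op in ('=', 'in'):                 # atomic formulae: two variables
        return len(rest) == 2 and is_var(rest[0]) and is_var(rest[1])
    if op == 'not':                       # negation: exactly one child wff
        return len(rest) == 1 and is_wff(rest[0])
    if op in ('and', 'or'):               # binary connectives: two child wffs
        return len(rest) == 2 and is_wff(rest[0]) and is_wff(rest[1])
    if op in ('forall', 'exists'):        # quantifiers: a variable and a wff
        return len(rest) == 2 and is_var(rest[0]) and is_wff(rest[1])
    return False

# ∀x ¬(x ∈ x) is a wff; a conjunction with only one child is not.
print(is_wff(('forall', 'x', ('not', ('in', 'x', 'x')))))  # True
print(is_wff(('and', ('=', 'x', 'y'))))                    # False
```

The checker mirrors the syntax-tree picture in the text: atomic formulae are the leaves, ∧ and ∨ demand two children, and ¬, ∀x, ∃x demand one.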
Kunen includes an axiom that directly asserts the existence of a set, although he notes that he does so only "for emphasis". Its omission here can be justified in two ways. First, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty. Hence, it is a logical theorem of first-order logic that something exists – usually expressed as the assertion that something is identical to itself, ∃x (x = x). Consequently, it is a theorem of every first-order theory that something exists. However, as noted above, because in the intended semantics of ZFC there are only sets, the interpretation of this logical theorem in the context of ZFC is that some set exists. Hence, there is no need for a separate axiom asserting that a set exists. Second, however, even if ZFC is formulated in so-called free logic, in which it is not provable from logic alone that something exists, the axiom of infinity asserts that an infinite set exists. This implies that a set exists, and so, once again, it is superfluous to include an axiom asserting as much. === Axiom of extensionality === Two sets are equal (are the same set) if they have the same elements: ∀x ∀y [∀z (z ∈ x ⇔ z ∈ y) ⇒ x = y]. The converse of this axiom follows from the substitution property of equality. ZFC is constructed in first-order logic. Some formulations of first-order logic include identity; others do not. If the variety of first-order logic in which one is constructing set theory does not include equality "=", x = y may be defined as an abbreviation for the following formula: ∀z [z ∈ x ⇔ z ∈ y] ∧ ∀w [x ∈ w ⇔ y ∈ w]. In this case, the axiom of extensionality can be reformulated as ∀x ∀y [∀z (z ∈ x ⇔ z ∈ y) ⇒ ∀w (x ∈ w ⇔ y ∈ w)], which says that if x and y have the same elements, then they belong to the same sets.
=== Axiom of regularity (also called the axiom of foundation) === Every non-empty set x contains a member y such that x and y are disjoint sets, or in modern notation: ∀x (x ≠ ∅ ⇒ ∃y (y ∈ x ∧ y ∩ x = ∅)). This (along with the axioms of pairing and union) implies, for example, that no set is an element of itself and that every set has an ordinal rank. === Axiom schema of specification (or of separation, or of restricted comprehension) === Subsets are commonly constructed using set builder notation. For example, the even integers can be constructed as the subset of the integers Z satisfying the congruence modulo predicate x ≡ 0 (mod 2): {x ∈ Z : x ≡ 0 (mod 2)}. In general, the subset of a set z obeying a formula φ(x) with one free variable x may be written as: {x ∈ z : φ(x)}. The axiom schema of specification states that this subset always exists (it is an axiom schema because there is one axiom for each φ). Formally, let φ be any formula in the language of ZFC with all free variables among x, z, w_1, ..., w_n (y is not free in φ).
Then: ∀z ∀w_1 ... ∀w_n ∃y ∀x [x ∈ y ⇔ (x ∈ z ∧ φ)]. Note that the axiom schema of specification can only construct subsets and does not allow the construction of entities of the more general form {x : φ(x)}. This restriction is necessary to avoid Russell's paradox (let y = {x : x ∉ x}; then y ∈ y ⇔ y ∉ y) and its variants that accompany naive set theory with unrestricted comprehension. (Under this restriction, y only collects the sets within z that do not belong to themselves, and y ∈ z has not been established, even though y ⊆ z is the case; so y stands in a separate position from which it cannot refer to or comprehend itself. In a certain sense, this axiom schema is saying that in order to build a y on the basis of a formula φ(x), we must first restrict the sets y will consider to a set z that leaves y outside, so that y cannot refer to itself; or, in other words, sets should not refer to themselves.) In some other axiomatizations of ZF, this axiom is redundant in that it follows from the axiom schema of replacement and the axiom of the empty set. On the other hand, the axiom schema of specification can be used to prove the existence of the empty set, denoted ∅, once at least one set is known to exist. One way to do this is to use a property φ which no set has. For example, if w is any existing set, the empty set can be constructed as ∅ = {x ∈ w : x ≠ x}. Thus, the axiom of the empty set is implied by the nine axioms presented here. The axiom of extensionality implies the empty set is unique (does not depend on w).
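The way restricted comprehension defuses Russell's paradox can be illustrated over a finite universe, again using frozensets as a stand-in for hereditarily finite pure sets (`specify` is an illustrative name, not standard terminology):

```python
def specify(z: frozenset, phi) -> frozenset:
    """Axiom schema of specification, concretely: from an existing set z
    and a predicate phi, form the subset {x ∈ z : phi(x)}. Comprehension
    is always restricted to z; there is no way here to ask for 'all sets
    x with phi(x)', which is what Russell's paradox would require."""
    return frozenset(x for x in z if phi(x))

empty = frozenset()
z = frozenset({empty, frozenset({empty})})   # z = {∅, {∅}}

# The Russell predicate "x ∉ x" is harmless under restricted comprehension:
# it just selects the members of z that do not contain themselves.
russell_in_z = specify(z, lambda x: x not in x)
print(russell_in_z == z)   # True: no element of z contains itself

# Specification with an always-false predicate yields the empty set,
# mirroring the construction ∅ = {x ∈ w : x ≠ x} in the text.
print(specify(z, lambda x: x != x) == frozenset())  # True
```

The crucial point is in the signature: `specify` cannot even be called without an ambient set `z`, just as the schema cannot form {x : φ(x)} without one.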
It is common to make a definitional extension that adds the symbol " ∅ {\displaystyle \varnothing } " to the language of ZFC. === Axiom of pairing === If x {\displaystyle x} and y {\displaystyle y} are sets, then there exists a set z {\displaystyle z} which contains x {\displaystyle x} and y {\displaystyle y} as elements. For example, if x = {1,2} and y = {2,3}, then z may be {{1,2},{2,3}}. Since the axiom only guarantees that z contains x and y, the axiom schema of specification must be used to reduce it to a set with exactly these two elements. The axiom of pairing is part of Z, but is redundant in ZF because it follows from the axiom schema of replacement if we are given a set with at least two elements. The existence of a set with at least two elements is assured by either the axiom of infinity, or by the axiom schema of specification and the axiom of the power set applied twice to any set. === Axiom of union === The union over the elements of a set exists. For example, the union over the elements of the set { { 1 , 2 } , { 2 , 3 } } {\displaystyle \{\{1,2\},\{2,3\}\}} is { 1 , 2 , 3 } . {\displaystyle \{1,2,3\}.} The axiom of union states that for any set of sets F {\displaystyle {\mathcal {F}}} , there is a set A {\displaystyle A} containing every element that is a member of some member of F {\displaystyle {\mathcal {F}}} : ∀ F ∃ A ∀ Y ∀ x [ ( x ∈ Y ∧ Y ∈ F ) ⇒ x ∈ A ] . Although this formula doesn't directly assert the existence of ∪ F {\displaystyle \cup {\mathcal {F}}} , the set ∪ F {\displaystyle \cup {\mathcal {F}}} can be constructed from A {\displaystyle A} in the above using the axiom schema of specification: ∪ F = { x ∈ A : ∃ Y ( x ∈ Y ∧ Y ∈ F ) } . === Axiom schema of replacement === The axiom schema of replacement asserts that the image of a set under any definable function will also fall inside a set. Formally, let φ {\displaystyle \varphi } be any formula in the language of ZFC whose free variables are among x , y , A , w 1 , … , w n , {\displaystyle x,y,A,w_{1},\dotsc ,w_{n},} so that in particular B {\displaystyle B} is not free in φ {\displaystyle \varphi } . Then: ∀ A ∀ w 1 … ∀ w n [ ∀ x ( x ∈ A ⇒ ∃ ! y φ ) ⇒ ∃ B ∀ x ( x ∈ A ⇒ ∃ y ( y ∈ B ∧ φ ) ) ] . (The unique existential quantifier ∃ !
{\displaystyle \exists !} denotes the existence of exactly one element satisfying a given statement.) In other words, if the relation φ {\displaystyle \varphi } represents a definable function f {\displaystyle f} , A {\displaystyle A} represents its domain, and f ( x ) {\displaystyle f(x)} is a set for every x ∈ A , {\displaystyle x\in A,} then the range of f {\displaystyle f} is a subset of some set B {\displaystyle B} . The form stated here, in which B {\displaystyle B} may be larger than strictly necessary, is sometimes called the axiom schema of collection. === Axiom of infinity === Let S ( w ) {\displaystyle S(w)} abbreviate w ∪ { w } , {\displaystyle w\cup \{w\},} where w {\displaystyle w} is some set. (We can see that { w } {\displaystyle \{w\}} is a valid set by applying the axiom of pairing with x = y = w {\displaystyle x=y=w} so that the set z is { w } {\displaystyle \{w\}} ). Then there exists a set X such that the empty set ∅ {\displaystyle \varnothing } , defined axiomatically, is a member of X and, whenever a set y is a member of X, then S ( y ) {\displaystyle S(y)} is also a member of X, or in modern notation: ∃ X [ ∅ ∈ X ∧ ∀ y ( y ∈ X ⇒ S ( y ) ∈ X ) ] . {\displaystyle \exists X\left[\varnothing \in X\land \forall y(y\in X\Rightarrow S(y)\in X)\right].} More colloquially, there exists a set X having infinitely many members. (It must be established, however, that these members are all different, because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) The minimal set X satisfying the axiom of infinity is the von Neumann ordinal ω, which can also be thought of as the set of natural numbers N . 
{\displaystyle \mathbb {N} .} === Axiom of power set === By definition, a set z {\displaystyle z} is a subset of a set x {\displaystyle x} if and only if every element of z {\displaystyle z} is also an element of x {\displaystyle x} : ( z ⊆ x ) ⇔ ∀ q ( q ∈ z ⇒ q ∈ x ) . The axiom of power set states that for any set x {\displaystyle x} , there is a set y {\displaystyle y} that contains every subset of x {\displaystyle x} : ∀ x ∃ y ∀ z [ z ⊆ x ⇒ z ∈ y ] . The axiom schema of specification is then used to define the power set P ( x ) {\displaystyle {\mathcal {P}}(x)} as the subset of such a y {\displaystyle y} containing the subsets of x {\displaystyle x} exactly: P ( x ) = { z ∈ y : z ⊆ x } . Axioms 1–8 define ZF. Alternative forms of these axioms are often encountered, some of which are listed in Jech (2003). Some ZF axiomatizations include an axiom asserting that the empty set exists. The axioms of pairing, union, replacement, and power set are often stated so that the members of the set x {\displaystyle x} whose existence is being asserted are just those sets which the axiom asserts x {\displaystyle x} must contain. The following axiom is added to turn ZF into ZFC: === Axiom of well-ordering (choice) === The last axiom, commonly known as the axiom of choice, is presented here as a property about well-orders, as in Kunen (1980). For any set X {\displaystyle X} , there exists a binary relation R {\displaystyle R} which well-orders X {\displaystyle X} . This means R {\displaystyle R} is a linear order on X {\displaystyle X} such that every nonempty subset of X {\displaystyle X} has a least element under the order R {\displaystyle R} . Given axioms 1–8, many statements are provably equivalent to axiom 9. The most common of these goes as follows. Let X {\displaystyle X} be a set whose members are all nonempty. Then there exists a function f {\displaystyle f} from X {\displaystyle X} to the union of the members of X {\displaystyle X} , called a "choice function", such that for all Y ∈ X {\displaystyle Y\in X} one has f ( Y ) ∈ Y {\displaystyle f(Y)\in Y} .
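Restricted to finite sets, both the power set and a choice function can be computed outright, which is one way to see why the axiom of choice only has force for infinite families. A small Python sketch (power_set and f are our illustrative names):

```python
from itertools import chain, combinations

def power_set(x):
    """All subsets of a finite set x, as frozensets: the set P(x)."""
    s = list(x)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

x = frozenset({1, 2})
P = power_set(x)
print(len(P))  # 4 subsets: {}, {1}, {2}, {1, 2}

# A choice function on a finite family X of nonempty sets: pick one
# element from each member. For finite X this needs no special axiom.
X = {frozenset({1, 2}), frozenset({3})}
f = {Y: min(Y) for Y in X}
print(all(f[Y] in Y for Y in X))  # True
```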
A third version of the axiom, also equivalent, is Zorn's lemma. Since the existence of a choice function when X {\displaystyle X} is a finite set is easily proved from axioms 1–8, AC only matters for certain infinite sets. AC is characterized as nonconstructive because it asserts the existence of a choice function but says nothing about how this choice function is to be "constructed". == Motivation via the cumulative hierarchy == One motivation for the ZFC axioms is the cumulative hierarchy of sets introduced by John von Neumann. In this viewpoint, the universe of set theory is built up in stages, with one stage for each ordinal number. At stage 0, there are no sets yet. At each following stage, a set is added to the universe if all of its elements have been added at previous stages. Thus the empty set is added at stage 1, and the set containing the empty set is added at stage 2. The collection of all sets that are obtained in this way, over all the stages, is known as V. The sets in V can be arranged into a hierarchy by assigning to each set the first stage at which that set was added to V. It is provable that a set is in V if and only if the set is pure and well-founded. And V satisfies all the axioms of ZFC if the class of ordinals has appropriate reflection properties. For example, suppose that a set x is added at stage α, which means that every element of x was added at a stage earlier than α. Then, every subset of x is also added at (or before) stage α, because all elements of any subset of x were also added before stage α. This means that any subset of x which the axiom of separation can construct is added at (or before) stage α, and that the powerset of x will be added at the next stage after α. The picture of the universe of sets stratified into the cumulative hierarchy is characteristic of ZFC and related axiomatic set theories such as Von Neumann–Bernays–Gödel set theory (often called NBG) and Morse–Kelley set theory. 
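The finite stages of the cumulative hierarchy can be computed directly, since each stage V_{n+1} consists of all subsets of the previous stage V_n. A short Python sketch (the helper names are ours):

```python
from itertools import chain, combinations

def subsets(s):
    # All subsets of the finite set s, as frozensets.
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

# V_0 is empty; V_{n+1} = P(V_n). A set enters the hierarchy once
# all of its elements have appeared at earlier stages.
V = [frozenset()]
for _ in range(3):
    V.append(frozenset(subsets(V[-1])))

print([len(stage) for stage in V])  # [0, 1, 2, 4]
```

The empty set appears at stage 1 and the set containing the empty set at stage 2, matching the description above; the sizes grow as iterated power sets.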
The cumulative hierarchy is not compatible with other set theories such as New Foundations. It is possible to change the definition of V so that at each stage, instead of adding all the subsets of the union of the previous stages, subsets are only added if they are definable in a certain sense. This results in a more "narrow" hierarchy, which gives the constructible universe L, which also satisfies all the axioms of ZFC, including the axiom of choice. It is independent from the ZFC axioms whether V = L. Although the structure of L is more regular and well behaved than that of V, few mathematicians argue that V = L should be added to ZFC as an additional "axiom of constructibility". == Metamathematics == === Virtual classes === Proper classes (collections of mathematical objects defined by a property shared by their members which are too big to be sets) can only be treated indirectly in ZF (and thus ZFC). An alternative to proper classes while staying within ZF and ZFC is the virtual class notational construct introduced by Quine (1969), where the entire construct y ∈ { x | Fx } is simply defined as Fy. This provides a simple notation for classes that can contain sets but need not themselves be sets, while not committing to the ontology of classes (because the notation can be syntactically converted to one that only uses sets). Quine's approach built on the earlier approach of Bernays & Fraenkel (1958). Virtual classes are also used in Levy (2002), Takeuti & Zaring (1982), and in the Metamath implementation of ZFC. === Finite axiomatization === The axiom schemata of replacement and separation each contain infinitely many instances. Montague (1961) included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. 
The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other. === Consistency === Gödel's second incompleteness theorem says that a recursively axiomatizable system that can interpret Robinson arithmetic can prove its own consistency only if it is inconsistent. Moreover, Robinson arithmetic can be interpreted in general set theory, a small fragment of ZFC. Hence the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). Thus, to the extent that ZFC is identified with ordinary mathematics, the consistency of ZFC cannot be demonstrated in ordinary mathematics. The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal, which is unprovable in ZFC if ZFC is consistent. Nevertheless, it is deemed unlikely that ZFC harbors an unsuspected contradiction; it is widely believed that if ZFC were inconsistent, that fact would have been uncovered by now. This much is certain – ZFC is immune to the classic paradoxes of naive set theory: Russell's paradox, the Burali-Forti paradox, and Cantor's paradox. Abian & LaMacchia (1978) studied a subtheory of ZFC consisting of the axioms of extensionality, union, powerset, replacement, and choice. Using models, they proved this subtheory consistent, and proved that each of the axioms of extensionality, replacement, and power set is independent of the four remaining axioms of this subtheory. If this subtheory is augmented with the axiom of infinity, each of the axioms of union, choice, and infinity is independent of the five remaining axioms. Because there are non-well-founded models that satisfy each axiom of ZFC except the axiom of regularity, that axiom is independent of the other ZFC axioms. 
If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski's axiom. Assuming that axiom turns the axioms of infinity, power set, and choice (7–9 above) into theorems. === Independence === Many important statements are independent of ZFC. The independence is usually proved by forcing, whereby it is shown that every countable transitive model of ZFC (sometimes augmented with large cardinal axioms) can be expanded to satisfy the statement in question. A different expansion is then shown to satisfy the negation of the statement. An independence proof by forcing automatically proves independence from arithmetical statements, other concrete statements, and large cardinal axioms. Some statements independent of ZFC can be proven to hold in particular inner models, such as in the constructible universe. However, some statements that are true about constructible sets are not consistent with hypothesized large cardinal axioms. Forcing proves that the following statements are independent of ZFC: the axiom of constructibility (V=L) (which is not itself a ZFC axiom), the continuum hypothesis, the diamond principle, Martin's axiom (also not a ZFC axiom), and the Suslin hypothesis. Remarks: The consistency of V=L is provable by inner models but not forcing: every model of ZF can be trimmed to become a model of ZFC + V=L. The diamond principle implies the continuum hypothesis and the negation of the Suslin hypothesis. Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis. The constructible universe satisfies the generalized continuum hypothesis, the diamond principle, Martin's axiom and the Kurepa hypothesis. The failure of the Kurepa hypothesis is equiconsistent with the existence of a strongly inaccessible cardinal.
A variation on the method of forcing can also be used to demonstrate the consistency and unprovability of the axiom of choice, i.e., that the axiom of choice is independent of ZF. The consistency of choice can be (relatively) easily verified by proving that the inner model L satisfies choice. (Thus every model of ZF contains a submodel of ZFC, so that Con(ZF) implies Con(ZFC).) Since forcing preserves choice, we cannot directly produce a model contradicting choice from a model satisfying choice. However, we can use forcing to create a model which contains a suitable submodel, namely one satisfying ZF but not C. Another method of proving independence results, one owing nothing to forcing, is based on Gödel's second incompleteness theorem. This approach uses the statement whose independence is being examined to prove the existence of a set model of ZFC, in which case Con(ZFC) is true. Since ZFC satisfies the conditions of Gödel's second theorem, the consistency of ZFC is unprovable in ZFC (provided that ZFC is, in fact, consistent). Hence no statement allowing such a proof can be proved in ZFC. This method can prove that the existence of large cardinals is not provable in ZFC, but cannot prove that assuming such cardinals, given ZFC, is free of contradiction. === Proposed additions === The project to unify set theorists behind additional axioms to resolve the continuum hypothesis or other meta-mathematical ambiguities is sometimes known as "Gödel's program". Mathematicians currently debate which axioms are the most plausible or "self-evident", which axioms are the most useful in various domains, and to what degree usefulness should be traded off with plausibility; some "multiverse" set theorists argue that usefulness should be the sole criterion for which axioms to adopt.
One school of thought leans on expanding the "iterative" concept of a set to produce a set-theoretic universe with an interesting and complex but reasonably tractable structure by adopting forcing axioms; another school advocates for a tidier, less cluttered universe, perhaps focused on a "core" inner model. == Criticisms == ZFC has been criticized both for being excessively strong and for being excessively weak, as well as for its failure to capture objects such as proper classes and the universal set. Many mathematical theorems can be proven in much weaker systems than ZFC, such as Peano arithmetic and second-order arithmetic (as explored by the program of reverse mathematics). Saunders Mac Lane and Solomon Feferman have both made this point. Some of "mainstream mathematics" (mathematics not directly connected with axiomatic set theory) is beyond Peano arithmetic and second-order arithmetic, but still, all such mathematics can be carried out in ZC (Zermelo set theory with choice), another theory weaker than ZFC. Much of the power of ZFC, including the axiom of regularity and the axiom schema of replacement, is included primarily to facilitate the study of the set theory itself. On the other hand, among axiomatic set theories, ZFC is comparatively weak. Unlike New Foundations, ZFC does not admit the existence of a universal set. Hence the universe of sets under ZFC is not closed under the elementary operations of the algebra of sets. Unlike von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory (MK), ZFC does not admit the existence of proper classes. A further comparative weakness of ZFC is that the axiom of choice included in ZFC is weaker than the axiom of global choice included in NBG and MK. There are numerous mathematical statements independent of ZFC. These include the continuum hypothesis, the Whitehead problem, and the normal Moore space conjecture. 
Some of these conjectures are provable with the addition of axioms such as Martin's axiom or large cardinal axioms to ZFC. Some others are decided in ZF+AD where AD is the axiom of determinacy, a strong supposition incompatible with choice. One attraction of large cardinal axioms is that they enable many results from ZF+AD to be established in ZFC adjoined by some large cardinal axiom. The Mizar system and metamath have adopted Tarski–Grothendieck set theory, an extension of ZFC, so that proofs involving Grothendieck universes (encountered in category theory and algebraic geometry) can be formalized. == See also == Foundations of mathematics Inner model Large cardinal axiom Related axiomatic set theories: Morse–Kelley set theory Von Neumann–Bernays–Gödel set theory Tarski–Grothendieck set theory Constructive set theory Internal set theory == Notes == == Bibliography == == External links == Axioms of set Theory - Lec 02 - Frederic Schuller on YouTube "ZFC", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Stanford Encyclopedia of Philosophy articles by Joan Bagaria: Bagaria, Joan (31 January 2023). "Set Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. — (31 January 2023). "Axioms of Zermelo–Fraenkel Set Theory". In — (ed.). Stanford Encyclopedia of Philosophy. Metamath version of the ZFC axioms — A concise and nonredundant axiomatization. The background first order logic is defined especially to facilitate machine verification of proofs. A derivation in Metamath of a version of the separation schema from a version of the replacement schema. Weisstein, Eric W. "Zermelo-Fraenkel Set Theory". MathWorld.
Wikipedia/Zermelo-Fraenkel_set_theory
Stephen Edelston Toulmin (; 25 March 1922 – 4 December 2009) was a British philosopher, author, and educator. Influenced by Ludwig Wittgenstein, Toulmin devoted his works to the analysis of moral reasoning. Throughout his writings, he sought to develop practical arguments which can be used effectively in evaluating the ethics behind moral issues. His works were later found useful in the field of rhetoric for analyzing rhetorical arguments. The Toulmin model of argumentation, a diagram containing six interrelated components used for analyzing arguments, and published in his 1958 book The Uses of Argument, was considered his most influential work, particularly in the field of rhetoric and communication, and in computer science. == Biography == Stephen Toulmin was born in London, UK, on 25 March 1922 to Geoffrey Edelson Toulmin and Doris Holman Toulmin. He earned his Bachelor of Arts degree from King's College, Cambridge, in 1943, where he was a Cambridge Apostle. Soon after, Toulmin was hired by the Ministry of Aircraft Production as a junior scientific officer, first at the Malvern Radar Research and Development Station and later at the Supreme Headquarters of the Allied Expeditionary Force in Germany. At the end of World War II, he returned to England to earn a Master of Arts degree in 1947 and a PhD in philosophy from Cambridge University, subsequently publishing his dissertation as An Examination of the Place of Reason in Ethics (1950). While at Cambridge, Toulmin came into contact with the Austrian philosopher Ludwig Wittgenstein, whose examination of the relationship between the uses and the meanings of language shaped much of Toulmin's own work. After graduating from Cambridge, he was appointed University Lecturer in Philosophy of Science at Oxford University from 1949 to 1954, during which period he wrote a second book, The Philosophy of Science: an Introduction (1953). 
Soon after, he was appointed to the position of Visiting Professor of History and Philosophy of Science at Melbourne University in Australia from 1954 to 1955, after which he returned to England, and served as Professor and Head of the Department of Philosophy at the University of Leeds from 1955 to 1959. While at Leeds, he published one of his most influential books in the field of rhetoric, The Uses of Argument (1958), which investigated the flaws of traditional logic. Although it was poorly received in England and satirized as "Toulmin's anti-logic book" by Toulmin's fellow philosophers at Leeds, the book was applauded by the rhetoricians in the United States, where Toulmin served as a visiting professor at New York, Stanford, and Columbia Universities in 1959. While in the States, Wayne Brockriede and Douglas Ehninger introduced Toulmin's work to communication scholars, as they recognized that his work provided a good structural model useful for the analysis and criticism of rhetorical arguments. In 1960, Toulmin returned to London to hold the position of director of the Unit for History of Ideas of the Nuffield Foundation. In 1965, Toulmin returned to the United States, where he held positions at various universities. In 1967, Toulmin served as literary executor for close friend N.R. Hanson, helping in the posthumous publication of several volumes. While at the University of California, Santa Cruz, Toulmin published Human Understanding: The Collective Use and Evolution of Concepts (1972), which examines the causes and the processes of conceptual change. In this book, Toulmin uses a novel comparison between conceptual change and Charles Darwin's model of biological evolution to analyse the process of conceptual change as an evolutionary process. The book confronts major philosophical questions as well. 
In 1973, while a professor in the Committee on Social Thought at the University of Chicago, he collaborated with Allan Janik, a philosophy professor at La Salle University, on the book Wittgenstein's Vienna, which advanced a thesis that underscores the significance of history to human reasoning: Contrary to philosophers who believe in the absolute truth advocated in Plato's idealized formal logic, Toulmin argues that truth can be a relative quality, dependent on historical and cultural contexts (what other authors have termed "conceptual schemata"). From 1975 to 1978, he worked with the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, established by the United States Congress. During this time, he collaborated with Albert R. Jonsen to write The Abuse of Casuistry: A History of Moral Reasoning (1988), which demonstrates the procedures for resolving moral cases. One of his later works, Cosmopolis: The Hidden Agenda of Modernity (1990), written while Toulmin held the position of the Avalon Foundation Professor of the Humanities at Northwestern University, specifically criticizes the practical use and the thinning morality underlying modern science. Toulmin held distinguished professorships at a number of different universities, including Columbia, Dartmouth College, Michigan State, Northwestern, Stanford, the University of Chicago, and the University of Southern California School of International Relations. In 1997 the National Endowment for the Humanities (NEH) selected Toulmin for the Jefferson Lecture, the U.S. federal government's highest honor for achievement in the humanities. His lecture, "A Dissenter's Story" (alternatively entitled "A Dissenter's Life"), discussed the roots of modernity in rationalism and humanism, the "contrast of the reasonable and the rational", and warned of the "abstractions that may still tempt us back into the dogmatism, chauvinism and sectarianism our needs have outgrown".
The NEH report of the speech further quoted Toulmin on the need to "make the technical and the humanistic strands in modern thought work together more effectively than they have in the past". On 2 March 2006 Toulmin received the Austrian Decoration for Science and Art. He was married four times, once to June Goodfield, with whom he collaborated on a series of books on the history of science. His children are Greg, of McLean, Va., Polly Macinnes of Skye, Scotland, Camilla Toulmin in the UK and Matthew Toulmin of Melbourne, Australia. On 4 December 2009 Toulmin died of heart failure at the age of 87 in Los Angeles, California. == Meta-philosophy == === Objection to absolutism and relativism === Throughout many of his works, Toulmin pointed out that absolutism (represented by theoretical or analytic arguments) has limited practical value. Absolutism is derived from Plato's idealized formal logic, which advocates universal truth; accordingly, absolutists believe that moral issues can be resolved by adhering to a standard set of moral principles, regardless of context. By contrast, Toulmin contends that many of these so-called standard principles are irrelevant to real situations encountered by human beings in daily life. To develop his contention, Toulmin introduced the concept of argument fields. In The Uses of Argument (1958), Toulmin claims that some aspects of arguments vary from field to field, and are hence called "field-dependent", while other aspects of argument are the same throughout all fields, and are hence called "field-invariant". The flaw of absolutism, Toulmin believes, lies in its unawareness of the field-dependent aspect of argument; absolutism assumes that all aspects of argument are field invariant. In Human Understanding (1972), Toulmin suggests that anthropologists have been tempted to side with relativists because they have noticed the influence of cultural variations on rational arguments.
In other words, the anthropologist or relativist overemphasizes the importance of the "field-dependent" aspect of arguments, and neglects or is unaware of the "field-invariant" elements. In order to provide solutions to the problems of absolutism and relativism, Toulmin attempts throughout his work to develop standards that are neither absolutist nor relativist for assessing the worth of ideas. In Cosmopolis (1990), he traces philosophers' "quest for certainty" back to René Descartes and Thomas Hobbes, and lauds John Dewey, Wittgenstein, Martin Heidegger, and Richard Rorty for abandoning that tradition. === Humanizing modernity === In Cosmopolis Toulmin seeks the origins of the modern emphasis on universality (philosophers' "quest for certainty"), and criticizes both modern science and philosophers for having ignored practical issues in preference for abstract and theoretical issues. The pursuit of absolutism and theoretical arguments lacking practicality, for example, is, in his view, one of the main defects of modern philosophy. Similarly, Toulmin sensed a thinning of morality in the field of sciences, which has diverted its attention from practical issues concerning ecology to the production of the atomic bomb. To solve this problem, Toulmin advocated a return to humanism consisting of four returns: a return to oral communication and discourse, a plea which has been rejected by modern philosophers, whose scholarly focus is on the printed page; a return to the particular or individual cases that deal with practical moral issues occurring in daily life (as opposed to theoretical principles that have limited practicality); a return to the local, or to concrete cultural and historical contexts; and, finally, a return to the timely, from timeless problems to things whose rational significance depends on the timeliness of our solutions.
He follows up on this critique in Return to Reason (2001), where he seeks to illuminate the ills that, in his view, universalism has caused in the social sphere, discussing, among other things, the discrepancy between mainstream ethical theory and real-life ethical quandaries. == Argumentation == === Toulmin model of argument === Arguing that absolutism lacks practical value, Toulmin aimed to develop a different type of argument, called practical arguments (also known as substantial arguments). In contrast to absolutists' theoretical arguments, Toulmin's practical argument is intended to focus on the justificatory function of argumentation, as opposed to the inferential function of theoretical arguments. Whereas theoretical arguments make inferences based on a set of principles to arrive at a claim, practical arguments first find a claim of interest, and then provide justification for it. Toulmin believed that reasoning is less an activity of inference, involving the discovering of new ideas, and more a process of testing and sifting already existing ideas—an act achievable through the process of justification. Toulmin believed that for a good argument to succeed, it needs to provide good justification for a claim. This, he believed, will ensure it stands up to criticism and earns a favourable verdict. In The Uses of Argument (1958), Toulmin proposed a layout containing six interrelated components for analyzing arguments: Claim (Conclusion) A conclusion whose merit must be established. In argumentative essays, it may be called the thesis. For example, if a person tries to convince a listener that he is a British citizen, the claim would be "I am a British citizen" (1). Ground (Fact, Evidence, Data) A fact one appeals to as a foundation for the claim. For example, the person introduced in 1 can support his claim with the supporting data "I was born in Bermuda" (2). Warrant A statement authorizing movement from the ground to the claim. 
In order to move from the ground established in 2, "I was born in Bermuda", to the claim in 1, "I am a British citizen", the person must supply a warrant to bridge the gap between 1 and 2 with the statement "A man born in Bermuda will legally be a British citizen" (3). Backing Credentials designed to certify the statement expressed in the warrant; backing must be introduced when the warrant itself is not convincing enough to the readers or the listeners. For example, if the listener does not deem the warrant in 3 as credible, the speaker will supply the legal provisions: "I trained as a barrister in London, specialising in citizenship, so I know that a man born in Bermuda will legally be a British citizen". Rebuttal (Reservation) Statements recognizing the restrictions which may legitimately be applied to the claim. It is exemplified as follows: "A man born in Bermuda will legally be a British citizen, unless he has betrayed Britain and has become a spy for another country". Qualifier Words or phrases expressing the speaker's degree of force or certainty concerning the claim. Such words or phrases include "probably", "possible", "impossible", "certainly", "presumably", "as far as the evidence goes", and "necessarily". The claim "I am definitely a British citizen" has a greater degree of force than the claim "I am a British citizen, presumably". (See also: Defeasible reasoning.) The first three elements, claim, ground, and warrant, are considered as the essential components of practical arguments, while the second triad, qualifier, backing, and rebuttal, may not be needed in some arguments. When Toulmin first proposed it, this layout of argumentation was based on legal arguments and intended to be used to analyze the rationality of arguments typically found in the courtroom. Toulmin did not realize that this layout could be applicable to the field of rhetoric and communication until his works were introduced to rhetoricians by Wayne Brockriede and Douglas Ehninger. 
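The six-part layout can be read as a simple data structure, with the second triad optional. The following Python sketch (the class and field names are our illustration, not Toulmin's own notation) encodes the Bermuda example from above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """Toulmin's six components; backing, qualifier, and rebuttal
    are optional, since many arguments need only the first triad."""
    claim: str
    ground: str
    warrant: str
    backing: Optional[str] = None
    qualifier: Optional[str] = None
    rebuttal: Optional[str] = None

harry = ToulminArgument(
    claim="Harry is a British citizen",
    ground="Harry was born in Bermuda",
    warrant="A man born in Bermuda will legally be a British citizen",
    backing="Legal provisions governing British citizenship",
    qualifier="presumably",
    rebuttal="unless he has betrayed Britain and become a spy",
)
print(harry.qualifier)  # presumably
```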
Their Decision by Debate (1963) streamlined Toulmin's terminology and broadly introduced his model to the field of debate. Only after Toulmin published Introduction to Reasoning (1979) were the rhetorical applications of this layout mentioned in his works. One criticism of the Toulmin model is that it does not fully consider the use of questions in argumentation. The Toulmin model assumes that an argument starts with a fact or claim and ends with a conclusion, but ignores an argument's underlying questions. In the example "Harry was born in Bermuda, so Harry must be a British subject", the question "Is Harry a British subject?" is ignored, which also neglects to analyze why particular questions are asked and others are not. (See Issue mapping for an example of an argument-mapping method that emphasizes questions.) Toulmin's argument model has inspired research on, for example, goal structuring notation (GSN), widely used for developing safety cases, and argument maps and associated software. == Ethics == === Good reasons approach === In Reason in Ethics (1950), his doctoral dissertation, Toulmin sets out a Good Reasons approach of ethics, and criticizes what he considers to be the subjectivism and emotivism of philosophers such as A. J. Ayer because, in his view, they fail to do justice to ethical reasoning. === Revival of casuistry === By reviving casuistry (also known as case ethics), Toulmin sought to find the middle ground between the extremes of absolutism and relativism. Casuistry was practiced widely during the Middle Ages and the Renaissance to resolve moral issues. Although casuistry largely fell silent during the modern period, in The Abuse of Casuistry: A History of Moral Reasoning (1988), Toulmin collaborated with Albert R. Jonsen to demonstrate the effectiveness of casuistry in practical argumentation during the Middle Ages and the Renaissance, effectively reviving it as a permissible method of argument. 
Casuistry employs absolutist principles, called "type cases" or "paradigm cases", without resorting to absolutism. It uses the standard principles (for example, sanctity of life) as referential markers in moral arguments. An individual case is then compared and contrasted with the type case. Given an individual case that is completely identical to the type case, moral judgments can be made immediately using the standard moral principles advocated in the type case. If the individual case differs from the type case, the differences will be critically assessed in order to arrive at a rational claim. Through the procedure of casuistry, Toulmin and Jonsen identified three problematic situations in moral reasoning: first, the type case fits the individual case only ambiguously; second, two type cases apply to the same individual case in conflicting ways; third, an unprecedented individual case occurs, which cannot be compared or contrasted to any type case. Through the use of casuistry, Toulmin demonstrated and reinforced his previous emphasis on the significance of comparison to moral arguments, a significance not addressed in theories of absolutism or relativism. == Philosophy of science == === Evolutionary model === In 1972, Toulmin published Human Understanding, in which he asserts that conceptual change is an evolutionary process. In this book, Toulmin attacks Thomas Kuhn's account of conceptual change in his seminal work The Structure of Scientific Revolutions (1962). Kuhn believed that conceptual change is a revolutionary process (as opposed to an evolutionary process), during which mutually exclusive paradigms compete to replace one another. Toulmin criticized the relativist elements in Kuhn's thesis, arguing that mutually exclusive paradigms provide no ground for comparison, and that Kuhn made the relativists' error of overemphasizing the "field variant" while ignoring the "field invariant" or commonality shared by all argumentation or scientific paradigms. 
In contrast to Kuhn's revolutionary model, Toulmin proposed an evolutionary model of conceptual change comparable to Darwin's model of biological evolution. Toulmin states that conceptual change involves the process of innovation and selection. Innovation accounts for the appearance of conceptual variations, while selection accounts for the survival and perpetuation of the soundest conceptions. Innovation occurs when the professionals of a particular discipline come to view things differently from their predecessors; selection subjects the innovative concepts to a process of debate and inquiry in what Toulmin considers a "forum of competition". The soundest concepts will survive this forum of competition as replacements or revisions of the traditional conceptions. From the absolutists' point of view, concepts are either valid or invalid regardless of context. From the relativists' perspective, one concept is neither better nor worse than a rival concept from a different cultural context. From Toulmin's perspective, the evaluation depends on a process of comparison, which determines whether one concept provides greater explanatory power than its rival concepts. == Pantheon of skeptics == At a meeting of the executive council of the Committee for Skeptical Inquiry (CSI) in Denver, Colorado in April 2011, Toulmin was selected for inclusion in CSI's Pantheon of Skeptics. The Pantheon of Skeptics was created by CSI to remember the legacy of deceased fellows of CSI and their contributions to the cause of scientific skepticism. == Works == An Examination of the Place of Reason in Ethics (1950) ISBN 0-226-80843-2 The Philosophy of Science: An Introduction (1953) The Uses of Argument (1958) 2nd edition 2003: ISBN 0-521-53483-6 Metaphysical Beliefs, Three Essays (1957) with Ronald W.
Hepburn and Alasdair MacIntyre The Riviera (1961) Seventeenth century science and the arts (1961) Foresight and Understanding: An Enquiry into the Aims of Science (1961) ISBN 0-313-23345-4 The Fabric of the Heavens (The Ancestry of Science, volume 1) (1961) with June Goodfield ISBN 0-226-80848-3 The Architecture of Matter (The Ancestry of Science, volume 2) (1962) with June Goodfield ISBN 0-226-80840-8 Night Sky at Rhodes (1963) The Discovery of Time (The Ancestry of Science, volume 3) (1965) with June Goodfield ISBN 0-226-80842-4 Physical Reality (1970) Human Understanding: The Collective Use and Evolution of Concepts (1972) ISBN 0-691-01996-7 Wittgenstein's Vienna (1973) with Allan Janik On the Nature of the Physician's Understanding (1976) Knowing and Acting: An Invitation to Philosophy (1976) ISBN 0-02-421020-X An Introduction to Reasoning with Allan Janik and Richard D. Rieke (1979), 2nd ed. 1984; 3rd edition 1997: ISBN 0-02-421160-5 The Return to Cosmology: Postmodern Science and the Theology of Nature (1985) ISBN 0-520-05465-2 The Abuse of Casuistry: A History of Moral Reasoning (1988) with Albert R. Jonsen ISBN 0-520-06960-9 Cosmopolis: The Hidden Agenda of Modernity (1990) ISBN 0-226-80838-6 Social Impact of AIDS in the United States (1993) with Albert R. Jonsen Beyond theory – changing organizations through participation (1996) with Björn Gustavsen (editors) Return to Reason (2001) ISBN 0-674-01235-6 == See also == Argumentation theory Cambridge University Moral Sciences Club == References == == Further reading == Hitchcock, David; Verheij, Bart, eds. (2006). Arguing on the Toulmin Model: New Essays in Argument Analysis and Evaluation. Springer-Verlag Netherlands. doi:10.1007/978-1-4020-4938-5_3. ISBN 978-1-4020-4937-8. OCLC 82229075. == External links == Stephen Toulmin: An Intellectual Odyssey at the Wayback Machine (archived 15 February 2006) Stephen Toulmin Interview with Stephen Toulmin in JAC Obituary in The Guardian
The sociology of scientific knowledge (SSK) is the study of science as a social activity, especially dealing with "the social conditions and effects of science, and with the social structures and processes of scientific activity." The sociology of scientific ignorance (SSI) is complementary to the sociology of scientific knowledge. For comparison, the sociology of knowledge studies the impact of human knowledge and the prevailing ideas on societies and relations between knowledge and the social context within which it arises. Sociologists of scientific knowledge study the development of a scientific field and attempt to identify points of contingency or interpretative flexibility where ambiguities are present. Such variations may be linked to a variety of political, historical, cultural or economic factors. Crucially, the field does not set out to promote relativism or to attack the scientific project; the objective of the researcher is to explain why one interpretation rather than another succeeds due to external social and historical circumstances. The field emerged in the late 1960s and early 1970s and at first was an almost exclusively British practice. Other early centers for the development of the field were in France, Germany, and the United States (notably at Cornell University). Major theorists include Barry Barnes, David Bloor, Sal Restivo, Randall Collins, Gaston Bachelard, Harry Collins, Karin Knorr Cetina, Paul Feyerabend, Steve Fuller, Martin Kusch, Bruno Latour, Mike Mulkay, Derek J. de Solla Price, Lucy Suchman and Anselm Strauss. == Programmes and schools == The sociology of scientific knowledge in its Anglophone versions emerged in the 1970s in self-conscious opposition to the sociology of science associated with the American Robert K. Merton, generally considered one of the seminal authors in the sociology of science. 
Merton's was a kind of "sociology of scientists," which left the cognitive content of science out of sociological account; SSK by contrast aimed at providing sociological explanations of scientific ideas themselves, taking its lead from aspects of the work of Ludwik Fleck, Thomas S. Kuhn, but especially from established traditions in cultural anthropology (Durkheim, Mauss) as well as the late Wittgenstein. David Bloor, one of SSK's early champions, has contrasted the so-called 'weak programme' (or 'program'—either spelling is used) which merely gives social explanations for erroneous beliefs, with what he called the 'strong programme', which considers sociological factors as influencing all beliefs. The weak programme is more of a description of an approach than an organised movement. The term is applied to historians, sociologists and philosophers of science who merely cite sociological factors as being responsible for those beliefs that went wrong. Imre Lakatos and (in some moods) Thomas S. Kuhn might be said to adhere to it. The strong programme is particularly associated with the work of two groups: the 'Edinburgh School' (David Bloor, Barry Barnes, and their colleagues at the Science Studies Unit at the University of Edinburgh) in the 1970s and '80s, and the 'Bath School' (Harry Collins and others at the University of Bath) in the same period. "Edinburgh sociologists" and "Bath sociologists" promoted, respectively, the Strong Programme and Empirical Programme of Relativism (EPOR). Also associated with SSK in the 1980s was discourse analysis as applied to science (associated with Michael Mulkay at the University of York), as well as a concern with issues of reflexivity arising from paradoxes relating to SSK's relativist stance towards science and the status of its own knowledge-claims (Steve Woolgar, Malcolm Ashmore). 
The sociology of scientific knowledge has major international networks through its principal associations, 4S and EASST, with recently established groups in Japan, South Korea, Taiwan, and Latin America. It has made major contributions in recent years to a critical analysis of the biosciences and informatics. == The sociology of mathematical knowledge == Studies of mathematical practice and quasi-empiricism in mathematics are also rightly part of the sociology of knowledge since they focus on the community of those who practice mathematics. Since Eugene Wigner raised the issue in 1960 and Hilary Putnam made it more rigorous in 1975, the question of why fields such as physics and mathematics should agree so well has been debated. Proposed solutions point out that the fundamental constituents of mathematical thought, space, form-structure, and number-proportion are also the fundamental constituents of physics. It is also worth noting that physics is more than a mere modeling of reality; its objective basis rests on observational demonstration. Another approach is to suggest that there is no deep problem, that the division of human scientific thinking through words such as 'mathematics' and 'physics' is useful only in its practical everyday function of categorizing and distinguishing. Fundamental contributions to the sociology of mathematical knowledge have been made by Sal Restivo and David Bloor. Restivo draws upon the work of scholars such as Oswald Spengler (The Decline of the West, 1918), Raymond Louis Wilder and Leslie Alvin White, as well as contemporary sociologists of knowledge and science studies scholars. David Bloor draws upon Ludwig Wittgenstein and other contemporary thinkers. They both claim that mathematical knowledge is socially constructed and has irreducible contingent and historical factors woven into it.
More recently Paul Ernest has proposed a social constructivist account of mathematical knowledge, drawing on the works of both of these sociologists. == Criticism == SSK has received criticism from theorists of the actor-network theory (ANT) school of science and technology studies. These theorists criticise SSK for sociological reductionism and a human centered universe. SSK, they say, relies too heavily on human actors and social rules and conventions settling scientific controversies. The debate is discussed in an article titled Epistemological Chicken. == See also == == Notes == == References == Kusch, Martin (1998). "Sociology of scientific knowledge – research guide". Retrieved February 23, 2012. == Further reading == Baez, John (2010). "The Bogdanoff Affair". Bloor, David (1976) Knowledge and social imagery. London: Routledge. Bloor, David (1999) "Anti-Latour". Studies in History and Philosophy of Science Part A Volume 30, Issue 1, March 1999, Pages 81–112. Chu, Dominique (2013), The Science Myth---God, society, the self and what we will never know, ISBN 1782790470 Collins, H.M. (1975) The seven sexes: A study in the sociology of a phenomenon, or the replication of experiments in physics, Sociology, 9, 205-24. Collins, H.M. (1985). Changing order: Replication and induction in scientific practice. London: Sage. Collins, Harry and Steven Yearley. (1992). "Epistemological Chicken" in Science as Practice and Culture, A. Pickering (ed.). Chicago: The University of Chicago Press, 301-326. Edwards, D., Ashmore, M. & Potter, J. (1995). Death and furniture: The rhetoric, politics, and theology of bottom line arguments against relativism. History of the Human Sciences, 8, 25-49. Fleck, Ludwik (1935). Entstehung und Entwicklung einer wissenschaftlichen Tatsache. Einführung in die Lehre vom Denkstil und Denkkollektiv [Emergence and development of a scientific fact: Introduction to the study of thinking style and thinking collectives] (in German). 
Verlagsbuchhandlung, Basel: Schwabe. Fleck, Ludwik (1979). Genesis and development of a scientific fact. Chicago, Illinois: University of Chicago Press. Gilbert, G. N. & Mulkay, M. (1984). Opening Pandora's box: A sociological analysis of scientists' discourse. Cambridge: Cambridge University Press. Latour, B. & Woolgar, S. (1986). Laboratory life: The construction of scientific facts. 2nd Edition. Princeton: Princeton University Press. (not an SSK-book, but has a similar approach to science studies) Latour, B. (1987). Science in action : how to follow scientists and engineers through society. Cambridge, MA: Harvard University Press. (not an SSK-book, but has a similar approach to science studies) Pickering, A. (1984). Constructing Quarks: A sociological history of particle physics. Chicago; University of Chicago Press. Schantz, Richard and Markus Seidel (2011). The Problem of Relativism in the Sociology of (Scientific) Knowledge. Frankfurt: ontos. Shapin, S. & Schaffer, S. (1985). Leviathan and the Air-Pump. Princeton, NJ: Princeton University Press. Williams, R. & Edge, D. (1996). The Social Shaping of Technology. Research Policy, vol. 25, pp. 856–899 [1] Willard, Charles Arthur. (1996). Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy, University of Chicago Press. Zuckerman, Harriet. (1988). "The sociology of science." In NJ Smelser (Ed.), Handbook of sociology (p. 511–574). London: Sage. Jasanoff, S. Markle, G. Pinch T. & Petersen, J. (Eds)(2002), Handbook of science, technology and society, Rev Ed.. London: Sage. Other relevant materials Becker, Ernest (1968). The structure of evil; an essay on the unification of the science of man. New York: G. Braziller. Shapin, Steven (1995). "Here and Everywhere: Sociology of Scientific Knowledge" (PDF). Annual Review of Sociology. 21. Annual Reviews: 289–321. doi:10.1146/annurev.so.21.080195.001445. S2CID 3395517. 
Historical sociologist Simon Schaffer and Steven Shapin are interviewed on SSK The Sociology of Ignorance website featuring the sociology of scientific ignorance Strong Programme in Sociology of Knowledge and Actor-Network Theory: The Debate within Science Studies (includes questions posed to David Bloor and Bruno Latour related to their dispute, in Appendix) == External links == Sociology of Science at PhilPapers
Isocrates (Ancient Greek: Ἰσοκράτης [isokrátɛ̂ːs]; 436–338 BC) was an ancient Greek rhetorician, one of the ten Attic orators. Among the most influential Greek rhetoricians of his time, Isocrates made many contributions to rhetoric and education through his teaching and written works. Greek rhetoric is commonly traced to Corax of Syracuse, who first formulated a set of rhetorical rules in the fifth century BC. His pupil Tisias was influential in the development of the rhetoric of the courtroom, and by some accounts was the teacher of Isocrates. Within two generations, rhetoric had become an important art, its growth driven by social and political changes such as democracy and courts of law. Isocrates starved himself to death, due to the perceived loss of Greek liberty following the Battle of Chaeronea, two years before his 100th birthday. == Early life and influences == Isocrates was born into a prosperous family in Athens at the height of Athens' power shortly before the outbreak of the Peloponnesian War (431–404 BC). The Suda writes that Isocrates was the son of Theodorus, who owned a workshop that manufactured auloi. His mother's name was Heduto. He had a sister and three brothers; two of the brothers were Tisippos (Ancient Greek: Τίσιππος) and Theomnestos (Ancient Greek: Θεόμνηστος). Isocrates received a first-rate education. "He is reported to have studied with several prominent teachers, including Tisias (one of the traditional founders of rhetoric), the sophists Prodicus and Gorgias, and the moderate oligarch Theramenes, and to have associated with Socrates, but these reports may reflect later views of his intellectual roots more than historical fact". He passed his youth in a period following the death of Pericles, a time in which "wealth – both public and private – was dissipated", and "political decisions were ill-conceived and violent" according to the 2020 Encyclopedia Britannica.
Isocrates would have been 14 years old when the democracy voted to kill all the male citizens of the small Thracian city of Scione. There are accounts, including that of Isocrates himself, stating that the Peloponnesian War wiped out his father's estate, and Isocrates was forced to earn a living. Late in his life, he married a woman named Plathane (daughter of the sophist Hippias) and adopted Aphareus, one of her sons by a previous marriage. == Career == There is no evidence for Isocrates' participation in public life during the Peloponnesian War (431–404 BC). His professional career is said to have begun with logography: he was a hired courtroom speechwriter. Athenian citizens did not hire lawyers; legal procedure required self-representation. Instead, they would hire people like Isocrates to write speeches for them. Isocrates had a great talent for this and he amassed a considerable fortune. According to Pliny the Elder (NH VII.30) he could sell a single oration for twenty talents. However, his weak voice meant that he was not himself a good public speaker. He played no direct part in state affairs, but he published many pamphlets which influenced the public and provided significant insight into major political issues of the day. === Pedagogy === Around 392 BC Isocrates set up his own school of rhetoric at the Lyceum. Before Isocrates, teaching was carried out by first-generation Sophists, such as Gorgias and Protagoras, who walked from town to town as itinerants, teaching any individuals interested in political occupations how to be effective in public speaking. Isocrates encouraged his students to wander and observe public behavior in the city (Athens) to learn through imitation. His students aimed to learn how to serve the city.
"At the core of his teaching was an aristocratic notion of arete ("virtue, excellence"), which could be attained by pursuing philosophia – not so much the dialectical study of abstract subjects like epistemology and metaphysics that Plato marked as "philosophy" as the study and practical application of ethics, politics and public speaking". The philosopher Plato (a rival of Isocrates) founded his own academy in response to Isocrates' foundation. Isocrates accepted no more than nine pupils at a time. Many of them went on to be prominent philosophers, legislators, historians, orators, writers, and military and political leaders. The first students in Isocrates' school were Athenians. However, after he published the Panegyricus in 380 BC, his reputation spread to many other parts of Greece. Some of his students included Isaeus, Lycurgus, Hypereides, Ephorus, Theopompus, Speusippus, and Timotheus. Many of these students remained under the instruction of Isocrates for three to four years. Timotheus had such a great appreciation for Isocrates that he erected a statue at Eleusis and dedicated it to him. == Philosophy of rhetoric == According to George Norlin, Isocrates defined rhetoric as outward feeling and inward thought of not merely expression, but reason, feeling, and imagination. Like most who studied rhetoric before and after him, Isocrates believed it was used to persuade ourselves and others, but also used in directing public affairs. Isocrates described rhetoric as "that endowment of our human nature which raises us above mere animality and enables us to live the civilized life." Isocrates unambiguously defined his approach in the speech "Against the Sophists". This polemic was written to explain and advertise the reasoning and educational principles behind his new school. 
He promoted broad-based education by speaking against two types of teachers: the Eristics, who disputed about theoretical and ethical matters, and the Sophists, who taught political debate techniques. Also, while Isocrates is viewed by many as being a rhetor and practicing rhetoric, he refers to his study as philosophia—which he claims as his own. "Against the Sophists" is Isocrates' first published work where he gives an account of philosophy. His principal method is to contrast his ways of teaching with Sophism. While Isocrates does not go against the Sophist method of teaching as a whole, he emphasizes his disagreement with bad Sophistic practices. Isocrates' program of rhetorical education stressed the ability to use language to address practical problems, and he referred to his teachings as more of a philosophy than a school of rhetoric. He emphasized that students needed three things to learn: a natural aptitude which was inborn, knowledge training granted by teachers and textbooks, and applied practices designed by educators. He also stressed civic education, training students to serve the state. Students would practice composing and delivering speeches on various subjects. He considered natural ability and practice to be more important than rules or principles of rhetoric. Rather than delineating static rules, Isocrates stressed "fitness for the occasion," or kairos (the rhetor's ability to adapt to changing circumstances and situations). His school lasted for over fifty years, in many ways establishing the core of liberal arts education as we know it today, including oratory, composition, history, citizenship, culture, and morality. == Publications == Of the 60 orations in his name available in Roman times, 21 remained in transmission by the end of the medieval period. 
The earliest manuscripts dated from the ninth or tenth century, until fourth-century copies of Isocrates' first three orations were found in a single codex during a 1990s excavation at Kellis, a site in the Dakhla Oasis of Egypt. We have nine letters in his name, but the authenticity of four of those has been questioned. He is said to have compiled a treatise, the Art of Rhetoric, but there is no known copy. Other surviving works include his autobiographical Antidosis, and educational texts such as Against the Sophists. Isocrates wrote a collection of ten known orations, three of which were directed to the rulers of Salamis on Cyprus. In To Nicocles, Isocrates suggests first how the new king might rule best. For the rest of the oration, Isocrates advises Nicocles of ways to improve his nature, such as the use of education and studying the best poets and sages. Isocrates concludes with the notion that, in finding the happy mean, it is better to fall short than to go to excess. His second oration concerning Nicocles was related to the rulers of Salamis on Cyprus; this was written for the king and his subjects. Isocrates again stresses that the surest sign of good understanding is education and the ability to speak well. The king uses this speech to communicate to the people what exactly he expects of them. Isocrates makes a point of stating that courage and cleverness are not always good, but moderation and justice are. The third oration about Cyprus is an encomium to Euagoras, the father of Nicocles. Isocrates uncritically applauds Euagoras for forcibly taking the throne of Salamis and continuing rule until his assassination in 374 BC. Two years after his completion of the three orations, Isocrates wrote an oration for Archidamus, the prince of Sparta. Isocrates considered the settling of Theban colonists in Messene a violation of the Peace of Antalcidas.
He was bothered most by the fact that this ordeal would not restore the true Messenians but rather the Helots, in turn making these slaves masters. Isocrates believed justice was most important, which secured the Spartan laws, but he did not seem to recognize the rights of the Helots. Ten years later Isocrates wrote a letter to Archidamus, now the king of Sparta, urging him to reconcile the Greeks, stopping their wars with each other so that they could end the insolence of the Persians. At the end of the Social War in 355 BC, 80-year-old Isocrates wrote an oration addressed to the Athenian assembly entitled On the Peace; Aristotle called it On the Confederacy. Isocrates wrote this speech for the reading public, asking that both sides be given an unbiased hearing. Those in favour of peace, he argued, had never caused misfortune, while those embracing war had lurched into many disasters. Isocrates criticized the flatterers who had brought ruin to their public affairs. === Antidosis === === Panathenaicus === In Panathenaicus, Isocrates argues with a student about the literacy of the Spartans. In section 250, the student claims that the most intelligent of the Spartans admired and owned copies of some of Isocrates' speeches. The implication is that some Spartans had books, were able to read them, and were eager to do so. The Spartans, however, needed an interpreter to clear up any misunderstandings of double meanings which might lie concealed beneath the surface of complicated words. This text indicates that some Spartans were not illiterate, and it is important to scholars' understanding of literacy in Sparta because it suggests that Spartans were able to read and often put written documents to use in their public affairs.
=== Major orations === Ad Demonicum, Ad Nicoclem, Archidamus, Busiris, De Pace, Evagoras, Helena, Nicocles, Panegyricus, Philippus == Legacy == Because of Plato's attacks on the sophists, Isocrates' school – having its roots, if not the entirety of its mission, in rhetoric, the domain of the sophists – came to be viewed as unethical and deceitful. Yet many of Plato's criticisms are hard to substantiate in the actual work of Isocrates; at the end of Phaedrus, Plato even shows Socrates praising Isocrates (though some scholars have taken this to be sarcasm). Isocrates saw the ideal orator as someone who must possess not only rhetorical gifts, but also a wide knowledge of philosophy, science, and the arts. He promoted the Greek ideals of freedom, self-control, and virtue; in this, he influenced several Roman rhetoricians, such as Cicero and Quintilian, and influenced the core concepts of liberal arts education. Although Isocrates has been largely marginalized in the history of philosophy, his contributions to the study and practice of rhetoric have received more attention. Thomas M. Conley argues that through Isocrates' influence on Cicero, whose writings on rhetoric were the most widely and continuously studied until the modern era, "it might be said that Isocrates, of all the Greeks, was the greatest." With the neo-Aristotelian turn in rhetoric, Isocrates' work sometimes gets cast as a mere precursor to Aristotle's systematic account in On Rhetoric. However, Ekaterina Haskins reads Isocrates as an enduring and worthwhile counter to Aristotelian rhetoric. Rather than the Aristotelian position on rhetoric as a neutral tool, Isocrates understands rhetoric as an identity-shaping performance that activates and sustains civic identity. The Isocratean position on rhetoric can be thought of as an ancient antecedent to the twentieth-century theorist Kenneth Burke's conception that rhetoric is rooted in identification.
Isocrates' work has also been described as proto-Pragmatist, owing to his assertion that rhetoric makes use of probable knowledge with the aim of resolving real problems in the world. In his innovations in the art of rhetoric, Isocrates paid closer attention to expression and rhythm than any other Greek writer, though because his sentences were so complex and artistic, he often sacrificed clarity. == See also == Anaximenes of Lampsacus Paideia Papyrus Oxyrhynchus 27 Protrepticus (Aristotle) == References == == Further reading == Benoit, William L. (1984). "Isocrates on Rhetorical Education". Communication Education. 33 (2): 109–119. doi:10.1080/03634528409384727. Bizzell, Patricia; Herzberg, Bruce, eds. (2001). The rhetorical tradition: Readings from classical times to the present (2nd ed.). Boston: Bedford/St. Martin's. ISBN 978-0-312-14839-3. Bury, J.B. (1913). A History of Greece. Macmillan: London. Eucken, von Christoph (1983). Isokrates: Seine Positionen in der Auseinandersetzung mit den zeitgenössischen Philosophen (in German). Berlin: W. de Gruyter. ISBN 978-3-11-008646-1. Golden, James L.; Berquist, Goodwin F.; Coleman, William E. (2007). The rhetoric of Western thought (9th ed.). Dubuque, Iowa: Kendall / Hunt. ISBN 978-0-7575-3838-4. Grube, G.M.A. (1965). The Greek and Roman Critics. London: Methuen. Haskins, Ekaterina V. (2004). Logos and power in Isocrates and Aristotle. Columbia, SC: University of South Carolina Press. ISBN 978-1-57003-526-5. Isocrates (1752), The Orations and Epistles, translated by Joshua Dinsdale (London, printed for T. Waller) Isocrates (2000). Isocrates I. David Mirhady, Yun Lee Too, trans. Austin: University of Texas Press. ISBN 978-0-292-75237-5. Isocrates (2004). Isocrates II. Translated by Terry L. Papillon. Austin: University of Texas Press. ISBN 978-0-292-70245-5. Isocrates. Loeb Classical Library. Translated by George Norlin; Larue van Hook. Cambridge, Massachusetts: Harvard University Press. 1968. ISBN 978-0-674-99231-3. Vol.
1 (1928), Vol. 2 (1929), Vol. 3 (1954 repr.). Livingstone, Niall (2001). A commentary on Isocrates' Busiris. Boston: Brill. ISBN 978-90-04-12143-0. Muir, J.R. (2005). "Is our history of educational philosophy mostly wrong?: The case of Isocrates". Theory and Research in Education. 3 (2): 165–195. doi:10.1177/1477878505053300. S2CID 145489575. Muir, J.R. (2018). The Legacy of Isocrates and a Platonic Alternative. London: Routledge. Muir, J.R. (2022). Isocrates: Historiography, Methodology, and the Virtues of Educators. Cham, Switzerland: Springer. Papillon, Terry (1998). "Isocrates and the Greek Poetic Tradition" (PDF). Scholia. 7: 41–61. Poulakos, Takis (1997). Speaking for the polis: Isocrates' rhetorical education. Columbia, South Carolina: University of South Carolina Press. ISBN 978-1-57003-177-9. Poulakos, Takis; Depew, David J., eds. (2004). Isocrates and civic education. Austin: University of Texas Press. ISBN 978-0-292-70219-6. Waterfield, Robin (2002). "Notes". Plato's Phaedrus. Oxford University Press. Romilly, Jacqueline de (1985). Magic and rhetoric in ancient Greece. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-54152-8. Smith, Robert W.; Bryant, Donald C., eds. (1969). Ancient Greek and Roman Rhetoricians: A Biographical Dictionary. Columbia, Missouri: Artcraft Press. Too, Yun Lee (1995). The rhetoric of identity in Isocrates: text, power, pedagogy. Cambridge: Cambridge University Press. ISBN 978-0-521-47406-1. Too, Yun Lee (2008). A commentary on Isocrates' Antidosis. Oxford: Oxford University Press. ISBN 978-0-19-923807-1. Usener, Sylvia (1994). Isokrates, Platon und ihr Publikum: Hörer und Leser von Literatur im 4. Jahrhundert v. Chr (in German). Tübingen: Narr. ISBN 978-3-8233-4278-6. == External links == "Isocrates". Encyclopædia Britannica. Vol. 14 (11th ed.). 1911. "Plutarch", Life of Isocrates (attalus.org) B.
Keith Murphy (Fort Valley State University) – Isocrates English Translation of various texts Isocrates (436–338 B.C.) Isocratis sermo de regno ad Nicoclem regem. Bartholomei Facii Orationes at Somni
Wikipedia/Isocrates
Jurisprudence, also known as theory of law or philosophy of law, is the examination, from a general perspective, of what law is and what it ought to be. It investigates issues such as the definition of law; legal validity; legal norms and values; and the relationship between law and other fields of study, including economics, ethics, history, sociology, and political philosophy. Modern jurisprudence began in the 18th century and was based on the first principles of natural law, civil law, and the law of nations. Contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a social institution that relates to the larger political and social context in which it exists. Jurisprudence can be divided into categories both by the type of question scholars seek to answer and by the theories of jurisprudence, or schools of thought, regarding how those questions are best answered: Natural law holds that there are rational objective limits to the power of rulers, that the foundations of law are accessible through reason, and that it is from these laws of nature that human laws gain force. Analytic jurisprudence attempts to describe what law is. The two historically dominant theories in analytic jurisprudence are legal positivism and natural law theory. According to legal positivists, what law is and what law ought to be have no necessary connection to one another, so it is theoretically possible to engage in analytic jurisprudence without simultaneously engaging in normative jurisprudence. According to natural law theorists, there is a necessary connection between what law is and what it ought to be, so it is impossible to engage in analytic jurisprudence without simultaneously engaging in normative jurisprudence. Normative jurisprudence attempts to prescribe what law ought to be. It is concerned with the goal or purpose of law and with what moral or political theories provide a foundation for the law.
It attempts to determine what the proper function of law should be, what sorts of acts should be subject to legal sanctions, and what sorts of punishment should be permitted. Sociological jurisprudence studies the nature and functions of law in the light of social scientific knowledge. It emphasises variation of legal phenomena between different cultures and societies. It relies especially on empirically-oriented social theory, but draws theoretical resources from diverse disciplines. Experimental jurisprudence seeks to investigate the content of legal concepts using the methods of social science, unlike the philosophical methods of traditional jurisprudence. The terms "philosophy of law" and "jurisprudence" are often used interchangeably, though jurisprudence sometimes encompasses forms of reasoning that fit into economics or sociology. == Overview == Whereas lawyers are interested in what the law is on a specific issue in a specific jurisdiction, analytical philosophers of law are interested in identifying the features of law shared across cultures, times, and places. Taken together, these foundational features of law offer the kind of universal definition philosophers are after. The general approach allows philosophers to ask questions about, for example, what separates law from morality, politics, or practical reason. While the field has traditionally focused on giving an account of law's nature, some scholars have begun to examine the nature of domains within law, e.g. tort law, contract law, or criminal law. These scholars focus on what makes certain domains of law distinctive and how one domain differs from another. A particularly fecund area of research has been the distinction between tort law and criminal law, which more generally bears on the difference between civil and criminal law. In addition to analytic jurisprudence, legal philosophy is also concerned with normative theories of law. 
"Normative jurisprudence involves normative, evaluative, and otherwise prescriptive questions about the law." == Etymology and terminology == The English word is derived from the Latin iurisprudentia. Iuris is the genitive form of ius, meaning law, and prudentia means prudence (also: discretion, foresight, forethought, circumspection). It refers to the exercise of good judgment, common sense, and caution, especially in the conduct of practical matters. The word first appeared in written English in 1628, at a time when the word prudence meant knowledge of, or skill in, a matter. It may have entered English via the French jurisprudence, which appeared earlier. == History == Ancient jurisprudence begins with various Dharmaśāstra texts of India. The Dharmasutras of Āpastaṃba and Baudhāyana are examples. In ancient China, the Daoists, Confucians, and Legalists all had competing theories of jurisprudence. Jurisprudence in ancient Rome had its origins with the periti—experts in the jus mos maiorum (traditional law), a body of oral laws and customs. Praetors established a working body of laws by judging whether or not singular cases were capable of being prosecuted either by the edicta, the annual pronunciation of prosecutable offences, or in extraordinary situations, additions made to the edicta. A iudex (originally a magistrate, later a private individual appointed to judge a specific case) would then prescribe a remedy according to the facts of the case. The sentences of the iudex were supposed to be simple interpretations of the traditional customs, but—apart from considering what traditional customs applied in each case—soon developed a more equitable interpretation, coherently adapting the law to newer social exigencies. The law was then adjusted with evolving institutiones (legal concepts), while remaining in the traditional mode. Praetors were replaced in the 3rd century BC by a laical body of prudentes.
Admission to this body was conditional upon proof of competence or experience. Under the Roman Empire, schools of law were created, and the practice of law became more academic. From the early Roman Empire to the 3rd century, a relevant body of literature was produced by groups of scholars, including the Proculians and Sabinians. The scientific nature of the studies was unprecedented in ancient times. After the 3rd century, juris prudentia became a more bureaucratic activity, with few notable authors. It was during the Eastern Roman Empire (5th century) that legal studies were once again undertaken in depth, and it is from this cultural movement that Justinian's Corpus Juris Civilis was born. Modern jurisprudence began in the 18th century and was based on the first principles of natural law, civil law, and the law of nations. == Natural law == Natural law holds that there are rational objective limits to the power of rulers, that the foundations of law are accessible through reason, and that it is from these laws of nature that human laws gain force. The moral theory of natural law asserts that law is inherent in nature and constitutive of morality, at least in part, and that an objective moral order, external to human legal systems, underlies natural law. On this view, while legislators can enact and even successfully enforce immoral laws, such laws are legally invalid. The view is captured by the maxim: "an unjust law is no law at all", where 'unjust' means 'contrary to the natural law.' Natural law theory has medieval origins in the philosophy of Thomas Aquinas, especially in his Treatise on Law. In the late 20th century, John Finnis revived interest in the theory and provided a modern reworking of it. For one, Finnis has argued that the maxim "an unjust law is no law at all" is a poor guide to the classical Thomist position.
In its general sense, natural law theory may be compared to both state-of-nature law and general law understood as being analogous to the laws of physical science. Natural law is often contrasted with positive law, which asserts law as the product of human activity and human volition. Another approach to natural-law jurisprudence generally asserts that human law must be in response to compelling reasons for action. There are two readings of the natural-law jurisprudential stance. The strong natural law thesis holds that if a human law fails to be in response to compelling reasons, then it is not properly a "law" at all. This is captured, imperfectly, in the famous maxim: lex iniusta non est lex (an unjust law is no law at all). The weak natural law thesis holds that if a human law fails to be in response to compelling reasons, then it can still be called a "law", but it must be recognised as a defective law. === Aristotle === Aristotle is often said to be the father of natural law. Like his philosophical forefathers Socrates and Plato, Aristotle posited the existence of natural justice or natural right (dikaion physikon, δικαίον φυσικόν, Latin ius naturale). His association with natural law is largely due to how he was interpreted by Thomas Aquinas. This was based on Aquinas' conflation of natural law and natural right, the latter of which Aristotle posits in Book V of the Nicomachean Ethics (Book IV of the Eudemian Ethics). Aquinas's influence was such as to affect a number of early translations of these passages, though more recent translations render them more literally. Aristotle's theory of justice is bound up in his idea of the golden mean. Indeed, his treatment of what he calls "political justice" derives from his discussion of "the just" as a moral virtue derived as the mean between opposing vices, just like every other virtue he describes.
His longest discussion of his theory of justice occurs in Nicomachean Ethics and begins by asking what sort of mean a just act is. He argues that the term "justice" actually refers to two different but related ideas: general justice and particular justice. When a person's actions toward others are completely virtuous in all matters, Aristotle calls them "just" in the sense of "general justice"; as such, this idea of justice is more or less coextensive with virtue. "Particular" or "partial justice", by contrast, is the part of "general justice" or the individual virtue that is concerned with treating others equitably. Aristotle moves from this unqualified discussion of justice to a qualified view of political justice, by which he means something close to the subject of modern jurisprudence. Of political justice, Aristotle argues that it is partly derived from nature and partly a matter of convention. This can be taken as a statement that is similar to the views of modern natural law theorists. But it must also be remembered that Aristotle is describing a view of morality, not a system of law, and therefore his remarks as to nature are about the grounding of the morality enacted as law, not the laws themselves. The best evidence of Aristotle's having thought there was a natural law comes from the Rhetoric, where Aristotle notes that, aside from the "particular" laws that each people has set up for itself, there is a "common" law that is according to nature. The context of this remark, however, suggests only that Aristotle thought that it could be rhetorically advantageous to appeal to such a law, especially when the "particular" law of one's own city was adverse to the case being made, not that there actually was such a law. Aristotle, moreover, considered certain candidates for a universally valid, natural law to be wrong. Aristotle's theoretical paternity of the natural law tradition is consequently disputed. 
=== Thomas Aquinas === Thomas Aquinas is the foremost classical proponent of natural theology, and the father of the Thomistic school of philosophy, for a long time the primary philosophical approach of the Roman Catholic Church. The work for which he is best known is the Summa Theologiae. One of the thirty-five Doctors of the Church, he is considered by many Catholics to be the Church's greatest theologian. Consequently, many institutions of learning have been named after him. Aquinas distinguished four kinds of law: eternal, natural, divine, and human. Eternal law refers to divine reason, known only to God. It is God's plan for the universe. Man needs this plan, for without it he would totally lack direction. Natural law is the "participation" in the eternal law by rational human creatures, and is discovered by reason. Divine law is revealed in the scriptures and is God's positive law for mankind. Human law is supported by reason and enacted for the common good. Natural law is based on "first principles": ... this is the first precept of the law, that good is to be done and promoted, and evil is to be avoided. All other precepts of the natural law are based on this ... The desires to live and to procreate are counted by Aquinas among those basic (natural) human values on which all other human values are based. === School of Salamanca === Francisco de Vitoria was perhaps the first to develop a theory of ius gentium (law of nations), and thus is an important figure in the transition to modernity. He extrapolated his ideas of legitimate sovereign power to international affairs, concluding that such affairs ought to be determined by forms respectful of the rights of all and that the common good of the world should take precedence over the good of any single state. This meant that relations between states ought to pass from being justified by force to being justified by law and justice.
Some scholars have upset the standard account of the origins of international law, which emphasises the seminal text De iure belli ac pacis by Hugo Grotius, and argued for Vitoria and, later, Suárez's importance as forerunners and, potentially, founders of the field. Others, such as Koskenniemi, have argued that none of these humanist and scholastic thinkers can be understood to have founded international law in the modern sense, instead placing its origins in the post-1870 period. Francisco Suárez, regarded as among the greatest scholastics after Aquinas, subdivided the concept of ius gentium. Working with already well-formed categories, he carefully distinguished ius inter gentes from ius intra gentes. Ius inter gentes (which corresponds to modern international law) was something common to the majority of countries, although, being positive law, not natural law, it was not necessarily universal. On the other hand, ius intra gentes, or civil law, is specific to each nation. === Lon Fuller === Writing after World War II, Lon L. Fuller defended a secular and procedural form of natural law. He emphasised that the (natural) law must meet certain formal requirements (such as being impartial and publicly knowable). To the extent that an institutional system of social control falls short of these requirements, Fuller argued, we are less inclined to recognise it as a system of law, or to give it our respect. Thus, the law must have a morality that goes beyond the societal rules under which laws are made. === John Finnis === Sophisticated positivist and natural law theories sometimes resemble each other and may have certain points in common. Identifying a particular theorist as a positivist or a natural law theorist sometimes involves matters of emphasis and degree, and the particular influences on the theorist's work.
The natural law theorists of the distant past, such as Aquinas and John Locke, made no distinction between analytic and normative jurisprudence, while modern natural law theorists, such as John Finnis, who claim to be positivists, still argue that law is moral by nature. In his book Natural Law and Natural Rights (1980, 2011), John Finnis provides a restatement of natural law doctrine. == Analytic jurisprudence == Unlike experimental jurisprudence, which investigates the content of legal concepts using the methods of social science, analytical jurisprudence seeks to provide a general account of the nature of law through the tools of conceptual analysis. The account is general in the sense of targeting universal features of law that hold at all times and places. Analytic, or clarificatory, jurisprudence takes a neutral point of view and uses descriptive language when referring to various aspects of legal systems. This was a philosophical development that rejected natural law's fusing of what law is and what it ought to be. David Hume argued, in A Treatise of Human Nature, that people invariably slip from describing what the world is to asserting that we therefore ought to follow a particular course of action. But as a matter of pure logic, one cannot conclude that we ought to do something merely because something is the case. So analysing and clarifying the way the world is must be treated as a strictly separate question from normative and evaluative questions of what ought to be done. The most important questions of analytic jurisprudence are: "What are laws?"; "What is the law?"; "What is the relationship between law and power/sociology?"; and "What is the relationship between law and morality?" Legal positivism is the dominant theory, although there is a growing number of critics who offer their own interpretations. === Historical school === Historical jurisprudence came to prominence during the debate on the proposed codification of German law.
In his book On the Vocation of Our Age for Legislation and Jurisprudence, Friedrich Carl von Savigny argued that Germany did not have a legal language that would support codification because the traditions, customs, and beliefs of the German people did not include a belief in a code. Historicists believe that law originates with society. === Sociological jurisprudence === An effort systematically to inform jurisprudence from sociological insights developed from the beginning of the twentieth century, as sociology began to establish itself as a distinct social science, especially in the United States and in continental Europe. In Germany, Austria and France, the work of the "free law" theorists (e.g. Ernst Fuchs, Hermann Kantorowicz, Eugen Ehrlich and François Gény) encouraged the use of sociological insights in the development of legal and juristic theory. The most internationally influential advocacy for a "sociological jurisprudence" occurred in the United States, where, throughout the first half of the twentieth century, Roscoe Pound, for many years the Dean of Harvard Law School, used this term to characterise his legal philosophy. In the United States, many later writers followed Pound's lead or developed distinctive approaches to sociological jurisprudence. In Australia, Julius Stone strongly defended and developed Pound's ideas. In the 1930s, a significant split between the sociological jurists and the American legal realists emerged. In the second half of the twentieth century, sociological jurisprudence as a distinct movement declined as jurisprudence came more strongly under the influence of analytical legal philosophy; but with increasing criticism of dominant orientations of legal philosophy in English-speaking countries in the present century, it has attracted renewed interest. 
Increasingly, its contemporary focus is on providing theoretical resources for jurists to aid their understanding of new types of regulation (for example, the diverse kinds of developing transnational law) and the increasingly important interrelations of law and culture, especially in multicultural Western societies. As an approach to jurisprudence, sociological jurisprudence uses the resources of social science to serve value-oriented juristic purposes. As such, it should be distinguished from the sociology of law, which, as a field of social science, has no necessary commitment to juristic aims. === Legal positivism === Legal positivism is the view that the content of law is dependent on social facts and that a legal system's existence is not constrained by morality. Within legal positivism, theorists agree that law's content is a product of social facts, but they disagree about whether law's validity can be explained by incorporating moral values. Legal positivists who argue against the incorporation of moral values to explain law's validity are labeled exclusive (or hard) legal positivists. Joseph Raz's legal positivism is an example of exclusive legal positivism. Legal positivists who argue that law's validity can be explained by incorporating moral values are labeled inclusive (or soft) legal positivists. The legal positivist theories of H. L. A. Hart and Jules Coleman are examples of inclusive legal positivism. Legal positivism has traditionally been associated with three doctrines: the pedigree thesis, the separability thesis, and the discretion thesis. The pedigree thesis says that the right way to determine whether a directive is law is to look at the directive's source. The thesis claims that it is the fact that the directive was issued by the proper official within a legitimate government, for example, that determines the directive's legal validity—not the directive's moral or practical merits.
The separability thesis states that law is conceptually distinct from morality. While law might contain morality, the separability thesis states that "it is in no sense a necessary truth that laws reproduce or satisfy certain demands of morality, though in fact they have often done so." Legal positivists disagree about the extent of the separability thesis. Exclusive legal positivists, notably Joseph Raz, go further than the standard thesis and deny that it is possible for morality to be a part of law at all. The discretion thesis states that judges create new law when they are given discretion to adjudicate cases where existing law underdetermines the result. ==== Thomas Hobbes ==== Hobbes was a social contractarian and believed that the law had people's tacit consent. He believed that society was formed from a state of nature to protect people from the state of war that would exist otherwise. In Leviathan, Hobbes argues that without an ordered society life would be "solitary, poor, nasty, brutish and short." It is commonly said that Hobbes's views on human nature were influenced by his times. The English Civil War and the Cromwellian dictatorship had taken place; and, in reacting to that, Hobbes felt that absolute authority vested in a monarch, whose subjects obeyed the law, was the basis of a civilized society. ==== Bentham and Austin ==== John Austin and Jeremy Bentham were early legal positivists who sought to provide a descriptive account of law that describes the law as it is. Austin explained the descriptive focus for legal positivism by saying, "The existence of law is one thing; its merit and demerit another. Whether it be or be not is one enquiry; whether it be or be not conformable to an assumed standard, is a different enquiry." For Austin and Bentham, a society is governed by a sovereign who has de facto authority. Through the sovereign's authority come laws, which for Austin and Bentham are commands backed by sanctions for non-compliance.
Along with Hume, Bentham was an early and staunch supporter of the utilitarian concept, and was an avid prison reformer, advocate for democracy, and firm atheist. Bentham's views about law and jurisprudence were popularized by his student John Austin. Austin was the first chair of law at the new University of London, from 1829. Austin's utilitarian answer to "what is law?" was that law is "commands, backed by threat of sanctions, from a sovereign, to whom people have a habit of obedience". H. L. A. Hart criticized Austin and Bentham's early legal positivism because the command theory failed to account for individuals' compliance with the law. ==== Hans Kelsen ==== Hans Kelsen is considered one of the preeminent jurists of the 20th century and has been highly influential in Europe and Latin America, although less so in common law countries. His Pure Theory of Law describes law as "binding norms", while at the same time refusing to evaluate those norms. That is, "legal science" is to be separated from "legal politics". Central to the Pure Theory of Law is the notion of a 'basic norm' (Grundnorm)—a hypothetical norm, presupposed by the jurist, from which all "lower" norms in the hierarchy of a legal system, beginning with constitutional law, are understood to derive their authority or the extent to which they are binding. Kelsen contends that the extent to which legal norms are binding, their specifically "legal" character, can be understood without tracing it ultimately to some suprahuman source such as God, personified Nature or—of great importance in his time—a personified State or Nation. ==== H. L. A. Hart ==== In the English-speaking world, the most influential legal positivist of the twentieth century was H. L. A. Hart, professor of jurisprudence at Oxford University. Hart argued that the law should be understood as a system of social rules.
In The Concept of Law, Hart rejected Kelsen's views that sanctions were essential to law and that a normative social phenomenon, like law, cannot be grounded in non-normative social facts. Hart claimed that law is the union of primary rules and secondary rules. Primary rules require individuals to act or not act in certain ways and create duties for the governed to obey. Secondary rules are rules that confer authority to create new primary rules or modify existing ones. Secondary rules are divided into rules of adjudication (how to resolve legal disputes), rules of change (how laws are amended), and the rule of recognition (how laws are identified as valid). The validity of a legal system comes from the "rule of recognition", which is a customary practice of officials (especially barristers and judges) who identify certain acts and decisions as sources of law. In 1981, Neil MacCormick wrote a pivotal book on Hart (second edition published in 2008), which further refined Hart's position and offered some important criticisms that led MacCormick to develop his own theory (the best example of which is his Institutions of Law, 2007). Other important critiques include those of Ronald Dworkin, John Finnis, and Joseph Raz. In recent years, debates on the nature of law have become increasingly fine-grained. One important debate is within legal positivism. One school is sometimes called "exclusive legal positivism" and is associated with the view that the legal validity of a norm can never depend on its moral correctness. A second school is labeled "inclusive legal positivism", a major proponent of which is Wil Waluchow, and is associated with the view that moral considerations may, but do not necessarily, determine the legal validity of a norm. ==== Joseph Raz ==== Joseph Raz's theory of legal positivism argues against the incorporation of moral values to explain law's validity. In Raz's 1979 book The Authority of Law, he criticised what he called the "weak social thesis" to explain law.
He formulates the weak social thesis as "(a) Sometimes the identification of some laws turn on moral arguments, but also with, (b) In all legal systems the identification of some law turns on moral argument." Raz argues that law's authority is identifiable purely through social sources, without reference to moral reasoning. This view he calls "the sources thesis". Raz suggests that any categorisation of rules beyond their role as authority is better left to sociology than to jurisprudence. Some philosophers used to contend that positivism was the theory that held that there was "no necessary connection" between law and morality; but influential contemporary positivists—including Joseph Raz, John Gardner, and Leslie Green—reject that view. Raz claims it is a necessary truth that there are vices that a legal system cannot possibly have (for example, it cannot commit rape or murder). === Legal realism === Legal realism is the view that a theory of law should be descriptive and account for the reasons why judges decide cases as they do. Legal realism had some affinities with the sociology of law and sociological jurisprudence. The essential tenet of legal realism is that all law is made by humans and thus should account for reasons besides legal rules that led to a legal decision. There are two separate schools of legal realism: American legal realism and Scandinavian legal realism. American legal realism grew out of the writings of Oliver Wendell Holmes. At the start of Holmes's The Common Law, he claims that "[t]he life of the law has not been logic: it has been experience". This view was a reaction to the legal formalism that was popular at the time due to Christopher Columbus Langdell. Holmes's writings on jurisprudence also laid the foundations for the predictive theory of law.
In his article "The Path of the Law", Holmes argues that "the object of [legal] study...is prediction, the prediction of the incidence of the public force through the instrumentality of the courts." For the American legal realists of the early twentieth century, legal realism sought to describe the way judges decide cases. For legal realists such as Jerome Frank, judges start with the facts before them and then move to legal principles. Before legal realism, theories of jurisprudence turned this method around: judges were thought to begin with legal principles and then look to facts. It has become common today to identify Justice Oliver Wendell Holmes Jr. as the main precursor of American legal realism (other influences include Roscoe Pound, Karl Llewellyn, and Justice Benjamin Cardozo). Karl Llewellyn, another founder of the U.S. legal realism movement, similarly believed that the law is little more than putty in the hands of judges who are able to shape the outcome of cases based on their personal values or policy choices. The Scandinavian school of legal realism argued that law can be explained through the empirical methods used by social scientists. Prominent Scandinavian legal realists are Alf Ross, Axel Hägerström, and Karl Olivecrona. Scandinavian legal realists also took a naturalist approach to law. Despite its decline in popularity, legal realism continues to influence a wide spectrum of jurisprudential schools today, including critical legal studies, feminist legal theory, critical race theory, sociology of law, and law and economics. === Critical legal studies === Critical legal studies is a theory of jurisprudence that has developed since the 1970s. In 1977 a group of members of the Law and Society Association struck out on a new theoretical direction. The legal ideas of Peter Gabel, Morton Horwitz, Duncan Kennedy, Karl Klare, Mark Tushnet, and Roberto Unger have now found influence in many law schools.
The theory can generally be traced to American legal realism and is considered "the first movement in legal theory and legal scholarship in the United States to have espoused a committed Left political stance and perspective". It holds that the law is largely contradictory, and can be best analyzed as an expression of the policy goals of a dominant social group. Roberto Mangabeira Unger and other authors in the movement contrast critical legal studies, as a method critical in approach, with the impersonal purposes and principles made necessary in legal reasoning, such as formalism. He writes that the movement proceeded "consequently also by rejecting judges as the chief addressees of legal analysis, and refusing to take the question—how should judges decide cases?—as the defining problem in jurisprudence." According to Unger, the new American legal analysis will unlock the democratic potential of free societies in the same way earlier capitalistic economies benefited from the protection of private rights such as contracts and property. === Constitutionalism === === Legal interpretivism === American legal philosopher Ronald Dworkin's legal theory attacks legal positivists who separate law's content from morality. In his book Law's Empire, Dworkin argued that law is an "interpretive" concept that requires barristers to find the best-fitting and most just solution to a legal dispute, given their constitutional traditions. According to him, law is not entirely based on social facts, but includes the best moral justification for the institutional facts and practices that form a society's legal tradition. It follows from Dworkin's view that one cannot know whether a society has a legal system in force, or what any of its laws are, until one knows some truths about the moral justifications of the social and political practices of that society.
It is consistent with Dworkin's view—in contrast with the views of legal positivists or legal realists—that no-one in a society may know what its laws are, because no-one may know the best moral justification for its practices. Interpretation, according to Dworkin's "integrity theory of law", has two dimensions. To count as an interpretation, the reading of a text must meet the criterion of "fit". Of those interpretations that fit, however, Dworkin maintains that the correct interpretation is the one that portrays the practices of the community in their best light, or makes them "the best that they can be". But many writers have doubted whether there is a single best moral justification for the complex practices of any given community, and others have doubted whether, even if there is, it should be counted as part of the law of that community. === Therapeutic jurisprudence === Consequences of the operation of legal rules or legal procedures—or of the behavior of legal actors (such as lawyers and judges)—may be either beneficial (therapeutic) or harmful (anti-therapeutic) to people. Therapeutic jurisprudence ("TJ") studies law as a social force (or agent) and uses social science methods and data to study the extent to which a legal rule or practice affects the psychological well-being of the people it impacts. == Normative jurisprudence == In addition to the question, "What is law?", legal philosophy is also concerned with normative, or "evaluative" theories of law. What is the goal or purpose of law? What moral or political theories provide a foundation for the law? What is the proper function of law? What sorts of acts should be subject to punishment, and what sorts of punishment should be permitted? What is justice? What rights do we have? Is there a duty to obey the law? What value has the rule of law? Some of the different schools and leading thinkers are discussed below. 
=== Virtue jurisprudence === Aretaic moral theories, such as contemporary virtue ethics, emphasize the role of character in morality. Virtue jurisprudence is the view that the laws should promote the development of virtuous character in citizens. Historically, this approach has been mainly associated with Aristotle or Thomas Aquinas. Contemporary virtue jurisprudence is inspired by philosophical work on virtue ethics. === Deontology === Deontology is the "theory of duty or moral obligation". The philosopher Immanuel Kant formulated one influential deontological theory of law. He argued that any rule we follow must be able to be universally applied, i.e. we must be willing for everyone to follow that rule. A contemporary deontological approach can be found in the work of the legal philosopher Ronald Dworkin. === Utilitarianism === Utilitarianism is the view that the laws should be crafted so as to produce the best consequences for the greatest number of people. Historically, utilitarian thinking about law has been associated with the philosopher Jeremy Bentham. John Stuart Mill was a pupil of Bentham's and was the torch bearer for utilitarian philosophy throughout the late nineteenth century. In contemporary legal theory, the utilitarian approach is frequently championed by scholars who work in the law and economics tradition. === John Rawls === John Rawls was an American philosopher; a professor of political philosophy at Harvard University; and author of A Theory of Justice (1971), Political Liberalism, Justice as Fairness: A Restatement, and The Law of Peoples. He is widely considered one of the most important English-language political philosophers of the 20th century. His theory of justice uses a method called "original position" to ask us which principles of justice we would choose to regulate the basic institutions of our society if we were behind a "veil of ignorance". 
Imagine we do not know who we are—our race, sex, wealth, status, class, or any distinguishing feature—so that we would not be biased in our own favour. Rawls argued from this "original position" that we would choose exactly the same political liberties for everyone, like freedom of speech, the right to vote, and so on. Also, we would choose a system in which inequality is permitted only insofar as it produces incentives sufficient for the economic well-being of all of society, especially its poorest members. This is Rawls's famous "difference principle". Justice is fairness, in the sense that the fairness of the original position of choice guarantees the fairness of the principles chosen in that position. There are many other normative approaches to the philosophy of law, including constitutionalism, critical legal studies and libertarian theories of law. == Experimental jurisprudence == Experimental jurisprudence seeks to investigate the content of legal concepts using the methods of social science, unlike the philosophical methods of traditional jurisprudence. == List of philosophers of law == == See also == == References == === Citations === === Notes === == Bibliography == == Further reading == Austin, John (1831). The Province of Jurisprudence Determined. Cotterrell, R. (1995). Law's Community: Legal Theory in Sociological Perspective. Oxford: Oxford University Press. Cotterrell, R. (2003). The Politics of Jurisprudence: A Critical Introduction to Legal Philosophy (2nd ed.). Oxford: Oxford University Press. Cotterrell, R. (2018). Sociological Jurisprudence: Juristic Thought and Social Inquiry. New York/London: Routledge. Freeman, M. D. A. (2014). Lloyd's Introduction to Jurisprudence (9th ed.). London: Sweet & Maxwell. Hartzler, H. Richard (1976). Justice, Legal Systems, and Social Structure. Port Washington, NY: Kennikat Press. Engle, Eric (July 2010). Lex Naturalis, Ius Naturalis: Law as Positive Reasoning & Natural Rationality. Eric Engle. ISBN 978-0-9807318-4-2.
Hutchinson, Allan C., ed. (1989). Critical Legal Studies. Totowa, NJ: Rowman & Littlefield. Kempin Jr., Frederick G. (1963). Legal History: Law and Social Change. Englewood Cliffs, NJ: Prentice-Hall. Llewellyn, Karl N. (1986). Karl N. Llewellyn on Legal Realism. Birmingham, AL: Legal Classics Library. (Contains the penetrating classic "The Bramble Bush" on the nature of law). Murphy, Cornelius F. (1977). Introduction to Law, Legal Process, and Procedure. St. Paul, MN: West Publishing. Rawls, John (1999). A Theory of Justice (revised ed.). Cambridge: Harvard University Press. (Philosophical treatment of justice). Wacks, Raymond (2009). Understanding Jurisprudence: An Introduction to Legal Theory. Oxford: Oxford University Press. Washington, Ellis (2002). The Inseparability of Law and Morality: Essays on Law, Race, Politics and Religion. University Press of America. Washington, Ellis (2013). The Progressive Revolution, 2007–08 Writings-Vol. 1; 2009 Writings-Vol. 2, Liberal Fascism through the Ages. University Press of America. Zinn, Howard (1990). Declarations of Independence: Cross-Examining American Ideology. New York: Harper Collins Publishers. Zippelius, Reinhold (2011). Rechtsphilosophie (6th ed.). Munich: C.H. Beck. ISBN 978-3-406-61191-9. Zippelius, Reinhold (2012). Das Wesen des Rechts (The Concept of Law): An Introduction to Legal Theory (6th ed.). Stuttgart: W. Kohlhammer. ISBN 978-3-17-022355-4. Zippelius, Reinhold (2008). Introduction to German Legal Methods (Juristische Methodenlehre), translated from the tenth German edition by Kirk W. Junker and P. Matthew Roy. Durham: Carolina Academic Press. Heinze, Eric (2013). The Concept of Injustice. Routledge. Pillai, P. S. A. (2016). Jurisprudence and Legal Theory (3rd ed., reprinted 2016). Eastern Book Company. ISBN 978-93-5145-326-0. == External links == LII Law about ... Jurisprudence. The Roman Law Library, incl. Responsa prudentium, by Professor Yves Lassard and Alexandr Koptev. Evgeny Pashukanis - General Theory of Law and Marxism.
Internet Encyclopedia: Philosophy of Law. The Opticon: Online Repository of Materials covering Spectrum of U.S. Jurisprudence. Bibliography on the Philosophy of Law. Peace Palace Library
Wikipedia/Legal_theory
Rhetoric of therapy is a concept coined by American academic Dana L. Cloud to describe "a set of political and cultural discourses that have adopted psychotherapy's lexicon—the conservative language of healing, coping, adaptation, and restoration of previously existing order—but in contexts of social and political conflict". Cloud argued that the rhetoric of therapy encourages people to focus on themselves and their private lives rather than attempt to reform flawed systems of social and political power. This form of persuasion is primarily used by politicians, managers, journalists and entertainers as a way to cope with the crisis of the American Dream. Cloud said "the discursive pattern of translating social and political problems into the language of individual responsibility and healing is a rhetoric because of its powerful persuasive force", and it is rhetoric of "therapy" because "of its focus on the personal life of the individual as locus of both problem and responsibility for change". == Functions == The rhetoric of therapy has two functions, according to Cloud: (1) to exhort conformity with the prevailing social order and (2) to encourage identification with therapeutic values: individualism, familism, self-help, and self-absorption. It is directed towards individuals who cope with unemployment, family stress, sexual and domestic violence, child abuse, and other traumas that result from systemic hegemony such as women's oppression, racism, and capitalism. == History == The origins of therapeutic discourse, along with advertising and other consumerist cultural forms, emerged during the industrialization of the West during the 18th century. The new emphasis on the acquisition of wealth during this period produced discourse about the "democratic self-determination of individuals conceived as autonomous, self-expressive, self-reliant subjects" or, in short, the "self-made man". 
Cloud argued that the rhetoric of the self-made man was introduced to veil the growing polarity between classes of owners and laborers and that it disguised the fact that success attained through self-determination was never a real possibility for blacks, immigrants, the working class, and women. Therefore, the language of personal responsibility, adaptation, and healing served not to liberate the working class, the poor, and the socially marginalized, but to persuade members of these classes that they are individually responsible for their plight. The rhetoric of therapy served as a diversion away from attention to social ills. One prominent movement that developed from the rhetoric of therapy was the self-help movement, which encouraged its audiences to take personal responsibility for solving their problems without attention to race, class, and gender issues. The twofold objective of this particular movement—mental health and positive thinking—is demonstrated in one of the quintessential books of this period, The Power of Positive Thinking by Norman Vincent Peale. Cloud analyzed different case studies to show how the established order is maintained by redirecting blame from the hegemonic system to the individual. Cloud said that the rhetoric of family values blames the absence of the "traditional" family as the cause of social ills. The rhetoric of therapy is used to divert attention from issues caused by hegemonic systems by promoting the idea that restoration of the traditional family structure will result in a harmonious society. A second example of the rhetoric of therapy is illustrated in Cloud's discussion of the extensive media coverage of groups that supported the Gulf War. Cloud says that the media intentionally devoted significant attention to groups that supported the war in an effort to instill blame, guilt, shame, and anxiety in individuals who openly opposed the war. 
Cloud writes that this was a government effort to control the nation's perception and response to the war that many deemed unjust. In such cases, the rhetoric of therapy is used to deflate the possibility of collective resistance and to inflate receptivity to prevailing social and political structures. == See also == == Notes == == References == Cloud, Dana L. (1998). Control and consolation in American culture and politics: rhetoric of therapy. Rhetoric and society. Vol. 1. Thousand Oaks, CA: SAGE Publications. ISBN 978-0761905066. OCLC 37268476. == Further reading == Cushman, Philip (1995). Constructing the self, constructing America: a cultural history of psychotherapy. Boston: Addison-Wesley. ISBN 978-0201626438. OCLC 30976460. Epstein, William M. (2006). Psychotherapy as religion: the civil divine in America. Reno, NV: University of Nevada Press. ISBN 978-0874176780. OCLC 62889079. Guilfoyle, Michael (February 2005). "From therapeutic power to resistance? Therapy and cultural hegemony". Theory & Psychology. 15 (1): 101–124. doi:10.1177/0959354305049748. S2CID 145491324. Hazleden, Rebecca (December 2003). "Love yourself: the relationship of the self with itself in popular self-help texts". Journal of Sociology. 39 (4): 413–428. doi:10.1177/0004869003394006. S2CID 144162898. House, Richard (August 1999). "'Limits to therapy and counselling': deconstructing a professional ideology". British Journal of Guidance & Counselling. 27 (3): 377–392. doi:10.1080/03069889908256278. Jacob, Jean Daniel (December 2012). "The rhetoric of therapy in forensic psychiatric nursing". Journal of Forensic Nursing. 8 (4): 178–187. doi:10.1111/j.1939-3938.2012.01146.x. PMID 23176358. S2CID 25871538. Rose, Nikolas S. (1996). Inventing our selves: psychology, power, and personhood. Cambridge studies in the history of psychology. Cambridge, UK; New York: Cambridge University Press. ISBN 978-0521434140. OCLC 33440952. Throop, Elizabeth A. (2009). 
Psychotherapy, American culture, and social policy: immoral individualism. Culture, mind, and society. New York: Palgrave Macmillan. ISBN 978-0230609457. OCLC 226357146. Tonn, Mari Boor (2005). "Taking conversation, dialogue, and therapy public". Rhetoric & Public Affairs. 8 (3): 405–430. doi:10.1353/rap.2005.0072. S2CID 143908004.
Wikipedia/Rhetoric_of_therapy
Ethnomethodology is the study of how social order is produced in and through processes of social interaction. It generally seeks to provide an alternative to mainstream sociological approaches. It can be seen as posing a challenge to the social sciences as a whole, as it re-specifies the assumed phenomena of those sciences as being themselves social achievements. Its early investigations led to the founding of conversation analysis, which has found its own place as an accepted discipline within the academy. According to Psathas, it is possible to distinguish five major approaches within the ethnomethodological family of disciplines (see § Varieties). Ethnomethodology is a fundamentally descriptive discipline which does not engage in the explanation or evaluation of the particular social order undertaken as a topic of study. It seeks "to discover the things that persons in particular situations do, the methods they use, to create the patterned orderliness of social life". However, applications have been found within many applied disciplines, such as software design and management studies. == Definition == The term's meaning can be broken down into its three constituent parts: ethno – method – ology, for the purpose of explanation. Using an appropriate Southern California example: ethno refers to a particular socio-cultural group (for example, a particular, local community of surfers); method refers to the methods and practices this particular group employs in its everyday activities (for example, related to surfing); and ology refers to the systematic description of these methods and practices. The focus of the investigation used in our example is the social order of surfing, the ethnomethodological interest is in the "how" (the methods and practices) of the production and maintenance of this social order. 
In essence ethnomethodology attempts to create classifications of the social actions of individuals within groups through drawing on the experience of the groups directly, without imposing on the setting the opinions of the researcher with regards to social order, as is the case with other forms of sociological investigation. == Origin and scope == The approach was originally developed by Harold Garfinkel, who attributed its origin to his work investigating the conduct of jury members in 1954. His interest was in describing the common sense methods through which members of a jury produce themselves in a jury room as a jury. Thus, their methods for: establishing matters of fact; developing evidence chains; determining the reliability of witness testimony; establishing the organization of speakers in the jury room itself; and determining the guilt or innocence of defendants, etc. are all topics of interest. Such methods serve to constitute the social order of being a juror for the members of the jury, as well as for researchers and other interested parties, in that specific social setting. This interest developed out of Garfinkel's critique of Talcott Parsons' attempt to derive a general theory of society. This critique originated in his reading of Alfred Schutz, though Garfinkel ultimately revised many of Schutz's ideas. Garfinkel also drew on his study of the principles and practices of financial accounting; the classic sociological theory and methods of Durkheim and Weber; and the traditional sociological concern with the Hobbesian "problem of order". For the ethnomethodologist, participants produce the order of social settings through their shared sense making practices. Thus, there is an essential natural reflexivity between the activity of making sense of a social setting and the ongoing production of that setting; the two are in effect identical. Furthermore, these practices (or methods) are witnessably enacted, making them available for study. 
This opens up a broad and multi-faceted area of inquiry. John Heritage writes: "In its open-ended reference to [the study of] any kind of sense-making procedure, the term represents a signpost to a domain of uncharted dimensions rather than a staking out of a clearly delineated territory." == Theory and methods == Ethnomethodology has often perplexed commentators, due to its radical approach to questions of theory and method. With regard to theory, Garfinkel has consistently advocated an attitude of ethnomethodological indifference, a principled agnosticism with regard to social theory which insists that the shared understandings of members of a social setting under study take precedence over any concepts which a social theorist might bring to the analysis from outside that setting. This can be perplexing to traditional social scientists, trained in the need for social theory. A multiplicity of theoretical references by Anne Rawls, in her introduction to Ethnomethodology's Program, might be interpreted to suggest a softening of this position towards the end of Garfinkel's life. However, the position is consistent with ethnomethodology's understanding of the significance of "member's methods", and with certain lines of philosophical thought regarding the philosophy of science (Polanyi 1958; Kuhn 1970; Feyerabend 1975), and the study of the actual practices of scientific procedure. It also has a strong correspondence with the later philosophy of Ludwig Wittgenstein, especially as applied to social studies by Peter Winch. Regarding theory, Garfinkel's work references the phenomenology of Husserl, Gurwitsch, Merleau-Ponty and, most frequently, to the works of the social phenomenologist Alfred Schutz. Sociologists Talcott Parsons and Emile Durkheim are also frequently referenced. 
However, the emphasis is upon 'misreading' texts, intended to refer to the activity of "misreading a description as instructions, the work of following which exhibits the phenomenon that the text describes." Ethnomethodology's policy of ethnomethodological indifference and its unique adequacy requirement of methods explicitly reject commitment to any sociological or philosophical theory. Similarly, ethnomethodology advocates no formal methods of enquiry, insisting that the research method be dictated by the nature of the phenomenon that is being studied. Ethnomethodologists have conducted their studies in a variety of ways, and the point of these investigations is "to discover the things that persons in particular situations do, the methods they use, to create the patterned orderliness of social life". Michael Lynch has noted that: "Leading figures in the field have repeatedly emphasised that there is no obligatory set of methods [employed by ethnomethodologists], and no prohibition against using any research procedure whatsoever, if it is adequate to the particular phenomena under study". == Some leading policies, methods and definitions == The fundamental assumption of ethnomethodological studies As characterised by Anne Rawls, speaking for Garfinkel: "If one assumes, as Garfinkel does, that the meaningful, patterned, and orderly character of everyday life is something that people must work to achieve, then one must also assume that they have some methods for doing so". That is, "...members of society must have some shared methods that they use to mutually construct the meaningful orderliness of social situations." Ethnomethodology is an empirical enterprise Rawls states: "Ethnomethodology is a thoroughly empirical enterprise devoted to the discovery of social order and intelligibility [sense making] as witnessable collective achievements." 
"The keystone of the [ethnomethodological] argument is that local [social] orders exist; that these orders are witnessable in the scenes in which they are produced; and that the possibility of [their] intelligibility is based on the actual existence and detailed enactment of these orders." Ethnomethodology is not, however, conventionally empiricist. Its empirical nature is specified in the weak form of the unique adequacy requirement. The unique adequacy requirement of methods (weak form) is that the researcher should have a 'vulgar competence' in the research setting. That is, they should be able to function as an ordinary member of that setting. The unique adequacy requirement of methods (strong form) is identical to the requirement for ethnomethodological indifference. Ethnomethodological indifference This is the policy of deliberate agnosticism, or indifference, towards the dictates, prejudices, methods and practices of sociological analysis as traditionally conceived (examples: theories of "deviance", analysis of behavior as rule governed, role theory, institutional (de)formations, theories of social stratification, etc.). Dictates and prejudices which serve to pre-structure traditional social scientific investigations independently of the subject matter taken as a topic of study, or the investigatory setting being subjected to scrutiny. The policy of ethnomethodological indifference is specifically not to be conceived of as indifference to the problem of social order taken as a group (member's) concern. First time through This is the practice of attempting to describe any social activity, regardless of its routine or mundane appearance, as if it were happening for the very first time. This is in an effort to expose how the observer of the activity assembles, or constitutes, the activity for the purposes of formulating any particular description. 
The point of such an exercise is to make available and underline the complexities of sociological analysis and description, particularly the indexical and reflexive properties of the actors', or observer's, own descriptions of what is taking place in any given situation. Such an activity will also reveal the observer's inescapable reliance on the hermeneutic circle as the defining "methodology" of social understanding for both lay persons and social scientists. Breaching experiment A method for revealing, or exposing, the common work that is performed by members of particular social groups in maintaining a clearly recognisable and shared social order. For example, driving the wrong way down a busy one-way street can reveal myriads of useful insights into the patterned social practices, and moral order, of the community of road users. The point of such an exercise—a person pretending to be a stranger or boarder in their own household—is to demonstrate that gaining insight into the work involved in maintaining any given social order can often best be revealed by breaching that social order and observing the results of that breach—especially those activities related to the reassembly of that social order, and the normalisation of that social setting. Sacks' gloss A question about an aspect of the social order that recommends, as a method of answering it, that the researcher should seek out members of society who, in their daily lives, are responsible for the maintenance of that aspect of the social order. This is in opposition to the idea that such questions are best answered by a sociologist. Sacks' original question concerned objects in public places and how it was possible to see that such objects did or did not belong to somebody. He found his answer in the activities of police officers who had to decide whether cars were abandoned. Durkheim's aphorism Durkheim famously recommended: "our basic principle, that of the objectivity of social facts". 
This is usually taken to mean that we should assume the objectivity of social facts as a principle of study (thus providing the basis of sociology as a science). Garfinkel's alternative reading of Durkheim is that we should treat the objectivity of social facts as an achievement of society's members, and make the achievement process itself the focus of study. An ethnomethodological respecification of Durkheim's statement via a "misreading" (see below) of his quote appears above. There is also a textual link/rationale provided in the literature. Both links involve a leap of faith on the part of the reader; that is, we don't believe that one method for this interpretation is necessarily better than the other, or that one form of justification for such an interpretation outweighs its competitor. Accounts Accounts are the ways members signify, describe or explain the properties of a specific social situation. They can consist of both verbal and non-verbal objectifications. They are always both indexical to the situation in which they occur (see below), and, simultaneously, reflexive—they serve to constitute that situation. An account can consist of something as simple as a wink of the eye, a material object evidencing a state of affairs (documents, etc.), or something as complex as a story detailing the boundaries of the universe. Indexicality The concept of indexicality is a core concept for ethnomethodology. Garfinkel states that it was derived from the concept of indexical expressions appearing in ordinary language philosophy (1967), wherein a statement is considered to be indexical insofar as it is dependent for its sense upon the context in which it is embedded (Bar-Hillel 1954:359–379). The phenomenon is acknowledged in various forms of analytical philosophy, and sociological theory and methods, but is considered to be both limited in scope and remedied through specification and operationalisation.
In ethnomethodology, the phenomenon is universalised to all forms of language and behavior, and is deemed to be beyond remedy for the purposes of establishing a scientific description and explanation of social behavior. The consequence of the degree of contextual dependence for a "segment" of talk or behavior can range from the problem of establishing a "working consensus" regarding the description of a phrase, concept or behavior, to the end-game of social scientific description itself. Note that any serious development of the concept must eventually assume a theory of meaning as its foundation (see Gurwitsch 1985). Without such a foundational underpinning, both the traditional social scientist and the ethnomethodologist are relegated to merely telling stories around the campfire (Brooks 1974). Misreading (a text) Misreading a text, or fragments of a text, does not denote making an erroneous reading of a text in whole or in part. As Garfinkel states, it means to denote an "alternate reading" of a text or fragment of a text. As such, the original and its misreading do not "translate point to point" but, "instead, they go together". No criteria are offered for the translation of an original text and its misreading—the outcome of such translations are in Garfinkel's term: "incommensurable." The misreading of texts or fragments of texts is a standard feature of ethnomethodology's way of doing theory, especially in regards to topics in phenomenology. Reflexivity Despite the fact that many sociologists use "reflexivity" as a synonym for "self-reflection," the way the term is used in ethnomethodology is different: it is meant "to describe the acausal and non-mentalistic determination of meaningful action-in-context". See also: Reflexivity (social theory). Documentary method of interpretation The documentary method is the method of understanding utilised by everyone engaged in trying to make sense of their social world—this includes the ethnomethodologist. 
Garfinkel recovered the concept from the work of Karl Mannheim and repeatedly demonstrates the use of the method in the case studies appearing in his central text, Studies in Ethnomethodology. Mannheim defined the term as a search for an identical homologous pattern of meaning underlying a variety of totally different realisations of that meaning. Garfinkel states that the documentary method of interpretation consists of treating an actual appearance as the "document of", "as pointing to", as "standing on behalf of", a presupposed underlying pattern. These "documents" serve to constitute the underlying pattern, but are themselves interpreted on the basis of what is already known about that underlying pattern. This seeming paradox is quite familiar to hermeneuticians who understand this phenomenon as a version of the hermeneutic circle. This phenomenon is also subject to analysis from the perspective of Gestalt theory (part/whole relationships), and the phenomenological theory of perception. Social orders Theoretically speaking, the object of ethnomethodological research is social order taken as a group member's concern. Methodologically, social order is made available for description in any specific social setting as an accounting of specific social orders: the sensible coherencies of accounts that order a specific social setting for the participants relative to a specific social project to be realised in that setting. Social orders themselves are made available for both participants and researchers through phenomena of order: the actual accounting of the partial (adumbrated) appearances of these sensibly coherent social orders. These appearances (parts, adumbrates) of social orders are embodied in specific accounts, and employed in a particular social setting by the members of the particular group of individuals party to that setting. Specific social orders have the same formal properties as identified by A. 
Gurwitsch in his discussion of the constituent features of perceptual noema, and, by extension, the same relationships of meaning described in his account of Gestalt Contextures (see Gurwitsch 1964:228–279). As such, it is little wonder that Garfinkel states: "you can't do anything unless you do read his texts". Ethnomethodology's field of investigation For ethnomethodology the topic of study is the social practices of real people in real settings, and the methods by which these people produce and maintain a shared sense of social order. == Differences with sociology == Since ethnomethodology has become anathema to certain sociologists, and since those practicing it like to perceive their own efforts as constituting a radical break from prior sociologies, there has been little attempt to link ethnomethodology to these prior sociologies. However, whilst ethnomethodology is distinct from sociological methods, it does not seek to compete with them, or to provide remedies for any of their practices. The ethnomethodological approach differs as much from the sociological approach as sociology does from psychology, even though both speak of social action. This does not mean that ethnomethodology does not use traditional sociological forms as a sounding board for its own programmatic development, or to establish benchmarks for the differences between traditional sociological forms of study and ethnomethodology; it means only that ethnomethodology was not established in order to repair, criticize, undermine, or poke fun at traditional sociological forms. In essence the distinctive difference between sociological approaches and ethnomethodology is that the latter adopts a commonsense attitude towards knowledge.
In contrast to traditional sociological forms of inquiry, it is a hallmark of the ethnomethodological perspective that it does not make theoretical or methodological appeals to: outside assumptions regarding the structure of an actor or actors' characterisation of social reality; refer to the subjective states of an individual or groups of individuals; attribute conceptual projections such as, "value states", "sentiments", "goal orientations", "mini-max economic theories of behavior", etc., to any actor or group of actors; or posit a specific "normative order" as a transcendental feature of social scenes, etc. For the ethnomethodologist, the methodic realisation of social scenes takes place within the actual setting under scrutiny, and is structured by the participants in that setting through the reflexive accounting of that setting's features. The job of the ethnomethodologist is to describe the methodic character of these activities, not account for them in a way that transcends that which is made available in and through the actual accounting practices of the individuals party to those settings. The differences can therefore be summed up as follows: While traditional sociology usually offers an analysis of society which takes the facticity (factual character, objectivity) of the social order for granted, ethnomethodology is concerned with the procedures (practices, methods) by which that social order is produced, and shared. While traditional sociology usually provides descriptions of social settings which compete with the actual descriptions offered by the individuals who are party to those settings, ethnomethodology seeks to describe the procedures (practices, methods) these individuals use in their actual descriptions of those settings. == Varieties == According to George Psathas, five types of ethnomethodological study can be identified (Psathas 1995:139–155). These may be characterised as: The organisation of practical actions and practical reasoning.
Including the earliest studies, such as those in Garfinkel's seminal Studies in Ethnomethodology. The organisation of talk-in-interaction. More recently known as conversation analysis, Harvey Sacks established this approach in collaboration with his colleagues Emanuel Schegloff and Gail Jefferson. Talk-in-interaction within institutional or organisational settings. While early studies focused on talk abstracted from the context in which it was produced (usually using tape recordings of telephone conversations), this approach seeks to identify interactional structures that are specific to particular settings. The study of work. 'Work' is used here to refer to any social activity. The analytic interest is in how that work is accomplished within the setting in which it is performed. The haecceity of work. Just what makes an activity what it is? For example, what makes a test a test, a competition a competition, or a definition a definition? Further discussion of the varieties and diversity of ethnomethodological investigations can be found in Maynard & Clayman's work. == Relationship with conversation analysis == The relationship between ethnomethodology and conversation analysis has been contentious at times, given their overlapping interests, the close collaboration between their founders and the subsequent divergence of interest among many practitioners. In as much as the study of social orders is "inexorably intertwined" with the constitutive features of talk about those social orders, ethnomethodology is committed to an interest in both conversational talk, and the role this talk plays in the constitution of that order. Talk is seen as indexical and embedded in a specific social order. It is also naturally reflexive to and constitutive of that order.
Anne Rawls pointed out: "Many, in fact most, of those who have developed a serious interest in ethnomethodology have also used conversation analysis, developed by Sacks, Schegloff, and Jefferson, as one of their research tools" (p. 143). On the other hand, where the study of conversational talk is divorced from its situated context—that is, when it takes on the character of a purely technical method and "formal analytic" enterprise in its own right—it is not a form of ethnomethodology. The "danger" of misunderstanding here, as Rawls notes, is that conversation analysis can become just another formal analytic enterprise, like any other formal method which brings an analytical toolbox of preconceptions, formal definitions, and operational procedures to the situation/setting under study. When such analytical concepts are generated from within one setting and conceptually applied (generalised) to another, the (re)application represents a violation of the strong form of the unique adequacy requirement of methods. == Links with phenomenology == Even though ethnomethodology has been characterised as having a "phenomenological sensibility", and reliable commentators have acknowledged that "there is a strong influence of phenomenology on ethnomethodology" (Maynard and Kardash 2007:1484), some orthodox adherents to the discipline—those who follow the teachings of Garfinkel—do not explicitly represent it as a form of phenomenology. Garfinkel speaks of phenomenological texts and findings as being "appropriated" for the purposes of exploring topics in the study of social order. Even though ethnomethodology is not a form of phenomenology, the reading and understanding of phenomenological texts, and developing the capability of seeing phenomenologically, is essential to the praxis of ethnomethodological studies.
As Garfinkel states in regard to the work of the phenomenologist Aron Gurwitsch, especially his Field of Consciousness (1964; ethnomethodology's phenomenological urtext): "you can't do anything unless you do read his texts". == References == === Notes === === Bibliography === Bar-Hillel, Y. (1954) 'Indexical expressions', Mind 63 (251):359–379. Feyerabend, Paul (1975) Against Method, London, New Left Books. Garfinkel, H. (1967) Studies in Ethnomethodology, Prentice-Hall. Garfinkel, H. and Liberman, K. (2007) 'Introduction: the lebenswelt origins of the sciences', Human Studies, 30, 1, pp. 3–7. Gurwitsch, Aron (1964) The Field of Consciousness, Duquesne University Press. Hammersley, Martyn (2018) The Radicalism of Ethnomethodology, Manchester, Manchester University Press. Kuhn, Thomas (1970) The Structure of Scientific Revolutions, Chicago, Chicago University Press. Liberman, Ken (2014) More Studies in Ethnomethodology, SUNY Press, ISBN 978-1438446189. Lynch, Michael & Wes Sharrock (2003) Harold Garfinkel, 4 Volumes, Sage. Sage "Masters" series. Compendium of theoretical papers, ethnomethodological studies, and discussions. Lynch, Michael & Wes Sharrock (2011) Ethnomethodology, 4 Volumes, Sage. Sage "Research" series. Compendium of theoretical papers, ethnomethodological studies, and discussions. Maynard, Douglas and Kardash, Teddy (2007) 'Ethnomethodology', pp. 1483–1486 in G. Ritzer (ed.) Encyclopedia of Sociology, Boston: Blackwell. Psathas, George (1995) "Talk and Social Structure" and "Studies of Work", Human Studies 18: 139–155. Typology of ethnomethodological studies of social practices. vom Lehn, Dirk (2014) Harold Garfinkel: The Creation and Development of Ethnomethodology, Left Coast Press, ISBN 978-1-61132-979-7. == External links == Ethno/CA News A primary source for ethnomethodology and conversation analysis information and resources. AIEMCA.net The Australian Institute for Conversation Analysis and Ethnomethodology.
Wikipedia/Ethnomethodology
Stephen Edelston Toulmin (; 25 March 1922 – 4 December 2009) was a British philosopher, author, and educator. Influenced by Ludwig Wittgenstein, Toulmin devoted his works to the analysis of moral reasoning. Throughout his writings, he sought to develop practical arguments which can be used effectively in evaluating the ethics behind moral issues. His works were later found useful in the field of rhetoric for analyzing rhetorical arguments. The Toulmin model of argumentation, a diagram containing six interrelated components used for analyzing arguments, and published in his 1958 book The Uses of Argument, was considered his most influential work, particularly in the field of rhetoric and communication, and in computer science. == Biography == Stephen Toulmin was born in London, UK, on 25 March 1922 to Geoffrey Edelson Toulmin and Doris Holman Toulmin. He earned his Bachelor of Arts degree from King's College, Cambridge, in 1943, where he was a Cambridge Apostle. Soon after, Toulmin was hired by the Ministry of Aircraft Production as a junior scientific officer, first at the Malvern Radar Research and Development Station and later at the Supreme Headquarters of the Allied Expeditionary Force in Germany. At the end of World War II, he returned to England to earn a Master of Arts degree in 1947 and a PhD in philosophy from Cambridge University, subsequently publishing his dissertation as An Examination of the Place of Reason in Ethics (1950). While at Cambridge, Toulmin came into contact with the Austrian philosopher Ludwig Wittgenstein, whose examination of the relationship between the uses and the meanings of language shaped much of Toulmin's own work. After graduating from Cambridge, he was appointed University Lecturer in Philosophy of Science at Oxford University from 1949 to 1954, during which period he wrote a second book, The Philosophy of Science: an Introduction (1953). 
Soon after, he was appointed to the position of Visiting Professor of History and Philosophy of Science at Melbourne University in Australia from 1954 to 1955, after which he returned to England, and served as Professor and Head of the Department of Philosophy at the University of Leeds from 1955 to 1959. While at Leeds, he published one of his most influential books in the field of rhetoric, The Uses of Argument (1958), which investigated the flaws of traditional logic. Although it was poorly received in England and satirized as "Toulmin's anti-logic book" by Toulmin's fellow philosophers at Leeds, the book was applauded by the rhetoricians in the United States, where Toulmin served as a visiting professor at New York, Stanford, and Columbia Universities in 1959. While in the States, Wayne Brockriede and Douglas Ehninger introduced Toulmin's work to communication scholars, as they recognized that his work provided a good structural model useful for the analysis and criticism of rhetorical arguments. In 1960, Toulmin returned to London to hold the position of director of the Unit for History of Ideas of the Nuffield Foundation. In 1965, Toulmin returned to the United States, where he held positions at various universities. In 1967, Toulmin served as literary executor for close friend N.R. Hanson, helping in the posthumous publication of several volumes. While at the University of California, Santa Cruz, Toulmin published Human Understanding: The Collective Use and Evolution of Concepts (1972), which examines the causes and the processes of conceptual change. In this book, Toulmin uses a novel comparison between conceptual change and Charles Darwin's model of biological evolution to analyse the process of conceptual change as an evolutionary process. The book confronts major philosophical questions as well. 
In 1973, while a professor in the Committee on Social Thought at the University of Chicago, he collaborated with Allan Janik, a philosophy professor at La Salle University, on the book Wittgenstein's Vienna, which advanced a thesis that underscores the significance of history to human reasoning: Contrary to philosophers who believe in the absolute truth advocated in Plato's idealized formal logic, Toulmin argues that truth can be a relative quality, dependent on historical and cultural contexts (what other authors have termed "conceptual schemata"). From 1975 to 1978, he worked with the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, established by the United States Congress. During this time, he collaborated with Albert R. Jonsen to write The Abuse of Casuistry: A History of Moral Reasoning (1988), which demonstrates the procedures for resolving moral cases. One of his later works, Cosmopolis: The Hidden Agenda of Modernity (1990), written while Toulmin held the position of the Avalon Foundation Professor of the Humanities at Northwestern University, specifically criticizes the practical use and the thinning morality underlying modern science.
The NEH report of the speech further quoted Toulmin on the need to "make the technical and the humanistic strands in modern thought work together more effectively than they have in the past". On 2 March 2006 Toulmin received the Austrian Decoration for Science and Art. He was married four times, once to June Goodfield, with whom he collaborated on a series of books on the history of science. His children are Greg, of McLean, Va., Polly Macinnes of Skye, Scotland, Camilla Toulmin in the UK and Matthew Toulmin of Melbourne, Australia. On 4 December 2009 Toulmin died of heart failure at the age of 87 in Los Angeles, California. == Meta-philosophy == === Objection to absolutism and relativism === Throughout many of his works, Toulmin pointed out that absolutism (represented by theoretical or analytic arguments) has limited practical value. Absolutism is derived from Plato's idealized formal logic, which advocates universal truth; accordingly, absolutists believe that moral issues can be resolved by adhering to a standard set of moral principles, regardless of context. By contrast, Toulmin contends that many of these so-called standard principles are irrelevant to real situations encountered by human beings in daily life. To develop his contention, Toulmin introduced the concept of argument fields. In The Uses of Argument (1958), Toulmin claims that some aspects of arguments vary from field to field, and are hence called "field-dependent", while other aspects of argument are the same throughout all fields, and are hence called "field-invariant". The flaw of absolutism, Toulmin believes, lies in its unawareness of the field-dependent aspect of argument; absolutism assumes that all aspects of argument are field invariant. In Human Understanding (1972), Toulmin suggests that anthropologists have been tempted to side with relativists because they have noticed the influence of cultural variations on rational arguments.
In other words, the anthropologist or relativist overemphasizes the importance of the "field-dependent" aspect of arguments, and neglects or is unaware of the "field-invariant" elements. In order to provide solutions to the problems of absolutism and relativism, Toulmin attempts throughout his work to develop standards that are neither absolutist nor relativist for assessing the worth of ideas. In Cosmopolis (1990), he traces philosophers' "quest for certainty" back to René Descartes and Thomas Hobbes, and lauds John Dewey, Wittgenstein, Martin Heidegger, and Richard Rorty for abandoning that tradition. === Humanizing modernity === In Cosmopolis Toulmin seeks the origins of the modern emphasis on universality (philosophers' "quest for certainty"), and criticizes both modern science and philosophers for having ignored practical issues in preference for abstract and theoretical issues. The pursuit of absolutism and theoretical arguments lacking practicality, for example, is, in his view, one of the main defects of modern philosophy. Similarly, Toulmin sensed a thinning of morality in the field of sciences, which has diverted its attention from practical issues concerning ecology to the production of the atomic bomb. To solve this problem, Toulmin advocated a return to humanism consisting of four returns: a return to oral communication and discourse, a plea which has been rejected by modern philosophers, whose scholarly focus is on the printed page; a return to the particular or individual cases that deal with practical moral issues occurring in daily life (as opposed to theoretical principles that have limited practicality); a return to the local, or to concrete cultural and historical contexts; and, finally, a return to the timely, from timeless problems to things whose rational significance depends on the timeliness of our solutions.
He follows up on this critique in Return to Reason (2001), where he seeks to illuminate the ills that, in his view, universalism has caused in the social sphere, discussing, among other things, the discrepancy between mainstream ethical theory and real-life ethical quandaries. == Argumentation == === Toulmin model of argument === Arguing that absolutism lacks practical value, Toulmin aimed to develop a different type of argument, called practical arguments (also known as substantial arguments). In contrast to absolutists' theoretical arguments, Toulmin's practical argument is intended to focus on the justificatory function of argumentation, as opposed to the inferential function of theoretical arguments. Whereas theoretical arguments make inferences based on a set of principles to arrive at a claim, practical arguments first find a claim of interest, and then provide justification for it. Toulmin believed that reasoning is less an activity of inference, involving the discovery of new ideas, and more a process of testing and sifting already existing ideas—an act achievable through the process of justification. Toulmin believed that for a good argument to succeed, it needs to provide good justification for a claim. This, he believed, ensures that it stands up to criticism and earns a favourable verdict. In The Uses of Argument (1958), Toulmin proposed a layout containing six interrelated components for analyzing arguments: Claim (Conclusion) A conclusion whose merit must be established. In argumentative essays, it may be called the thesis. For example, if a person tries to convince a listener that he is a British citizen, the claim would be "I am a British citizen" (1). Ground (Fact, Evidence, Data) A fact one appeals to as a foundation for the claim. For example, the person introduced in 1 can support his claim with the supporting data "I was born in Bermuda" (2). Warrant A statement authorizing movement from the ground to the claim.
In order to move from the ground established in 2, "I was born in Bermuda", to the claim in 1, "I am a British citizen", the person must supply a warrant to bridge the gap between 1 and 2 with the statement "A man born in Bermuda will legally be a British citizen" (3). Backing Credentials designed to certify the statement expressed in the warrant; backing must be introduced when the warrant itself is not convincing enough to the readers or the listeners. For example, if the listener does not deem the warrant in 3 as credible, the speaker will supply the legal provisions: "I trained as a barrister in London, specialising in citizenship, so I know that a man born in Bermuda will legally be a British citizen". Rebuttal (Reservation) Statements recognizing the restrictions which may legitimately be applied to the claim. It is exemplified as follows: "A man born in Bermuda will legally be a British citizen, unless he has betrayed Britain and has become a spy for another country". Qualifier Words or phrases expressing the speaker's degree of force or certainty concerning the claim. Such words or phrases include "probably", "possible", "impossible", "certainly", "presumably", "as far as the evidence goes", and "necessarily". The claim "I am definitely a British citizen" has a greater degree of force than the claim "I am a British citizen, presumably". (See also: Defeasible reasoning.) The first three elements, claim, ground, and warrant, are considered as the essential components of practical arguments, while the second triad, qualifier, backing, and rebuttal, may not be needed in some arguments. When Toulmin first proposed it, this layout of argumentation was based on legal arguments and intended to be used to analyze the rationality of arguments typically found in the courtroom. Toulmin did not realize that this layout could be applicable to the field of rhetoric and communication until his works were introduced to rhetoricians by Wayne Brockriede and Douglas Ehninger. 
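The six components above can be sketched as a simple data structure. The following is a hypothetical illustration in Python (the class and field names follow Toulmin's terminology, not any established library), using the Bermuda example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """One argument laid out in Toulmin's six components."""
    claim: str                       # conclusion whose merit must be established
    ground: str                      # fact appealed to as a foundation for the claim
    warrant: str                     # statement licensing the move from ground to claim
    backing: Optional[str] = None    # credentials certifying the warrant
    rebuttal: Optional[str] = None   # conditions under which the claim does not hold
    qualifier: Optional[str] = None  # degree of force attached to the claim

    def render(self) -> str:
        """Lay the argument out as a single sentence, in Toulmin's order."""
        qualified = f", {self.qualifier}," if self.qualifier else ""
        parts = [f"{self.ground}, so{qualified} {self.claim}",
                 f"since {self.warrant}"]
        if self.backing:
            parts.append(f"on account of {self.backing}")
        if self.rebuttal:
            parts.append(f"unless {self.rebuttal}")
        return "; ".join(parts) + "."

bermuda = ToulminArgument(
    claim="Harry is a British citizen",
    ground="Harry was born in Bermuda",
    warrant="a man born in Bermuda will legally be a British citizen",
    rebuttal="he has betrayed Britain and become a spy for another country",
    qualifier="presumably",
)
print(bermuda.render())
# → Harry was born in Bermuda, so, presumably, Harry is a British citizen;
#   since a man born in Bermuda will legally be a British citizen; unless he
#   has betrayed Britain and become a spy for another country.
```

The first triad (claim, ground, warrant) is required by the constructor, while backing, rebuttal, and qualifier default to `None`, mirroring Toulmin's point that the second triad may not be needed in some arguments.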
Their Decision by Debate (1963) streamlined Toulmin's terminology and broadly introduced his model to the field of debate. Only after Toulmin published Introduction to Reasoning (1979) were the rhetorical applications of this layout mentioned in his works. One criticism of the Toulmin model is that it does not fully consider the use of questions in argumentation. The Toulmin model assumes that an argument starts with a fact or claim and ends with a conclusion, but ignores an argument's underlying questions. In the example "Harry was born in Bermuda, so Harry must be a British subject", the question "Is Harry a British subject?" is ignored; the model likewise fails to analyze why particular questions are asked and others are not. (See Issue mapping for an example of an argument-mapping method that emphasizes questions.) Toulmin's argument model has inspired research on, for example, goal structuring notation (GSN), widely used for developing safety cases, and argument maps and associated software. == Ethics == === Good reasons approach === In Reason in Ethics (1950), his doctoral dissertation, Toulmin sets out a Good Reasons approach to ethics, and criticizes what he considers to be the subjectivism and emotivism of philosophers such as A. J. Ayer because, in his view, they fail to do justice to ethical reasoning. === Revival of casuistry === By reviving casuistry (also known as case ethics), Toulmin sought to find the middle ground between the extremes of absolutism and relativism. Casuistry was practiced widely during the Middle Ages and the Renaissance to resolve moral issues. Although casuistry largely fell silent during the modern period, in The Abuse of Casuistry: A History of Moral Reasoning (1988), Toulmin collaborated with Albert R. Jonsen to demonstrate the effectiveness of casuistry in practical argumentation during the Middle Ages and the Renaissance, effectively reviving it as a permissible method of argument.
Casuistry employs absolutist principles, called "type cases" or "paradigm cases", without resorting to absolutism. It uses the standard principles (for example, sanctity of life) as referential markers in moral arguments. An individual case is then compared and contrasted with the type case. Given an individual case that is completely identical to the type case, moral judgments can be made immediately using the standard moral principles advocated in the type case. If the individual case differs from the type case, the differences will be critically assessed in order to arrive at a rational claim. Through the procedure of casuistry, Toulmin and Jonsen identified three problematic situations in moral reasoning: first, the type case fits the individual case only ambiguously; second, two type cases apply to the same individual case in conflicting ways; third, an unprecedented individual case occurs, which cannot be compared or contrasted to any type case. Through the use of casuistry, Toulmin demonstrated and reinforced his previous emphasis on the significance of comparison to moral arguments, a significance not addressed in theories of absolutism or relativism. == Philosophy of science == === Evolutionary model === In 1972, Toulmin published Human Understanding, in which he asserts that conceptual change is an evolutionary process. In this book, Toulmin attacks Thomas Kuhn's account of conceptual change in his seminal work The Structure of Scientific Revolutions (1962). Kuhn believed that conceptual change is a revolutionary process (as opposed to an evolutionary process), during which mutually exclusive paradigms compete to replace one another. Toulmin criticized the relativist elements in Kuhn's thesis, arguing that mutually exclusive paradigms provide no ground for comparison, and that Kuhn made the relativists' error of overemphasizing the "field variant" while ignoring the "field invariant" or commonality shared by all argumentation or scientific paradigms. 
In contrast to Kuhn's revolutionary model, Toulmin proposed an evolutionary model of conceptual change comparable to Darwin's model of biological evolution. Toulmin states that conceptual change involves the process of innovation and selection. Innovation accounts for the appearance of conceptual variations, while selection accounts for the survival and perpetuation of the soundest conceptions. Innovation occurs when the professionals of a particular discipline come to view things differently from their predecessors; selection subjects the innovative concepts to a process of debate and inquiry in what Toulmin considers a "forum of competitions". The soundest concepts will survive the forum of competitions as replacements or revisions of the traditional conceptions. From the absolutists' point of view, concepts are either valid or invalid regardless of contexts. From the relativists' perspective, one concept is neither better nor worse than a rival concept from a different cultural context. From Toulmin's perspective, the evaluation depends on a process of comparison, which determines whether or not one concept provides more explanatory power than its rival concepts. == Pantheon of skeptics == At a meeting of the executive council of the Committee for Skeptical Inquiry (CSI) in Denver, Colorado in April 2011, Toulmin was selected for inclusion in CSI's Pantheon of Skeptics. The Pantheon of Skeptics was created by CSI to remember the legacy of deceased fellows of CSI and their contributions to the cause of scientific skepticism. == Works == An Examination of the Place of Reason in Ethics (1950) ISBN 0-226-80843-2 The Philosophy of Science: An Introduction (1953) The Uses of Argument (1958) 2nd edition 2003: ISBN 0-521-53483-6 Metaphysical Beliefs, Three Essays (1957) with Ronald W.
Hepburn and Alasdair MacIntyre The Riviera (1961) Seventeenth century science and the arts (1961) Foresight and Understanding: An Enquiry into the Aims of Science (1961) ISBN 0-313-23345-4 The Fabric of the Heavens (The Ancestry of Science, volume 1) (1961) with June Goodfield ISBN 0-226-80848-3 The Architecture of Matter (The Ancestry of Science, volume 2) (1962) with June Goodfield ISBN 0-226-80840-8 Night Sky at Rhodes (1963) The Discovery of Time (The Ancestry of Science, volume 3) (1965) with June Goodfield ISBN 0-226-80842-4 Physical Reality (1970) Human Understanding: The Collective Use and Evolution of Concepts (1972) ISBN 0-691-01996-7 Wittgenstein's Vienna (1973) with Allan Janik On the Nature of the Physician's Understanding (1976) Knowing and Acting: An Invitation to Philosophy (1976) ISBN 0-02-421020-X An Introduction to Reasoning with Allan Janik and Richard D. Rieke (1979), 2nd ed. 1984; 3rd edition 1997: ISBN 0-02-421160-5 The Return to Cosmology: Postmodern Science and the Theology of Nature (1985) ISBN 0-520-05465-2 The Abuse of Casuistry: A History of Moral Reasoning (1988) with Albert R. Jonsen ISBN 0-520-06960-9 Cosmopolis: The Hidden Agenda of Modernity (1990) ISBN 0-226-80838-6 Social Impact of AIDS in the United States (1993) with Albert R. Jonsen Beyond theory – changing organizations through participation (1996) with Björn Gustavsen (editors) Return to Reason (2001) ISBN 0-674-01235-6 == See also == Argumentation theory Cambridge University Moral Sciences Club == References == == Further reading == Hitchcock, David; Verheij, Bart, eds. (2006). Arguing on the Toulmin Model: New Essays in Argument Analysis and Evaluation. Springer-Verlag Netherlands. doi:10.1007/978-1-4020-4938-5_3. ISBN 978-1-4020-4937-8. OCLC 82229075. == External links == Stephen Toulmin: An Intellectual Odyssey at the Wayback Machine (archived 15 February 2006) Stephen Toulmin Interview with Stephen Toulmin in JAC Obituary in The Guardian
Wikipedia/Toulmin_model_of_argument
In literary theory, a text is any object that can be "read", whether this object is a work of literature, a street sign, an arrangement of buildings on a city block, or styles of clothing. It is a set of signs that is available to be reconstructed by a reader (or observer) if sufficient interpretants are available. This set of signs is considered in terms of the informative message's content, rather than in terms of its physical form or the medium in which it is represented. Within the field of literary criticism, "text" also refers to the original information content of a particular piece of writing; that is, the "text" of a work is that primal symbolic arrangement of letters as originally composed, apart from later alterations, deterioration, commentary, translations, paratext, etc. Therefore, when literary criticism is concerned with the determination of a "text", it is concerned with the distinguishing of the original information content from whatever has been added to or subtracted from that content as it appears in a given textual document (that is, a physical representation of text). Since the history of writing predates the concept of the "text", most texts were not written with this concept in mind. Most written works fall within a narrow range of the types described by text theory. The concept of "text" becomes relevant if and when a "coherent written message is completed and needs to be referred to independently of the circumstances in which it was created." == Etymology == The word text has its origins in Quintilian's Institutio Oratoria, with the statement that "after you have chosen your words, they must be woven together into a fine and delicate fabric", with the Latin for fabric being textum. == Uses of the term for analysis of work practice == Relying on literary theory, the notion of text has been used to analyse contemporary work practices.
For example, Christensen (2016) relies on the concept of text for the analysis of work practice at a hospital. == See also == Text linguistics Textual criticism Textual scholarship Theme (narrative) == References == == Further reading ==
Wikipedia/Text_(literary_theory)
Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics. Logic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises "it's Sunday" and "if it's Sunday then I don't have to work" leading to the conclusion "I don't have to work." Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like ∧ {\displaystyle \land } (and) or → {\displaystyle \to } (if...then). Simple propositions also have parts, like "Sunday" or "work" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts. Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. 
Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalizations, such as inferring that all ravens are black based on many individual observations of black ravens. Abductive arguments are inferences to the best explanation, for example, when a doctor concludes that a patient has a certain disease which explains the symptoms they suffer. Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments. Logic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic. == Definition == The word "logic" originates from the Greek word logos, which has a variety of translations, such as reason, discourse, or language. Logic is traditionally defined as the study of the laws of thought or correct reasoning, and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. 
Arguments are the outward expression of inferences. An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. === Formal logic === Formal logic (also known as symbolic logic) is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. Formal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. For example, modus ponens is a rule of inference according to which all arguments of the form "(1) p, (2) if p then q, (3) therefore q" are valid, independent of what the terms p and q stand for. In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim "either it is raining, or it is not". These two definitions of formal logic are not identical, but they are closely related. 
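The definition of logical truth just given, truth under every interpretation of the non-logical terms, can be tested mechanically for propositional formulas by enumerating all truth assignments. A minimal Python sketch (the function name and representation are mine, chosen for illustration, not a standard library facility):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """A propositional formula is logically true (a tautology) iff it is
    true under every assignment of truth values to its variables."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# "Either it is raining, or it is not": true under both assignments.
print(is_tautology(lambda p: p or not p, 1))            # True

# A bare conditional "if p then q" is not a tautology on its own:
print(is_tautology(lambda p, q: (not p) or q, 2))       # False (p=True, q=False)
```

On this brute-force view, the two definitions of formal logic meet: an inference form is valid exactly when the corresponding conditional passes this test.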
For example, if the inference from p to q is deductively valid then the claim "if p then q" is a logical truth. Formal logic uses formal languages to express and analyze arguments. They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. The term "logic" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. Starting in the late 19th century, many new formal systems have been proposed. There are disagreements about what makes a formal system a logic. For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense. === Informal logic === When understood in a wide sense, logic encompasses both formal and informal logic. Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. In this regard, it considers problems that formal logic on its own is unable to address. Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. 
Many characterizations of informal logic have been suggested but there is no general agreement on its precise definition. The most literal approach sees the terms "formal" and "informal" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. On this view, the argument "Birds fly. Tweety is a bird. Therefore, Tweety flies." belongs to natural language and is examined by informal logic. But the formal translation "(1) ∀ x ( B i r d ( x ) → F l i e s ( x ) ) {\displaystyle \forall x(Bird(x)\to Flies(x))} ; (2) B i r d ( T w e e t y ) {\displaystyle Bird(Tweety)} ; (3) F l i e s ( T w e e t y ) {\displaystyle Flies(Tweety)} " is studied by formal logic. The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. Another characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that "all ravens I have seen so far are black" to the conclusion "all ravens are black". A further approach is to define informal logic as the study of informal fallacies. Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. 
A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy "you are either with us or against us; you are not with us; therefore, you are against us". Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments. == Basic concepts == === Premises, conclusions, and truth === ==== Premises and conclusions ==== Premises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion. For instance, the premises "Mars is red" and "Mars is a planet" support the conclusion "Mars is a red planet". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers. This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as propositions or as sentences. Propositions are the denotations of sentences and are usually seen as abstract objects. For example, the English sentence "the tree is green" is different from the German sentence "der Baum ist grün" but both express the same proposition. Propositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions. 
These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted. Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today. ==== Internal structure ==== Premises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex. A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like "and" or "if...then". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. For example, the simple proposition "Mars is red" can be formed by applying the predicate "red" to the singular term "Mars". In contrast, the complex proposition "Mars is red and Venus is white" is made up of two simple propositions connected by the propositional connective "and". Whether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts. But this relation is more complicated in the case of simple propositions and their subpropositional parts. These subpropositional parts have meanings of their own, like referring to objects or classes of objects. Whether the simple proposition they form is true depends on their relation to reality, i.e. 
what the objects they refer to are like. This topic is studied by theories of reference. ==== Logical truth ==== Some complex propositions are true independently of the substantive meanings of their parts. In classical logic, for example, the complex proposition "either Mars is red or Mars is not red" is true independent of whether its parts, like the simple proposition "Mars is red", are true or false. In such cases, the truth is called a logical truth: a proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true under all interpretations of its non-logical terms. In some modal logics, this means that the proposition is true in all possible worlds. Some theorists define logic as the study of logical truths. ==== Truth tables ==== Truth tables can be used to show how logical connectives work or how the truth values of complex propositions depend on their parts. They have a column for each input variable. Each row corresponds to one possible combination of the truth values these variables can take; for truth tables presented in the English literature, the symbols "T" and "F" or "1" and "0" are commonly used as abbreviations for the truth values "true" and "false". The first columns present all the possible truth-value combinations for the input variables. Entries in the other columns present the truth values of the corresponding expressions as determined by the input values. For example, the expression " p ∧ q {\displaystyle p\land q} " uses the logical connective ∧ {\displaystyle \land } (and). It could be used to express a sentence like "yesterday was Sunday and the weather was good". It is only true if both of its input variables, p {\displaystyle p} ("yesterday was Sunday") and q {\displaystyle q} ("the weather was good"), are true. In all other cases, the expression as a whole is false.
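The truth-table conventions described above are easy to reproduce in code. The following Python sketch (the layout and helper name are my own choices) prints the table for p ∧ q using the "T" and "F" abbreviations:

```python
from itertools import product

def truth_table(connective, num_vars):
    """Yield one row per combination of input truth values, with the
    value of the connective on that row appended as the last column."""
    for values in product([True, False], repeat=num_vars):
        yield values + (connective(*values),)

# Truth table for "and": true only on the row where both inputs are true.
for row in truth_table(lambda p, q: p and q, 2):
    print(" ".join("T" if v else "F" for v in row))
# prints: T T T / T F F / F T F / F F F
```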
Other important logical connectives are ¬ {\displaystyle \lnot } (not), ∨ {\displaystyle \lor } (or), → {\displaystyle \to } (if...then), and ↑ {\displaystyle \uparrow } (Sheffer stroke). Given the conditional proposition p → q {\displaystyle p\to q} , one can form truth tables of its converse q → p {\displaystyle q\to p} , its inverse ( ¬ p → ¬ q {\displaystyle \lnot p\to \lnot q} ), and its contrapositive ( ¬ q → ¬ p {\displaystyle \lnot q\to \lnot p} ). Truth tables can also be defined for more complex expressions that use several propositional connectives. === Arguments and inferences === Logic is commonly defined in terms of arguments or inferences as the study of their correctness. An argument is a set of premises together with a conclusion. An inference is the process of reasoning from these premises to the conclusion. But these terms are often used interchangeably in logic. Arguments are correct or incorrect depending on whether their premises support their conclusion. Premises and conclusions, on the other hand, are true or false depending on whether they are in accord with reality. In formal logic, a sound argument is an argument that is both correct and has only true premises. Sometimes a distinction is made between simple and complex arguments. A complex argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful. Arguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer non-deductive support to their conclusions. 
For such cases, the term ampliative or inductive reasoning is used. Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic. ==== Deductive ==== A deductively valid argument is one whose premises guarantee the truth of its conclusion. For instance, the argument "(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument "(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs" is also valid because the conclusion follows necessarily from the premises. According to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. they hold by logical necessity for the given propositions, independent of any other circumstances. Because of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. The modus ponens is a prominent rule of inference. It has the form "p; if p, then q; therefore q". Knowing that it has just rained ( p {\displaystyle p} ) and that after rain the streets are wet ( p → q {\displaystyle p\to q} ), one can use modus ponens to deduce that the streets are wet ( q {\displaystyle q} ). The third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false.
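Truth preservation yields a mechanical test for propositional argument forms: a form is deductively valid exactly when no truth assignment makes every premise true and the conclusion false. A small Python sketch (the helper names are illustrative), applied to modus ponens and, for contrast, to the invalid form known as denying the antecedent:

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    return not any(
        all(prem(*values) for prem in premises) and not conclusion(*values)
        for values in product([True, False], repeat=num_vars)
    )

# Modus ponens: p; if p then q; therefore q.
print(is_valid([lambda p, q: p, lambda p, q: (not p) or q],
               lambda p, q: q, 2))                       # True

# Denying the antecedent: if p then q; not p; therefore not q.
print(is_valid([lambda p, q: (not p) or q, lambda p, q: not p],
               lambda p, q: not q, 2))                   # False: p=False, q=True refutes it
```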
Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes between surface and depth information. The surface information of a sentence is the information it presents explicitly. Depth information is the totality of the information contained in the sentence, both explicitly and implicitly. According to this view, deductive inferences are uninformative on the depth level. But they can be highly informative on the surface level by making implicit information explicit. This happens, for example, in mathematical proofs. ==== Ampliative ==== Ampliative arguments are arguments whose conclusions contain additional information not found in their premises. In this regard, they are more interesting since they contain information on the depth level and the thinker may learn something genuinely new. But this feature comes with a certain cost: the premises support the conclusion in the sense that they make its truth more likely but they do not ensure its truth. This means that the conclusion of an ampliative argument may be false even though all its premises are true. This characteristic is closely related to non-monotonicity and defeasibility: it may be necessary to retract an earlier conclusion upon receiving new information or in light of new inferences drawn. Ampliative reasoning plays a central role in many arguments found in everyday discourse and the sciences. Ampliative arguments are not automatically incorrect. Instead, they just follow different standards of correctness. The support they provide for their conclusion usually comes in degrees. This means that strong ampliative arguments make their conclusion very likely while weak ones are less certain. 
As a consequence, the line between correct and incorrect arguments is blurry in some cases, such as when the premises offer weak but non-negligible support. This contrasts with deductive arguments, which are either valid or invalid with nothing in-between. The terminology used to categorize ampliative arguments is inconsistent. Some authors, like James Hawthorne, use the term "induction" to cover all forms of non-deductive arguments. But in a more narrow sense, induction is only one type of ampliative argument alongside abductive arguments. Some philosophers, like Leo Groarke, also allow conductive arguments as another type. In this narrow sense, induction is often defined as a form of statistical generalization. In this case, the premises of an inductive argument are many individual observations that all show a certain pattern. The conclusion then is a general law that this pattern always obtains. In this sense, one may infer that "all elephants are gray" based on one's past observations of the color of elephants. A closely related form of inductive inference has as its conclusion not a general law but one more specific instance, as when it is inferred that an elephant one has not seen yet is also gray. Some theorists, like Igor Douven, stipulate that inductive inferences rest only on statistical considerations. This way, they can be distinguished from abductive inference. Abductive inference may or may not take statistical observations into consideration. In either case, the premises offer support for the conclusion because the conclusion is the best explanation of why the premises are true. In this sense, abduction is also called the inference to the best explanation. For example, given the premise that there is a plate with breadcrumbs in the kitchen in the early morning, one may infer the conclusion that one's house-mate had a midnight snack and was too tired to clean the table. 
This conclusion is justified because it is the best explanation of the current state of the kitchen. For abduction, it is not sufficient that the conclusion explains the premises. For example, the conclusion that a burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation. === Fallacies === Not all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion. So the argument "it is sunny today; therefore spiders have eight legs" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct. This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them. However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people. Fallacies are usually divided into formal and informal fallacies. For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male". But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument. 
Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in "feathers are light; what is light cannot be dark; therefore feathers cannot be dark". Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise. In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it. === Definitory and strategic rules === The main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference. They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed moves may be used to win a game, for instance, by controlling the center and by defending one's king. It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning. === Formal systems === A formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms. In logic, axioms are statements that are accepted without proof. They are used to justify other statements. Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects. 
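The components of a formal system listed above, a formal language, a proof system, and a semantics, can be illustrated in miniature. The Python sketch below (the tuple representation and all names are invented for illustration) defines a syntax for conjunctions, the conjunction-introduction proof rule, and a truth-value semantics:

```python
# A miniature formal system: formulas are proposition letters or nested
# tuples of the shape ("and", A, B), built over a small alphabet.
LETTERS = {"P", "Q", "R"}

def well_formed(f):
    """Syntactic rule: a formula is a letter, or a conjunction of two
    well-formed formulas; "and" requires terms on both sides."""
    if f in LETTERS:
        return True
    return (isinstance(f, tuple) and len(f) == 3 and f[0] == "and"
            and well_formed(f[1]) and well_formed(f[2]))

def conjunction_introduction(a, b):
    """Proof rule: from premises A and B, derive ("and", A, B)."""
    return ("and", a, b)

def evaluate(f, valuation):
    """Semantics: map each formula to a truth value, given a valuation
    that assigns truth values to the letters."""
    if f in LETTERS:
        return valuation[f]
    _, a, b = f
    return evaluate(a, valuation) and evaluate(b, valuation)

print(well_formed(("and", "P", "Q")))   # True
print(well_formed(("and", "Q")))        # False: the conjunction lacks one side
print(evaluate(conjunction_introduction("P", "Q"), {"P": True, "Q": True}))  # True
```

The last line mirrors the soundness requirement in one tiny case: whenever the premises P and Q are assigned "true", the formula derived by conjunction introduction is also "true" under the semantics.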
Starting in the late 19th century, many new formal systems have been proposed. A formal language consists of an alphabet and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in well-formed formulas. For instance, the syntactic rules of propositional logic determine that " P ∧ Q {\displaystyle P\land Q} " is a well-formed formula but " ∧ Q {\displaystyle \land Q} " is not since the logical conjunction ∧ {\displaystyle \land } requires terms on both sides. A proof system is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that P ∧ Q {\displaystyle P\land Q} follows from the premises P {\displaystyle P} and Q {\displaystyle Q} . Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi. A semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula P ∧ Q {\displaystyle P\land Q} the denotation "true" whenever P {\displaystyle P} and Q {\displaystyle Q} are true. From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true. A system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. 
A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly. == Systems of logic == Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world, but modern developments in this field have led to a vast proliferation of logical systems. One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics. === Aristotelian === Aristotelian logic encompasses a great variety of topics. They include metaphysical theses about ontological categories and problems of scientific explanation. But in a more narrow sense, it is identical to term logic or syllogistics. A syllogism is a form of argument involving three propositions: two premises and a conclusion. Each proposition has three essential parts: a subject, a predicate, and a copula connecting the subject to the predicate. For example, the proposition "Socrates is wise" is made up of the subject "Socrates", the predicate "wise", and the copula "is". The subject and the predicate are the terms of the proposition. Aristotelian logic does not contain complex propositions made up of simple propositions. It differs in this aspect from propositional logic, in which any two propositions can be linked using a logical connective like "and" to form a new complex proposition. In Aristotelian logic, the subject can be universal, particular, indefinite, or singular. For example, the term "all humans" is a universal subject in the proposition "all humans are mortal". 
A similar proposition could be formed by replacing it with the particular term "some humans", the indefinite term "a human", or the singular term "Socrates". Aristotelian logic only includes predicates for simple properties of entities. But it lacks predicates corresponding to relations between entities. The predicate can be linked to the subject in two ways: either by affirming it or by denying it. For example, the proposition "Socrates is not a cat" involves the denial of the predicate "cat" to the subject "Socrates". Using combinations of subjects and predicates, a great variety of propositions and syllogisms can be formed. Syllogisms are characterized by the fact that the premises are linked to each other and to the conclusion by sharing one term in each case. Thus, these three propositions contain three terms, referred to as major term, minor term, and middle term. The central aspect of Aristotelian logic involves classifying all possible syllogisms into valid and invalid arguments according to how the propositions are formed. For example, the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal" is valid. The syllogism "all cats are mortal; Socrates is mortal; therefore Socrates is a cat", on the other hand, is invalid. === Classical === Classical logic is distinct from traditional or Aristotelian logic. It encompasses propositional logic and first-order logic. It is "classical" in the sense that it is based on basic logical intuitions shared by most logicians. These intuitions include the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth. It was originally developed to analyze mathematical arguments and was only later applied to other fields as well. Because of this focus on mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. 
Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics. ==== Propositional logic ==== Propositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions P {\displaystyle P} and Q {\displaystyle Q} as the complex formula P ∧ Q {\displaystyle P\land Q} . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component. Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition. ==== First-order logic ==== First-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like "some" and "all". For example, to express the proposition "this raven is black", one may use the predicate B {\displaystyle B} for the property "black" and the singular term r {\displaystyle r} referring to the raven to form the expression B ( r ) {\displaystyle B(r)} . 
To express that some objects are black, the existential quantifier ∃ {\displaystyle \exists } is combined with the variable x {\displaystyle x} to form the proposition ∃ x B ( x ) {\displaystyle \exists xB(x)} . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer ∃ x B ( x ) {\displaystyle \exists xB(x)} from B ( r ) {\displaystyle B(r)} . === Extended === Extended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. ==== Modal logic ==== Modal logic is an extension of classical logic. In its original form, sometimes called "alethic modal logic", it introduces two new symbols: ◊ {\displaystyle \Diamond } expresses that something is possible while ◻ {\displaystyle \Box } expresses that something is necessary. For example, if the formula B ( s ) {\displaystyle B(s)} stands for the sentence "Socrates is a banker" then the formula ◊ B ( s ) {\displaystyle \Diamond B(s)} articulates the sentence "It is possible that Socrates is a banker". To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that ◊ A {\displaystyle \Diamond A} follows from ◻ A {\displaystyle \Box A} . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that ◻ A {\displaystyle \Box A} is equivalent to ¬ ◊ ¬ A {\displaystyle \lnot \Diamond \lnot A} . Other forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. 
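The duality between necessity and possibility can be checked mechanically in a toy Kripke-style model, in which ◻ quantifies over all accessible worlds and ◊ over some. The following Python sketch is purely illustrative; the worlds, the accessibility relation, and the valuation are invented for the example:

```python
# A minimal Kripke-style sketch of the alethic modal operators.
# Worlds, accessibility, and valuation are illustrative assumptions.

worlds = {"w1", "w2", "w3"}
# Accessibility: which worlds count as "possible alternatives" to each world.
# Every world accesses at least one world (a "serial" relation), which is
# what validates the inference from necessity to possibility.
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w1", "w2"}}
# Valuation: the set of worlds where the atomic proposition A is true.
A_true = {"w2", "w3"}

def box(prop, w):
    """Box (necessity): prop holds in every world accessible from w."""
    return all(v in prop for v in access[w])

def diamond(prop, w):
    """Diamond (possibility): prop holds in some world accessible from w."""
    return any(v in prop for v in access[w])

not_A = worlds - A_true
for w in worlds:
    # If A is necessary at w, then A is possible at w.
    assert not box(A_true, w) or diamond(A_true, w)
    # Box A is equivalent to not-Diamond-not-A at every world.
    assert box(A_true, w) == (not diamond(not_A, w))
```

Running the checks confirms both principles at every world of this particular model; a counterexample search over non-serial relations would likewise show why ◻A → ◊A needs the seriality assumption.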
For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case. ==== Higher order logic ==== Higher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. Quantifiers correspond to terms like "all" or "some". In classical first-order logic, quantifiers are only applied to individuals. The formula " ∃ x ( A p p l e ( x ) ∧ S w e e t ( x ) ) {\displaystyle \exists x(Apple(x)\land Sweet(x))} " (some apples are sweet) is an example of the existential quantifier " ∃ {\displaystyle \exists } " applied to the individual variable " x {\displaystyle x} ". In higher-order logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula " ∃ Q ( Q ( M a r y ) ∧ Q ( J o h n ) ) {\displaystyle \exists Q(Q(Mary)\land Q(John))} ". In this case, the existential quantifier is applied to the predicate variable " Q {\displaystyle Q} ". The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used. === Deviant === Deviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. 
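Over a finite domain, the predicate quantification of higher-order logic can be emulated by letting the predicate variable range over every subset of the domain. The sketch below is a hedged illustration, not a higher-order proof system; the domain, the names, and the fixed predicates are all invented for the example:

```python
from itertools import chain, combinations

# Sketch: a predicate variable Q ranging over all subsets of a finite domain.
domain = {"Mary", "John", "Sue"}

def all_predicates(dom):
    """Every possible unary predicate over dom, i.e. every subset of it."""
    items = list(dom)
    return (set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

# The formula "there exists Q such that Q(Mary) and Q(John)":
# some predicate applies to both Mary and John.
exists_shared = any("Mary" in Q and "John" in Q
                    for Q in all_predicates(domain))
print(exists_shared)  # True: Q = {"Mary", "John"} is a witness
```

Note that under this "full" reading the formula is trivially satisfiable, since the whole domain always witnesses it; applications that want "Mary and John share some quality" in a substantive sense restrict Q to a designated family of natural properties.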
Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue. Intuitionistic logic is a restricted version of classical logic. It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that A {\displaystyle A} follows from ¬ ¬ A {\displaystyle \lnot \lnot A} . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form A ∨ ¬ A {\displaystyle A\lor \lnot A} is true. These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence. Multi-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate. These logics have been applied in the field of linguistics. Fuzzy logics are multivalued logics that have an infinite number of "degrees of truth", represented by a real number between 0 and 1. Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction. They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. 
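The three-valued connectives mentioned above can be sketched concretely. The following Python fragment implements Kleene's strong connectives (Łukasiewicz's system agrees on negation, conjunction, and disjunction but differs on implication), using None as the third, indeterminate value:

```python
# Kleene's strong three-valued connectives, as an illustrative sketch.
# Truth values: True, False, and None for "indeterminate".

def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False          # a definite False dominates conjunction
    if a is None or b is None:
        return None           # otherwise indeterminacy propagates
    return True

def k_or(a, b):
    # Defined from conjunction by De Morgan duality.
    return k_not(k_and(k_not(a), k_not(b)))

# Bivalence fails: the excluded middle "A or not-A" is not always true.
print(k_or(None, k_not(None)))  # None, not True
```

The last line shows the announced deviation from classical logic: when A is indeterminate, so is A ∨ ¬A, which is why the principle of bivalence is rejected rather than merely reformulated.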
Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel. === Informal === Informal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments. The pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion. As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments. A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion. This is achieved by making arguments: arguments are the moves of the game. They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion. Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules. 
These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations. The epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments. It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified. Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion. For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid. In this sense, logical normativity consists in epistemic success or rationality. The Bayesian approach is one example of an epistemic approach. Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true. On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new incoming information. Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. == Areas of research == Logic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. 
Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. === Philosophy of logic and philosophical logic === Philosophy of logic is the philosophical discipline studying the scope and nature of logic. It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. It is also concerned with how to classify logical systems and considers the ontological commitments they incur. Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. This application usually happens in the form of extended or deviant logical systems. === Metalogic === Metalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. 
They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. === Mathematical logic === The term "mathematical logic" is sometimes used as a synonym of "formal logic". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts to use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopher-logicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms. Computability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of its main goals is to understand whether it is possible to solve a given problem using an algorithm. 
For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue. === Computational logic === Computational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output. === Formal semantics of natural language === Formal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a sentence would be true or false. 
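A truth-conditional analysis of this kind can be sketched in a few lines: fix a model that assigns each predicate an extension (a set of individuals), and count a simple sentence as true exactly when the subject's referent lies in that extension. The model below is invented for illustration; real formal-semantic models are far richer:

```python
# A toy model for truth-conditional evaluation, as an illustrative sketch.
# Each predicate denotes its extension: the set of individuals it is true of.
model = {
    "walk": {"John", "Mary"},
    "sing": {"Mary"},
}

def is_true(subject, predicate):
    """A sentence "<subject> <predicate>s" is true in the model iff the
    individual named by the subject is in the predicate's extension."""
    return subject in model[predicate]

print(is_true("Mary", "sing"))  # True in this model
print(is_true("John", "sing"))  # False in this model
```

The same setup extends to complex predicates by set operations, e.g. treating a conjoined predicate as the intersection of the two extensions.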
One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase "walk and sing" depends on the meanings of the individual expressions "walk" and "sing". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expressions in relation to the elements in this model. For example, the term "walk" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language. === Epistemology of logic === The epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. In this regard, it is often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. Some theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. 
According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula A ∧ ( B ∨ C ) {\displaystyle A\land (B\lor C)} is equivalent to ( A ∧ B ) ∨ ( A ∧ C ) {\displaystyle (A\land B)\lor (A\land C)} . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic. == History == Logic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics. He was responsible for the introduction of the hypothetical syllogism and temporal modal logic. Further innovations include inductive logic as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. It has now been superseded by later work, though many of its key insights are still present in modern systems of logic. Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. It influenced Western medieval writers such as Albertus Magnus and William of Ockham. Ibn Sina wrote on the hypothetical syllogism and on the propositional calculus. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. Fakhr al-Din al-Razi was another influential Muslim logician. 
He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill. During the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic. Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars than had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions. In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics. In India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation. In Nyaya, inference is understood as a source of knowledge (pramāṇa). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object. A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources. 
Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number. The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic. Many see Gottlob Frege's Begriffsschrift as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work Principia Mathematica. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language. Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic. Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic.
Wikipedia/logic
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development. In addition to innovations, basic research provides insight into nature and builds public support for it, possibly improving conservation efforts. Insights from nature may also inspire engineering designs, as when the beak of the kingfisher influenced the design of a high-speed bullet train. == Overview == Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common. Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future. 
== By country == In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important. == Basic versus applied science == Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which described the motivation of the basic researcher as follows: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." The Foundation conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The number of basic research results that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. 
While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities. A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards. The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science. == See also == Blue skies research Hard and soft science Metascience Normative science Physics Precautionary principle Pure mathematics Pure Chemistry == References == == Further reading == Levy, David M. (2002). "Research and Development". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563
Wikipedia/Fundamental_sciences
Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth. It is associated with such characteristically human activities as philosophy, religion, science, language, mathematics, and art, and is normally considered to be a distinguishing ability possessed by humans. Reason is sometimes referred to as rationality. Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one's existing knowledge to generate new knowledge, and involves the use of one's intellect. The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments and true conclusions. Reasoning may be subdivided into forms of logical reasoning, such as deductive reasoning, inductive reasoning, and abductive reasoning. Aristotle drew a distinction between logical discursive reasoning (reason proper) and intuitive reasoning (Nicomachean Ethics VI.7), in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks. Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand the significance of sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or good and evil. 
Reasoning, as a part of executive decision making, is also closely identified with the ability to self-consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination. Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason. == Etymology and related words == In the English language and other modern European languages, "reason", and related words, represent words which have always been used to translate Latin and classical Greek terms in their philosophical sense. The original Greek term was "λόγος" logos, the root of the modern English word "logic" but also a word that could mean for example "speech" or "explanation" or an "account" (of money handled). As a philosophical term logos was translated in its non-linguistic senses in Latin as ratio. This was originally not just a translation used for philosophy, but was also commonly a translation for logos in the sense of an account of money. French raison is derived directly from Latin, and this is the direct source of the English word "reason". The earliest major philosophers to publish in English, such as Francis Bacon, Thomas Hobbes, and John Locke also routinely wrote in Latin and French, and compared their terms to Greek, treating the words "logos", "ratio", "raison" and "reason" as interchangeable. The meaning of the word "reason" in senses such as "human reason" also overlaps to a large extent with "rationality" and the adjective of "reason" in philosophical contexts is normally "rational", rather than "reasoned" or "reasonable". 
Some philosophers, Hobbes for example, also used the word ratiocination as a synonym for "reasoning". In contrast to the use of "reason" as an abstract noun, a reason is a consideration that either explains or justifies events, phenomena, or behavior. Reasons justify decisions, reasons support explanations of natural phenomena, and reasons can be given to explain the actions (conduct) of individuals. The words are connected in this way: using reason, or reasoning, means providing good reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason—that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does." == Philosophical history == The proposal that reason gives humanity a special position in nature has been argued to be a defining characteristic of western philosophy and later western science, starting with classical Greece. Philosophy can be described as a way of life based upon reason, while reason has been among the major subjects of philosophical discussion since ancient times. Reason is often said to be reflexive, or "self-correcting", and the critique of reason has been a persistent theme in philosophy. === Classical philosophy === For many classical philosophers, nature was understood teleologically, meaning that every type of thing had a definitive purpose that fit within a natural order that was itself understood to have aims. Perhaps starting with Pythagoras or Heraclitus, the cosmos was even said to have reason. Reason, by this account, is not just a characteristic that people happen to have. Reason was considered of higher stature than other characteristics of human nature, because it is something people share with nature itself, linking an apparently immortal part of the human mind with the divine order of the cosmos. 
Within the human mind or soul (psyche), reason was described by Plato as being the natural monarch which should rule over the other parts, such as spiritedness (thumos) and the passions. Aristotle, Plato's student, defined human beings as rational animals, emphasizing reason as a characteristic of human nature. He described the highest human happiness or well being (eudaimonia) as a life which is lived consistently, excellently, and completely in accordance with reason.: I  The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. But teleological accounts such as Aristotle's were highly influential for those who attempt to explain reason in a way that is consistent with monotheism and the immortality and divinity of the human soul. For example, in the neoplatonist account of Plotinus, the cosmos has one soul, which is the seat of all reason, and the souls of all people are part of this soul. Reason is for Plotinus both the provider of form to material things, and the light which brings people's souls back into line with their source. === Christian and Islamic philosophy === The classical view of reason, like many important Neoplatonic and Stoic ideas, was readily adopted by the early Church as the Church Fathers saw Greek Philosophy as an indispensable instrument given to mankind so that we may understand revelation. For example, the greatest among the early Church Fathers and Doctors of the Church such as Augustine of Hippo, Basil of Caesarea, and Gregory of Nyssa were as much Neoplatonic philosophers as they were Christian theologians, and they adopted the Neoplatonic view of human reason and its implications for our relationship to creation, to ourselves, and to God. The Neoplatonic conception of the rational aspect of the human soul was widely adopted by medieval Islamic philosophers and continues to hold significance in Iranian philosophy. 
As European intellectual life reemerged from the Dark Ages, the Christian Patristic tradition and the influence of esteemed Islamic scholars like Averroes and Avicenna contributed to the development of the Scholastic view of reason, which laid the foundation for our modern understanding of this concept. Among the Scholastics who relied on the classical concept of reason for the development of their doctrines, none were more influential than Saint Thomas Aquinas, who put this concept at the heart of his Natural Law. In this doctrine, Thomas concludes that because humans have reason and because reason is a spark of the divine, every single human life is invaluable, all humans are equal, and every human is born with an intrinsic and permanent set of basic rights. On this foundation, the idea of human rights would later be constructed by Spanish theologians at the School of Salamanca. Other Scholastics, such as Roger Bacon and Albertus Magnus, following the example of Islamic scholars such as Alhazen, emphasised reason as an intrinsic human ability to decode the created order and the structures that underlie our experienced physical reality. This interpretation of reason was instrumental to the development of the scientific method in the early Universities of the high Middle Ages. === Subject-centred reason in early modern philosophy === The early modern era was marked by a number of significant changes in the understanding of reason, starting in Europe. One of the most important of these changes involved a change in the metaphysical understanding of human beings. Scientists and philosophers began to question the teleological understanding of the world. Nature was no longer assumed to be human-like, with its own aims or reason, and human nature was no longer assumed to work according to anything other than the same "laws of nature" which affect inanimate things.
This new understanding eventually displaced the previous world view that derived from a spiritual understanding of the universe. Accordingly, in the 17th century, René Descartes explicitly rejected the traditional notion of humans as "rational animals", suggesting instead that they are nothing more than "thinking things" along the lines of other "things" in nature. Any grounds of knowledge outside that understanding were, therefore, subject to doubt. In his search for a foundation of all possible knowledge, Descartes decided to throw into doubt all knowledge—except that of the mind itself in the process of thinking: At this time I admit nothing that is not necessarily true. I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason—words of whose meanings I was previously ignorant. This eventually became known as epistemological or "subject-centred" reason, because it is based on the knowing subject, who perceives the rest of the world and itself as a set of objects to be studied, and successfully mastered, by applying the knowledge accumulated through such study. Breaking with tradition and with many thinkers after him, Descartes explicitly did not divide the incorporeal soul into parts, such as reason and intellect, describing them instead as one indivisible incorporeal entity. A contemporary of Descartes, Thomas Hobbes described reason as a broader version of "addition and subtraction" which is not limited to numbers. This understanding of reason is sometimes termed "calculative" reason. Similar to Descartes, Hobbes asserted that "No discourse whatsoever, can end in absolute knowledge of fact, past, or to come" but that "sense and memory" is absolute knowledge. In the late 17th century through the 18th century, John Locke and David Hume developed Descartes's line of thought still further.
Hume took it in an especially skeptical direction, proposing that there could be no possibility of deducing relationships of cause and effect, and therefore no knowledge is based on reasoning alone, even if it seems otherwise. Hume famously remarked that, "We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." Hume also took his definition of reason to unorthodox extremes by arguing, unlike his predecessors, that human reason is not qualitatively different from either simply conceiving individual ideas, or from judgments associating two ideas, and that "reason is nothing but a wonderful and unintelligible instinct in our souls, which carries us along a certain train of ideas, and endows them with particular qualities, according to their particular situations and relations." It followed from this that animals have reason, only much less complex than human reason. In the 18th century, Immanuel Kant attempted to show that Hume was wrong by demonstrating that a "transcendental" self, or "I", was a necessary condition of all experience. Therefore, suggested Kant, on the basis of such a self, it is in fact possible to reason both about the conditions and limits of human knowledge. And so long as these limits are respected, reason can be the vehicle of morality, justice, aesthetics, theories of knowledge (epistemology), and understanding. === Substantive and formal reason === In the formulation of Kant, who wrote some of the most influential modern treatises on the subject, the great achievement of reason (German: Vernunft) is that it is able to exercise a kind of universal law-making. Kant was able therefore to reformulate the basis of moral-practical, theoretical, and aesthetic reasoning on "universal" laws. 
Here, practical reasoning is the self-legislating or self-governing formulation of universal norms, and theoretical reasoning is the way humans posit universal laws of nature. Under practical reason, the moral autonomy or freedom of people depends on their ability, by the proper exercise of that reason, to behave according to laws that are given to them. This contrasted with earlier forms of morality, which depended on religious understanding and interpretation, or on nature, for their substance. According to Kant, in a free society each individual must be able to pursue their goals however they see fit, as long as their actions conform to principles given by reason. He formulated such a principle, called the "categorical imperative", which would justify an action only if it could be universalized: Act only according to that maxim whereby you can, at the same time, will that it should become a universal law. In contrast to Hume, Kant insisted that reason itself (German Vernunft) could be used to find solutions to metaphysical problems, especially the discovery of the foundations of morality. Kant claimed that these solutions could be found with his "transcendental logic", which unlike normal logic is not just an instrument that can be used indifferently, as it was for Aristotle, but a theoretical science in its own right and the basis of all the others. According to Jürgen Habermas, the "substantive unity" of reason has dissolved in modern times, such that it can no longer answer the question "How should I live?" Instead, the unity of reason has to be strictly formal, or "procedural". 
He thus described reason as a group of three autonomous spheres (on the model of Kant's three critiques):

Cognitive–instrumental reason: the kind of reason employed by the sciences; used to observe events, to predict and control outcomes, and to intervene in the world on the basis of its hypotheses.
Moral–practical reason: what we use to deliberate and discuss issues in the moral and political realm, according to universalizable procedures (similar to Kant's categorical imperative).
Aesthetic reason: typically found in works of art and literature, and encompassing the novel ways of seeing the world and interpreting things that those practices embody.

For Habermas, these three spheres are the domain of experts, and therefore need to be mediated with the "lifeworld" by philosophers. In drawing such a picture of reason, Habermas hoped to demonstrate that the substantive unity of reason, which in pre-modern societies had been able to answer questions about the good life, could be made up for by the unity of reason's formalizable procedures. === The critique of reason === Hamann, Herder, Kant, Hegel, Kierkegaard, Nietzsche, Heidegger, Foucault, Rorty, and many other philosophers have contributed to a debate about what reason means, or ought to mean. Some, like Kierkegaard, Nietzsche, and Rorty, are skeptical about subject-centred, universal, or instrumental reason, and even skeptical toward reason as a whole. Others, including Hegel, believe that it has obscured the importance of intersubjectivity, or "spirit" in human life, and they attempt to reconstruct a model of what reason should be. Some thinkers, e.g. Foucault, believe there are other forms of reason, neglected but essential to modern life, and to our understanding of what it means to live a life according to reason.
Others suggest that there is not just one reason or rationality, but multiple possible systems of reason or rationality which may conflict (in which case there is no super-rational system one can appeal to in order to resolve the conflict). In the last several decades, a number of proposals have been made to "re-orient" this critique of reason, or to recognize the "other voices" or "new departments" of reason: For example, in opposition to subject-centred reason, Habermas has proposed a model of communicative reason that sees it as an essentially cooperative activity, based on the fact of linguistic intersubjectivity. Nikolas Kompridis proposed a widely encompassing view of reason as "that ensemble of practices that contributes to the opening and preserving of openness" in human affairs, and a focus on reason's possibilities for social change. The philosopher Charles Taylor, influenced by the 20th century German philosopher Martin Heidegger, proposed that reason ought to include the faculty of disclosure, which is tied to the way we make sense of things in everyday life, as a new "department" of reason. 
In the essay "What is Enlightenment?", Michel Foucault proposed a critique based on Kant's distinction between "private" and "public" uses of reason:

Private reason: the reason that is used when an individual is "a cog in a machine" or when one "has a role to play in society and jobs to do: to be a soldier, to have taxes to pay, to be in charge of a parish, to be a civil servant".
Public reason: the reason used "when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity"; in these circumstances, "the use of reason must be free and public".

== Reason compared to related concepts == === Reason compared to logic === The terms logic or logical are sometimes used as if they were identical with reason or rational, or sometimes logic is seen as the most pure or the defining form of reason: "Logic is about reasoning—about going from premises to a conclusion. ... When you do logic, you try to clarify reasoning and separate good from bad reasoning." In modern economics, rational choice is assumed to equate to logically consistent choice. However, reason and logic can be thought of as distinct—although logic is one important aspect of reason. Author Douglas Hofstadter, in Gödel, Escher, Bach, characterizes the distinction in this way: Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system. Psychologists Mark H. Bickhard and Robert L. Campbell argue that "rationality cannot be simply assimilated to logicality"; they note that "human knowledge of logic and logical systems has developed" over time through reasoning, and logical systems "can't construct new logical systems more powerful than themselves", so reasoning and rationality must involve more than a system of logic.
Psychologist David Moshman, citing Bickhard and Campbell, argues for a "metacognitive conception of rationality" in which a person's development of reason "involves increasing consciousness and control of logical and other inferences". Reason is a type of thought, and logic involves the attempt to describe a system of formal rules or norms of appropriate reasoning. The oldest surviving writings to explicitly consider the rules by which reason operates are the works of the Greek philosopher Aristotle, especially Prior Analytics and Posterior Analytics. Although the Ancient Greeks had no separate word for logic as distinct from language and reason, Aristotle's newly coined word "syllogism" (syllogismos) identified logic clearly for the first time as a distinct field of study. When Aristotle referred to "the logical" (hē logikē), he was referring more broadly to rational thought. === Reason compared to cause-and-effect thinking, and symbolic thinking === As pointed out by philosophers such as Hobbes, Locke, and Hume, some animals are also clearly capable of a type of "associative thinking", even to the extent of associating causes and effects. A dog, once kicked, can learn how to recognize the warning signs and avoid being kicked in the future, but this does not mean the dog has reason in any strict sense of the word. It also does not mean that humans acting on the basis of experience or habit are using their reason. Human reason requires more than being able to associate two ideas—even if those two ideas might be described by a reasoning human as a cause and an effect—perceptions of smoke, for example, and memories of fire. For reason to be involved, the association of smoke and the fire would have to be thought through in a way that can be explained, for example as cause and effect. In the explanation of Locke, for example, reason requires the mental use of a third idea in order to make this comparison by use of syllogism.
More generally, according to Charles Sanders Peirce, reason in the strict sense requires the ability to create and manipulate a system of symbols, as well as indices and icons, the symbols having only a nominal, though habitual, connection to either (for example) smoke or fire. One example of such a system of symbols and signs is language. The connection of reason to symbolic thinking has been expressed in different ways by philosophers. Thomas Hobbes described the creation of "Markes, or Notes of remembrance" as speech. He used the word speech as an English version of the Greek word logos so that speech did not need to be communicated. When communicated, such speech becomes language, and the marks or notes of remembrance are called "Signes" by Hobbes. Going further back, although Aristotle is a source of the idea that only humans have reason (logos), he does mention that animals with imagination, for whom sense perceptions can persist, come closest to having something like reasoning and nous, and even uses the word "logos" in one place to describe the distinctions which animals can perceive in such cases.
They describe the ability to create language as part of an internal modeling of reality, and specific to humankind. Other results are consciousness, and imagination or fantasy. In contrast, modern proponents of a genetic predisposition to language itself include Noam Chomsky and Steven Pinker. If reason is symbolic thinking, and peculiarly human, then this implies that humans have a special ability to maintain a clear consciousness of the distinctness of "icons" or images and the real things they represent. Merlin Donald writes:: 172  A dog might perceive the "meaning" of a fight that was realistically play-acted by humans, but it could not reconstruct the message or distinguish the representation from its referent (a real fight).... Trained apes are able to make this distinction; young children make this distinction early—hence, their effortless distinction between play-acting an event and the event itself. In classical descriptions, an equivalent description of this mental faculty is eikasia, in the philosophy of Plato.: Ch.5  This is the ability to perceive whether a perception is an image of something else, related somehow but not the same, and therefore allows humans to perceive that a dream or memory or a reflection in a mirror is not reality as such. What Klein refers to as dianoetic eikasia is the eikasia concerned specifically with thinking and mental images, such as those mental symbols, icons, signes, and marks discussed above as definitive of reason. Explaining reason from this direction: human thinking is special in that we often understand visible things as if they were themselves images of our intelligible "objects of thought" as "foundations" (hypothēses in Ancient Greek).
This thinking (dianoia) is "...an activity which consists in making the vast and diffuse jungle of the visible world depend on a plurality of more 'precise' noēta".: 122  Both Merlin Donald and the Socratic authors such as Plato and Aristotle emphasize the importance of mimēsis, often translated as imitation or representation. Donald writes:: 169  Imitation is found especially in monkeys and apes [...but...] Mimesis is fundamentally different from imitation and mimicry in that it involves the invention of intentional representations.... Mimesis is not absolutely tied to external communication. Mimēsis is a concept, now popular again in academic discussion, that was particularly prevalent in Plato's works. In Aristotle, it is discussed mainly in the Poetics. In Michael Davis's account of the theory of man in that work: It is the distinctive feature of human action, that whenever we choose what we do, we imagine an action for ourselves as though we were inspecting it from the outside. Intentions are nothing more than imagined actions, internalizings of the external. All action is therefore imitation of action; it is poetic... Donald, like Plato (and Aristotle, especially in On Memory and Recollection), emphasizes the peculiarity in humans of voluntary initiation of a search through one's mental world. The ancient Greek anamnēsis, normally translated as "recollection", was opposed to mneme or "memory". Memory, shared with some animals, requires a consciousness not only of what happened in the past, but also that something happened in the past, which is in other words a kind of eikasia: 109  "...but nothing except man is able to recollect." Recollection is a deliberate effort to search for and recapture something once known.
Klein writes that, "To become aware of our having forgotten something means to begin recollecting.": 112  Donald calls the same thing autocueing, which he explains as follows:: 173  "Mimetic acts are reproducible on the basis of internal, self-generated cues. This permits voluntary recall of mimetic representations, without the aid of external cues—probably the earliest form of representational thinking." In his celebrated essay "On Fairy Stories", the fantasy author and philologist J.R.R. Tolkien wrote that the terms "fantasy" and "enchantment" are connected to not only "the satisfaction of certain primordial human desires" but also "the origin of language and of the mind". === Logical reasoning methods and argumentation === Logic is both a subdivision of philosophy and a variety of reasoning. The traditional main division made in philosophy is between deductive reasoning and inductive reasoning. Formal logic has been described as the science of deduction. The study of inductive reasoning is generally carried out within the field known as informal logic or critical thinking. ==== Deductive reasoning ==== Deduction is a form of reasoning in which a conclusion follows necessarily from the stated premises. A deduction is also the name for the conclusion reached by a deductive reasoning process. A classic example of deductive reasoning is the syllogism: all men are mortal; Socrates is a man; therefore, Socrates is mortal. The reasoning in this argument is deductively valid because there is no way in which both premises could be true and the conclusion be false. ==== Inductive reasoning ==== Induction is a form of inference that produces properties or relations about unobserved objects or types based on previous observations or experiences, or that formulates general statements or laws based on limited observations of recurring phenomenal patterns.
Inductive reasoning contrasts with deductive reasoning in that, even in the strongest cases of inductive reasoning, the truth of the premises does not guarantee the truth of the conclusion. Instead, the conclusion of an inductive argument follows with some degree of probability. For this reason also, the conclusion of an inductive argument contains more information than is already contained in the premises. Thus, this method of reasoning is ampliative. A classic example of inductive reasoning comes from the empiricist David Hume: having observed the sun rise every morning in the past, we infer that it will also rise tomorrow. ==== Analogical reasoning ==== Analogical reasoning is a form of inductive reasoning from a particular to a particular. It is often used in case-based reasoning, especially legal reasoning. Analogical reasoning is a weaker form of inductive reasoning from a single example, because inductive reasoning typically uses a large number of examples to reason from the particular to the general. Analogical reasoning often leads to wrong conclusions. ==== Abductive reasoning ==== Abductive reasoning, or argument to the best explanation, is a form of reasoning that does not fit in either the deductive or inductive categories, since it starts with an incomplete set of observations and proceeds with likely possible explanations. The conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. What distinguishes abduction from the other forms of reasoning is an attempt to favour one conclusion above others, by subjective judgement or by attempting to falsify alternative explanations or by demonstrating the likelihood of the favoured conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable. ==== Fallacious reasoning ==== Flawed reasoning in arguments is known as fallacious reasoning.
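Deductive validity and fallacious reasoning can both be made concrete by mechanical checking: a propositional argument form is deductively valid exactly when no assignment of truth values makes all its premises true and its conclusion false. The following is a minimal illustrative sketch (the helper names `counterexamples` and `implies` are ours, not drawn from any cited source), contrasting the valid form modus ponens with the fallacy of affirming the consequent:

```python
from itertools import product

def implies(a, b):
    """Material implication: 'a implies b' is false only when a is true and b is false."""
    return (not a) or b

def counterexamples(argument_form, variables):
    """Return every truth assignment under which all premises hold but the conclusion fails."""
    premises, conclusion = argument_form
    found = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            found.append(env)
    return found

# Modus ponens: P -> Q, P; therefore Q (a deductively valid form).
modus_ponens = ([lambda e: implies(e["P"], e["Q"]), lambda e: e["P"]],
                lambda e: e["Q"])

# Affirming the consequent: P -> Q, Q; therefore P (a formal fallacy).
affirming_consequent = ([lambda e: implies(e["P"], e["Q"]), lambda e: e["Q"]],
                        lambda e: e["P"])

print(counterexamples(modus_ponens, ["P", "Q"]))          # [] : no counterexample
print(counterexamples(affirming_consequent, ["P", "Q"]))  # [{'P': False, 'Q': True}]
```

Modus ponens admits no counterexample, while affirming the consequent fails on the assignment where P is false and Q is true; that existence of a counterexample is precisely what makes it a formal fallacy rather than a valid form.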
Bad reasoning within arguments can result from either a formal fallacy or an informal fallacy. Formal fallacies occur when there is a problem with the form, or structure, of the argument. The word "formal" refers to this link to the form of the argument. An argument that contains a formal fallacy will always be invalid. An informal fallacy is an error in reasoning that occurs due to a problem with the content, rather than the form or structure, of the argument. === Unreasonable decisions and actions === In law relating to the actions of an employer or a public body, a decision or action which falls outside the range of actions or decisions available when acting in good faith can be described as "unreasonable". Use of the term is considered in the English law cases of Short v Poole Corporation (1926), Associated Provincial Picture Houses Ltd v Wednesbury Corporation (1947) and Braganza v BP Shipping Limited (2015). == Traditional problems raised concerning reason == Philosophy is often characterized as a pursuit of rational understanding, entailing a more rigorous and dedicated application of human reasoning than commonly employed. Philosophers have long debated two fundamental questions regarding reason, essentially examining reasoning itself as a human endeavor, or philosophizing about philosophizing. The first question delves into whether we can place our trust in reason's ability to attain knowledge and truth more effectively than alternative methods. The second question explores whether a life guided by reason, a life that aims to be guided by reason, can be expected to lead to greater happiness compared to other approaches to life. === Reason versus truth, and "first principles" === Since classical antiquity a question has remained constant in philosophical debate (sometimes seen as a conflict between Platonism and Aristotelianism) concerning the role of reason in confirming truth.
People use logic, deduction, and induction to reach conclusions they think are true. Conclusions reached in this way are considered, according to Aristotle, more certain than sense perceptions on their own. On the other hand, if such reasoned conclusions are only built originally upon a foundation of sense perceptions, then our most logical conclusions can never be said to be certain because they are built upon the very same fallible perceptions they seek to better. This leads to the question of what types of first principles, or starting points of reasoning, are available for someone seeking to come to true conclusions. In Greek, "first principles" are archai, "starting points", and the faculty used to perceive them is sometimes referred to in Aristotle and Plato as nous, which was close in meaning to awareness or consciousness. Empiricism (sometimes associated with Aristotle but more correctly associated with British philosophers such as John Locke and David Hume, as well as their ancient equivalents such as Democritus) asserts that sensory impressions are the only available starting points for reasoning and attempting to attain truth. This approach always leads to the controversial conclusion that absolute knowledge is not attainable. Idealism (associated with Plato and his school) claims that there is a "higher" reality, within which certain people can directly discover truth without needing to rely only upon the senses, and that this higher reality is therefore the primary source of truth. Philosophers such as Plato, Aristotle, Al-Farabi, Avicenna, Averroes, Maimonides, Aquinas, and Hegel are sometimes said to have argued that reason must be fixed and discoverable—perhaps by dialectic, analysis, or study. In the vision of these thinkers, reason is divine or at least has divine attributes. Such an approach allowed religious philosophers such as Thomas Aquinas and Étienne Gilson to try to show that reason and revelation are compatible.
According to Hegel, "...the only thought which Philosophy brings with it to the contemplation of History, is the simple conception of reason; that reason is the Sovereign of the World; that the history of the world, therefore, presents us with a rational process." Since the 17th century rationalists, reason has often been taken to be a subjective faculty, or rather the unaided ability (pure reason) to form concepts. For Descartes, Spinoza, and Leibniz, this was associated with mathematics. Kant attempted to show that pure reason could form concepts (time and space) that are the conditions of experience. Kant made his argument in opposition to Hume, who denied that reason had any role to play in experience. === Reason versus emotion or passion === After Plato and Aristotle, western literature often treated reason as being the faculty that trained the passions and appetites. Stoic philosophy, by contrast, claimed most emotions were merely false judgements. According to the Stoics the only good is virtue, and the only evil is vice, therefore emotions that judged things other than vice to be bad (such as fear or distress), or things other than virtue to be good (such as greed) were simply false judgements and should be discarded (though positive emotions based on true judgements, such as kindness, were acceptable). After the critiques of reason in the early Enlightenment the appetites were rarely discussed or were conflated with the passions. Some Enlightenment camps took after the Stoics to say reason should oppose passion rather than order it, while others like the Romantics believed that passion displaces reason, as in the maxim "follow your heart". Reason has been seen as cold, an "enemy of mystery and ambiguity", a slave, or judge, of the passions, notably in the work of David Hume. 
More recently, Freud wrote, “It seems as though the activity of the other agencies of the mind is able only to modify the pleasure principle but not to nullify it; and it remains a question of the greatest theoretical importance, and one that has not yet been answered, when and how it is ever possible for the pleasure principle to be overcome.” Reasoning that claims the object of a desire is demanded by logic alone is called rationalization. Rousseau first proposed, in his second Discourse, that reason and political life are not natural and are possibly harmful to mankind. He asked what really can be said about what is natural to mankind. What, other than reason and civil society, "best suits his constitution"? Rousseau saw "two principles prior to reason" in human nature. First we hold an intense interest in our own well-being. Secondly we object to the suffering or death of any sentient being, especially one like ourselves. These two passions lead us to desire more than we could achieve. We become dependent upon each other, and on relationships of authority and obedience. This effectively puts the human race into slavery. Rousseau says that he almost dares to assert that nature does not destine men to be healthy. According to Richard Velkley, "Rousseau outlines certain programs of rational self-correction, most notably the political legislation of the Contrat Social and the moral education in Émile. All the same, Rousseau understands such corrections to be only ameliorations of an essentially unsatisfactory condition, that of socially and intellectually corrupted humanity." This quandary presented by Rousseau led to Kant's new way of justifying reason as freedom to create good and evil. These therefore are not to be blamed on nature or God. In various ways, German Idealism after Kant, and major later figures such as Nietzsche, Bergson, Husserl, Scheler, and Heidegger, remain preoccupied with problems coming from the metaphysical demands or urges of reason. 
Rousseau and these later writers also exerted a large influence on art and politics. Many writers (such as Nikos Kazantzakis) extol passion and disparage reason. In politics modern nationalism comes from Rousseau's argument that rationalist cosmopolitanism brings man ever further from his natural state. In Descartes' Error, Antonio Damasio presents the "Somatic Marker Hypothesis" which states that emotions guide behavior and decision-making. Damasio argues that these somatic markers (known collectively as "gut feelings") are "intuitive signals" that direct our decision making processes in a certain way that cannot be solved with rationality alone. Damasio further argues that rationality requires emotional input in order to function. === Reason versus faith or tradition === There are many religious traditions, some of which are explicitly fideist and others of which claim varying degrees of rationalism. Secular critics sometimes accuse all religious adherents of irrationality; they claim such adherents are guilty of ignoring, suppressing, or forbidding some kinds of reasoning concerning some subjects (such as religious dogmas, moral taboos, etc.). Though theologies and religions such as classical monotheism typically do not admit to being irrational, there is often a perceived conflict or tension between faith and tradition on the one hand, and reason on the other, as potentially competing sources of wisdom, law, and truth. Religious adherents sometimes respond by arguing that faith and reason can be reconciled, or have different non-overlapping domains, or that critics engage in a similar kind of irrationalism: Reconciliation Philosopher Alvin Plantinga argues that there is no real conflict between reason and classical theism because classical theism explains (among other things) why the universe is intelligible and why reason can successfully grasp it. 
Non-overlapping magisteria Evolutionary biologist Stephen Jay Gould argues that there need not be conflict between reason and religious belief because they are each authoritative in their own domain (or "magisterium"). If so, reason can work on those problems over which it has authority while other sources of knowledge or opinion can have authority on the big questions. Tu quoque Philosophers Alasdair MacIntyre and Charles Taylor argue that those critics of traditional religion who are adherents of secular liberalism are also sometimes guilty of ignoring, suppressing, and forbidding some kinds of reasoning about subjects. Similarly, philosophers of science such as Paul Feyerabend argue that scientists sometimes ignore or suppress evidence contrary to the dominant paradigm. Unification Theologian Joseph Ratzinger, later Benedict XVI, asserted that "Christianity has understood itself as the religion of the Logos, as the religion according to reason," referring to John 1 Ἐν ἀρχῇ ἦν ὁ λόγος, usually translated as "In the beginning was the Word (Logos)." Thus, he said that the Christian faith is "open to all that is truly rational", and that the rationality of Western Enlightenment "is of Christian origin". Some commentators have claimed that Western civilization can be almost defined by its serious testing of the limits of tension between "unaided" reason and faith in "revealed" truths—figuratively summarized as Athens and Jerusalem, respectively. Leo Strauss spoke of a "Greater West" that included all areas under the influence of the tension between Greek rationalism and Abrahamic revelation, including the Muslim lands. He was particularly influenced by the Muslim philosopher Al-Farabi. To consider to what extent Eastern philosophy might have partaken of these important tensions, Strauss thought it best to consider whether dharma or tao may be equivalent to Nature (physis in Greek). 
According to Strauss the beginning of philosophy involved the "discovery or invention of nature" and the "pre-philosophical equivalent of nature" was supplied by "such notions as 'custom' or 'ways'", which appear to be really universal in all times and places. The philosophical concept of nature or natures as a way of understanding archai (first principles of knowledge) brought about a peculiar tension between reasoning on the one hand, and tradition or faith on the other. == Reason in particular fields of study == === Psychology and cognitive science === Scientific research into reasoning is carried out within the fields of psychology and cognitive science. Psychologists attempt to determine whether or not people are capable of rational thought in a number of different circumstances. Assessing how well someone engages in reasoning is the project of determining the extent to which the person is rational or acts rationally. It is a key research question in the psychology of reasoning and cognitive science of reasoning. Rationality is often divided into its respective theoretical and practical counterparts. ==== Behavioral experiments on human reasoning ==== Experimental cognitive psychologists research reasoning behaviour. Such research may focus, for example, on how people perform on tests of reasoning such as intelligence or IQ tests, or on how well people's reasoning matches ideals set by logic (see, for example, the Wason test). Experiments examine how people make inferences from conditionals like if A then B and how they make inferences about alternatives like A or else B. They test whether people can make valid deductions about spatial and temporal relations like A is to the left of B or A happens after B, and about quantified assertions like all the A are B. Experiments investigate how people make inferences about factual situations, hypothetical possibilities, probabilities, and counterfactual situations. 
==== Developmental studies of children's reasoning ==== Developmental psychologists investigate the development of reasoning from birth to adulthood. Piaget's theory of cognitive development was the first complete theory of reasoning development. Subsequently, several alternative theories were proposed, including the neo-Piagetian theories of cognitive development. ==== Neuroscience of reasoning ==== The biological functioning of the brain is studied by neurophysiologists, cognitive neuroscientists, and neuropsychologists. This includes research into the structure and function of normally functioning brains, as well as of damaged or otherwise unusual brains. In addition to carrying out research into reasoning, some psychologists—for example clinical psychologists and psychotherapists—work to alter people's reasoning habits when those habits are unhelpful. === Computer science === ==== Automated reasoning ==== In artificial intelligence and computer science, scientists study and use automated reasoning for diverse applications including automated theorem proving, the formal semantics of programming languages, and formal specification in software engineering. ==== Meta-reasoning ==== Meta-reasoning is reasoning about reasoning. In computer science, a system performs meta-reasoning when reasoning about its operation. This requires a programming language capable of reflection, the ability to observe and modify its own structure and behaviour. === Evolution of reason === A species could benefit greatly from better abilities to reason about, predict, and understand the world. French social and cognitive scientists Dan Sperber and Hugo Mercier argue that, aside from these benefits, other forces could have been driving the evolution of reason. They point out that reasoning is very difficult for humans to do effectively, and that it is hard for individuals to doubt their own beliefs (confirmation bias). 
Reasoning is most effective when done as a collective—as demonstrated by the success of projects like science. They suggest that not only individual selection but also group selection was at play. Any group that managed to find ways of reasoning effectively would reap benefits for all its members, increasing their fitness. This could also help explain why humans, according to Sperber, are not optimized to reason effectively alone. Sperber and Mercier's argumentative theory of reasoning claims that reason may have more to do with winning arguments than searching for the truth. === Reason in political philosophy and ethics === Aristotle famously described reason (with language) as a part of human nature, because of which it is best for humans to live "politically", meaning in communities of about the size and type of a small city state (polis in Greek). For example: It is clear, then, that a human being is more of a political [politikon = of the polis] animal [zōion] than is any bee or than any of those animals that live in herds. For nature, as we say, makes nothing in vain, and humans are the only animals who possess reasoned speech [logos]. Voice, of course, serves to indicate what is painful and pleasant; that is why it is also found in other animals, because their nature has reached the point where they can perceive what is painful and pleasant and express these to each other. But speech [logos] serves to make plain what is advantageous and harmful and so also what is just and unjust. For it is a peculiarity of humans, in contrast to the other animals, to have perception of good and bad, just and unjust, and the like; and the community in these things makes a household or city [polis].... By nature, then, the drive for such a community exists in everyone, but the first to set one up is responsible for things of very great goodness. For as humans are the best of all animals when perfected, so they are the worst when divorced from law and right. 
The reason is that injustice is most difficult to deal with when furnished with weapons, and the weapons a human being has are meant by nature to go along with prudence and virtue, but it is only too possible to turn them to contrary uses. Consequently, if a human being lacks virtue, he is the most unholy and savage thing, and when it comes to sex and food, the worst. But justice is something political [to do with the polis], for right is the arrangement of the political community, and right is discrimination of what is just.: I.2, 1253a  If human nature is fixed in this way, we can define what type of community is always best for people. This argument has remained a central argument in all political, ethical, and moral thinking since then, and has become especially controversial since firstly Rousseau's Second Discourse, and secondly, the Theory of Evolution. Already in Aristotle there was an awareness that the polis had not always existed and had to be invented or developed by humans themselves. The household came first, and the first villages and cities were just extensions of that, with the first cities being run as if they were still families with Kings acting like fathers.: I.2, 1252b15  Friendship seems to prevail in man and woman according to nature [kata phusin]; for people are by nature [tēi phusei] pairing more than political [politikon], in as much as the household [oikos] is prior and more necessary than the polis and making children is more common [koinoteron] with the animals. In the other animals, community [koinōnia] goes no further than this, but people live together [sumoikousin] not only for the sake of making children, but also for the things for life; for from the start the functions [erga] are divided, and are different for man and woman. Thus they supply each other, putting their own into the common [eis to koinon]. 
It is for these reasons that both utility and pleasure seem to be found in this kind of friendship.: VIII.12  Rousseau in his Second Discourse finally took the shocking step of claiming that this traditional account has things in reverse: with reason, language, and rationally organized communities all having developed over a long period of time merely as a result of the fact that some habits of cooperation were found to solve certain types of problems, and that once such cooperation became more important, it forced people to develop increasingly complex cooperation—often only to defend themselves from each other. In other words, according to Rousseau, reason, language, and rational community did not arise because of any conscious decision or plan by humans or gods, nor because of any pre-existing human nature. As a result, he claimed, living together in rationally organized communities like modern humans is a development with many negative aspects compared to the original state of man as an ape. If anything is specifically human in this theory, it is the flexibility and adaptability of humans. This view of the animal origins of distinctive human characteristics later received support from Charles Darwin's Theory of Evolution. The two competing theories concerning the origins of reason are relevant to political and ethical thought because, according to the Aristotelian theory, a best way of living together exists independently of historical circumstances. According to Rousseau, we should even doubt that reason, language, and politics are a good thing, as opposed to being simply the best option given the particular course of events that led to today. Rousseau's theory, that human nature is malleable rather than fixed, is often taken to imply (for example by Karl Marx) a wider range of possible ways of living together than traditionally known. 
However, while Rousseau's initial impact encouraged bloody revolutions against traditional politics, including both the French Revolution and the Russian Revolution, his own conclusions about the best forms of community seem to have been remarkably classical, in favor of city-states such as Geneva, and rural living. == See also == Argument – Attempt to persuade or to determine the truth of a conclusion Argumentation theory – Academic field of logic and rhetoric Common sense – Sound practical judgement in everyday matters Confirmation bias – Bias confirming existing attitudes Conformity – Matching opinions and behaviors to group norms Critical thinking – Analysis of facts to form a judgment Logic and rationality – Fundamental concepts in philosophy Outline of thought – Topic tree that identifies many types of thoughts/thinking, types of reasoning, aspects of thought, related fields, and more Outline of human intelligence – Topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more Transduction (psychology) – Generalization of attributes from specific examples of a category to the whole category == References == == Further reading == Reason at PhilPapers Beer, Francis A., "Words of Reason", Political Communication 11 (Summer, 1994): 185–201. Gilovich, Thomas (1991), How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life, New York: The Free Press, ISBN 978-0029117057
Wikipedia/Method_of_reasoning
The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. Sometimes, it is called first-order propositional logic to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic. Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language. While the atomic propositions are typically represented by letters of the alphabet, there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic. The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld. 
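The two-valued, truth-functional interpretation just described can be made concrete in a few lines of code. The following Python sketch is illustrative only: the tuple encoding of formulas and all function names are assumptions, not part of any standard notation. It assigns exactly one of the two truth values to every formula and enumerates the rows of a truth table:

```python
from itertools import product

def evaluate(formula, valuation):
    """Evaluate a formula under a valuation mapping variable names to booleans.
    Formulas are strings (variables) or tuples (connective, subformulas...)."""
    if isinstance(formula, str):          # a propositional variable
        return valuation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], valuation)
    a, b = (evaluate(x, valuation) for x in args)
    if op == "and":
        return a and b
    if op == "or":
        return a or b
    if op == "implies":
        return (not a) or b               # material conditional
    if op == "iff":
        return a == b                     # biconditional
    raise ValueError(f"unknown connective: {op}")

def truth_table(formula, variables):
    """Yield (valuation, value) for every assignment of the two truth values."""
    for values in product([True, False], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        yield valuation, evaluate(formula, valuation)

# The law of excluded middle: P or not-P is true in every row.
lem = ("or", "P", ("not", "P"))
print(all(value for _, value in truth_table(lem, ["P"])))  # True
```

Under bivalence, a tautology such as the law of excluded middle evaluates to true in every row of its table, which is what the final check confirms.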
By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic. == History == Although propositional logic had been hinted at by earlier philosophers, Chrysippus is often credited with the development of a deductive system for propositional logic as his main achievement in the 3rd century BC, which was expanded by his successor Stoics. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic. Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution. Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). 
Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables". == Sentences == Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences. === Declarative sentences === Propositional logic deals with statements, which are defined as declarative sentences having truth value. Examples of statements might include: Wikipedia is a free online encyclopedia that anyone can edit. London is the capital of England. All Wikipedia editors speak at least three languages. Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article.". Such non-declarative sentences have no truth value, and are only dealt with in nonclassical logics, called erotetic and imperative logics. === Compounding sentences with connectives === In propositional logic, a statement can contain one or more other statements as parts. Compound sentences are formed from simpler sentences and express relationships among the constituent sentences. This is done by combining them with logical connectives: the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals, which are formed by using the corresponding connectives to connect propositions. 
In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional). Examples of such compound sentences might include: Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction) It is not true that all Wikipedia editors speak at least three languages. (negation) Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction) If sentences lack any logical connectives, they are called simple sentences, or atomic sentences; if they contain one or more logical connectives, they are called compound sentences, or molecular sentences. Sentential connectives are a broader category that includes logical connectives. Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence, or that inflect a single sentence to create a new sentence. A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition. Philosophers disagree about what exactly a proposition is, as well as about which sentential connectives in natural languages should be counted as logical connectives. Sentential connectives are also called sentence-functors, and logical connectives are also called truth-functors. == Arguments == An argument is defined as a pair of things, namely a set of sentences, called the premises, and a sentence, called the conclusion. The conclusion is claimed to follow from the premises, and the premises are claimed to support the conclusion. === Example argument === The following is an example of an argument within the scope of propositional logic: Premise 1: If it's raining, then it's cloudy. Premise 2: It's raining. 
Conclusion: It's cloudy. The logical form of this argument is known as modus ponens, which is a classically valid form. So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining § Formalization. === Validity and soundness === An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true. Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false. Validity is contrasted with soundness. An argument is sound if, and only if, it is valid and all its premises are true. Otherwise, it is unsound. Logic, in general, aims to precisely specify valid arguments. This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises, which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true – see § Semantics below. == Formalization == Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the § Example argument. The formal language for a propositional calculus will be fully specified in § Language, and an overview of proof systems will be given in § Proof systems. 
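The semantic definition of validity above lends itself to a brute-force check: enumerate every assignment of truth values and look for a row in which all premises are true and the conclusion is false. The following is a minimal Python sketch of that check (the helper names and the lambda encoding of premises are illustrative assumptions), applied to the modus ponens example and to a classically invalid form for contrast:

```python
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

def is_valid(argument, variables):
    """argument is (premises, conclusion); each is a function of a valuation dict.
    Valid iff no valuation makes all premises true and the conclusion false."""
    premises, conclusion = argument
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(premise(v) for premise in premises) and not conclusion(v):
            return False   # found a counterexample row
    return True

# Modus ponens: from P -> Q and P, infer Q.
modus_ponens = (
    [lambda v: implies(v["P"], v["Q"]), lambda v: v["P"]],
    lambda v: v["Q"],
)
print(is_valid(modus_ponens, ["P", "Q"]))  # True

# Affirming the consequent: from P -> Q and Q, infer P (invalid).
affirming = (
    [lambda v: implies(v["P"], v["Q"]), lambda v: v["Q"]],
    lambda v: v["P"],
)
print(is_valid(affirming, ["P", "Q"]))  # False
```

Note that this check establishes validity only, not soundness: whether the premises are actually true in a given context is outside the scope of the enumeration.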
=== Propositional variables === Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the § Example argument would then be symbolized as follows: Premise 1: P → Q {\displaystyle P\to Q} Premise 2: P {\displaystyle P} Conclusion: Q {\displaystyle Q} When P is interpreted as "It's raining" and Q as "it's cloudy", these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form. When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P {\displaystyle P} , Q {\displaystyle Q} and R {\displaystyle R} ) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself. === Gentzen notation === If we assume that the validity of modus ponens has been accepted as an axiom, then the same § Example argument can also be depicted like this: P → Q , P Q {\displaystyle {\frac {P\to Q,P}{Q}}} This method of displaying it is Gentzen's notation for natural deduction and sequent calculus. The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as P → Q , P ⊢ Q {\displaystyle P\to Q,P\vdash Q} . 
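The turnstile ⊢ stands for derivability by rule application alone, with no appeal to truth values. As a hedged sketch of that idea (the tuple representation of formulas and the function name are assumptions for illustration), the following Python fragment closes a set of premises under modus ponens as its only inference rule and then checks membership, mirroring P → Q, P ⊢ Q:

```python
def derive(premises):
    """Close a set of formulas under modus ponens: from ("implies", A, B)
    and A, add B. Purely syntactic: truth values are never consulted."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in known:
                if f[2] not in known:
                    known.add(f[2])    # detach the consequent
                    changed = True
    return known

# P -> Q, P |- Q
print("Q" in derive({("implies", "P", "Q"), "P"}))  # True
# Chaining: P -> Q, Q -> R, P |- R
print("R" in derive({("implies", "P", "Q"), ("implies", "Q", "R"), "P"}))  # True
```

Because this procedure manipulates symbols only, it illustrates why ⊢ is called syntactic consequence; whether the same conclusions hold semantically (⊧) is a separate question, answered by truth-table methods.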
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below. == Language == The language (commonly called L {\displaystyle {\mathcal {L}}} ) of a propositional calculus is defined in terms of: a set of primitive symbols, called atomic formulas, atomic sentences, atoms, placeholders, prime formulas, proposition letters, sentence letters, or variables, and a set of operator symbols, called connectives, logical connectives, logical operators, truth-functional connectives, truth-functors, or propositional connectives. A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language L {\displaystyle {\mathcal {L}}} , then, is defined either as being identical to its set of well-formed formulas, or as containing that set (together with, for instance, its set of connectives and variables). Usually the syntax of L {\displaystyle {\mathcal {L}}} is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax, while others use them without comment. 
=== Syntax === Given a set of atomic propositional variables p 1 {\displaystyle p_{1}} , p 2 {\displaystyle p_{2}} , p 3 {\displaystyle p_{3}} , ..., and a set of propositional connectives c 1 1 {\displaystyle c_{1}^{1}} , c 2 1 {\displaystyle c_{2}^{1}} , c 3 1 {\displaystyle c_{3}^{1}} , ..., c 1 2 {\displaystyle c_{1}^{2}} , c 2 2 {\displaystyle c_{2}^{2}} , c 3 2 {\displaystyle c_{3}^{2}} , ..., c 1 3 {\displaystyle c_{1}^{3}} , c 2 3 {\displaystyle c_{2}^{3}} , c 3 3 {\displaystyle c_{3}^{3}} , ..., a formula of propositional logic is defined recursively by these definitions: Definition 1: Atomic propositional variables are formulas. Definition 2: If c n m {\displaystyle c_{n}^{m}} is a propositional connective, and ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying c n m {\displaystyle c_{n}^{m}} to ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } is a formula. Definition 3: Nothing else is a formula. 
Writing the result of applying c n m {\displaystyle c_{n}^{m}} to ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } in functional notation, as c n m {\displaystyle c_{n}^{m}} (A, B, C, …), we have the following as examples of well-formed formulas: p 5 {\displaystyle p_{5}} c 3 2 ( p 2 , p 9 ) {\displaystyle c_{3}^{2}(p_{2},p_{9})} c 3 2 ( p 1 , c 2 1 ( p 3 ) ) {\displaystyle c_{3}^{2}(p_{1},c_{2}^{1}(p_{3}))} c 1 3 ( p 4 , p 6 , c 2 2 ( p 1 , p 2 ) ) {\displaystyle c_{1}^{3}(p_{4},p_{6},c_{2}^{2}(p_{1},p_{2}))} c 4 2 ( c 1 1 ( p 7 ) , c 3 1 ( p 8 ) ) {\displaystyle c_{4}^{2}(c_{1}^{1}(p_{7}),c_{3}^{1}(p_{8}))} c 2 3 ( c 1 2 ( p 3 , p 4 ) , c 2 1 ( p 5 ) , c 3 2 ( p 6 , p 7 ) ) {\displaystyle c_{2}^{3}(c_{1}^{2}(p_{3},p_{4}),c_{2}^{1}(p_{5}),c_{3}^{2}(p_{6},p_{7}))} c 3 1 ( c 1 3 ( p 2 , p 3 , c 2 2 ( p 4 , p 5 ) ) ) {\displaystyle c_{3}^{1}(c_{1}^{3}(p_{2},p_{3},c_{2}^{2}(p_{4},p_{5})))} What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition. It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language L {\displaystyle {\mathcal {L}}} are built up from the atoms as ultimate building blocks. Composite formulas (all formulas besides atoms) are called molecules, or molecular sentences. (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.) The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax. In particular, it excludes infinitely long formulas from being well-formed. It is sometimes called the Closure Clause. 
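The recursive Definitions 1–3 above can be mirrored directly in code. The following Python sketch (the tuple encoding and the function names are illustrative choices, not taken from the text) checks well-formedness and collects the atoms a formula is built from:

```python
# A sketch of Definitions 1-3 in Python. Atoms are strings such as "p1";
# a compound formula is a tuple (connective_name, arity, args), where args
# is a tuple of subformulas. This encoding is an illustrative choice.

def is_formula(expr):
    """Definition 1: atoms are formulas.  Definition 2: an m-ary connective
    applied to a sequence of m formulas is a formula.  Definition 3: the
    final 'return False' excludes everything else."""
    if isinstance(expr, str):
        return True
    if isinstance(expr, tuple) and len(expr) == 3:
        name, arity, args = expr
        return len(args) == arity and all(is_formula(a) for a in args)
    return False

def atoms(expr):
    """Collect the atomic variables from which a formula is built."""
    if isinstance(expr, str):
        return {expr}
    return set().union(*(atoms(a) for a in expr[2]))

# The article's example c_3^2(p1, c_2^1(p3)):
f = ("c_3^2", 2, ("p1", ("c_2^1", 1, ("p3",))))
```

Because the definition is recursive, `is_formula` terminates on any finite expression, which is exactly what the Closure Clause guarantees for the language itself.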
==== CF grammar in BNF ==== An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language L {\displaystyle {\mathcal {L}}} in Backus-Naur form (BNF). This is more common in computer science than in philosophy. It can be done in many ways, of which a particularly brief one, for the common set of five connectives, is this single clause: ϕ ::= a 1 , a 2 , … | ¬ ϕ | ϕ & ψ | ϕ ∨ ψ | ϕ → ψ | ϕ ↔ ψ {\displaystyle \phi ::=a_{1},a_{2},\ldots ~|~\neg \phi ~|~\phi ~\&~\psi ~|~\phi \vee \psi ~|~\phi \rightarrow \psi ~|~\phi \leftrightarrow \psi } This clause, due to its self-referential nature (since ϕ {\displaystyle \phi } is in some branches of the definition of ϕ {\displaystyle \phi } ), also acts as a recursive definition, and therefore specifies the entire language. To expand it to add modal operators, one need only add … | ◻ ϕ | ◊ ϕ {\displaystyle |~\Box \phi ~|~\Diamond \phi } to the end of the clause. === Constants and schemata === Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, or schematic letters, however, range over all formulas. (Schematic letters are also called metavariables.) It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ. However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤ {\displaystyle \top } , called "truth", which always evaluates to True, and the special symbol ⊥ {\displaystyle \bot } , called "falsity", which always evaluates to False. Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors", or equivalently, "nullary connectives". 
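The single-clause BNF above can be turned into a short recursive-descent parser. The sketch below is an illustration under simplifying assumptions: ASCII symbols (`~`, `&`, `|`, `->`, `<->`) stand in for the connectives, and binary formulas are taken to be fully parenthesized; neither choice is prescribed by the grammar itself.

```python
# A recursive-descent parser for a fully parenthesized version of the
# five-connective grammar. Formulas come back as nested tuples.

import re

# Longer operators must come first so "<->" is not read as "<" then "->".
TOKENS = re.compile(r"<->|->|[a-z]\d*|[~&|()]")

def parse(text):
    toks = TOKENS.findall(text)
    pos = 0

    def formula():
        nonlocal pos
        tok = toks[pos]
        if tok == "~":                 # negation: ~phi
            pos += 1
            return ("~", formula())
        if tok == "(":                 # binary formula: (phi op psi)
            pos += 1
            left = formula()
            op = toks[pos]; pos += 1
            right = formula()
            assert toks[pos] == ")", "expected closing parenthesis"
            pos += 1
            return (op, left, right)
        pos += 1                       # otherwise an atom such as p1
        return tok

    result = formula()
    assert pos == len(toks), "trailing input"
    return result
```

The self-referential calls to `formula()` inside `formula()` are the parsing counterpart of the grammar clause's recursion on φ.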
== Semantics == To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted. In classical logic, all propositions evaluate to exactly one of two truth-values: True or False. For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True, while "Wikipedia is a paper encyclopedia" evaluates to False. In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic. To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic". === Interpretation (case) and argument === For a given language L {\displaystyle {\mathcal {L}}} , an interpretation, valuation, Boolean valuation, or case, is an assignment of semantic values to each formula of L {\displaystyle {\mathcal {L}}} . For a formal language of classical logic, a case is defined as an assignment, to each formula of L {\displaystyle {\mathcal {L}}} , of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0). An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation. An interpretation of a formal language for classical logic is often expressed in terms of truth tables. 
Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is L {\displaystyle {\mathcal {L}}} , and whose range is its set of semantic values V = { T , F } {\displaystyle {\mathcal {V}}=\{{\mathsf {T}},{\mathsf {F}}\}} , or V = { 1 , 0 } {\displaystyle {\mathcal {V}}=\{1,0\}} . For n {\displaystyle n} distinct propositional symbols there are 2 n {\displaystyle 2^{n}} distinct possible interpretations. For any particular symbol a {\displaystyle a} , for example, there are 2 1 = 2 {\displaystyle 2^{1}=2} possible interpretations: either a {\displaystyle a} is assigned T, or a {\displaystyle a} is assigned F. And for the pair a {\displaystyle a} , b {\displaystyle b} there are 2 2 = 4 {\displaystyle 2^{2}=4} possible interpretations: either both are assigned T, or both are assigned F, or a {\displaystyle a} is assigned T and b {\displaystyle b} is assigned F, or a {\displaystyle a} is assigned F and b {\displaystyle b} is assigned T. Since L {\displaystyle {\mathcal {L}}} has ℵ 0 {\displaystyle \aleph _{0}} , that is, denumerably many propositional symbols, there are 2 ℵ 0 = c {\displaystyle 2^{\aleph _{0}}={\mathfrak {c}}} , and therefore uncountably many distinct possible interpretations of L {\displaystyle {\mathcal {L}}} as a whole. Where I {\displaystyle {\mathcal {I}}} is an interpretation and φ {\displaystyle \varphi } and ψ {\displaystyle \psi } represent formulas, the definition of an argument, given in § Arguments, may then be stated as a pair ⟨ { φ 1 , φ 2 , φ 3 , . . . , φ n } , ψ ⟩ {\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle } , where { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} is the set of premises and ψ {\displaystyle \psi } is the conclusion. The definition of an argument's validity, i.e. its property that { φ 1 , φ 2 , φ 3 , . . . 
, φ n } ⊨ ψ {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi } , can then be stated as the absence of a counterexample, where a counterexample is defined as a case I {\displaystyle {\mathcal {I}}} in which the argument's premises { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} are all true but the conclusion ψ {\displaystyle \psi } is not true. As will be seen in § Semantic truth, validity, consequence, this is the same as saying that the conclusion is a semantic consequence of the premises. === Propositional connective semantics === An interpretation assigns semantic values to atomic formulas directly. Molecular formulas are assigned a value that is a function of the values of their constituent atoms, according to the connective used; the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those. This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives. ==== Semantics via truth tables ==== Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values, the semantic definition of the connectives is usually represented as a truth table for each of the connectives, as seen below: This table covers each of the five main logical connectives: conjunction (here notated p ∧ q {\displaystyle p\land q} ), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators. For truth tables of other kinds of connectives, see the article "Truth table".
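The counting argument above (2^n interpretations for n propositional symbols) and the truth-functional semantics can be illustrated in a few lines of Python; the helper names are hypothetical, and the formula evaluated is the one examined in § Semantic proof via truth tables:

```python
# An illustrative truth-table generator. For n atoms there are 2**n
# interpretations; here n = 3, giving 8 rows. The formula evaluated is
# p -> ((q | r) -> (r -> ~p)).

from itertools import product

def interpretations(atom_names):
    """Yield every assignment of True/False to the given atoms."""
    for values in product([True, False], repeat=len(atom_names)):
        yield dict(zip(atom_names, values))

def implies(p, q):          # material implication
    return (not p) or q

def formula(i):
    return implies(i["p"], implies(i["q"] or i["r"], implies(i["r"], not i["p"])))

table = [(i["p"], i["q"], i["r"], formula(i))
         for i in interpretations(["p", "q", "r"])]
is_valid = all(row[-1] for row in table)   # False: some rows are counterexamples
```

Each row of `table` is one interpretation together with the truth value the formula takes under it, which is exactly one line of the corresponding truth table.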
==== Semantics via assignment expressions ==== Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, where I ( φ ) {\displaystyle {\mathcal {I}}(\varphi )} is the interpretation of φ {\displaystyle \varphi } , the five connectives are defined as: I ( ¬ P ) = T {\displaystyle {\mathcal {I}}(\neg P)={\mathsf {T}}} if, and only if, I ( P ) = F {\displaystyle {\mathcal {I}}(P)={\mathsf {F}}} I ( P ∧ Q ) = T {\displaystyle {\mathcal {I}}(P\land Q)={\mathsf {T}}} if, and only if, I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} and I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P ∨ Q ) = T {\displaystyle {\mathcal {I}}(P\lor Q)={\mathsf {T}}} if, and only if, I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} or I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P → Q ) = T {\displaystyle {\mathcal {I}}(P\to Q)={\mathsf {T}}} if, and only if, it is true that, if I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} , then I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P ↔ Q ) = T {\displaystyle {\mathcal {I}}(P\leftrightarrow Q)={\mathsf {T}}} if, and only if, it is true that I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} if, and only if, I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} Instead of I ( φ ) {\displaystyle {\mathcal {I}}(\varphi )} , the interpretation of φ {\displaystyle \varphi } may be written out as | φ | {\displaystyle |\varphi |} , or, for definitions such as the above, I ( φ ) = T {\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}} may be written simply as the English sentence " φ {\displaystyle \varphi } is given the value T {\displaystyle {\mathsf {T}}} ". 
Yet other authors may prefer to speak of a Tarskian model M {\displaystyle {\mathfrak {M}}} for the language, so that instead they'll use the notation M ⊨ φ {\displaystyle {\mathfrak {M}}\models \varphi } , which is equivalent to saying I ( φ ) = T {\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}} , where I {\displaystyle {\mathcal {I}}} is the interpretation function for M {\displaystyle {\mathfrak {M}}} . ==== Connective definition methods ==== Some of these connectives may be defined in terms of others: for instance, implication, p → q {\displaystyle p\rightarrow q} , may be defined in terms of disjunction and negation, as ¬ p ∨ q {\displaystyle \neg p\lor q} ; and disjunction may be defined in terms of negation and conjunction, as ¬ ( ¬ p ∧ ¬ q ) {\displaystyle \neg (\neg p\land \neg q)} . In fact, a truth-functionally complete system, in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke), as Jean Nicod did. A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property. Some authors, namely Howson and Cunningham, distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object language L {\displaystyle {\mathcal {L}}} . Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation.
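The claim that a single "not and" connective suffices can be checked exhaustively, since each defining identity only has to be verified over at most four truth-value pairs. A small Python sketch using the standard NAND definitions:

```python
# Defining negation, conjunction, disjunction, and implication from the
# Sheffer stroke (NAND) alone, and verifying each definition exhaustively.

from itertools import product

def nand(p, q):
    return not (p and q)

def not_(p):        return nand(p, p)
def and_(p, q):     return nand(nand(p, q), nand(p, q))
def or_(p, q):      return nand(nand(p, p), nand(q, q))
def implies_(p, q): return nand(p, nand(q, q))   # p -> q  is  p NAND (not q)

# Exhaustive checks over every assignment of truth values:
assert all(not_(p) == (not p) for p in (True, False))
assert all(and_(p, q) == (p and q) for p, q in product((True, False), repeat=2))
assert all(or_(p, q) == (p or q) for p, q in product((True, False), repeat=2))
assert all(implies_(p, q) == ((not p) or q)
           for p, q in product((True, False), repeat=2))
```

The same style of four-row check works for the dual claim about NOR, with `nor(p, q) = not (p or q)` in place of `nand`.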
Other authors often do not make this distinction, and may use the word "equivalence", and/or the symbol ⇔, to denote their object language's biconditional connective. === Semantic truth, validity, consequence === Given φ {\displaystyle \varphi } and ψ {\displaystyle \psi } as formulas (or sentences) of a language L {\displaystyle {\mathcal {L}}} , and I {\displaystyle {\mathcal {I}}} as an interpretation (or case) of L {\displaystyle {\mathcal {L}}} , then the following definitions apply: Truth-in-a-case: A sentence φ {\displaystyle \varphi } of L {\displaystyle {\mathcal {L}}} is true under an interpretation I {\displaystyle {\mathcal {I}}} if I {\displaystyle {\mathcal {I}}} assigns the truth value T to φ {\displaystyle \varphi } . If φ {\displaystyle \varphi } is true under I {\displaystyle {\mathcal {I}}} , then I {\displaystyle {\mathcal {I}}} is called a model of φ {\displaystyle \varphi } . Falsity-in-a-case: φ {\displaystyle \varphi } is false under an interpretation I {\displaystyle {\mathcal {I}}} if, and only if, ¬ φ {\displaystyle \neg \varphi } is true under I {\displaystyle {\mathcal {I}}} . This is the "truth of negation" definition of falsity-in-a-case. Falsity-in-a-case may also be defined by the "complement" definition: φ {\displaystyle \varphi } is false under an interpretation I {\displaystyle {\mathcal {I}}} if, and only if, φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} . In classical logic, these definitions are equivalent, but in nonclassical logics, they are not. Semantic consequence: A sentence ψ {\displaystyle \psi } of L {\displaystyle {\mathcal {L}}} is a semantic consequence ( φ ⊨ ψ {\displaystyle \varphi \models \psi } ) of a sentence φ {\displaystyle \varphi } if there is no interpretation under which φ {\displaystyle \varphi } is true and ψ {\displaystyle \psi } is not true. 
Valid formula (tautology): A sentence φ {\displaystyle \varphi } of L {\displaystyle {\mathcal {L}}} is logically valid ( ⊨ φ {\displaystyle \models \varphi } ), or a tautology, if it is true under every interpretation, or true in every case. Consistent sentence: A sentence of L {\displaystyle {\mathcal {L}}} is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. An inconsistent formula is also called self-contradictory, and said to be a self-contradiction, or simply a contradiction, although this latter name is sometimes reserved specifically for statements of the form ( p ∧ ¬ p ) {\displaystyle (p\land \neg p)} . For interpretations (cases) I {\displaystyle {\mathcal {I}}} of L {\displaystyle {\mathcal {L}}} , these definitions are sometimes given: Complete case: A case I {\displaystyle {\mathcal {I}}} is complete if, and only if, either φ {\displaystyle \varphi } is true-in- I {\displaystyle {\mathcal {I}}} or ¬ φ {\displaystyle \neg \varphi } is true-in- I {\displaystyle {\mathcal {I}}} , for any φ {\displaystyle \varphi } in L {\displaystyle {\mathcal {L}}} . Consistent case: A case I {\displaystyle {\mathcal {I}}} is consistent if, and only if, there is no φ {\displaystyle \varphi } in L {\displaystyle {\mathcal {L}}} such that both φ {\displaystyle \varphi } and ¬ φ {\displaystyle \neg \varphi } are true-in- I {\displaystyle {\mathcal {I}}} . For classical logic, which assumes that all cases are complete and consistent, the following theorems apply: For any given interpretation, a given formula is either true or false under it. No formula is both true and false under the same interpretation.
φ {\displaystyle \varphi } is true under I {\displaystyle {\mathcal {I}}} if, and only if, ¬ φ {\displaystyle \neg \varphi } is false under I {\displaystyle {\mathcal {I}}} ; ¬ φ {\displaystyle \neg \varphi } is true under I {\displaystyle {\mathcal {I}}} if, and only if, φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} . If φ {\displaystyle \varphi } and ( φ → ψ ) {\displaystyle (\varphi \to \psi )} are both true under I {\displaystyle {\mathcal {I}}} , then ψ {\displaystyle \psi } is true under I {\displaystyle {\mathcal {I}}} . If ⊨ φ {\displaystyle \models \varphi } and ⊨ ( φ → ψ ) {\displaystyle \models (\varphi \to \psi )} , then ⊨ ψ {\displaystyle \models \psi } . ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is true under I {\displaystyle {\mathcal {I}}} if, and only if, either φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} , or ψ {\displaystyle \psi } is true under I {\displaystyle {\mathcal {I}}} . φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is logically valid, that is, φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ⊨ ( φ → ψ ) {\displaystyle \models (\varphi \to \psi )} . == Proof systems == Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems, according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence ( φ ⊨ ψ {\displaystyle \varphi \models \psi } ), whereas syntactic proof systems rely on syntactic consequence ( φ ⊢ ψ {\displaystyle \varphi \vdash \psi } ). Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system. 
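The listed theorem that φ ⊨ ψ holds exactly when ⊨ (φ → ψ) can be checked mechanically for formulas with finitely many atoms, since semantic consequence only quantifies over the finitely many cases. A Python sketch for the sample pair φ = p ∧ (p → q), ψ = q (the formulas and names are illustrative choices):

```python
# Checking, by exhaustion over all four cases on {p, q}, that
# p & (p -> q) |= q, and that this coincides with the validity of
# the conditional (p & (p -> q)) -> q.

from itertools import product

def implies(p, q):
    return (not p) or q

cases = [dict(zip("pq", v)) for v in product([True, False], repeat=2)]

phi = lambda i: i["p"] and implies(i["p"], i["q"])   # p and (p -> q)
psi = lambda i: i["q"]                               # q

# phi |= psi: no case makes phi true while psi is untrue.
entails = all(psi(i) for i in cases if phi(i))
# |= (phi -> psi): the conditional is true in every case.
valid = all(implies(phi(i), psi(i)) for i in cases)

assert entails == valid        # the two notions agree, as the theorem states
```

The same exhaustive check also illustrates the preceding theorem about modus ponens: in every case where both φ and φ → ψ are true, ψ is true as well.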
This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one. === Semantic proof systems === Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ {\displaystyle \varphi \models \psi } , which indicates that if φ {\displaystyle \varphi } is true, then ψ {\displaystyle \psi } must also be true in every possible interpretation. ==== Truth tables ==== A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario. By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory. See § Semantic proof via truth tables. ==== Semantic tableaux ==== A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition. It constructs a tree where each branch represents a possible interpretation of the propositions involved. If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology. See § Semantic proof via tableaux. === Syntactic proof systems === Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ {\displaystyle \varphi \vdash \psi } , signifies that ψ {\displaystyle \psi } can be derived from φ {\displaystyle \varphi } using the rules of the formal system. ==== Axiomatic systems ==== An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived. In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms. See § Syntactic proof via axioms. 
==== Natural deduction ==== Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning. Each rule reflects a particular logical connective and shows how it can be introduced or eliminated. See § Syntactic proof via natural deduction. ==== Sequent calculus ==== The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas. Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic. == Semantic proof via truth tables == Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula. If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation). Further, if (and only if) ¬ φ {\displaystyle \neg \varphi } is valid, then φ {\displaystyle \varphi } is inconsistent. For instance, this table shows that "p → (q ∨ r → (r → ¬p))" is not valid: The computation of the last column of the third line may be displayed as follows: Further, using the theorem that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is valid, we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: { φ 1 , φ 2 , φ 3 , . . . 
, φ n } ⊨ ψ {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi } if, and only if, we can produce a truth table that comes out all true for the formula ( ( ⋀ i = 1 n φ i ) → ψ ) {\displaystyle \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)} (that is, if ⊨ ( ( ⋀ i = 1 n φ i ) → ψ ) {\displaystyle \models \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)} ). == Semantic proof via tableaux == Since truth tables have 2 n {\displaystyle 2^{n}} lines for n variables, they can be tiresomely long for large values of n. Analytic tableaux are a more efficient, but nevertheless mechanical, semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false." Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below. These rules use "signed formulas", where a signed formula is an expression T X {\displaystyle TX} or F X {\displaystyle FX} , where X {\displaystyle X} is an (unsigned) formula of the language L {\displaystyle {\mathcal {L}}} . (Informally, T X {\displaystyle TX} is read " X {\displaystyle X} is true", and F X {\displaystyle FX} is read " X {\displaystyle X} is false".) Their formal semantic definition is that "under any interpretation, a signed formula T X {\displaystyle TX} is called true if X {\displaystyle X} is true, and false if X {\displaystyle X} is false, whereas a signed formula F X {\displaystyle FX} is called false if X {\displaystyle X} is true, and true if X {\displaystyle X} is false."
{\displaystyle {\begin{aligned}&1)\quad {\frac {T\sim X}{FX}}\quad &&{\frac {F\sim X}{TX}}\\{\phantom {spacer}}\\&2)\quad {\frac {T(X\land Y)}{\begin{matrix}TX\\TY\end{matrix}}}\quad &&{\frac {F(X\land Y)}{FX|FY}}\\{\phantom {spacer}}\\&3)\quad {\frac {T(X\lor Y)}{TX|TY}}\quad &&{\frac {F(X\lor Y)}{\begin{matrix}FX\\FY\end{matrix}}}\\{\phantom {spacer}}\\&4)\quad {\frac {T(X\supset Y)}{FX|TY}}\quad &&{\frac {F(X\supset Y)}{\begin{matrix}TX\\FY\end{matrix}}}\end{aligned}}} In this notation, rule 2 means that T ( X ∧ Y ) {\displaystyle T(X\land Y)} yields both T X , T Y {\displaystyle TX,TY} , whereas F ( X ∧ Y ) {\displaystyle F(X\land Y)} branches into F X , F Y {\displaystyle FX,FY} . The notation is to be understood analogously for rules 3 and 4. Often, in tableaux for classical logic, the signed formula notation is simplified so that T φ {\displaystyle T\varphi } is written simply as φ {\displaystyle \varphi } , and F φ {\displaystyle F\varphi } as ¬ φ {\displaystyle \neg \varphi } , which accounts for naming rule 1 the "Rule of Double Negation". One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both T X {\displaystyle TX} and F X {\displaystyle FX} for some X {\displaystyle X} , which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false.
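The four schematic rules can be animated as a small Python proof procedure; the tuple encoding of formulas and the function names here are illustrative choices, not a standard API. A branch closes when it contains T X and F X for the same X, and a formula is proved tautologous when the tableau started from its F-signed form closes:

```python
# A compact signed-tableau prover. Formulas are nested tuples: an atom is a
# string; compounds are ("~", X), ("&", X, Y), ("|", X, Y), ("->", X, Y).
# A signed formula is (sign, formula), with sign True for "T", False for "F".

def closes(branch):
    """Expand a branch (list of signed formulas) by the tableau rules;
    return True iff every resulting branch contains a contradiction."""
    for idx, (sign, fm) in enumerate(branch):
        if isinstance(fm, str):
            continue                       # atoms cannot be expanded
        rest = branch[:idx] + branch[idx + 1:]
        op = fm[0]
        if op == "~":                      # rule 1: T~X / FX and F~X / TX
            return closes(rest + [(not sign, fm[1])])
        x, y = fm[1], fm[2]
        if op == "&":
            if sign:                       # rule 2: T(X&Y) yields TX, TY
                return closes(rest + [(True, x), (True, y)])
            #          F(X&Y) branches into FX | FY: both branches must close
            return closes(rest + [(False, x)]) and closes(rest + [(False, y)])
        if op == "|":
            if sign:                       # rule 3: T(XvY) branches TX | TY
                return closes(rest + [(True, x)]) and closes(rest + [(True, y)])
            return closes(rest + [(False, x), (False, y)])
        if op == "->":
            if sign:                       # rule 4: T(X>Y) branches FX | TY
                return closes(rest + [(False, x)]) and closes(rest + [(True, y)])
            return closes(rest + [(True, x), (False, y)])
    # Fully expanded branch: closed iff some atom occurs with both signs.
    atoms_t = {fm for s, fm in branch if s}
    atoms_f = {fm for s, fm in branch if not s}
    return bool(atoms_t & atoms_f)

def tautology(formula):
    """A formula is a tautology iff the tableau for its F-signed form closes."""
    return closes([(False, formula)])
```

Each rule application removes one connective from the branch in total, so the recursion terminates; running `closes` on T-signed premises together with an F-signed conclusion likewise checks an argument for validity, as described below for tableaux over arguments.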
Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close. To construct a tableau for an argument ⟨ { φ 1 , φ 2 , φ 3 , . . . , φ n } , ψ ⟩ {\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle } , one first writes out the set of premise formulas, { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} , with one formula on each line, signed with T {\displaystyle T} (that is, T φ {\displaystyle T\varphi } for each φ {\displaystyle \varphi } in the set); and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ {\displaystyle \psi } , signed with F {\displaystyle F} (that is, F ψ {\displaystyle F\psi } ). One then produces a truth tree (analytic tableau) by using all those lines according to the rules. A closed tree will be proof that the argument was valid, in virtue of the fact that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, { φ , ∼ ψ } {\displaystyle \{\varphi ,\sim \psi \}} is inconsistent (also written as φ , ∼ ψ ⊨ {\displaystyle \varphi ,\sim \psi \models } ). == List of classically valid argument forms == Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold. We use φ {\displaystyle \varphi } ⟚ ψ {\displaystyle \psi } to denote equivalence of φ {\displaystyle \varphi } and ψ {\displaystyle \psi } , that is, as an abbreviation for both φ ⊨ ψ {\displaystyle \varphi \models \psi } and ψ ⊨ φ {\displaystyle \psi \models \varphi } ; as an aid to reading the symbols, a description of each formula is given.
The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it, although many authors prefer to read it as "entails", or as "models". == Syntactic proof via natural deduction == Natural deduction, since it is a method of syntactical proof, is specified by providing inference rules (also called rules of proof) for a language with the typical set of connectives { − , & , ∨ , → , ↔ } {\displaystyle \{-,\&,\lor ,\to ,\leftrightarrow \}} ; no axioms are used other than these rules. The rules are covered below, and a proof example is given afterwards. === Notation styles === Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The § Gentzen notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs—not to be confused with "truth trees", which is another name for analytic tableaux. There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes, and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition. Lastly, there is the only notation style which will actually be used in this article, which is due to Patrick Suppes, but was much popularized by E.J. Lemmon and Benson Mates. This method has the advantage that, graphically, it is the least intensive to produce and display.
A proof, then, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. See the § Natural deduction proof example. === Inference rules === Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof, which are the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule reductio ad absurdum. Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT and DN are commonly given rules, although they are not primitive. === Natural deduction proof example === The proof below derives − P {\displaystyle -P} from P → Q {\displaystyle P\to Q} and − Q {\displaystyle -Q} using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules. == Syntactic proof via axioms == It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits replacing any well-formed formula with any substitution-instance of it. Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used. This section gives the axioms of some historically notable axiomatic systems for propositional logic.
For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic). === Frege's Begriffsschrift === Although axiomatic proof has been used since the famous Ancient Greek textbook, Euclid's Elements of Geometry, in propositional logic it dates back to Gottlob Frege's 1879 Begriffsschrift. Frege's system used only implication and negation as connectives. It had six axioms: Proposition 1: a → ( b → a ) {\displaystyle a\to (b\to a)} Proposition 2: ( c → ( b → a ) ) → ( ( c → b ) → ( c → a ) ) {\displaystyle (c\to (b\to a))\to ((c\to b)\to (c\to a))} Proposition 8: ( d → ( b → a ) ) → ( b → ( d → a ) ) {\displaystyle (d\to (b\to a))\to (b\to (d\to a))} Proposition 28: ( b → a ) → ( ¬ a → ¬ b ) {\displaystyle (b\to a)\to (\neg a\to \neg b)} Proposition 31: ¬ ¬ a → a {\displaystyle \neg \neg a\to a} Proposition 41: a → ¬ ¬ a {\displaystyle a\to \neg \neg a} These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic. === Łukasiewicz's P2 === Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence C C N p N q C p q {\displaystyle CCNpNqCpq} ". Taken out of Łukasiewicz's Polish notation into modern notation, this means ( ¬ p → ¬ q ) → ( p → q ) {\displaystyle (\neg p\rightarrow \neg q)\rightarrow (p\rightarrow q)} .
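Łukasiewicz's Polish notation can be translated mechanically, because each connective letter announces its arity: C takes two arguments (implication) and N takes one (negation), so no parentheses are needed in the prefix form. A Python sketch of that translation (the function name is an illustrative choice):

```python
# Translating a Polish-notation formula over C (implication) and N (negation)
# into parenthesized infix notation, by reading the string left to right.

def polish_to_infix(s):
    """Translate a prefix formula such as "CCNpNqCpq" into infix form."""
    def rec(i):
        ch = s[i]
        if ch == "C":                 # binary: C <left> <right>
            left, i = rec(i + 1)
            right, i = rec(i)
            return f"({left} -> {right})", i
        if ch == "N":                 # unary: N <arg>
            arg, i = rec(i + 1)
            return f"~{arg}", i
        return ch, i + 1              # a propositional variable such as p
    result, i = rec(0)
    assert i == len(s), "trailing symbols"
    return result
```

Applied to Łukasiewicz's single sentence CCNpNqCpq, this yields (~p -> ~q) -> (p -> q), matching the modern-notation reading given above.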
Hence, Łukasiewicz is credited with this system of three axioms: p → ( q → p ) {\displaystyle p\to (q\to p)} ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) {\displaystyle (p\to (q\to r))\to ((p\to q)\to (p\to r))} ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\to \neg q)\to (q\to p)} Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2 and helped popularize it. ==== Schematic form of P2 ==== One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for any well-formed formulas), the axioms are given as: φ → ( ψ → φ ) {\displaystyle \varphi \to (\psi \to \varphi )} ( φ → ( ψ → χ ) ) → ( ( φ → ψ ) → ( φ → χ ) ) {\displaystyle (\varphi \to (\psi \to \chi ))\to ((\varphi \to \psi )\to (\varphi \to \chi ))} ( ¬ φ → ¬ ψ ) → ( ψ → φ ) {\displaystyle (\neg \varphi \to \neg \psi )\to (\psi \to \varphi )} The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. It has also been attributed to Hilbert, and named H {\displaystyle {\mathcal {H}}} in this context. ==== Proof example in P2 ==== As an example, a proof of A → A {\displaystyle A\to A} in P2 is given below. 
First, the axioms are given names: (A1) ( p → ( q → p ) ) {\displaystyle (p\to (q\to p))} (A2) ( ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) ) {\displaystyle ((p\to (q\to r))\to ((p\to q)\to (p\to r)))} (A3) ( ( ¬ p → ¬ q ) → ( q → p ) ) {\displaystyle ((\neg p\to \neg q)\to (q\to p))} And the proof is as follows: A → ( ( B → A ) → A ) {\displaystyle A\to ((B\to A)\to A)} (instance of (A1)) ( A → ( ( B → A ) → A ) ) → ( ( A → ( B → A ) ) → ( A → A ) ) {\displaystyle (A\to ((B\to A)\to A))\to ((A\to (B\to A))\to (A\to A))} (instance of (A2)) ( A → ( B → A ) ) → ( A → A ) {\displaystyle (A\to (B\to A))\to (A\to A)} (from (1) and (2) by modus ponens) A → ( B → A ) {\displaystyle A\to (B\to A)} (instance of (A1)) A → A {\displaystyle A\to A} (from (4) and (3) by modus ponens) == Solvers == One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable.: 81  Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers. == See also == === Higher logical levels === First-order logic Second-order propositional logic Second-order logic Higher-order logic === Related topics === == Notes == == References == == Further reading == Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY. Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands. Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978. Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY. Lambek, J. 
and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, Cambridge, UK. Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company. === Related works === Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2. == External links == Klement, Kevin C. "Propositional Logic". In Fieser, James; Dowden, Bradley (eds.). Internet Encyclopedia of Philosophy. Retrieved 7 April 2025. Franks, Curtis (2024). "Propositional Logic". In Zalta, Edward N.; Nodelman, Uri (eds.). Stanford Encyclopedia of Philosophy (Winter 2024 ed.). Metaphysics Research Lab, Stanford University. Retrieved 7 April 2025. Formal Predicate Calculus, contains a systematic formal development with axiomatic proof forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sentential logic. Chapter 2 / Propositional Logic from Logic In Action Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and a sequent can be a single formula prefixed with > and having no commas) Propositional Logic - A Generative Grammar A Propositional Calculator that helps to understand simple expressions
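The five-line P2 proof of A → A given earlier can be checked mechanically: each line must be a substitution instance of an axiom or must follow from two earlier lines by modus ponens. A minimal sketch, using an ad-hoc nested-tuple encoding of formulas (all names and the encoding are illustrative choices, not a standard library):

```python
def match(schema, formula, env=None):
    """Try to match an axiom schema against a concrete formula.

    Schemas use bare strings ('p', 'q', 'r') as metavariables; formulas are
    nested ('->', left, right) tuples with atom names as strings.  Returns a
    substitution environment on success, None on failure.
    """
    env = {} if env is None else env
    if isinstance(schema, str):  # metavariable: bind or check consistency
        if schema in env:
            return env if env[schema] == formula else None
        env[schema] = formula
        return env
    if (isinstance(formula, tuple) and len(schema) == len(formula)
            and schema[0] == formula[0]):
        for s, f in zip(schema[1:], formula[1:]):
            env = match(s, f, env)
            if env is None:
                return None
        return env
    return None

# Axiom schemas (A1) and (A2) of P2 in the tuple encoding.
A1 = ('->', 'p', ('->', 'q', 'p'))
A2 = ('->', ('->', 'p', ('->', 'q', 'r')),
      ('->', ('->', 'p', 'q'), ('->', 'p', 'r')))

def mp(major, minor):
    """Modus ponens: from X -> Y and X, conclude Y."""
    assert major[0] == '->' and major[1] == minor
    return major[2]

# The five proof lines, checked step by step.
A, B = 'A', 'B'
l1 = ('->', A, ('->', ('->', B, A), A))                         # (A1) instance
assert match(A1, l1) is not None
l2 = ('->', l1, ('->', ('->', A, ('->', B, A)), ('->', A, A)))  # (A2) instance
assert match(A2, l2) is not None
l3 = mp(l2, l1)                                                 # MP on 2, 1
l4 = ('->', A, ('->', B, A))                                    # (A1) instance
assert match(A1, l4) is not None
l5 = mp(l3, l4)                                                 # MP on 3, 4
print(l5)  # ('->', 'A', 'A'), i.e. A -> A
```

The checker mirrors the schematic form of P2: using `match` on schemas plays the role that the substitution rule plays in the non-schematic presentation.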
Wikipedia/Truth-functional_propositional_calculus
In computability theory, a system of data-manipulation rules (such as a model of computation, a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine (devised by English mathematician and computer scientist Alan Turing). This means that this system is able to recognize or decode other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete. A related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the purely computational aspects of any possible real-world computer. To show that something is Turing-complete, it is enough to demonstrate that it can be used to simulate some Turing-complete system. No physical system can have infinite memory, but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete. == Non-mathematical usage == In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life, this leads to the practical concepts of computing virtualization and emulation. 
Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (which uses a "tape" for memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, the abstraction of a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time. == Formal definitions == In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language): Turing completeness A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine. Turing equivalence A Turing-complete system is called Turing-equivalent if every function it can compute is also Turing-computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known physically-implementable Turing-complete systems are Turing-equivalent, which adds support to the Church–Turing thesis.) (Computational) universality A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, the term 'universality' is tacitly used with respect to a Turing-complete class of systems. The term "weakly universal" is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of Turing machine so as to include input streams with infinitely many 1s. 
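The definitions above all bottom out in simulating Turing machines, and a single-tape machine is itself straightforward to simulate; a minimal sketch (the dictionary-based encoding and the example machine, a unary incrementer, are illustrative choices):

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left) or +1 (right).  The machine halts when no
    transition applies; max_steps bounds the run, since in general halting
    cannot be decided in advance.
    """
    cells = dict(enumerate(tape))  # sparse tape indexed by integer position
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no applicable transition
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    # Read back the non-blank tape contents in positional order.
    return "".join(v for _, v in sorted(cells.items()) if v != blank)

# A tiny machine that appends one '1' to a unary string (increments the number).
inc = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("done", "1", +1),   # write a 1 on the first blank
}
print(run_turing_machine(inc, "111"))  # 1111
```

A system is Turing-complete exactly when it can implement something equivalent to `run_turing_machine` for an arbitrary transition table, granting the idealization of unbounded tape.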
== History == Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics – that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation. Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better. From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing-complete. In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions that compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction that could be performed by a machine. Soon it became clear that a small set of deduction rules are enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem. The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. 
This theorem showed that axiom systems were limited when reasoning about the computation that deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable, thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique. In 1941 Konrad Zuse completed the Z3 computer. Zuse was not familiar with Turing's work on computability at the time. In particular, the Z3 lacked dedicated facilities for a conditional jump, thereby precluding it from being Turing complete. However, in 1998, it was shown by Rojas that the Z3 is capable of simulating conditional jumps, and therefore Turing complete in theory. To do this, its tape program would have to be long enough to execute every possible path through both sides of every branch. The first computer capable of conditional branching in practice, and therefore Turing complete in practice, was the ENIAC in 1946. Zuse's Z4 computer was operational in 1945, but it did not support conditional branching until 1950. == Computability theory == Computability theory uses models of computation to analyze problems and determine whether they are computable and under what circumstances. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time. The classic example is the halting problem: create an algorithm that takes as input a program in some Turing-complete language and some data to be fed to that program, and determines whether the program, operating on the input, will eventually stop or will continue forever. 
It is trivial to create an algorithm that can do this for some inputs, but impossible to do this in general. For any nontrivial characteristic of the program's eventual output, it is impossible in general to determine whether this characteristic will hold. This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops or protects users from supplying input that would cause infinite loops. One can instead limit a program to executing only for a fixed period of time (timeout) or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., languages that guarantee that every program will eventually halt). So any such language is not Turing-complete. For example, a language in which programs are guaranteed to complete and halt cannot compute the computable function produced by Cantor's diagonal argument on all computable functions in that language. == Turing oracles == A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot. == Digital physics == All known laws of physics have consequences that are computable by a series of approximations on a digital computer.
A hypothesis called digital physics states that this is no accident because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically. == Examples == The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that it would be easier to understand the limits of computation. Here are a few: Automata theory Formal grammar (language generators) Formal language (language recognizers) Lambda calculus Post–Turing machines Process calculus Most programming languages (their abstract models, maybe with some particular constructs that assume finite memory omitted), conventional and unconventional, are Turing-complete. This includes: All general-purpose languages in wide use. Procedural programming languages such as C, Pascal. Object-oriented languages such as Java, Smalltalk or C#. Multi-paradigm languages such as Ada, C++, Common Lisp, Fortran, JavaScript, Object Pascal, Perl, Python, R. Most languages using less common paradigms: Functional languages such as Lisp and Haskell. Logic programming languages such as Prolog. General-purpose macro processor such as m4. Declarative languages such as SQL and XSLT. VHDL and other hardware description languages. TeX, a typesetting system. Esoteric programming languages, a form of mathematical recreation in which programmers work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language. Some rewrite systems are Turing-complete. Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. 
The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete. Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing-complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete. The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based on their ability to represent most typical computer programs while detecting more errors. Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete. === Unintentional Turing completeness === Some software and video games are Turing-complete by accident, i.e. not by design. Software: Microsoft Excel Games: Dwarf Fortress Cities: Skylines Opus Magnum Minecraft Magic: The Gathering Infinite-grid Minesweeper Social media: Habbo Hotel Computational languages: C++ templates printf format string TypeScript's type system x86 assembly's MOV instruction Biology: Chemical reaction networks and enzyme-based DNA computers have been shown to be Turing-equivalent == Non-Turing-complete languages == Many computational languages exist that are not Turing-complete.
One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compiling. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions. In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines. Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not. == See also == == Footnotes == == References == == Further reading == == External links == "Turing Complete". wiki.c2.com.
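The finite automata discussed above, which recognize exactly the regular languages, make the contrast with Turing completeness concrete: a DFA has no writable tape, only a current state, so it cannot simulate a Turing machine. A minimal simulation sketch (the example automaton, accepting binary strings with an even number of 1s, is an illustrative choice):

```python
def dfa_accepts(transitions, accepting, string, start="even"):
    """Run a deterministic finite automaton over `string`.

    transitions maps (state, symbol) -> state.  Memory is just the current
    state, which is why regular languages fall short of Turing completeness.
    """
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# DFA for the regular language of binary strings with an even number of 1s.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(dfa_accepts(even_ones, {"even"}, "1001"))  # True: two 1s
print(dfa_accepts(even_ones, {"even"}, "1011"))  # False: three 1s
```

Every run of this function is guaranteed to terminate after exactly one step per input symbol, unlike a Turing-machine simulation, which in general must be cut off by an arbitrary step bound.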
Wikipedia/Turing_equivalence_(theory_of_computation)
In mathematics, the composition operator takes two functions, f {\displaystyle f} and g {\displaystyle g} , and returns a new function h ( x ) := ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle h(x):=(g\circ f)(x)=g(f(x))} . Thus, the function g is applied after applying f to x. ( g ∘ f ) {\displaystyle (g\circ f)} is pronounced "the composition of g and f". Reverse composition, sometimes denoted f ; g, applies the operation in the opposite order, applying f {\displaystyle f} first and g {\displaystyle g} second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘ {\displaystyle \circ } . As a result, all properties of composition of relations are true of composition of functions, such as associativity. == Examples == Composition of functions on a finite set: If f = {(1, 1), (2, 3), (3, 1), (4, 2)}, and g = {(1, 2), (2, 3), (3, 1), (4, 2)}, then g ∘ f = {(1, 2), (2, 1), (3, 2), (4, 3)}. Composition of functions on an infinite set: If f: R → R (where R is the set of all real numbers) is given by f(x) = 2x + 4 and g: R → R is given by g(x) = x3, then (g ∘ f)(x) = g(f(x)) = (2x + 4)3 and (f ∘ g)(x) = f(g(x)) = 2x3 + 4. If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t. Functions defined on a finite set which change the order of its elements, such as permutations, can be composed on the same set, this being composition of permutations. == Properties == The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted.
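The infinite-set example above, with f(x) = 2x + 4 and g(x) = x³, can be written out directly; a minimal sketch:

```python
def compose(g, f):
    """Return g ∘ f, the function applying f first and g second."""
    return lambda x: g(f(x))

f = lambda x: 2 * x + 4   # f: R -> R
g = lambda x: x ** 3      # g: R -> R

g_after_f = compose(g, f)  # (g ∘ f)(x) = (2x + 4)^3
f_after_g = compose(f, g)  # (f ∘ g)(x) = 2x^3 + 4, the reverse composition

print(g_after_f(1))  # 216, since f(1) = 6 and g(6) = 216
print(f_after_g(1))  # 6, since g(1) = 1 and f(1) = 6
```

The differing outputs at the same point also illustrate that composition is not commutative in general, while associativity holds automatically: compose(h, compose(g, f)) and compose(compose(h, g), f) are the same function.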
In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter. Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : R → (−∞,+9] defined by f(x) = 9 − x2 and g : [0,+∞) → R defined by g ( x ) = x {\displaystyle g(x)={\sqrt {x}}} can be defined on the interval [−3,+3]. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)−1 = g−1∘ f−1. Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula. Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative). == Composition monoids == Suppose one has two (or more) functions f: X → X, g: X → X having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure.
One particularly notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. (One can actually define two semigroups depending on how one defines the semigroup operation as the left or right composition of functions.) If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions. The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism). In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup. == Functional powers == If Y ⊆ X, then f : X → Y {\displaystyle f:X\to Y} may compose with itself; this is sometimes denoted as f 2 {\displaystyle f^{2}} . That is: (f ∘ f)(x) = f(f(x)) = f 2(x). More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f n = f ∘ f n−1 = f n−1 ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration. By convention, f 0 is defined as the identity map on f 's domain, idX. If Y = X and f: X → X admits an inverse function f −1, negative functional powers f −n are defined for n > 0 as the negated power of the inverse function: f −n = (f −1)n. Note: If f takes its values in a ring (in particular for real or complex-valued f ), there is a risk of confusion, as f n could also stand for the n-fold product of f, e.g. f 2(x) = f(x) · f(x).
For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: sin2(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan−1 = arctan ≠ 1/tan. In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f 1/2. More generally, when gn = f has a unique solution for some natural number n > 0, then f m/n can be defined as gm. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems. To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f[n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead. == Alternative notations == Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f. During the mid-20th century, some mathematicians adopted postfix notation, writing xf  for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. 
Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation). == Composition operator == Given a function g, the composition operator Cg is defined as that operator which maps functions to functions as C g f = f ∘ g . {\displaystyle C_{g}f=f\circ g.} Composition operators are studied in the field of operator theory. == In programming languages == Function composition appears in one form or another in numerous programming languages. == Multivariate functions == Partial composition is possible for multivariate functions. The function resulting when some argument xi of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted f |xi = g f | x i = g = f ( x 1 , … , x i − 1 , g ( x 1 , x 2 , … , x n ) , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor. f | x i = b = f ( x 1 , … , x i − 1 , b , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. 
Given f, an n-ary function, and n m-ary functions g1, ..., gn, the composition of f with g1, ..., gn is the m-ary function h ( x 1 , … , x m ) = f ( g 1 ( x 1 , … , x m ) , … , g n ( x 1 , … , x m ) ) . {\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called the generalized composite or superposition of f with g1, ..., gn. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g1, ..., gn can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition. A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is: f ( g ( a 11 , … , a 1 m ) , … , g ( a n 1 , … , a n m ) ) = g ( f ( a 11 , … , a n 1 ) , … , f ( a 1 m , … , a n m ) ) . {\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic. == Generalizations == Composition can be generalized to arbitrary binary relations.
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to R ∘ S = { ( x , z ) ∈ X × Z : ( ∃ y ∈ Y ) ( ( x , y ) ∈ R ∧ ( y , z ) ∈ S ) } {\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}} . Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle R∘S has been used for the infix notation of composition of relations, as well as functions. When used to represent composition of functions ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)\ =\ g(f(x))} however, the text sequence is reversed to illustrate the different operation sequences accordingly. The composition is defined in the same way for partial functions and Cayley's theorem has its analogue called the Wagner–Preston theorem. The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula (f ∘ g)−1 = (g−1 ∘ f −1) applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories. The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions.
Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. - Saunders Mac Lane, Mathematics: Form and Function == Typography == The composition symbol ∘ is encoded as U+2218 ∘ RING OPERATOR; see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ. == See also == Cobweb plot – a graphical technique for functional composition Combinatory logic Composition ring, a formal axiomatization of the composition operation Flow (mathematics) Function composition (computer science) Function of random variable, distribution of a function of a random variable Functional decomposition Functional square root Functional equation Higher-order function Infinite compositions of analytic functions Iterated function Lambda calculus == Notes == == References == == External links == "Composite function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Composition of Functions" by Bruce Atwood, the Wolfram Demonstrations Project, 2007.
Wikipedia/Composition_of_functions
The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. Sometimes, it is called first-order propositional logic to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic. Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language. While the atomic propositions are typically represented by letters of the alphabet, there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic. The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld.
By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic. == History == Although propositional logic had been hinted at by earlier philosophers, Chrysippus is often credited with the development of a deductive system for propositional logic as his main achievement in the 3rd century BC, which was expanded by his Stoic successors. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic. Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution. Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently).
Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables". == Sentences == Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences. === Declarative sentences === Propositional logic deals with statements, which are defined as declarative sentences having truth value. Examples of statements might include: Wikipedia is a free online encyclopedia that anyone can edit. London is the capital of England. All Wikipedia editors speak at least three languages. Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article.". Such non-declarative sentences have no truth value, and are only dealt with in nonclassical logics, called erotetic and imperative logics. === Compounding sentences with connectives === In propositional logic, a statement can contain one or more other statements as parts. Compound sentences are formed from simpler sentences and express relationships among the constituent sentences. This is done by combining them with logical connectives: the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals, which are formed by using the corresponding connectives to connect propositions. 
In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional). Examples of such compound sentences might include: Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction) It is not true that all Wikipedia editors speak at least three languages. (negation) Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction) If sentences lack any logical connectives, they are called simple sentences, or atomic sentences; if they contain one or more logical connectives, they are called compound sentences, or molecular sentences. Sentential connectives are a broader category that includes logical connectives. Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence, or that inflect a single sentence to create a new sentence. A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition. Philosophers disagree about what exactly a proposition is, as well as about which sentential connectives in natural languages should be counted as logical connectives. Sentential connectives are also called sentence-functors, and logical connectives are also called truth-functors. == Arguments == An argument is defined as a pair of things, namely a set of sentences, called the premises, and a sentence, called the conclusion. The conclusion is claimed to follow from the premises, and the premises are claimed to support the conclusion. === Example argument === The following is an example of an argument within the scope of propositional logic: Premise 1: If it's raining, then it's cloudy. Premise 2: It's raining. 
Conclusion: It's cloudy. The logical form of this argument is known as modus ponens, which is a classically valid form. So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining § Formalization. === Validity and soundness === An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true. Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false. Validity is contrasted with soundness. An argument is sound if, and only if, it is valid and all its premises are true. Otherwise, it is unsound. Logic, in general, aims to precisely specify valid arguments. This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises, which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true – see § Semantics below. == Formalization == Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the § Example argument. The formal language for a propositional calculus will be fully specified in § Language, and an overview of proof systems will be given in § Proof systems. 
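The definition of validity just given — no interpretation makes all premises true and the conclusion false — can be checked mechanically for small arguments by enumerating every truth-value assignment. The following sketch is ours, not from the source; the function names and the encoding of premises as Python functions are assumptions for the example.

```python
from itertools import product

# An argument is valid iff there is no counterexample: no interpretation
# under which every premise is true but the conclusion is false.
def valid(premises, conclusion, n_vars):
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample
    return True

# Material conditional, used to encode "if it's raining, then it's cloudy".
implies = lambda p, q: (not p) or q

# Modus ponens (premises P -> Q and P, conclusion Q) is classically valid:
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q, 2))  # True

# Affirming the consequent (P -> Q and Q, conclusion P) is not:
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p, 2))  # False
```

The second call returns False because the interpretation P = False, Q = True makes both premises true while the conclusion is false.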
=== Propositional variables === Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the § Example argument would then be symbolized as follows: Premise 1: P → Q {\displaystyle P\to Q} Premise 2: P {\displaystyle P} Conclusion: Q {\displaystyle Q} When P is interpreted as "It's raining" and Q as "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form. When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P {\displaystyle P} , Q {\displaystyle Q} and R {\displaystyle R} ) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself. === Gentzen notation === If we assume that the validity of modus ponens has been accepted as an axiom, then the same § Example argument can also be depicted like this: P → Q , P Q {\displaystyle {\frac {P\to Q,P}{Q}}} This method of displaying it is Gentzen's notation for natural deduction and sequent calculus. The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as P → Q , P ⊢ Q {\displaystyle P\to Q,P\vdash Q} .
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below. == Language == The language (commonly called L {\displaystyle {\mathcal {L}}} ) of a propositional calculus is defined in terms of: a set of primitive symbols, called atomic formulas, atomic sentences, atoms, placeholders, prime formulas, proposition letters, sentence letters, or variables, and a set of operator symbols, called connectives, logical connectives, logical operators, truth-functional connectives, truth-functors, or propositional connectives. A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language L {\displaystyle {\mathcal {L}}} , then, is defined either as being identical to its set of well-formed formulas, or as containing that set (together with, for instance, its set of connectives and variables). Usually the syntax of L {\displaystyle {\mathcal {L}}} is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax, while others use them without comment. 
=== Syntax === Given a set of atomic propositional variables p 1 {\displaystyle p_{1}} , p 2 {\displaystyle p_{2}} , p 3 {\displaystyle p_{3}} , ..., and a set of propositional connectives c 1 1 {\displaystyle c_{1}^{1}} , c 2 1 {\displaystyle c_{2}^{1}} , c 3 1 {\displaystyle c_{3}^{1}} , ..., c 1 2 {\displaystyle c_{1}^{2}} , c 2 2 {\displaystyle c_{2}^{2}} , c 3 2 {\displaystyle c_{3}^{2}} , ..., c 1 3 {\displaystyle c_{1}^{3}} , c 2 3 {\displaystyle c_{2}^{3}} , c 3 3 {\displaystyle c_{3}^{3}} , ..., a formula of propositional logic is defined recursively by these definitions: Definition 1: Atomic propositional variables are formulas. Definition 2: If c n m {\displaystyle c_{n}^{m}} is a propositional connective, and ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying c n m {\displaystyle c_{n}^{m}} to ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } is a formula. Definition 3: Nothing else is a formula. 
Writing the result of applying c n m {\displaystyle c_{n}^{m}} to ⟨ {\displaystyle \langle } A, B, C, … ⟩ {\displaystyle \rangle } in functional notation, as c n m {\displaystyle c_{n}^{m}} (A, B, C, …), we have the following as examples of well-formed formulas: p 5 {\displaystyle p_{5}} c 3 2 ( p 2 , p 9 ) {\displaystyle c_{3}^{2}(p_{2},p_{9})} c 3 2 ( p 1 , c 2 1 ( p 3 ) ) {\displaystyle c_{3}^{2}(p_{1},c_{2}^{1}(p_{3}))} c 1 3 ( p 4 , p 6 , c 2 2 ( p 1 , p 2 ) ) {\displaystyle c_{1}^{3}(p_{4},p_{6},c_{2}^{2}(p_{1},p_{2}))} c 4 2 ( c 1 1 ( p 7 ) , c 3 1 ( p 8 ) ) {\displaystyle c_{4}^{2}(c_{1}^{1}(p_{7}),c_{3}^{1}(p_{8}))} c 2 3 ( c 1 2 ( p 3 , p 4 ) , c 2 1 ( p 5 ) , c 3 2 ( p 6 , p 7 ) ) {\displaystyle c_{2}^{3}(c_{1}^{2}(p_{3},p_{4}),c_{2}^{1}(p_{5}),c_{3}^{2}(p_{6},p_{7}))} c 3 1 ( c 1 3 ( p 2 , p 3 , c 2 2 ( p 4 , p 5 ) ) ) {\displaystyle c_{3}^{1}(c_{1}^{3}(p_{2},p_{3},c_{2}^{2}(p_{4},p_{5})))} What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition. It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language L {\displaystyle {\mathcal {L}}} are built up from the atoms as ultimate building blocks. Composite formulas (all formulas besides atoms) are called molecules, or molecular sentences. (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.) The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax. In particular, it excludes infinitely long formulas from being well-formed. It is sometimes called the Closure Clause. 
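Definitions 1–3 translate directly into a recursive data structure: variables and connective applications are the only constructors, and the closure clause corresponds to there being no others. The sketch below is our own encoding, not from the source; the class names are assumptions for the example.

```python
# Definition 1: atomic propositional variables are formulas.
# Definition 2: applying an m-ary connective c_n^m to m formulas is a formula.
# Definition 3 (closure): these two constructors are the only way to build one.
class Var:
    def __init__(self, index):
        self.index = index          # the n in p_n
    def __repr__(self):
        return f"p{self.index}"

class App:
    def __init__(self, connective, *args):
        self.connective = connective  # e.g. "c_3^2" for the binary connective c_3
        self.args = args              # exactly m sub-formulas
    def __repr__(self):
        return f"{self.connective}({', '.join(map(repr, self.args))})"

# The well-formed formula c_3^2(p_1, c_2^1(p_3)) from the examples above:
wff = App("c_3^2", Var(1), App("c_2^1", Var(3)))
print(wff)  # c_3^2(p1, c_2^1(p3))
```

Because the constructors only accept already-built formulas, every value of this type is finite and well-formed, mirroring how the closure clause excludes infinitely long formulas.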
==== CF grammar in BNF ==== An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language L {\displaystyle {\mathcal {L}}} in Backus-Naur form (BNF). This is more common in computer science than in philosophy. It can be done in many ways, of which a particularly brief one, for the common set of five connectives, is this single clause: ϕ ::= a 1 , a 2 , … | ¬ ϕ | ϕ & ψ | ϕ ∨ ψ | ϕ → ψ | ϕ ↔ ψ {\displaystyle \phi ::=a_{1},a_{2},\ldots ~|~\neg \phi ~|~\phi ~\&~\psi ~|~\phi \vee \psi ~|~\phi \rightarrow \psi ~|~\phi \leftrightarrow \psi } This clause, due to its self-referential nature (since ϕ {\displaystyle \phi } is in some branches of the definition of ϕ {\displaystyle \phi } ), also acts as a recursive definition, and therefore specifies the entire language. To expand it to add modal operators, one need only add … | ◻ ϕ | ◊ ϕ {\displaystyle |~\Box \phi ~|~\Diamond \phi } to the end of the clause. === Constants and schemata === Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, or schematic letters, however, range over all formulas. (Schematic letters are also called metavariables.) It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ. However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤ {\displaystyle \top } , called "truth", which always evaluates to True, and the special symbol ⊥ {\displaystyle \bot } , called "falsity", which always evaluates to False. Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors", or equivalently, "nullary connectives". 
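The single BNF clause above can be turned into a small recursive-descent parser, one parsing function per grammar alternative. This is a sketch of ours: the source's clause does not fix precedence or associativity for unparenthesized formulas, so the conventions below (¬ binds tightest, then &, ∨, →, ↔, with → right-associative) and the ASCII spellings ~, &, |, ->, <-> are assumptions for the example.

```python
import re

# One token per connective, parenthesis, or variable name; "<->" must be
# tried before "->" in the alternation.
TOKENS = re.compile(r"\s*(<->|->|[~&|()]|[a-z]\w*)")

def parse(s):
    toks = TOKENS.findall(s)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(t=None):
        nonlocal pos
        tok = toks[pos]
        assert t is None or tok == t, f"expected {t}, got {tok}"
        pos += 1
        return tok

    def atom():                      # variables, negation, parentheses
        if peek() == "(":
            eat("("); f = iff(); eat(")"); return f
        if peek() == "~":
            eat("~"); return ("~", atom())
        return ("var", eat())

    def conj():                      # phi & psi
        f = atom()
        while peek() == "&":
            eat("&"); f = ("&", f, atom())
        return f

    def disj():                      # phi | psi
        f = conj()
        while peek() == "|":
            eat("|"); f = ("|", f, conj())
        return f

    def impl():                      # phi -> psi, right-associative
        f = disj()
        if peek() == "->":
            eat("->"); return ("->", f, impl())
        return f

    def iff():                       # phi <-> psi
        f = impl()
        while peek() == "<->":
            eat("<->"); f = ("<->", f, impl())
        return f

    result = iff()
    assert pos == len(toks), "trailing input"
    return result

print(parse("~p & q -> r"))
# ('->', ('&', ('~', ('var', 'p')), ('var', 'q')), ('var', 'r'))
```

The self-referential structure of the BNF clause shows up as mutual recursion among the parsing functions, just as it acts as a recursive definition of the language.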
== Semantics == To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted. In classical logic, all propositions evaluate to exactly one of two truth-values: True or False. For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True, while "Wikipedia is a paper encyclopedia" evaluates to False. In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic. To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic". === Interpretation (case) and argument === For a given language L {\displaystyle {\mathcal {L}}} , an interpretation, valuation, Boolean valuation, or case, is an assignment of semantic values to each formula of L {\displaystyle {\mathcal {L}}} . For a formal language of classical logic, a case is defined as an assignment, to each formula of L {\displaystyle {\mathcal {L}}} , of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0). An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation. An interpretation of a formal language for classical logic is often expressed in terms of truth tables. 
Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is L {\displaystyle {\mathcal {L}}} , and whose range is its set of semantic values V = { T , F } {\displaystyle {\mathcal {V}}=\{{\mathsf {T}},{\mathsf {F}}\}} , or V = { 1 , 0 } {\displaystyle {\mathcal {V}}=\{1,0\}} . For n {\displaystyle n} distinct propositional symbols there are 2 n {\displaystyle 2^{n}} distinct possible interpretations. For any particular symbol a {\displaystyle a} , for example, there are 2 1 = 2 {\displaystyle 2^{1}=2} possible interpretations: either a {\displaystyle a} is assigned T, or a {\displaystyle a} is assigned F. And for the pair a {\displaystyle a} , b {\displaystyle b} there are 2 2 = 4 {\displaystyle 2^{2}=4} possible interpretations: either both are assigned T, or both are assigned F, or a {\displaystyle a} is assigned T and b {\displaystyle b} is assigned F, or a {\displaystyle a} is assigned F and b {\displaystyle b} is assigned T. Since L {\displaystyle {\mathcal {L}}} has ℵ 0 {\displaystyle \aleph _{0}} , that is, denumerably many propositional symbols, there are 2 ℵ 0 = c {\displaystyle 2^{\aleph _{0}}={\mathfrak {c}}} , and therefore uncountably many distinct possible interpretations of L {\displaystyle {\mathcal {L}}} as a whole. Where I {\displaystyle {\mathcal {I}}} is an interpretation and φ {\displaystyle \varphi } and ψ {\displaystyle \psi } represent formulas, the definition of an argument, given in § Arguments, may then be stated as a pair ⟨ { φ 1 , φ 2 , φ 3 , . . . , φ n } , ψ ⟩ {\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle } , where { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} is the set of premises and ψ {\displaystyle \psi } is the conclusion. The definition of an argument's validity, i.e. its property that { φ 1 , φ 2 , φ 3 , . . . 
, φ n } ⊨ ψ {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi } , can then be stated as its absence of a counterexample, where a counterexample is defined as a case I {\displaystyle {\mathcal {I}}} in which the argument's premises { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} are all true but the conclusion ψ {\displaystyle \psi } is not true. As will be seen in § Semantic truth, validity, consequence, this is the same as to say that the conclusion is a semantic consequence of the premises. === Propositional connective semantics === An interpretation assigns semantic values to atomic formulas directly. Molecular formulas are assigned a value that is a function of the values of their constituent atoms, according to the connective used; the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those. This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives. ==== Semantics via truth tables ==== Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values, the semantic definition of the connectives is usually represented as a truth table for each of the connectives, as seen below: This table covers each of the main five logical connectives: conjunction (here notated p ∧ q {\displaystyle p\land q} ), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators. For truth tables of other kinds of connectives, see the article "Truth table".
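The truth-table semantics of the five main connectives, together with the fact that n propositional symbols admit 2^n interpretations, can be reproduced by enumerating all assignments. The sketch below is ours; the uppercase function names are assumptions for the example.

```python
from itertools import product

# The five main connectives as Boolean functions.
NOT = lambda p: not p
AND = lambda p, q: p and q
OR  = lambda p, q: p or q
IMP = lambda p, q: (not p) or q   # material implication
IFF = lambda p, q: p == q          # biconditional

# Two variables give 2^2 = 4 interpretations, one truth-table row each.
t = lambda b: "T" if b else "F"
print("p q | p&q  p|q  p->q  p<->q  ~p")
for p, q in product([True, False], repeat=2):
    print(t(p), t(q), "|", t(AND(p, q)), "  ", t(OR(p, q)), "  ",
          t(IMP(p, q)), "   ", t(IFF(p, q)), "    ", t(NOT(p)))
```

Replacing `repeat=2` with `repeat=n` yields the 2^n interpretations for n symbols, which is why the table for a formula over n atoms has 2^n rows.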
==== Semantics via assignment expressions ==== Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, where I ( φ ) {\displaystyle {\mathcal {I}}(\varphi )} is the interpretation of φ {\displaystyle \varphi } , the five connectives are defined as: I ( ¬ P ) = T {\displaystyle {\mathcal {I}}(\neg P)={\mathsf {T}}} if, and only if, I ( P ) = F {\displaystyle {\mathcal {I}}(P)={\mathsf {F}}} I ( P ∧ Q ) = T {\displaystyle {\mathcal {I}}(P\land Q)={\mathsf {T}}} if, and only if, I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} and I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P ∨ Q ) = T {\displaystyle {\mathcal {I}}(P\lor Q)={\mathsf {T}}} if, and only if, I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} or I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P → Q ) = T {\displaystyle {\mathcal {I}}(P\to Q)={\mathsf {T}}} if, and only if, it is true that, if I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} , then I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} I ( P ↔ Q ) = T {\displaystyle {\mathcal {I}}(P\leftrightarrow Q)={\mathsf {T}}} if, and only if, it is true that I ( P ) = T {\displaystyle {\mathcal {I}}(P)={\mathsf {T}}} if, and only if, I ( Q ) = T {\displaystyle {\mathcal {I}}(Q)={\mathsf {T}}} Instead of I ( φ ) {\displaystyle {\mathcal {I}}(\varphi )} , the interpretation of φ {\displaystyle \varphi } may be written out as | φ | {\displaystyle |\varphi |} , or, for definitions such as the above, I ( φ ) = T {\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}} may be written simply as the English sentence " φ {\displaystyle \varphi } is given the value T {\displaystyle {\mathsf {T}}} ". 
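The assignment-style clauses above amount to a recursive evaluation function: the interpretation of a molecular formula is computed from the interpretations of its parts. The sketch below uses a nested-tuple encoding of formulas that is our own, not the source's.

```python
# Compute I(phi) recursively from an interpretation I of the atoms,
# mirroring the five clauses above.  Formulas are nested tuples, e.g.
# ("->", ("var", "P"), ("var", "Q")) stands for P -> Q.
def evaluate(phi, I):
    op = phi[0]
    if op == "var":
        return I[phi[1]]                      # atoms are read off directly
    if op == "~":
        return not evaluate(phi[1], I)        # I(~P) = T iff I(P) = F
    left, right = evaluate(phi[1], I), evaluate(phi[2], I)
    if op == "&":
        return left and right                 # both conjuncts true
    if op == "|":
        return left or right                  # at least one disjunct true
    if op == "->":
        return (not left) or right            # true unless T -> F
    if op == "<->":
        return left == right                  # same truth value
    raise ValueError(f"unknown connective {op!r}")

I = {"P": True, "Q": False}
print(evaluate(("->", ("var", "P"), ("var", "Q")), I))  # False
```

Because evaluation only consults the values of subformulas, this function is exactly the truth-functionality assumption in executable form.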
Yet other authors may prefer to speak of a Tarskian model M {\displaystyle {\mathfrak {M}}} for the language, so that instead they'll use the notation M ⊨ φ {\displaystyle {\mathfrak {M}}\models \varphi } , which is equivalent to saying I ( φ ) = T {\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}} , where I {\displaystyle {\mathcal {I}}} is the interpretation function for M {\displaystyle {\mathfrak {M}}} . ==== Connective definition methods ==== Some of these connectives may be defined in terms of others: for instance, implication, p → q {\displaystyle p\rightarrow q} , may be defined in terms of disjunction and negation, as ¬ p ∨ q {\displaystyle \neg p\lor q} ; and disjunction may be defined in terms of negation and conjunction, as ¬ ( ¬ p ∧ ¬ q ) {\displaystyle \neg (\neg p\land \neg q)} . In fact, a truth-functionally complete system, in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke), as Jean Nicod did. A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property. Some authors, namely Howson and Cunningham, distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object language L {\displaystyle {\mathcal {L}}} . Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation.
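The interdefinability claims above are finite and so can be verified exhaustively: implication agrees with ¬p ∨ q, disjunction with ¬(¬p ∧ ¬q), and every connective can be rebuilt from NAND (the Sheffer stroke) alone. The sketch and its definitions-from-NAND are ours; the particular NAND encodings are standard but chosen by us for the example.

```python
from itertools import product

# The Sheffer stroke ("not and").
nand = lambda p, q: not (p and q)

# Each connective defined using nand only.
NOT_ = lambda p: nand(p, p)
AND_ = lambda p, q: nand(nand(p, q), nand(p, q))
OR_  = lambda p, q: nand(nand(p, p), nand(q, q))
IMP_ = lambda p, q: nand(p, nand(q, q))

for p, q in product([True, False], repeat=2):
    assert (p and q) == AND_(p, q)
    assert (p or q) == OR_(p, q)
    assert (not p) == NOT_(p)
    assert ((not p) or q) == IMP_(p, q)            # p -> q as ~p | q, via NAND
    assert (p or q) == (not ((not p) and (not q))) # disjunction as ~(~p & ~q)

print("all connective definitions agree on every interpretation")
```

Since each identity is checked at all four interpretations of p and q, the assertions constitute a complete truth-table proof of the definitions.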
Other authors often do not make this distinction, and may use the word "equivalence", and/or the symbol ⇔, to denote their object language's biconditional connective. === Semantic truth, validity, consequence === Given φ {\displaystyle \varphi } and ψ {\displaystyle \psi } as formulas (or sentences) of a language L {\displaystyle {\mathcal {L}}} , and I {\displaystyle {\mathcal {I}}} as an interpretation (or case) of L {\displaystyle {\mathcal {L}}} , then the following definitions apply: Truth-in-a-case: A sentence φ {\displaystyle \varphi } of L {\displaystyle {\mathcal {L}}} is true under an interpretation I {\displaystyle {\mathcal {I}}} if I {\displaystyle {\mathcal {I}}} assigns the truth value T to φ {\displaystyle \varphi } . If φ {\displaystyle \varphi } is true under I {\displaystyle {\mathcal {I}}} , then I {\displaystyle {\mathcal {I}}} is called a model of φ {\displaystyle \varphi } . Falsity-in-a-case: φ {\displaystyle \varphi } is false under an interpretation I {\displaystyle {\mathcal {I}}} if, and only if, ¬ φ {\displaystyle \neg \varphi } is true under I {\displaystyle {\mathcal {I}}} . This is the "truth of negation" definition of falsity-in-a-case. Falsity-in-a-case may also be defined by the "complement" definition: φ {\displaystyle \varphi } is false under an interpretation I {\displaystyle {\mathcal {I}}} if, and only if, φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} . In classical logic, these definitions are equivalent, but in nonclassical logics, they are not. Semantic consequence: A sentence ψ {\displaystyle \psi } of L {\displaystyle {\mathcal {L}}} is a semantic consequence ( φ ⊨ ψ {\displaystyle \varphi \models \psi } ) of a sentence φ {\displaystyle \varphi } if there is no interpretation under which φ {\displaystyle \varphi } is true and ψ {\displaystyle \psi } is not true. 
Valid formula (tautology): A sentence φ {\displaystyle \varphi } of L {\displaystyle {\mathcal {L}}} is logically valid ( ⊨ φ {\displaystyle \models \varphi } ), or a tautology, if it is true under every interpretation, or true in every case. Consistent sentence: A sentence of L {\displaystyle {\mathcal {L}}} is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. An inconsistent formula is also called self-contradictory, and said to be a self-contradiction, or simply a contradiction, although this latter name is sometimes reserved specifically for statements of the form ( p ∧ ¬ p ) {\displaystyle (p\land \neg p)} . For interpretations (cases) I {\displaystyle {\mathcal {I}}} of L {\displaystyle {\mathcal {L}}} , these definitions are sometimes given: Complete case: A case I {\displaystyle {\mathcal {I}}} is complete if, and only if, either φ {\displaystyle \varphi } is true-in- I {\displaystyle {\mathcal {I}}} or ¬ φ {\displaystyle \neg \varphi } is true-in- I {\displaystyle {\mathcal {I}}} , for any φ {\displaystyle \varphi } in L {\displaystyle {\mathcal {L}}} . Consistent case: A case I {\displaystyle {\mathcal {I}}} is consistent if, and only if, there is no φ {\displaystyle \varphi } in L {\displaystyle {\mathcal {L}}} such that both φ {\displaystyle \varphi } and ¬ φ {\displaystyle \neg \varphi } are true-in- I {\displaystyle {\mathcal {I}}} . For classical logic, which assumes that all cases are complete and consistent, the following theorems apply: For any given interpretation, a given formula is either true or false under it. No formula is both true and false under the same interpretation.
φ {\displaystyle \varphi } is true under I {\displaystyle {\mathcal {I}}} if, and only if, ¬ φ {\displaystyle \neg \varphi } is false under I {\displaystyle {\mathcal {I}}} ; ¬ φ {\displaystyle \neg \varphi } is true under I {\displaystyle {\mathcal {I}}} if, and only if, φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} . If φ {\displaystyle \varphi } and ( φ → ψ ) {\displaystyle (\varphi \to \psi )} are both true under I {\displaystyle {\mathcal {I}}} , then ψ {\displaystyle \psi } is true under I {\displaystyle {\mathcal {I}}} . If ⊨ φ {\displaystyle \models \varphi } and ⊨ ( φ → ψ ) {\displaystyle \models (\varphi \to \psi )} , then ⊨ ψ {\displaystyle \models \psi } . ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is true under I {\displaystyle {\mathcal {I}}} if, and only if, either φ {\displaystyle \varphi } is not true under I {\displaystyle {\mathcal {I}}} , or ψ {\displaystyle \psi } is true under I {\displaystyle {\mathcal {I}}} . φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is logically valid, that is, φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ⊨ ( φ → ψ ) {\displaystyle \models (\varphi \to \psi )} . == Proof systems == Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems, according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence ( φ ⊨ ψ {\displaystyle \varphi \models \psi } ), whereas syntactic proof systems rely on syntactic consequence ( φ ⊢ ψ {\displaystyle \varphi \vdash \psi } ). Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system. 
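Because the language is truth-functional, semantic consequence over finitely many atoms can be decided by brute-force enumeration of interpretations. The sketch below uses a hypothetical tuple encoding of formulas (our own, not anything from the literature) and checks φ ⊨ ψ directly from the definition:

```python
from itertools import product

# Hypothetical encoding: an atom is a string; compound formulas are tuples
# ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).

def atoms(f):
    """Collect the atomic variables occurring in a formula."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(sub) for sub in f[1:]))

def evaluate(f, v):
    """Truth value of f under the interpretation v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not evaluate(f[1], v)
    if op == "and":
        return evaluate(f[1], v) and evaluate(f[2], v)
    if op == "or":
        return evaluate(f[1], v) or evaluate(f[2], v)
    if op == "imp":
        return (not evaluate(f[1], v)) or evaluate(f[2], v)
    raise ValueError(f"unknown connective {op!r}")

def entails(premise, conclusion):
    """phi |= psi: no interpretation makes the premise true and the conclusion false."""
    names = sorted(atoms(premise) | atoms(conclusion))
    return not any(
        evaluate(premise, dict(zip(names, values)))
        and not evaluate(conclusion, dict(zip(names, values)))
        for values in product([False, True], repeat=len(names))
    )

# Modus ponens, read semantically: {p, p -> q} |= q ...
assert entails(("and", "p", ("imp", "p", "q")), "q")
# ... while affirming the consequent fails: {q, p -> q} does not entail p.
assert not entails(("and", "q", ("imp", "p", "q")), "p")
```

Logical validity ⊨ φ is then just entailment from a trivially true premise, matching the theorem that φ ⊨ ψ holds exactly when ⊨ (φ → ψ) does.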
This section gives a very brief overview of the kinds of proof systems, with anchors both to the relevant sections of this article and to the separate Wikipedia articles on each one. === Semantic proof systems === Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ {\displaystyle \varphi \models \psi } , which indicates that if φ {\displaystyle \varphi } is true, then ψ {\displaystyle \psi } must also be true in every possible interpretation. ==== Truth tables ==== A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario. By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory. See § Semantic proof via truth tables. ==== Semantic tableaux ==== A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition. It constructs a tree where each branch represents a possible interpretation of the propositions involved. If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology. See § Semantic proof via tableaux. === Syntactic proof systems === Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ {\displaystyle \varphi \vdash \psi } , signifies that ψ {\displaystyle \psi } can be derived from φ {\displaystyle \varphi } using the rules of the formal system. ==== Axiomatic systems ==== An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived. In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms. See § Syntactic proof via axioms. 
==== Natural deduction ==== Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning. Each rule reflects a particular logical connective and shows how it can be introduced or eliminated. See § Syntactic proof via natural deduction. ==== Sequent calculus ==== The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas. Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic. == Semantic proof via truth tables == Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula. If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation). Further, if (and only if) ¬ φ {\displaystyle \neg \varphi } is valid, then φ {\displaystyle \varphi } is inconsistent. For instance, this table shows that "p → (q ∨ r → (r → ¬p))" is not valid: The computation of the last column of the third line may be displayed as follows: Further, using the theorem that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ( φ → ψ ) {\displaystyle (\varphi \to \psi )} is valid, we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: { φ 1 , φ 2 , φ 3 , . . . 
, φ n } ⊨ ψ {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi } if, and only if, we can produce a truth table that comes out all true for the formula ( ( ⋀ i = 1 n φ i ) → ψ ) {\displaystyle \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)} (that is, if ⊨ ( ( ⋀ i = 1 n φ i ) → ψ ) {\displaystyle \models \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)} ). == Semantic proof via tableaux == Since truth tables have 2^n lines for n variables, they can be tiresomely long for large values of n. Analytic tableaux are a more efficient, but nevertheless mechanical, semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false." Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below. These rules use "signed formulas", where a signed formula is an expression T X {\displaystyle TX} or F X {\displaystyle FX} , where X {\displaystyle X} is an (unsigned) formula of the language L {\displaystyle {\mathcal {L}}} . (Informally, T X {\displaystyle TX} is read " X {\displaystyle X} is true", and F X {\displaystyle FX} is read " X {\displaystyle X} is false".) Their formal semantic definition is that "under any interpretation, a signed formula T X {\displaystyle TX} is called true if X {\displaystyle X} is true, and false if X {\displaystyle X} is false, whereas a signed formula F X {\displaystyle FX} is called false if X {\displaystyle X} is true, and true if X {\displaystyle X} is false." 
1 ) T ∼ X F X F ∼ X T X s p a c e r 2 ) T ( X ∧ Y ) T X T Y F ( X ∧ Y ) F X | F Y s p a c e r 3 ) T ( X ∨ Y ) T X | T Y F ( X ∨ Y ) F X F Y s p a c e r 4 ) T ( X ⊃ Y ) F X | T Y F ( X ⊃ Y ) T X F Y {\displaystyle {\begin{aligned}&1)\quad {\frac {T\sim X}{FX}}\quad &&{\frac {F\sim X}{TX}}\\{\phantom {spacer}}\\&2)\quad {\frac {T(X\land Y)}{\begin{matrix}TX\\TY\end{matrix}}}\quad &&{\frac {F(X\land Y)}{FX|FY}}\\{\phantom {spacer}}\\&3)\quad {\frac {T(X\lor Y)}{TX|TY}}\quad &&{\frac {F(X\lor Y)}{\begin{matrix}FX\\FY\end{matrix}}}\\{\phantom {spacer}}\\&4)\quad {\frac {T(X\supset Y)}{FX|TY}}\quad &&{\frac {F(X\supset Y)}{\begin{matrix}TX\\FY\end{matrix}}}\end{aligned}}} In this notation, rule 2 means that T ( X ∧ Y ) {\displaystyle T(X\land Y)} yields both T X , T Y {\displaystyle TX,TY} , whereas F ( X ∧ Y ) {\displaystyle F(X\land Y)} branches into F X , F Y {\displaystyle FX,FY} . The notation is to be understood analogously for rules 3 and 4. Often, in tableaux for classical logic, the signed formula notation is simplified so that T φ {\displaystyle T\varphi } is written simply as φ {\displaystyle \varphi } , and F φ {\displaystyle F\varphi } as ¬ φ {\displaystyle \neg \varphi } , which accounts for naming rule 1 the "Rule of Double Negation". One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both T X {\displaystyle TX} and F X {\displaystyle FX} for some X {\displaystyle X} , which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. 
Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close. To construct a tableau for an argument ⟨ { φ 1 , φ 2 , φ 3 , . . . , φ n } , ψ ⟩ {\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle } , one first writes out the set of premise formulas, { φ 1 , φ 2 , φ 3 , . . . , φ n } {\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}} , with one formula on each line, signed with T {\displaystyle T} (that is, T φ {\displaystyle T\varphi } for each φ {\displaystyle \varphi } in the set); and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ {\displaystyle \psi } , signed with F {\displaystyle F} (that is, F ψ {\displaystyle F\psi } ). One then produces a truth tree (analytic tableau) by using all those lines according to the rules. A closed tree will be a proof that the argument was valid, in virtue of the fact that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, { φ , ∼ ψ } {\displaystyle \{\varphi ,\sim \psi \}} is inconsistent (also written as φ , ∼ ψ ⊨ {\displaystyle \varphi ,\sim \psi \models } ). == List of classically valid argument forms == Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold. We use φ {\displaystyle \varphi } ⟚ ψ {\displaystyle \psi } to denote equivalence of φ {\displaystyle \varphi } and ψ {\displaystyle \psi } , that is, as an abbreviation for both φ ⊨ ψ {\displaystyle \varphi \models \psi } and ψ ⊨ φ {\displaystyle \psi \models \varphi } ; as an aid to reading the symbols, a description of each formula is given. 
The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it, although many authors prefer to read it as "entails", or as "models". == Syntactic proof via natural deduction == Natural deduction, since it is a method of syntactical proof, is specified by providing inference rules (also called rules of proof) for a language with the typical set of connectives { − , & , ∨ , → , ↔ } {\displaystyle \{-,\&,\lor ,\to ,\leftrightarrow \}} ; no axioms are used other than these rules. The rules are covered below, and a proof example is given afterwards. === Notation styles === Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The § Gentzen notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs—not to be confused with "truth trees", which is another name for analytic tableaux. There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes, and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition. Lastly, there is the notation style that will actually be used in this article, which is due to Patrick Suppes but was much popularized by E.J. Lemmon and Benson Mates. This method has the advantage that, graphically, it is the least intensive to produce and display, which makes it a convenient choice for presenting proofs in this article.
A proof, then, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. See the § Natural deduction proof example. === Inference rules === Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof, which are the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule reductio ad absurdum. Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT and DN are commonly given rules, although they are not primitive. === Natural deduction proof example === The proof below derives − P {\displaystyle -P} from P → Q {\displaystyle P\to Q} and − Q {\displaystyle -Q} using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules. == Syntactic proof via axioms == It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits replacing any well-formed formula with any substitution-instance of it. Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used. This section gives the axioms of some historically notable axiomatic systems for propositional logic. 
For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic). === Frege's Begriffsschrift === Although axiomatic proof has been used since the famous Ancient Greek textbook, Euclid's Elements of Geometry, in propositional logic it dates back to Gottlob Frege's 1879 Begriffsschrift. Frege's system used only implication and negation as connectives. It had six axioms: Proposition 1: a → ( b → a ) {\displaystyle a\to (b\to a)} Proposition 2: ( c → ( b → a ) ) → ( ( c → b ) → ( c → a ) ) {\displaystyle (c\to (b\to a))\to ((c\to b)\to (c\to a))} Proposition 8: ( d → ( b → a ) ) → ( b → ( d → a ) ) {\displaystyle (d\to (b\to a))\to (b\to (d\to a))} Proposition 28: ( b → a ) → ( ¬ a → ¬ b ) {\displaystyle (b\to a)\to (\neg a\to \neg b)} Proposition 31: ¬ ¬ a → a {\displaystyle \neg \neg a\to a} Proposition 41: a → ¬ ¬ a {\displaystyle a\to \neg \neg a} These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic. === Łukasiewicz's P2 === Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence C C N p N q C q p {\displaystyle CCNpNqCqp} ". Taken out of Łukasiewicz's Polish notation into modern notation, this means ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\rightarrow \neg q)\rightarrow (q\rightarrow p)} . 
Hence, Łukasiewicz is credited with this system of three axioms: p → ( q → p ) {\displaystyle p\to (q\to p)} ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) {\displaystyle (p\to (q\to r))\to ((p\to q)\to (p\to r))} ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\to \neg q)\to (q\to p)} Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2 and helped popularize it. ==== Schematic form of P2 ==== One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for any well-formed formulas), the axioms are given as: φ → ( ψ → φ ) {\displaystyle \varphi \to (\psi \to \varphi )} ( φ → ( ψ → χ ) ) → ( ( φ → ψ ) → ( φ → χ ) ) {\displaystyle (\varphi \to (\psi \to \chi ))\to ((\varphi \to \psi )\to (\varphi \to \chi ))} ( ¬ φ → ¬ ψ ) → ( ψ → φ ) {\displaystyle (\neg \varphi \to \neg \psi )\to (\psi \to \varphi )} The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. It has also been attributed to Hilbert, and named H {\displaystyle {\mathcal {H}}} in this context. ==== Proof example in P2 ==== As an example, a proof of A → A {\displaystyle A\to A} in P2 is given below. 
First, the axioms are given names: (A1) ( p → ( q → p ) ) {\displaystyle (p\to (q\to p))} (A2) ( ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) ) {\displaystyle ((p\to (q\to r))\to ((p\to q)\to (p\to r)))} (A3) ( ( ¬ p → ¬ q ) → ( q → p ) ) {\displaystyle ((\neg p\to \neg q)\to (q\to p))} And the proof is as follows: A → ( ( B → A ) → A ) {\displaystyle A\to ((B\to A)\to A)} (instance of (A1)) ( A → ( ( B → A ) → A ) ) → ( ( A → ( B → A ) ) → ( A → A ) ) {\displaystyle (A\to ((B\to A)\to A))\to ((A\to (B\to A))\to (A\to A))} (instance of (A2)) ( A → ( B → A ) ) → ( A → A ) {\displaystyle (A\to (B\to A))\to (A\to A)} (from (1) and (2) by modus ponens) A → ( B → A ) {\displaystyle A\to (B\to A)} (instance of (A1)) A → A {\displaystyle A\to A} (from (4) and (3) by modus ponens) == Solvers == One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable.: 81  Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers. == See also == === Higher logical levels === First-order logic Second-order propositional logic Second-order logic Higher-order logic === Related topics === == Notes == == References == == Further reading == Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY. Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands. Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978. Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY. Lambek, J. 
and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, Cambridge, UK. Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company. === Related works === Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2. == External links == Klement, Kevin C. "Propositional Logic". In Fieser, James; Dowden, Bradley (eds.). Internet Encyclopedia of Philosophy. Retrieved 7 April 2025. Franks, Curtis (2024). "Propositional Logic". In Zalta, Edward N.; Nodelman, Uri (eds.). Stanford Encyclopedia of Philosophy (Winter 2024 ed.). Metaphysics Research Lab, Stanford University. Retrieved 7 April 2025. Formal Predicate Calculus, contains a systematic formal development with axiomatic proof forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sentential logic. Chapter 2 / Propositional Logic from Logic In Action Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and a sequent can be a single formula prefixed with > and having no commas) Propositional Logic - A Generative Grammar A Propositional Calculator that helps to understand simple expressions
Wikipedia/Truth-functional_propositional_logic
In argumentation, an objection is a reason arguing against a premise, argument, or conclusion. Definitions of objection vary in whether an objection is always an argument (or counterargument) or may include other moves such as questioning. An objection to an objection is sometimes known as a rebuttal. An objection can be issued against an argument retroactively from the point of reference of that argument. This form of objection – invented by the presocratic philosopher Parmenides – is commonly referred to as a retroactive refutation. == Inference objection == An inference objection is an objection to an argument based not on any of its stated premises, but rather on the relationship between a premise (or set of premises) and main contention. For a given simple argument, if the assumption is made that its premises are correct, fault may be found in the progression from these to the conclusion of the argument. This can often take the form of an unstated co-premise, as in begging the question. In other words, it may be necessary to make an assumption in order to conclude anything from a set of true statements. This assumption must also be true in order that the conclusion follow logically from the initial statements. === Example === In the first example argument map, the objector can't find anything contentious in the stated premises of the argument, but still disagrees with the conclusion; the objection is therefore placed beside the main premise and, in this case, exactly corresponds to an unstated or 'hidden' co-premise. This is demonstrated by the second example argument map in which the full pattern of reasoning relating to the contention is set out. == See also == Argument map Defeater == References ==
Wikipedia/Inference_objection
In logic, more specifically proof theory, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style system, Hilbert-style proof system, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of formal proof system attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well. A Hilbert system is defined as a deductive system that generates theorems from axioms and inference rules, especially if the only postulated inference rule is modus ponens. Every Hilbert system is an axiomatic system; many authors use only this less specific term when introducing their Hilbert systems, without mentioning any more specific term. In this context, "Hilbert systems" are contrasted with natural deduction systems, in which no axioms are used, only inference rules. While all sources that refer to an "axiomatic" logical proof system characterize it simply as a logical proof system with axioms, sources that use variants of the term "Hilbert system" sometimes define it in different ways; those alternative definitions will not be used in this article. For instance, Troelstra defines a "Hilbert system" as a system with axioms and with → E {\displaystyle {\rightarrow }E} and ∀ I {\displaystyle {\forall }I} as the only inference rules. A specific set of axioms is also sometimes called "the Hilbert system", or "the Hilbert-style calculus". Sometimes, "Hilbert-style" is used to convey the type of axiomatic system that has its axioms given in schematic form, as in the § Schematic form of P2 below—but other sources use the term "Hilbert-style" as encompassing both systems with schematic axioms and systems with a rule of substitution, as this article does. The use of "Hilbert-style" and similar terms to describe axiomatic proof systems in logic is due to the influence of Hilbert and Ackermann's Principles of Mathematical Logic (1928). 
Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference. Hilbert systems can be characterised by the choice of a large number of schemas of logical axioms and a small set of rules of inference. Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemas. The most commonly studied Hilbert systems have either just one rule of inference – modus ponens, for propositional logics – or two – with generalisation, to handle predicate logics, as well – and several infinite axiom schemas. Hilbert systems for alethic modal logics, sometimes called Hilbert-Lewis systems, additionally require the necessitation rule. Some systems use a finite list of concrete formulas as axioms instead of an infinite set of formulas via axiom schemas, in which case the uniform substitution rule is required. A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if one is interested only in the derivability of tautologies, not in hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided – not even if we want to use them just for proving derivability of tautologies. == Formal deductions == In a Hilbert system, a formal deduction (or proof) is a finite sequence of formulas in which each formula is either an axiom or is obtained from previous formulas by a rule of inference. These formal deductions are meant to mirror natural-language proofs, although they are far more detailed. 
Suppose Γ {\displaystyle \Gamma } is a set of formulas, considered as hypotheses. For example, Γ {\displaystyle \Gamma } could be a set of axioms for group theory or set theory. The notation Γ ⊢ ϕ {\displaystyle \Gamma \vdash \phi } means that there is a deduction that ends with ϕ {\displaystyle \phi } using as axioms only logical axioms and elements of Γ {\displaystyle \Gamma } . Thus, informally, Γ ⊢ ϕ {\displaystyle \Gamma \vdash \phi } means that ϕ {\displaystyle \phi } is provable assuming all the formulas in Γ {\displaystyle \Gamma } . Hilbert systems are characterized by the use of numerous schemas of logical axioms. An axiom schema is an infinite set of axioms obtained by substituting all formulas of some form into a specific pattern. The set of logical axioms includes not only those axioms generated from this pattern, but also any generalization of one of those axioms. A generalization of a formula is obtained by prefixing zero or more universal quantifiers on the formula; for example ∀ y ( ∀ x P x y → P t y ) {\displaystyle \forall y(\forall xPxy\to Pty)} is a generalization of ∀ x P x y → P t y {\displaystyle \forall xPxy\to Pty} . == Propositional logic == The following are some Hilbert systems that have been used in propositional logic. One of them, the § Schematic form of P2, is also considered a Frege system. === Frege's Begriffsschrift === Axiomatic proofs have been used in mathematics since the famous Ancient Greek textbook, Euclid's Elements of Geometry, c. 300 BC. But the first known fully formalized proof system that thereby qualifies as a Hilbert system dates back to Gottlob Frege's 1879 Begriffsschrift. 
Frege's system used only implication and negation as connectives, and it had six axioms, which were these ones: Proposition 1: a ⊃ ( b ⊃ a ) {\displaystyle a\supset (b\supset a)} Proposition 2: ( c ⊃ ( b ⊃ a ) ) ⊃ ( ( c ⊃ b ) ⊃ ( c ⊃ a ) ) {\displaystyle (c\supset (b\supset a))\supset ((c\supset b)\supset (c\supset a))} Proposition 8: ( d ⊃ ( b ⊃ a ) ) ⊃ ( b ⊃ ( d ⊃ a ) ) {\displaystyle (d\supset (b\supset a))\supset (b\supset (d\supset a))} Proposition 28: ( b ⊃ a ) ⊃ ( ¬ a ⊃ ¬ b ) {\displaystyle (b\supset a)\supset (\neg a\supset \neg b)} Proposition 31: ¬ ¬ a ⊃ a {\displaystyle \neg \neg a\supset a} Proposition 41: a ⊃ ¬ ¬ a {\displaystyle a\supset \neg \neg a} These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic. === Łukasiewicz's P2 === Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence C C N p N q C q p {\displaystyle CCNpNqCqp} ". Which, taken out of Łukasiewicz's Polish notation into modern notation, means ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\rightarrow \neg q)\rightarrow (q\rightarrow p)} . Hence, Łukasiewicz is credited with this system of three axioms: p → ( q → p ) {\displaystyle p\to (q\to p)} ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) {\displaystyle (p\to (q\to r))\to ((p\to q)\to (p\to r))} ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\to \neg q)\to (q\to p)} Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2, and helped popularize it. 
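Łukasiewicz's Polish notation is prefix notation: C takes the next two formulas as an implication and N takes the next one as a negation, so no parentheses are ever needed. A small parser and printer can make the translation explicit (a sketch with our own naming, nothing standard):

```python
def parse_polish(s):
    """Parse a formula in Łukasiewicz's notation: C = implication, N = negation,
    and any other single character is a propositional variable."""
    def parse(i):
        ch = s[i]
        if ch == "N":
            sub, j = parse(i + 1)
            return ("not", sub), j
        if ch == "C":
            left, j = parse(i + 1)
            right, k = parse(j)
            return ("imp", left, right), k
        return ch, i + 1  # a propositional variable such as p or q
    tree, end = parse(0)
    if end != len(s):
        raise ValueError("trailing symbols")
    return tree

def to_infix(f):
    """Render the parse tree in fully parenthesized modern notation."""
    if isinstance(f, str):
        return f
    if f[0] == "not":
        return "¬" + to_infix(f[1])
    return "(" + to_infix(f[1]) + " → " + to_infix(f[2]) + ")"

# Łukasiewicz's single replacement axiom:
print(to_infix(parse_polish("CCNpNqCqp")))  # prints ((¬p → ¬q) → (q → p))
```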
=== Schematic form of P2 === One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemas (metalogical variables that may stand for any well-formed formulas), the axioms are given as: φ → ( ψ → φ ) {\displaystyle \varphi \to (\psi \to \varphi )} ( φ → ( ψ → χ ) ) → ( ( φ → ψ ) → ( φ → χ ) ) {\displaystyle (\varphi \to (\psi \to \chi ))\to ((\varphi \to \psi )\to (\varphi \to \chi ))} ( ¬ φ → ¬ ψ ) → ( ψ → φ ) {\displaystyle (\neg \varphi \to \neg \psi )\to (\psi \to \varphi )} The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. In fact, the very idea of using axiom schemas to replace the rule of substitution is attributed to von Neumann. The schematic version of P2 has also been attributed to Hilbert, and named H {\displaystyle {\mathcal {H}}} in this context. Systems for propositional logic whose inference rules are schematic are also called Frege systems; as the authors that originally defined the term "Frege system" note, this actually excludes Frege's own system, given above, since it had axioms instead of axiom schemas. ==== Proof example in P2 ==== As an example, a proof of A → A {\displaystyle A\to A} in P2 is given below. 
First, the axioms are given names: (A1) ( p → ( q → p ) ) {\displaystyle (p\to (q\to p))} (A2) ( ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) ) ) {\displaystyle ((p\to (q\to r))\to ((p\to q)\to (p\to r)))} (A3) ( ( ¬ p → ¬ q ) → ( q → p ) ) {\displaystyle ((\neg p\to \neg q)\to (q\to p))} And the proof is as follows: A → ( ( B → A ) → A ) {\displaystyle A\to ((B\to A)\to A)} (instance of (A1)) ( A → ( ( B → A ) → A ) ) → ( ( A → ( B → A ) ) → ( A → A ) ) {\displaystyle (A\to ((B\to A)\to A))\to ((A\to (B\to A))\to (A\to A))} (instance of (A2)) ( A → ( B → A ) ) → ( A → A ) {\displaystyle (A\to (B\to A))\to (A\to A)} (from (1) and (2) by modus ponens) A → ( B → A ) {\displaystyle A\to (B\to A)} (instance of (A1)) A → A {\displaystyle A\to A} (from (4) and (3) by modus ponens) == Predicate logic (example system) == There are infinitely many axiomatisations of predicate logic, since for any logic there is freedom in choosing axioms and rules that characterise that logic. We describe here a Hilbert system with nine axioms and just the rule modus ponens, which we call the one-rule axiomatisation and which describes classical equational logic. We deal with a minimal language for this logic, where formulas use only the connectives ¬ {\displaystyle \lnot } and → {\displaystyle \to } and only the quantifier ∀ {\displaystyle \forall } . Later we show how the system can be extended to include additional logical connectives, such as ∧ {\displaystyle \land } and ∨ {\displaystyle \lor } , without enlarging the class of deducible formulas. The first four logical axiom schemas allow (together with modus ponens) for the manipulation of logical connectives. P1. ϕ → ϕ {\displaystyle \phi \to \phi } P2. ϕ → ( ψ → ϕ ) {\displaystyle \phi \to \left(\psi \to \phi \right)} P3. ( ϕ → ( ψ → ξ ) ) → ( ( ϕ → ψ ) → ( ϕ → ξ ) ) {\displaystyle \left(\phi \to \left(\psi \rightarrow \xi \right)\right)\to \left(\left(\phi \to \psi \right)\to \left(\phi \to \xi \right)\right)} P4.
( ¬ ϕ → ¬ ψ ) → ( ψ → ϕ ) {\displaystyle \left(\lnot \phi \to \lnot \psi \right)\to \left(\psi \to \phi \right)} The axiom P1 is redundant, as it follows from P3, P2 and modus ponens (see proof). These axioms describe classical propositional logic; without axiom P4 we get positive implicational logic. Minimal logic is achieved either by adding instead the axiom P4m, or by defining ¬ ϕ {\displaystyle \lnot \phi } as ϕ → ⊥ {\displaystyle \phi \to \bot } . P4m. ( ϕ → ψ ) → ( ( ϕ → ¬ ψ ) → ¬ ϕ ) {\displaystyle \left(\phi \to \psi \right)\to \left(\left(\phi \to \lnot \psi \right)\to \lnot \phi \right)} Intuitionistic logic is achieved by adding axioms P4i and P5i to positive implicational logic, or by adding axiom P5i to minimal logic. Both P4i and P5i are theorems of classical propositional logic. P4i. ( ϕ → ¬ ϕ ) → ¬ ϕ {\displaystyle \left(\phi \to \lnot \phi \right)\to \lnot \phi } P5i. ¬ ϕ → ( ϕ → ψ ) {\displaystyle \lnot \phi \to \left(\phi \to \psi \right)} Note that these are axiom schemas, which represent infinitely many specific instances of axioms. For example, P1 might represent the particular axiom instance p → p {\displaystyle p\to p} , or it might represent ( p → q ) → ( p → q ) {\displaystyle \left(p\to q\right)\to \left(p\to q\right)} : the ϕ {\displaystyle \phi } is a place where any formula can be placed. A variable such as this that ranges over formulae is called a 'schematic variable'. With a second rule of uniform substitution (US), we can change each of these axiom schemas into a single axiom, replacing each schematic variable by some propositional variable that isn't mentioned in any axiom to get what we call the substitutional axiomatisation. 
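The five-step P2 proof of A → A given earlier can also be checked mechanically: each line must be an instance of an axiom schema or follow from two earlier lines by modus ponens. A sketch of such a checker, assuming a home-made encoding of formulas as nested tuples (all names here are illustrative, not from the source):

```python
# Formulas: ('var', name), ('not', f), ('imp', f, g).
# Schemas additionally use ('meta', name) for metavariables.
A1 = ('imp', ('meta', 'p'), ('imp', ('meta', 'q'), ('meta', 'p')))
A2 = ('imp',
      ('imp', ('meta', 'p'), ('imp', ('meta', 'q'), ('meta', 'r'))),
      ('imp',
       ('imp', ('meta', 'p'), ('meta', 'q')),
       ('imp', ('meta', 'p'), ('meta', 'r'))))

def matches(schema, formula, env):
    """Does `formula` instantiate `schema`, extending the bindings in env?"""
    if schema[0] == 'meta':
        if schema[1] in env:
            return env[schema[1]] == formula
        env[schema[1]] = formula
        return True
    return (schema[0] == formula[0]
            and len(schema) == len(formula)
            and all(matches(s, f, env)
                    for s, f in zip(schema[1:], formula[1:])))

def is_axiom(formula):
    return any(matches(ax, formula, {}) for ax in (A1, A2))

def check_proof(steps):
    """Each step must be an axiom instance or follow by modus ponens."""
    for i, step in enumerate(steps):
        by_mp = any(steps[j] == ('imp', steps[k], step)
                    for j in range(i) for k in range(i))
        if not (is_axiom(step) or by_mp):
            return False
    return True

a, b = ('var', 'A'), ('var', 'B')
ba = ('imp', b, a)                                          # B -> A
proof = [
    ('imp', a, ('imp', ba, a)),                             # (1) inst. of A1
    ('imp', ('imp', a, ('imp', ba, a)),
            ('imp', ('imp', a, ba), ('imp', a, a))),        # (2) inst. of A2
    ('imp', ('imp', a, ba), ('imp', a, a)),                 # (3) MP on 1, 2
    ('imp', a, ba),                                         # (4) inst. of A1
    ('imp', a, a),                                          # (5) MP on 4, 3
]
print(check_proof(proof))  # True
```

Matching records bindings for metavariables in `env`, so repeated metavariables (such as the two occurrences of p in A1) are forced to stand for the same subformula.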
Both formalisations have variables, but where the one-rule axiomatisation has schematic variables that are outside the logic's language, the substitutional axiomatisation uses propositional variables that do the same work by expressing the idea of a variable ranging over formulae with a rule that uses substitution. US. Let ϕ ( p ) {\displaystyle \phi (p)} be a formula with one or more instances of the propositional variable p {\displaystyle p} , and let ψ {\displaystyle \psi } be another formula. Then from ϕ ( p ) {\displaystyle \phi (p)} , infer ϕ ( ψ ) {\displaystyle \phi (\psi )} . The next three logical axiom schemas provide ways to add, manipulate, and remove universal quantifiers. Q5. ∀ x ( ϕ ) → ϕ [ x := t ] {\displaystyle \forall x\left(\phi \right)\to \phi [x:=t]} where t may be substituted for x in ϕ {\displaystyle \,\!\phi } Q6. ∀ x ( ϕ → ψ ) → ( ∀ x ( ϕ ) → ∀ x ( ψ ) ) {\displaystyle \forall x\left(\phi \to \psi \right)\to \left(\forall x\left(\phi \right)\to \forall x\left(\psi \right)\right)} Q7. ϕ → ∀ x ( ϕ ) {\displaystyle \phi \to \forall x\left(\phi \right)} where x is not free in ϕ {\displaystyle \phi } . These three additional rules extend the propositional system to axiomatise classical predicate logic. Likewise, these three rules extend the system for intuitionistic propositional logic (with P1-3 and P4i and P5i) to intuitionistic predicate logic. Universal quantification is often given an alternative axiomatisation using an extra rule of generalisation, in which case the rules Q6 and Q7 are redundant. Generalization: If Γ ⊢ ϕ {\displaystyle \Gamma \vdash \phi } and x does not occur free in any formula of Γ {\displaystyle \Gamma } then Γ ⊢ ∀ x ϕ {\displaystyle \Gamma \vdash \forall x\phi } . The final axiom schemas are required to work with formulas involving the equality symbol. I8. x = x {\displaystyle x=x} for every variable x. I9.
( x = y ) → ( ϕ [ z := x ] → ϕ [ z := y ] ) {\displaystyle \left(x=y\right)\to \left(\phi [z:=x]\to \phi [z:=y]\right)} == Conservative extensions == It is common to include in a Hilbert system only axioms for the logical operators implication and negation towards functional completeness. Given these axioms, it is possible to form conservative extensions of the deduction theorem that permit the use of additional connectives. These extensions are called conservative because if a formula φ involving new connectives is rewritten as a logically equivalent formula θ involving only negation, implication, and universal quantification, then φ is derivable in the extended system if and only if θ is derivable in the original system. When fully extended, a Hilbert system will resemble more closely a system of natural deduction. === Existential quantification === Introduction ∀ x ( ϕ → ∃ y ( ϕ [ x := y ] ) ) {\displaystyle \forall x(\phi \to \exists y(\phi [x:=y]))} Elimination ∀ x ( ϕ → ψ ) → ∃ x ( ϕ ) → ψ {\displaystyle \forall x(\phi \to \psi )\to \exists x(\phi )\to \psi } where x {\displaystyle x} is not a free variable of ψ {\displaystyle \psi } . === Conjunction and disjunction === Conjunction introduction and elimination introduction: α → ( β → α ∧ β ) {\displaystyle \alpha \to (\beta \to \alpha \land \beta )} elimination left: α ∧ β → α {\displaystyle \alpha \wedge \beta \to \alpha } elimination right: α ∧ β → β {\displaystyle \alpha \wedge \beta \to \beta } Disjunction introduction and elimination introduction left: α → α ∨ β {\displaystyle \alpha \to \alpha \vee \beta } introduction right: β → α ∨ β {\displaystyle \beta \to \alpha \vee \beta } elimination: ( α → γ ) → ( ( β → γ ) → α ∨ β → γ ) {\displaystyle (\alpha \to \gamma )\to ((\beta \to \gamma )\to \alpha \vee \beta \to \gamma )} == See also == List of Hilbert systems Natural deduction Sequent calculus == Notes == == References == Curry, Haskell B.; Robert Feys (1958). Combinatory Logic Vol. I. Vol. 1. 
Amsterdam: North Holland. Monk, J. Donald (1976). Mathematical Logic. Graduate Texts in Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90170-1. Ruzsa, Imre; Máté, András (1997). Bevezetés a modern logikába (in Hungarian). Budapest: Osiris Kiadó. Tarski, Alfred (1990). Bizonyítás és igazság (in Hungarian). Budapest: Gondolat. It is a Hungarian translation of Alfred Tarski's selected papers on semantic theory of truth. David Hilbert (1927) "The foundations of mathematics", translated by Stephan Bauer-Menglerberg and Dagfinn Føllesdal (pp. 464–479). in: van Heijenoort, Jean (1967). From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd printing 1976 ed.). Cambridge MA: Harvard University Press. ISBN 0-674-32449-8. Hilbert's 1927, Based on an earlier 1925 "foundations" lecture (pp. 367–392), presents his 17 axioms—axioms of implication #1-4, axioms about & and V #5-10, axioms of negation #11-12, his logical ε-axiom #13, axioms of equality #14-15, and axioms of number #16-17—along with the other necessary elements of his Formalist "proof theory"—e.g. induction axioms, recursion axioms, etc.; he also offers up a spirited defense against L.E.J. Brouwer's Intuitionism. Also see Hermann Weyl's (1927) comments and rebuttal (pp. 480–484), Paul Bernay's (1927) appendix to Hilbert's lecture (pp. 485–489) and Luitzen Egbertus Jan Brouwer's (1927) response (pp. 490–495) Kleene, Stephen Cole (1952). Introduction to Metamathematics (10th impression with 1971 corrections ed.). Amsterdam NY: North Holland Publishing Company. ISBN 0-7204-2103-9. {{cite book}}: ISBN / Date incompatibility (help) See in particular Chapter IV Formal System (pp. 69–85) wherein Kleene presents subchapters §16 Formal symbols, §17 Formation rules, §18 Free and bound variables (including substitution), §19 Transformation rules (e.g. 
modus ponens) -- and from these he presents 21 "postulates"—18 axioms and 3 "immediate-consequence" relations divided as follows: Postulates for the propositional calculus #1-8, Additional postulates for the predicate calculus #9-12, and Additional postulates for number theory #13-21. == External links == Gaifman, Haim. "A Hilbert Type Deductive System for Sentential Logic, Completeness and Compactness" (PDF). Farmer, W. M. "Propositional logic" (PDF). It describes (among other things) a specific Hilbert-style proof system (that is restricted to propositional calculus).
Wikipedia/Hilbert_systems
A formal system is an abstract structure and formalization of an axiomatic system used for deducing, using rules of inference, theorems from axioms. In 1921, David Hilbert proposed to use formal systems as the foundation of knowledge in mathematics. The term formalism is sometimes a rough synonym for formal system, but it also refers to a given style of notation, for example, Paul Dirac's bra–ket notation. == Concepts == A formal system has the following: Formal language, which is a set of well-formed formulas, which are strings of symbols from an alphabet, formed by a formal grammar (consisting of production rules or formation rules). Deductive system, deductive apparatus, or proof system, which has rules of inference that take axioms and infer theorems, both of which are part of the formal language. A formal system is said to be recursive (i.e. effective) or recursively enumerable if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, respectively. === Formal language === A formal language is a language that is defined by a formal system. Like languages in linguistics, formal languages generally have two aspects: the syntax is what the language looks like (more formally: the set of possible expressions that are valid utterances in the language) the semantics are what the utterances of the language mean (which is formalized in various ways, depending on the type of language in question) Usually only the syntax of a formal language is considered via the notion of a formal grammar. The two main categories of formal grammar are that of generative grammars, which are sets of rules for how strings in a language can be written, and that of analytic grammars (or reductive grammar), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language.
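The generative/analytic distinction can be illustrated with a toy language not taken from this article: L = { aⁿbⁿ : n ≥ 0 }, generated by the grammar rules S → aSb and S → ε. A sketch:

```python
# Generative view: the rules produce the strings of L = { a^n b^n }.
def generate(n):
    """Derive a string by applying S -> aSb n times, then S -> ''."""
    return 'a' * n + 'b' * n

# Analytic view: the rules decide whether a given string belongs to L,
# by peeling matched a/b pairs off the ends.
def accepts(s):
    while s.startswith('a') and s.endswith('b'):
        s = s[1:-1]
    return s == ''

print(accepts(generate(3)))  # True
print(accepts('aab'))        # False
```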
=== Deductive system === A deductive system, also called a deductive apparatus, consists of the axioms (or axiom schemata) and rules of inference that can be used to derive theorems of the system. Such deductive systems preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth as opposed to falsehood. However, other modalities, such as justification or belief may be preserved instead. In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a logical consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system. The logical consequence (or entailment) of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for or even identified with a larger theory or field (e.g. Euclidean geometry) consistent with the usage in modern mathematics such as model theory. An example of a deductive system would be the rules of inference and axioms regarding equality used in first order logic. The two main types of deductive systems are proof systems and formal semantics. ==== Proof system ==== Formal proofs are sequences of well-formed formulas (or WFF for short) that might either be an axiom or be the product of applying an inference rule on previous WFFs in the proof sequence. The last WFF in the sequence is recognized as a theorem. Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all WFFs for which there is a proof. Thus all axioms are considered theorems. 
Unlike the grammar for WFFs, there is no guarantee that there will be a decision procedure for deciding whether a given WFF is a theorem or not. The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems. Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be a natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems. ==== Formal semantics of logical system ==== A logical system is a deductive system (most commonly first order logic) together with additional non-logical axioms. According to model theory, a logical system may be given interpretations which describe whether a given structure - the mapping of formulas to a particular meaning - satisfies a well-formed formula. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. A logical system is: Sound, if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system. Semantically complete, if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms. An example of a logical system is Peano arithmetic. The standard model of arithmetic sets the domain of discourse to be the nonnegative integers and gives the symbols their usual meaning. There are also non-standard models of arithmetic. 
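The notions of theoremhood and recursive enumerability can be illustrated with a deliberately trivial formal system (my own toy example, not from the source): the sole axiom is the string "a" and the single inference rule appends "aa". Its theorems are exactly the odd-length strings of a's, and they can be enumerated by generating proofs:

```python
from itertools import islice

# A toy formal system: axiom "a", inference rule  x |- x + "aa".
AXIOM = 'a'

def theorems():
    """Enumerate every theorem: each arises from the axiom by finitely
    many rule applications, so the theorem set is recursively enumerable."""
    t = AXIOM
    while True:
        yield t
        t = t + 'aa'

def is_theorem(wff, bound=1000):
    """Semidecision by search. For this toy system membership is in fact
    decidable, but in general such a search halts on theorems while it
    may run forever on non-theorems; a bound is imposed for illustration."""
    return wff in islice(theorems(), bound)

print(list(islice(theorems(), 3)))  # ['a', 'aaa', 'aaaaa']
```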
== History == Early logic systems include Indian logic of Pāṇini, syllogistic logic of Aristotle, propositional logic of Stoicism, and Chinese logic of Gongsun Long (c. 325–250 BCE). In more recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th century Europe. David Hilbert instigated a formalist movement called Hilbert’s program as a proposed solution to the foundational crisis of mathematics, which was eventually tempered by Gödel's incompleteness theorems. The QED manifesto represented a subsequent, as yet unsuccessful, effort at formalization of known mathematics. == See also == List of formal systems Formal method – Mathematical program specifications Formal science – Study of abstract structures described by formal systems Logic translation – Translation of a text into a logical system Rewriting system – Replacing subterm in a formula with another term Substitution instance – Concept in logic Theory (mathematical logic) – Set of sentences in a formal language == References == == Sources == Hunter, Geoffrey (1996) [1971]. Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press (published 1973). ISBN 9780520023567. OCLC 36312727. == Further reading == Hofstadter, Douglas, 1979. Gödel, Escher, Bach: An Eternal Golden Braid ISBN 978-0-465-02656-2. 777 pages. Kleene, Stephen C., 1967. Mathematical Logic Reprinted by Dover, 2002. ISBN 0-486-42533-9 Smullyan, Raymond M., 1961. Theory of Formal Systems: Annals of Mathematics Studies, Princeton University Press (April 1, 1961) 156 pages ISBN 0-691-08047-X == External links == Media related to Formal systems at Wikimedia Commons Encyclopædia Britannica, Formal system definition, 2007.
Daniel Richardson, Formal systems, logic and semantics Formal System at PlanetMath. Encyclopedia of Mathematics, Formal system Peter Suber, Formal Systems and Machines: An Isomorphism Archived 2011-05-24 at the Wayback Machine, 1997. Ray Taol, Formal Systems What is a Formal System?: Some quotes from John Haugeland's `Artificial Intelligence: The Very Idea' (1985), pp. 48–64.
Wikipedia/Formal_systems
In logic, a truth function is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional. Classical propositional logic is a truth-functional logic, in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a correspondent truth table), thus every compound statement is a truth function. On the other hand, modal logic is non-truth-functional. == Overview == A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective "and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true if, and only if, each of its sub-sentences "apples are fruits" and "carrots are vegetables" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional. Connectives of the form "x believes that ..." are typical examples of connectives that are not truth-functional. If e.g. 
Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence "Mary believes that Al Gore was President of the USA on April 20, 2000" is true while "Mary believes that the moon is made of green cheese" is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..." is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply operator since it is unary) is non-truth-functional. The class of classical logic connectives (e.g. &, →) used in the construction of formulas is truth-functional. Their values for various truth-values as argument are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false. == Table of binary truth functions == In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs P and Q. Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, in the following truth tables for sake of brevity. == Functional completeness == Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements. For example, classical logic has ¬P ∨ Q equivalent to P → Q. 
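The quoted equivalence can be verified by checking that both formulas agree on every row of the truth table; a short sketch (here `IMPLIES` is simply the conditional's truth table written out):

```python
from itertools import product

# The conditional P -> Q, given directly by its truth table.
IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}

# P -> Q is logically equivalent to (not P) or Q: they agree on every row.
equivalent = all(
    IMPLIES[(p, q)] == ((not p) or q)
    for p, q in product([False, True], repeat=2)
)
print(equivalent)  # True
```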
The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use. A minimal set of operators that can express every statement expressible in the propositional calculus is called a minimal functionally complete set. A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}. The following are the minimal functionally complete sets of operators whose arities do not exceed 2: One element {↑}, {↓}. Two elements { ∨ , ¬ } {\displaystyle \{\vee ,\neg \}} , { ∧ , ¬ } {\displaystyle \{\wedge ,\neg \}} , { → , ¬ } {\displaystyle \{\to ,\neg \}} , { ← , ¬ } {\displaystyle \{\gets ,\neg \}} , { → , ⊥ } {\displaystyle \{\to ,\bot \}} , { ← , ⊥ } {\displaystyle \{\gets ,\bot \}} , { → , ↮ } {\displaystyle \{\to ,\nleftrightarrow \}} , { ← , ↮ } {\displaystyle \{\gets ,\nleftrightarrow \}} , { → , ↛ } {\displaystyle \{\to ,\nrightarrow \}} , { → , ↚ } {\displaystyle \{\to ,\nleftarrow \}} , { ← , ↛ } {\displaystyle \{\gets ,\nrightarrow \}} , { ← , ↚ } {\displaystyle \{\gets ,\nleftarrow \}} , { ↛ , ¬ } {\displaystyle \{\nrightarrow ,\neg \}} , { ↚ , ¬ } {\displaystyle \{\nleftarrow ,\neg \}} , { ↛ , ⊤ } {\displaystyle \{\nrightarrow ,\top \}} , { ↚ , ⊤ } {\displaystyle \{\nleftarrow ,\top \}} , { ↛ , ↔ } {\displaystyle \{\nrightarrow ,\leftrightarrow \}} , { ↚ , ↔ } {\displaystyle \{\nleftarrow ,\leftrightarrow \}} . Three elements { ∨ , ↔ , ⊥ } {\displaystyle \{\lor ,\leftrightarrow ,\bot \}} , { ∨ , ↔ , ↮ } {\displaystyle \{\lor ,\leftrightarrow ,\nleftrightarrow \}} , { ∨ , ↮ , ⊤ } {\displaystyle \{\lor ,\nleftrightarrow ,\top \}} , { ∧ , ↔ , ⊥ } {\displaystyle \{\land ,\leftrightarrow ,\bot \}} , { ∧ , ↔ , ↮ } {\displaystyle \{\land ,\leftrightarrow ,\nleftrightarrow \}} , { ∧ , ↮ , ⊤ } {\displaystyle \{\land ,\nleftrightarrow ,\top \}} . 
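That {↑} alone is functionally complete can be made concrete by deriving ¬, ∧, and ∨ from NAND and checking the resulting truth tables. The derivations below are the standard ones (the helper names are mine):

```python
from itertools import product

def nand(x, y):
    return not (x and y)

# Standard derivations of the basic connectives from NAND alone.
def f_not(x):    return nand(x, x)           # ¬x  =  x ↑ x
def f_and(x, y): return f_not(nand(x, y))    # x ∧ y  =  ¬(x ↑ y)
def f_or(x, y):  return nand(f_not(x), f_not(y))  # x ∨ y  =  ¬x ↑ ¬y

# The derived connectives reproduce the usual truth tables on every row.
rows = list(product([False, True], repeat=2))
ok = (all(f_not(x) == (not x) for x in (False, True))
      and all(f_and(x, y) == (x and y) for x, y in rows)
      and all(f_or(x, y) == (x or y) for x, y in rows))
print(ok)  # True
```

The same exercise goes through for {↓}, with ¬x = x ↓ x, x ∨ y = ¬(x ↓ y), and x ∧ y = ¬x ↓ ¬y.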
== Algebraic properties == Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are: associativity: Within an expression containing two or more of the same associative connectives in a row, the order of the operations does not matter as long as the sequence of the operands is not changed. commutativity: The operands of the connective may be swapped without affecting the truth-value of the expression. distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a · b) + (a · c) for all operands a, b, c. idempotence: Whenever the operands of the operation are the same, the connective gives the operand as the result. In other words, the operation is both truth-preserving and falsehood-preserving (see below). absorption: A pair of connectives ∧ , ∨ {\displaystyle \land ,\lor } satisfies the absorption law if a ∧ ( a ∨ b ) = a ∨ ( a ∧ b ) = a {\displaystyle a\land (a\lor b)=a\lor (a\land b)=a} for all operands a, b. A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it: monotonic: If f(a1, ..., an) ≤ f(b1, ..., bn) for all a1, ..., an, b1, ..., bn ∈ {0,1} such that a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn. E.g., ∨ , ∧ , ⊤ , ⊥ {\displaystyle \vee ,\wedge ,\top ,\bot } . affine: For each variable, changing its value either always or never changes the truth-value of the operation, for all fixed values of all other variables. E.g., ¬ , ↔ {\displaystyle \neg ,\leftrightarrow } , ↮ , ⊤ , ⊥ {\displaystyle \not \leftrightarrow ,\top ,\bot } . self dual: To read the truth-value assignments for the operation from top to bottom on its truth table is the same as taking the complement of reading it from bottom to top; in other words, f(¬a1, ..., ¬an) = ¬f(a1, ..., an). 
E.g., ¬ {\displaystyle \neg } . truth-preserving: The interpretation under which all variables are assigned a truth value of true produces a truth value of true as a result of these operations. E.g., ∨ , ∧ , ⊤ , → , ↔ , ⊂ {\displaystyle \vee ,\wedge ,\top ,\rightarrow ,\leftrightarrow ,\subset } . (see validity) falsehood-preserving: The interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of these operations. E.g., ∨ , ∧ , ↮ , ⊥ , ⊄ , ⊅ {\displaystyle \vee ,\wedge ,\nleftrightarrow ,\bot ,\not \subset ,\not \supset } . (see validity) === Arity === A concrete function may be also referred to as an operator. In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and 2 2 n {\displaystyle 2^{2^{n}}} n-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and 3 3 n {\displaystyle 3^{3^{n}}} n-ary operators. In k-valued logic, there are k nullary operators, k k {\displaystyle k^{k}} unary operators, k k 2 {\displaystyle k^{k^{2}}} binary operators, k k 3 {\displaystyle k^{k^{3}}} ternary operators, and k k n {\displaystyle k^{k^{n}}} n-ary operators. An n-ary operator in k-valued logic is a function from Z k n → Z k {\displaystyle \mathbb {Z} _{k}^{n}\to \mathbb {Z} _{k}} . Therefore, the number of such operators is | Z k | | Z k n | = k k n {\displaystyle |\mathbb {Z} _{k}|^{|\mathbb {Z} _{k}^{n}|}=k^{k^{n}}} , which is how the above numbers were derived. However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. 
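The completeness criterion stated above (a set is functionally complete if and only if, for each of the five properties, it contains a member lacking it) can be tested by brute force for binary connectives. A sketch with illustrative helper names:

```python
from itertools import product

BOOLS = (False, True)
ROWS = list(product(BOOLS, repeat=2))

def truth_preserving(f):     return f(True, True) is True
def falsehood_preserving(f): return f(False, False) is False

def monotonic(f):
    # If inputs only go up (False <= True), the output may not go down.
    return all(f(*ra) <= f(*rb)
               for ra in ROWS for rb in ROWS
               if all(x <= y for x, y in zip(ra, rb)))

def self_dual(f):
    return all(f(not a, not b) == (not f(a, b)) for a, b in ROWS)

def affine(f):
    # Affine iff each variable either always or never flips the output.
    def flip_behaviour(i):
        return {f(*r) != f(*tuple(not v if j == i else v
                                  for j, v in enumerate(r)))
                for r in ROWS}
    return all(len(flip_behaviour(i)) == 1 for i in range(2))

POST_CLASSES = (truth_preserving, falsehood_preserving,
                monotonic, self_dual, affine)

def complete(connectives):
    """Post's criterion: for each class, some member must escape it."""
    return all(any(not cls(f) for f in connectives)
               for cls in POST_CLASSES)

nand = lambda a, b: not (a and b)
print(complete([nand]))                                       # True
print(complete([lambda a, b: a and b, lambda a, b: a or b]))  # False
```

{↑} escapes all five classes, so it is complete on its own, whereas {∧, ∨} fails because both members are truth-preserving.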
Out of the 256 ternary Boolean operators cited above, ( 3 2 ) ⋅ 16 − ( 3 1 ) ⋅ 4 + ( 3 0 ) ⋅ 2 {\displaystyle {\binom {3}{2}}\cdot 16-{\binom {3}{1}}\cdot 4+{\binom {3}{0}}\cdot 2} of them are such degenerate forms of binary or lower-arity operators, using the inclusion–exclusion principle. The ternary operator f ( x , y , z ) = ¬ x {\displaystyle f(x,y,z)=\lnot x} is one such operator which is actually a unary operator applied to one input, and ignoring the other two inputs. "Not" is a unary operator, it takes a single term (¬P). The rest are binary operators, taking two terms to make a compound statement (P ∧ Q, P ∨ Q, P → Q, P ↔ Q). The set of logical operators Ω may be partitioned into disjoint subsets as follows: Ω = Ω 0 ∪ Ω 1 ∪ … ∪ Ω j ∪ … ∪ Ω m . {\displaystyle \Omega =\Omega _{0}\cup \Omega _{1}\cup \ldots \cup \Omega _{j}\cup \ldots \cup \Omega _{m}\,.} In this partition, Ω j {\displaystyle \Omega _{j}} is the set of operator symbols of arity j. In the more familiar propositional calculi, Ω {\displaystyle \Omega } is typically partitioned as follows: nullary operators: Ω 0 = { ⊥ , ⊤ } {\displaystyle \Omega _{0}=\{\bot ,\top \}} unary operators: Ω 1 = { ¬ } {\displaystyle \Omega _{1}=\{\lnot \}} binary operators: Ω 2 ⊃ { ∧ , ∨ , → , ↔ } {\displaystyle \Omega _{2}\supset \{\land ,\lor ,\rightarrow ,\leftrightarrow \}} == Principle of compositionality == Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning. 
Let I be an interpretation function, let Φ, Ψ be any two sentences and let the truth function fnand be defined as: fnand(T,T) = F; fnand(T,F) = fnand(F,T) = fnand(F,F) = T Then, for convenience, fnot, for, fand and so on are defined by means of fnand: fnot(x) = fnand(x,x) for(x,y) = fnand(fnot(x), fnot(y)) fand(x,y) = fnot(fnand(x,y)) or, alternatively, fnot, for, fand and so on are defined directly: fnot(T) = F; fnot(F) = T; for(T,T) = for(T,F) = for(F,T) = T; for(F,F) = F fand(T,T) = T; fand(T,F) = fand(F,T) = fand(F,F) = F Then I(Φ ↑ Ψ) = fnand(I(Φ), I(Ψ)), and so on for the other connectives. Thus if S is a sentence that is a string of symbols consisting of logical symbols v1...vn representing logical connectives, and non-logical symbols c1...cn, then, provided I(v1)...I(vn) have been given by means of fnand (or any other functionally complete set of truth-functions), the truth-value of ⁠ I ( s ) {\displaystyle I(s)} ⁠ is determined entirely by the truth-values of c1...cn, i.e. of I(c1)...I(cn). In other words, as expected and required, S is true or false only under an interpretation of all its non-logical symbols. == Definition == Using the functions defined above, we can give a formal definition of a proposition's truth function. Let PROP be the set of all propositional variables, P R O P = { p 1 , p 2 , … } {\displaystyle PROP=\{p_{1},p_{2},\dots \}} We define a truth assignment to be any function ϕ : P R O P → { T , F } {\displaystyle \phi :PROP\to \{T,F\}} . A truth assignment is therefore an association of each propositional variable with a particular truth value. This is effectively the same as a particular row of a proposition's truth table. For a truth assignment, ϕ {\displaystyle \phi } , we define its extended truth assignment, ϕ ¯ {\displaystyle {\overline {\phi }}} , as follows. This extends ϕ {\displaystyle \phi } to a new function ϕ ¯ {\displaystyle {\overline {\phi }}} which has domain equal to the set of all propositional formulas.
The range of ϕ ¯ {\displaystyle {\overline {\phi }}} is still { T , F } {\displaystyle \{T,F\}} . If A ∈ P R O P {\displaystyle A\in PROP} then ϕ ¯ ( A ) = ϕ ( A ) {\displaystyle {\overline {\phi }}(A)=\phi (A)} . If A and B are any propositional formulas, then ϕ ¯ ( ¬ A ) = f not ( ϕ ¯ ( A ) ) {\displaystyle {\overline {\phi }}(\neg A)=f_{\text{not}}({\overline {\phi }}(A))} . ϕ ¯ ( A ∧ B ) = f and ( ϕ ¯ ( A ) , ϕ ¯ ( B ) ) {\displaystyle {\overline {\phi }}(A\land B)=f_{\text{and}}({\overline {\phi }}(A),{\overline {\phi }}(B))} . ϕ ¯ ( A ∨ B ) = f or ( ϕ ¯ ( A ) , ϕ ¯ ( B ) ) {\displaystyle {\overline {\phi }}(A\lor B)=f_{\text{or}}({\overline {\phi }}(A),{\overline {\phi }}(B))} . ϕ ¯ ( A → B ) = ϕ ¯ ( ¬ A ∨ B ) {\displaystyle {\overline {\phi }}(A\to B)={\overline {\phi }}(\neg A\lor B)} . ϕ ¯ ( A ↔ B ) = ϕ ¯ ( ( A → B ) ∧ ( B → A ) ) {\displaystyle {\overline {\phi }}(A\leftrightarrow B)={\overline {\phi }}((A\to B)\land (B\to A))} . Finally, now that we have defined the extended truth assignment, we can use this to define the truth-function of a proposition. For a proposition, A, its truth function, f A {\displaystyle f_{A}} , has domain equal to the set of all truth assignments, and range equal to { T , F } {\displaystyle \{T,F\}} . It is defined, for each truth assignment ϕ {\displaystyle \phi } , by f A ( ϕ ) = ϕ ¯ ( A ) {\displaystyle f_{A}(\phi )={\overline {\phi }}(A)} . The value given by ϕ ¯ ( A ) {\displaystyle {\overline {\phi }}(A)} is the same as the one displayed in the final column of the truth table of A, on the row identified with ϕ {\displaystyle \phi } . == Computer science == Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. 
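The recursive clauses defining the extended truth assignment above translate directly into a short evaluator. A sketch, with a home-made formula encoding (strings for variables, tagged tuples for compound formulas; the names are illustrative):

```python
# Formulas: a string is a propositional variable; otherwise a tuple
# ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B) or ('iff', A, B).

def extend(phi):
    """Build the extended truth assignment phi_bar from a truth
    assignment phi: variables -> {True, False}, clause by clause."""
    def phi_bar(f):
        if isinstance(f, str):              # base case: phi_bar(A) = phi(A)
            return phi[f]
        op = f[0]
        if op == 'not':
            return not phi_bar(f[1])
        a, b = phi_bar(f[1]), phi_bar(f[2])
        if op == 'and': return a and b
        if op == 'or':  return a or b
        if op == 'imp': return (not a) or b  # A -> B  as  ¬A ∨ B
        if op == 'iff': return a == b        # (A -> B) ∧ (B -> A)
        raise ValueError(op)
    return phi_bar

# The truth function of A = (p1 -> p2), evaluated on one assignment,
# i.e. one row of A's truth table.
A = ('imp', 'p1', 'p2')
f_A = lambda phi: extend(phi)(A)
print(f_A({'p1': True, 'p2': False}))  # False
```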
All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates. The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence. The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer. == See also == == Notes == == References == This article incorporates material from TruthFunction on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. == Further reading == Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated from the French and German versions by Otto Bird, Dordrecht, South Holland: D. Reidel. Alonzo Church (1944), Introduction to Mathematical Logic, Princeton, NJ: Princeton University Press. See the Introduction for a history of the truth function concept.
Wikipedia/Truth-functional
The British Science Association (BSA) is a charity and learned society founded in 1831 to aid in the promotion and development of science. Until 2009 it was known as the British Association for the Advancement of Science (BA). The current Chief Executive is Hannah Russell. The BSA's mission is to get more people engaged in the field of science by coordinating, delivering, and overseeing different projects that are suited to achieve these goals. The BSA "envisions a society in which a diverse group of people can learn and apply the sciences in which they learn." and is managed by a professional staff located at their Head Office in the Wellcome Wolfson Building. The BSA offers a wide variety of activities and events that both recognise and encourage people to be involved in science. These include the British Science Festival, British Science Week, the CREST Awards, For Thought, The Ideas Fund, along with regional and local events. == History == === Foundation === The Association was founded in 1831 and modelled on the German Gesellschaft Deutscher Naturforscher und Ärzte. It was founded during post-war reconstruction after the Peninsular War to improve the advancement of science in England. The prime mover (who is regarded as the main founder) was Reverend William Vernon Harcourt, following a suggestion by Sir David Brewster, who was disillusioned with the elitist and conservative attitude of the Royal Society. Charles Babbage, William Whewell and J. F. W. Johnston are also considered to be founding members. The first meeting was held in York (at the Yorkshire Museum) on Tuesday 27 September 1831 with various scientific papers being presented on the following days. It was chaired by Viscount Milton, president of the Yorkshire Philosophical Society, and "upwards of 300 gentlemen" attended the meeting.
The Preston Mercury recorded that those gathered consisted of "persons of distinction from various parts of the kingdom, together with several of the gentry of Yorkshire and the members of philosopher societies in this country". The newspaper published the names of over a hundred of those attending and these included, amongst others, eighteen clergymen, eleven doctors, four knights, two Viscounts and one Lord. From that date onwards a meeting was held annually at a place chosen at a previous meeting. In 1832, for example, the meeting was held in Oxford, chaired by Reverend Dr William Buckland. By this stage the Association had four sections: Physics (including Mathematics and Mechanical Arts), Chemistry (including Mineralogy and Chemical Arts), Geology (including Geography) and Natural History. During this second meeting, the first objects and rules of the Association were published. Objects included systematically directing the acquisition of scientific knowledge, spreading this knowledge as well as discussion between scientists across the world, and to focus on furthering science by removing obstacles to progress. The rules established included what constituted a member of the Association, the fee to remain a member, and the process for future meetings. They also include dividing the members into different committees. These committees separated members into their preferred subject matter, and were to recommend investigations into areas of interest, then report on these findings, as well as progress in their science at the annual meetings. Additional sections were added throughout the years by either splitting off part of an original section, like making Geography and Ethnology its own section apart from Geology in 1851, or by defining a new subject area of discussion, such as Anthropology in 1869. A very important decision in the Association's history was made in 1842 when it was resolved to create a "physical observatory". 
A building that became well known as the Kew Observatory was taken on for the purpose and Francis Ronalds was chosen as the inaugural Honorary Director. Kew Observatory quickly became one of the most renowned meteorological and geomagnetic observatories in the world. The Association relinquished control of the Kew Observatory in 1871 to the management of the Royal Society, after a large donation to grant the observatory its independence. In 1872, the Association purchased its first central office in London, acquiring four rooms at 22 Albemarle Street. This office was intended to be a resource for members of the Association. One of the most famous events linked to the Association Meeting was an exchange between Thomas Henry Huxley and Bishop Samuel Wilberforce in 1860 (see the 1860 Oxford evolution debate). Although it is often described as a "debate", the exchange occurred after the presentation of a paper by Prof Draper of New York, on the intellectual development of Europe with relation to Darwin's theory (one of a number of scientific papers presented during the week) and the subsequent discussion involved a number of other participants (although Wilberforce and Huxley were the most prominent). Although a number of newspapers made passing references to the exchange, it was not until later that it was accorded greater significance in the evolution debate. === Electrical standards === One of the most important contributions of the British Association was the establishment of standards for electrical usage: the ohm as the unit of electrical resistance, the volt as the unit of electrical potential, and the ampere as the unit of electrical current. A need for standards arose with the submarine telegraph industry. 
Practitioners came to use their own standards established by wire coils: "By the late 1850s, Clark, Varley, Bright, Smith and other leading British cable engineers were using calibrated resistance coils on a regular basis and were beginning to use calibrated condensers as well." The undertaking was suggested to the BA by William Thomson, and its success was due to the use of Thomson's mirror galvanometer. Josiah Latimer Clark and Fleeming Jenkin made preparations. Thomson, with his students, found that impure copper, contaminated with arsenic, introduced significant extra resistance. The chemist Augustus Matthiessen contributed an appendix (A) to the final 1873 report that showed temperature-dependence of alloys. The natural relation between these units are clearly, that a unit of electromotive force between two points of a conductor separated by a unit of resistance shall produce unit current, and that this current in a unit of time convey a unit quantity of electricity. The unit system was "absolute" since it agreed with previously accepted units of work, or energy: The unit current of electricity, in passing through a conductor of unit resistance, does a unit of work or its equivalent in a unit of time.
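In modern SI notation the quoted relations amount to V = I·R, Q = I·t, and W = I²·R·t. A small illustrative check (the variable names are ours, not the committee's):

```python
# The "absolute" relations between the BA units, restated in SI terms:
# unit EMF across unit resistance yields unit current (V = I * R), and
# unit current for unit time conveys unit charge (Q = I * t).
volt, ohm, second = 1.0, 1.0, 1.0

ampere = volt / ohm            # I = V / R
coulomb = ampere * second      # Q = I * t

# Energy check: unit current through unit resistance does one unit of
# work in unit time (W = I^2 * R * t), agreeing with the accepted
# units of work.
joule = ampere**2 * ohm * second

assert ampere == 1.0 and coulomb == 1.0 and joule == 1.0
```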
=== Committee on Mechanical Nomenclature === In 1888, at a meeting of the British Association in Bath, the Committee on Mechanical Nomenclature suggested three new units: the kine for velocity, equal to 1 centimeter per second; the bole for momentum, equal to 1 gram times 1 kine; and the barad for pressure, equal to 1 dyne per square centimeter. The London Electrical Review called the new units "an abomination, and wholly unnecessary" and attributed their creation to a "craze" for naming new units. William Henry Preece noted in 1891 that he had only seen one instance of use of the new units. By 1913, the units had fallen entirely out of use. === Other === The Association was parodied by English novelist Charles Dickens as 'The Mudfog Society for the Advancement of Everything' in The Mudfog Papers (1837–38). In 1878 a committee of the Association recommended against constructing Charles Babbage's analytical engine, citing the lack of complete working drawings, the machine's potential cost to produce, doubts about its durability under repeated use, uncertainty about how and for what the machine would actually be used, and the further work needed to bring the design up to a standard at which it could be guaranteed to work. The Association introduced the British Association (usually termed "BA") screw threads, a series of screw thread standards in sizes from 0.25 mm up to 6 mm, in 1882. The standards were based on the metric system, although they had to be re-defined in imperial terms for use by UK industry. The standard was modified in 1884 to restrict the number of significant figures given for the metric diameter and pitch of the screws in the published table, and to stop designating screws by their number of threads per inch, giving instead an approximation, owing to considerable differences in manufactured screws.
In 1889, a member of the Rational Dress Society, Charlotte Carmichael Stopes, stunned the proceedings of a meeting of the Association in Newcastle upon Tyne by organizing an impromptu session where she introduced rational dress to a wide audience, her speech being noted in newspapers across Britain. In 1903, microscopist and astronomer Washington Teasdale died whilst attending the annual meeting. == Perception of science in the UK == The Association's main aim is to make science more relevant, representative and connected to society. At the beginning of the Great Depression, the Association began to shift its focus to account not only for scientific progress, but also for the social aspects of such progress. In the Association's 1931 meeting, the president General Jan Christiaan Smuts ended his address by proposing to link science and ethics together, but provided no means of actuating his ideas. In the following years, debate began as to where the responsibilities of scientists lay. The Association adopted a resolution in 1934 that dedicated efforts to better balance scientific advancement with social progress. J.D. Bernal, a member of the Royal Society and the British Association, wrote The Social Function of Science in 1939, describing a need to correctly utilize science for society and the importance of its public perception. The idea of the public perception of science was furthered in 1985 when the Royal Society published a report titled The Public Understanding of Science. In the report, a committee of the Royal Society determined that it was scientists' duty to communicate to and educate the public. Lord George Porter, then president of the Royal Society and the British Association, and director of the Royal Institution, created the Committee on the Public Understanding of Science, or COPUS, to promote public understanding of science. Professor Sir George Porter became the president in September 1985.
He won the 1967 Nobel Prize in Chemistry along with Manfred Eigen and Ronald George Wreyford Norrish. When asked about the scientific literacy of Britain, he stated that Britain was the least educated country compared to all the other advanced countries. His proposed solution to this problem was to start scientific education for children at the age of 4. He said his reason for such an early age was that it is the age when children are the most curious, and that introducing science then would help them gain curiosity towards all disciplines of science. When asked why public ignorance of science matters, his response was: It matters because among those who are scientifically illiterate are some of those who are in power, people who lead us in politics, in civil service, in the media, in the church, often in industry and sometimes even in education. Think, for example, about the enormous influence of scientific knowledge on one's whole philosophy of life, even one's religion. It is no more permissible for the archbishops of today, who advise their flocks on how to interpret the Scriptures, to ignore the findings of Watson and Crick, than it was right for clerics of the last century to ignore the work of Darwin. Science today is all-pervasive. Without some scientific and technical education, it is becoming impossible even to vote responsibly on matters of health, energy, defense or education. So unless things change, we shall soon live in a country that is backward not only in its technology and standard of living but in its cultural vitality too. It is wrong to suppose that by foregoing technological and scientific education we shall somehow become a nation of artists, writers or philosophers instead. These two aspects of culture have never been divorced from each other throughout our history. Every renaissance, every period that showed a flowering of civilization, advanced simultaneously in the arts and sciences, and in technology too.
Sir Kenneth Durham, former director of research at Unilever, on becoming president in August 1987 followed on from Sir George Porter, saying that science teachers needed extra pay to overcome the scarcity of mathematics and physics teachers in secondary schools, and that "unless we deal with this as a matter of urgency, the outlook for our manufacturing future is bleak". He regretted that headmasters and careers masters had for many years followed 'the cult of Oxbridge' because "it carried more prestige to read classics at Oxbridge and go into the Civil Service or banking, than to read engineering at, say, Salford, and go into manufacturing industry". He said that reporting of sciences gave good coverage to medical science, but that "nevertheless, editors ought to be sensitive to developments in areas such as solid state physics, astro-physics, colloid science, molecular biology, transmission of stimuli along nerve fibres, and so on", and that newspaper editors were in danger of waiting for disasters before the scientific factors involved in the incidents were explained. In September 2001 Sir William Stewart, as outgoing president, warned that universities faced "dumbing down" and that "we can deliver social inclusiveness, and the best universities, but not both from a limited amount of money. We run the risk of doing neither well. Universities are underfunded, and must not be seen simply as a substitute for National Service to keep youngsters off the dole queue... [Adding,] scientists have to be careful and consider the full implications of what they are seeking to achieve. The problem with some clever people is that they find cleverer ways of being stupid." In 2000, Sir Peter Williams put together a panel to discuss the shortage of physics majors. A physicist called Derek Raine stated that he had had multiple firms call him up asking for physics majors.
Their report stated that it was critical to increase the number of physics teachers, or there would be a detrimental effect on the number of future engineers and scientists. === British Science Festival === The Association's major emphasis in recent decades has been on public engagement in science. Its annual meeting, now called the British Science Festival, is the largest public showcase for science in the UK and attracts a great deal of media attention. It is held at UK universities in early September for one week, with visits to science-related local cultural attractions. The 2010 Festival, held in Birmingham with Aston University as lead University partner, featured a prank event: the unveiling of Dulcis foetidus, a fictional plant purported to emit a pungent odour. In an experiment in herd mentality, some audience members were induced into believing they could smell it. The Festival has also been the home to protest and debate. In 1970 there were protests over the use of science for weapons. === Science Communication Conference === The Association organised and held the annual Science Communication Conference for over ten years. It was the largest conference of its kind in the UK, and addressed the key issues facing science communicators. In 2015, the BSA introduced a new series of smaller events for science communicators, designed to address the same issues as the Science Communication Conference but for a more targeted audience. === British Science Week === In addition to the British Science Festival, the British Science Association organises British Science Week (formerly National Science & Engineering Week), an opportunity for people of all ages to get involved in science, engineering, technology and maths activities, originating as the National Week of Science, Engineering and Technology.
The Association also has a young people's programme, the CREST Awards, which seeks to involve school students in science beyond the school curriculum, and to encourage them to consider higher education and careers in science. === Huxley Summit === Named after Thomas Huxley, the Huxley Summit is a leadership event run by the British Science Association, where 250 of the most influential people in the UK are brought together to discuss scientific and social challenges that the UK faces in the 21st century, and to develop a link between scientists and non-scientists so that science can be understood by society as a whole. On 8 November 2016, the British Science Association held the first Huxley Summit at BAFTA, London. The theme of the summit was "Trust in the 21st Century" and how that would affect the future of science, innovation, and business. === Media Fellowship Schemes === The British Science Association's Media Fellowship provides the opportunity for practising scientists, clinicians, and engineers to spend a period of time working at media outlets such as the Guardian, BBC Breakfast or The Londonist. After their time with the media placement, the fellows attend the British Science Festival, which offers these practitioners valuable working experience with a range of media organisations, exposure to a wide range of public engagement activities, and the chance to network with academics, journalists and science communicators. == CREST Awards == CREST Awards is the British Science Association's scheme to encourage students aged 5–19 to get involved with STEM projects and encourage scientific thinking. Awards range from Star Awards (targeted at those aged 5–7) to Gold Awards (targeted at those aged 16–19). Overall, 30,000 awards are undertaken annually. Many students who do CREST Awards, especially Silver and Gold Awards, which require 30 and 70 hours of work respectively, enter competitions like the UK Big Bang Fair.
== Patrons and Presidents of the British Science Association == Traditionally the president is elected at the meeting usually held in August/September for a one-year term and gives a presidential address upon retiring. The honour of the presidency is traditionally bestowed only once per individual. Written sources that give the year of presidency as a single year generally mean the year in which the presidential address is given. In 1926/1927 the association's patron was King George V and the president was his son Edward, Prince of Wales. The vice-presidents for the Leeds meeting at this time included City of Leeds Alderman Charles Lupton and his brother, The Rt. Hon. the Lord Mayor of Leeds Hugh Lupton. The husband of the brothers' first cousin once removed - Lord Airedale of Gledhow - was also a vice-president at the Leeds meeting. == List of annual meetings == 1831 (1st meeting) York, England. 1832 (2nd meeting) Oxford, England. 2013 (174th meeting) Newcastle upon Tyne, England. 2014 (175th meeting) Birmingham, England. 2015 (176th meeting) Bradford, England 2016 (177th meeting) Swansea, Wales 2017 (178th meeting) Brighton, England 2018 (179th meeting) Hull, England 2019 (180th meeting) Coventry, England 2020 No meeting due to the COVID pandemic 2021 (181st meeting) Chelmsford, Essex, England 2022 (182nd meeting) Leicester, England 2023 (183rd meeting) Exeter, England 2024 (184th meeting) East London, England 2025 (185th meeting) Liverpool, England 2026 (186th meeting) Southampton, England == Structure == The organisation is administered from the Wellcome Wolfson Building at the Science Museum, London in South Kensington in Kensington and Chelsea, within a few feet of the northern boundary with the City of Westminster (in which most of the neighbouring Imperial College London is resident). 
== See also == 1860 Oxford evolution debate American Association for the Advancement of Science Association of British Science Writers Café Scientifique EuroScience Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Guildhall Lectures National Science Week Royal Institution Royal Society Scandinavian Scientist Conference (1839–1936) Science Abstracts Science Festival == References == == External links == British Science Association British Science Festival British Science Association: Our history Digitised Reports 1833–1937, Biodiversity Heritage Library Reports of the meetings 1877–90 are available on Gallica The University of Toronto Archives and Record Management Services holds some papers of the British Association for the Advancement of Science. Media related to British Association at Wikimedia Commons === Video clips === British Science Association YouTube channel
Wikipedia/British_Association_for_the_Advancement_of_Science
In metaphysics, phenomenalism is the view that physical objects cannot justifiably be said to exist as "things-in-themselves", but only as perceptual phenomena or sensory stimuli (e.g. redness, hardness, softness, sweetness, etc.) situated in time and in space. In particular, some forms of phenomenalism reduce all talk about physical objects in the external world to talk about bundles of sense data. == History == Phenomenalism is a radical form of empiricism. Its roots as an ontological view of the nature of existence can be traced back to George Berkeley and his subjective idealism, upon which David Hume further elaborated. John Stuart Mill had a theory of perception which is commonly referred to as classical phenomenalism. This differs from Berkeley's idealism in its account of how objects continue to exist when no one is perceiving them. Berkeley claimed that an omniscient God perceived all objects and that this was what kept them in existence, whereas Mill claimed that permanent possibilities of experience were sufficient for an object's existence. These permanent possibilities could be analysed into counterfactual conditionals, such as "if I were to have y-type sensations, then I would also have x-type sensations". As an epistemological theory about the possibility of knowledge of objects in the external world, however, the most accessible formulation of phenomenalism is perhaps to be found in the transcendental idealism of Immanuel Kant. According to Kant, space and time, which are the a priori forms and preconditions of all sensory experience, "refer to objects only to the extent that these are considered as phenomena, but do not represent the things in themselves". 
While Kant insisted that knowledge is limited to phenomena, he never denied or excluded the existence of objects which were not knowable by way of experience, the things-in-themselves or noumena, though his proof of noumena had many problems and is one of the most controversial aspects of his Critiques. Kant's "epistemological phenomenalism", as it has been called, is therefore quite distinct from Berkeley's earlier ontological version. In Berkeley's view, the so-called "things-in-themselves" do not exist except as subjectively perceived bundles of sensations which are guaranteed consistency and permanence because they are constantly perceived by the mind of God. Hence, while Berkeley holds that objects are merely bundles of sensations (see bundle theory), Kant holds (unlike other bundle theorists) that objects do not cease to exist when they are no longer perceived by some merely human subject or mind. In the late 19th century, an even more extreme form of phenomenalism was formulated by Ernst Mach, later developed and refined by Russell, Ayer and the logical positivists. Mach rejected the existence of God and also denied that phenomena were data experienced by the mind or consciousness of subjects. Instead, Mach held sensory phenomena to be "pure data" whose existence was to be considered anterior to any arbitrary distinction between mental and physical categories of phenomena. In this way, it was Mach who formulated the key thesis of phenomenalism, which separates it from bundle theories of objects: objects are logical constructions out of sense-data or ideas; whereas according to bundle theories, objects are made up of sets, or bundles, of actual ideas or perceptions. That is, according to bundle theory, to say that the pear before me exists is simply to say that certain properties (greenness, hardness, etc.) are being perceived at this moment. 
When these characteristics are no longer perceived or experienced by anyone, then the object (pear, in this case) no longer exists. Phenomenalism as formulated by Mach, in contrast, is the view that objects are logical constructions out of perceptual properties. On this view, to say there is a table in the other room when there is no one in that room to perceive it, is to say that if there were someone in that room, then that person would perceive the table. It is not the actual perception that counts, but the conditional possibility of perceiving. Logical positivism, a movement begun as a small circle which grew around the philosopher Moritz Schlick in Vienna, inspired many philosophers in the English-speaking world from the 1930s through the 1950s. Important influences on their brand of empiricism included Ernst Mach — himself holding the Chair of Inductive Sciences at the University of Vienna, a position Schlick would later hold — and the Cambridge philosopher Bertrand Russell. The idea of some logical positivists, such as A. J. Ayer and Rudolf Carnap, was to apply phenomenalism in linguistic terms, enabling reliable discourse about physical objects, such as tables, in strict terms of either actual or possible sensory experiences. The 20th-century American philosopher Arthur Danto asserted that "a phenomenalist believ[es] that whatever is finally meaningful can be expressed in terms of our own [sense] experience." He claimed that "The phenomenalist really is committed to the most radical kind of empiricism: For him reference to objects is always finally a reference to sense-experience ... ." To the phenomenalist, objects of any kind must be related to experience.
"John Stuart Mill once spoke of physical objects as but the 'permanent possibility of experience' and this, by and large, is what the phenomenalist exploits: All we can mean, in talking about physical objects — or nonphysical objects, if there are any — is what experiences we would have in dealing with them ... ." However, phenomenalism is based on mental operations. These operations, themselves, are not known from sense experience. Such non-empirical, non-sensual operations are the "...nonempirical matters of space, time, and continuity that empiricism in all its forms and despite its structures seems to require ... ." See for comparison Sensualism, to which phenomenalism is closely related. == Criticisms == C.I. Lewis had previously suggested that the physical claim "There is a doorknob in front of me" necessarily entails the sensory conditional "If I should seem to see a doorknob and if I should seem to myself to be initiating a grasping motion, then in all probability the sensation of contacting a doorknob should follow". Roderick Firth formulated another objection in 1950, stemming from perceptual relativity: White wallpaper looks white under white light and red under red light, etc. Any possible course of experience resulting from a possible course of action will apparently underdetermine our surroundings: it would determine, for example, that there is either white wallpaper under red light or red wallpaper under white light, and so on. Another criticism of phenomenalism comes from truthmaker theory. Truthmaker theorists hold that the truth depends on reality. In the terms of truthmaker theory: a truthbearer (e.g. a proposition) is true because of the existence of its truthmaker (e.g. a fact). Phenomenalists have been accused of violating this principle and thereby engaging in "ontological cheating": of positing truths without being able to account for the truthmakers of these truths. 
The criticism is usually directed at the phenomenalist account of material objects. The phenomenalist faces the problem of how to account for the existence of unperceived material objects. A well-known solution to this problem comes from John Stuart Mill. He claimed that we can account for unperceived objects in terms of counterfactual conditionals: It is true that valuables locked in a safe remain in existence, despite being unperceived, because if someone were to look inside then this person would have a corresponding sensory impression. Truthmaker theorists may object that this still leaves open what the truthmaker for this counterfactual conditional is, considering it unclear how such a truthmaker could be found within the phenomenalist ontology. == Notable proponents == Johannes Nikolaus Tetens John Foster == See also == Peripatetic axiom – Greek principle quoted by Thomas Aquinas == References == == Bibliography == Fenomenismo in L'Enciclopedia Garzanti di Filosofia (eds.) Gianni Vattimo and Gaetano Chiurazzi. Third Edition. Garzanti. Milan, 2004. ISBN 88-11-50515-1 Berlin, Isaiah. The Refutation of Phenomenalism. The Isaiah Berlin Virtual Library. 2004. Bolender, John. Factual Phenomenalism: a Supervenience Theory, in SORITES Issue #09. April 1998. pp. 16–31. == External links == Phenomenalism at PhilPapers Phenomenalism at the Indiana Philosophy Ontology Project
Wikipedia/Phenomenalism
The New World Order (NWO) is a term often used in conspiracy theories which hypothesize a secretly emerging totalitarian world government. The common theme in conspiracy theories about a New World Order is that a secretive power elite with a globalist agenda is conspiring to eventually rule the world through an authoritarian one-world government—which will replace sovereign nation-states—and an all-encompassing propaganda whose ideology hails the establishment of the New World Order as the culmination of history's progress. Many influential historical and contemporary figures have therefore been alleged to be part of a cabal that operates through many front organizations to orchestrate significant political and financial events, ranging from causing systemic crises to pushing through controversial policies, at both national and international levels, as steps in an ongoing plot to achieve world domination. Before the early 1990s, New World Order conspiracism was limited to two American countercultures, primarily the militantly anti-government right, and secondarily the part of fundamentalist Christianity concerned with the eschatological end-time emergence of the Antichrist. Academics who study conspiracy theories and religious extremism, such as Michael Barkun and Chip Berlet, observed that right-wing populist conspiracy theories about a New World Order not only had been embraced by many seekers of stigmatized knowledge but also had seeped into popular culture, thereby fueling a surge of interest and participation in survivalism and paramilitarism as many people actively prepare for apocalyptic and millenarian scenarios. These political scientists warn that mass hysteria over New World Order conspiracy theories could eventually have devastating effects on American political life, ranging from escalating lone-wolf terrorism to the rise to power of authoritarian ultranationalist demagogues. 
== History of the term ==

=== General usage (pre-Cold War) ===

During the 20th century, political figures such as Woodrow Wilson and Winston Churchill used the term "new world order" to refer to a new period of history characterized by a dramatic change in world political thought and in the global balance of power after World War I and World War II. The interwar and post-World War II periods were seen as opportunities to implement idealistic proposals for global governance by collective efforts to address worldwide problems that go beyond the capacity of individual nation-states to resolve, while nevertheless respecting the right of nations to self-determination. Such collective initiatives manifested in the formation of intergovernmental organizations such as the League of Nations in 1920, the United Nations (UN) in 1945, and the North Atlantic Treaty Organization (NATO) in 1949, along with international regimes such as the Bretton Woods system and the General Agreement on Tariffs and Trade (GATT), implemented to maintain a cooperative balance of power and facilitate reconciliation between nations to prevent the prospect of another global conflict. These cosmopolitan efforts to instill liberal internationalism were regularly criticized and opposed by American paleoconservative business nationalists from the 1930s on. Progressives welcomed international organizations and regimes such as the United Nations in the aftermath of the two World Wars, but argued that these initiatives suffered from a democratic deficit and were therefore inadequate not only to prevent another world war, but also to foster global justice, as the UN was chartered to be a free association of sovereign nation-states rather than a transition to democratic world government. Thus, cosmopolitan activists around the globe, perceiving the IGOs as too ineffectual for global change, formed a world federalist movement. British writer and futurist H. G.
Wells went further than progressives in the 1940s by appropriating and redefining the term "new world order" as a synonym for the establishment of a technocratic world state and of a planned economy, garnering popularity in state socialist circles.

=== Usage as reference to a conspiracy (Cold War era) ===

During the Second Red Scare, both secular and Christian right American agitators, largely influenced by the work of Canadian conspiracy theorist William Guy Carr, increasingly embraced and spread dubious fears of Freemasons, Illuminati and Jews as the alleged driving forces behind an "international communist conspiracy". The threat of "Godless communism", in the form of an atheistic, bureaucratic collectivist world government, demonized as the "Red Menace", became the focus of apocalyptic millenarian conspiracism. The Red Scare came to shape one of the core ideas of the political right in the United States, which is that liberals and progressives, with their welfare-state policies and international cooperation programs such as foreign aid, supposedly contribute to a gradual process of global collectivism that will inevitably lead to nations being replaced with a communistic/collectivist one-world government. James Warburg, appearing before the United States Senate Committee on Foreign Relations in 1950, famously stated: "We shall have world government, whether or not we like it. The question is only whether world government will be achieved by consent or by conquest." Right-wing populist advocacy groups with a paleoconservative world-view, such as the John Birch Society, disseminated a multitude of conspiracy theories in the 1960s claiming that the governments of both the United States and the Soviet Union were controlled by a cabal of corporate internationalists, "greedy" bankers and corrupt politicians who were intent on using the UN as the vehicle to create a "One World Government". This anti-globalist conspiracism fueled the campaign for U.S.
withdrawal from the UN. American writer Mary M. Davison, in her booklet The Profound Revolution (1966), traced the alleged New World Order conspiracy to the establishment of the U.S. Federal Reserve in 1913 by international bankers, whom she claimed later formed the Council on Foreign Relations in 1921 as a shadow government. At the time the booklet was published, many readers would have interpreted "international bankers" as a reference to a postulated "international Jewish banking conspiracy" masterminded by the Rothschild family. Arguing that the term "New World Order" is used by a secretive global elite dedicated to the eradication of the sovereignty of the world's nations, American writer Gary Allen—in his books None Dare Call It Conspiracy (1971), Rockefeller: Campaigning for the New World Order (1974), and Say "No!" to the New World Order (1987)—articulated the anti-globalist theme of contemporary right-wing conspiracism in the U.S. After the fall of communism in the early 1990s, the de facto subject of New World Order conspiracism shifted from crypto-communists, perceived to be plotting to establish an atheistic world communist government, to globalists, perceived to be plotting to implement a collectivist, unified world government ultimately controlled by an untouchable oligarchy of international bankers, corrupt politicians, and corporatists, or the United Nations itself. The shift in perception was inspired by growing opposition to corporate internationalism on the American right in the 1990s. In his speech, Toward a New World Order, delivered on 11 September 1990 during a joint session of the US Congress, President George H. W. Bush described his objectives for post-Cold War global governance in cooperation with post-Soviet states. He stated: Until now, the world we've known has been a world divided—a world of barbed wire and concrete block, conflict, and the cold war. Now, we can see a new world coming into view.
A world in which there is the genuine prospect of new world order. In the words of Winston Churchill, a "world order" in which "the principles of justice and fair play ... protect the weak against the strong ..." A world where the United Nations, freed from cold war stalemate, is poised to fulfill the historic vision of its founders. A world in which freedom and respect for human rights find a home among all nations. The New York Times observed that progressives were denouncing this new world order as a rationalization of American imperial ambitions in the Middle East at the time. At the same time conservatives rejected any new security arrangements altogether and fulminated about any possibility of a UN revival. Chip Berlet, an American investigative reporter specializing in the study of right-wing movements in the US, wrote that the Christian and secular far-right were especially terrified by Bush's speech. Fundamentalist Christian groups interpreted Bush's words as signaling the End Times. At the same time, more secular theorists approached it from an anti-communist and anti-collectivist standpoint and feared United Nations hegemony over all countries.

=== Post-Cold War usage ===

The New World Order has been a focus of the American Christian right, and specifically the Protestant right. The NWO is seen as a prophesied anti-Christian enemy established by globalists which uses perceived secular philosophies such as environmentalism, feminism, and socialism (collectively referred to as globalism) to thwart Christianity through the work of organizations such as the United Nations, European Union, World Trade Organization, and World Health Organization. Organizations like the UN, as well as concepts such as the New World Order and globalism have played a significant role in right-wing Protestant prophecy media.
American televangelist Pat Robertson, with his best-selling book The New World Order (1991), became the most prominent Christian disseminator of conspiracy theories about recent American history. He describes a scenario where Wall Street, the Federal Reserve System, the Council on Foreign Relations, the Bilderberg Group and the Trilateral Commission control the flow of events from behind the scenes, constantly nudging people covertly in the direction of world government for the Antichrist. It has been observed that, throughout the 1990s, the galvanizing language used by conspiracy theorists such as Linda Thompson, Mark Koernke and Robert K. Spear led to militancy and the rise of the American militia movement. The militia movement's anti-government ideology was spread through speeches at rallies and meetings, books and videotapes sold at gun shows, shortwave and satellite radio, fax networks, and computer bulletin boards. It has been argued that it was overnight AM radio shows and propagandistic viral content on the internet that most effectively contributed to more extremist responses to the perceived threat of the New World Order. This led to the substantial growth of New World Order conspiracism, with it retroactively finding its way into the previously apolitical literature of numerous Kennedy assassinologists, ufologists, lost land theorists and—partially inspired by fears surrounding the "Satanic panic"—occultists. From the mid-1990s onward, the amorphous appeal of those subcultures transmitted New World Order conspiracism to a larger audience of seekers of stigmatized knowledge, with the common characteristic of disillusionment with political efficacy.
From the mid-1990s to the early 2000s, Hollywood conspiracy-thriller television shows and films also played a role in introducing a general audience to various fringe, esoteric theories related to New World Order conspiracism—which by that point had developed to include black helicopters, FEMA "concentration camps", etc.—theories which for decades previously were confined to largely right-wing subcultures. The 1993–2002 television series The X-Files, the 1997 film Conspiracy Theory and the 1998 film The X-Files: Fight the Future are often cited as notable examples. Following the start of the 21st century, and specifically during the 2008 financial crisis, many politicians and pundits, such as Gordon Brown and Henry Kissinger, used the term "new world order" in their advocacy for a comprehensive reform of the global financial system and their calls for a "New Bretton Woods" taking into account emerging markets such as China and India. These public declarations reinvigorated New World Order conspiracism, culminating in talk-show host Sean Hannity stating on his Fox News program Hannity that the "conspiracy theorists were right". Progressive media-watchdog groups have repeatedly criticized Fox News in general, and its now-defunct opinion show Glenn Beck in particular, for not only disseminating New World Order conspiracy theories to mainstream audiences, but possibly agitating so-called "lone wolf" extremism, particularly from the radical right. In 2009, American film directors Luke Meyer and Andrew Neel released New World Order, a critically acclaimed documentary film which explores the world of conspiracy theorists—such as American radio host Alex Jones—who vigorously oppose what they perceive as an emerging New World Order. 
The growing dissemination and popularity of conspiracy theories have also created an alliance between right-wing agitators and hip hop music's left-wing rappers (such as KRS-One, Professor Griff of Public Enemy and Immortal Technique), illustrating how anti-elitist conspiracism can create unlikely political allies in efforts to oppose a political system.

== Conspiracy theories ==

There are numerous systemic conspiracy theories through which the concept of a New World Order is viewed. The following is a list of the major ones in roughly chronological order:

=== End time ===

Since the 19th century, many apocalyptic millennial Christian eschatologists, starting with John Nelson Darby, have predicted a globalist conspiracy to impose a tyrannical New World Order governing structure as the fulfillment of prophecies about the "end time" in the Bible, specifically in the Book of Ezekiel, the Book of Daniel, the Olivet Discourse found in the Synoptic Gospels, 2 Esdras 11:32 and Revelation 13:7. They claim that people who have made a deal with the Devil to gain wealth and power have become pawns in a supernatural chess game to move humanity into accepting a utopian world government that rests on the spiritual foundations of a syncretic-messianic world religion, which will later reveal itself to be a dystopian world empire that imposes the imperial cult of an "Unholy Trinity" of Satan, the Antichrist and the False Prophet. In many contemporary Christian conspiracy theories, the False Prophet will be either the last pope of the Catholic Church (groomed and installed by an Alta Vendita or Jesuit conspiracy), a guru from the New Age movement, or even the leader of an elite fundamentalist Christian organization like the Fellowship, while the Antichrist will be either the President of the European Union, the Caliph of a pan-Islamic state, or even the Secretary-General of the United Nations.
Some of the most vocal critics of end-time conspiracy theories come from within Christianity. In 1993, historian Bruce Barron wrote a stern rebuke of apocalyptic Christian conspiracism in the Christian Research Journal when reviewing Robertson's 1991 book The New World Order. Another critique can be found in historian Gregory S. Camp's 1997 book Selling Fear: Conspiracy Theories and End-Times Paranoia. Religious studies scholar Richard T. Hughes argues that "New World Order" rhetoric libels the Christian faith, since the "New World Order" as defined by Christian conspiracy theorists has no basis in the Bible whatsoever. Furthermore, he argues that not only is this idea unbiblical, it is positively anti-biblical and fundamentally anti-Christian, because by misinterpreting key passages in the Book of Revelation, it turns a comforting message about the coming kingdom of God into one of fear, panic and despair in the face of an allegedly approaching one-world government. Progressive Christians, such as preacher-theologian Peter J. Gomes, caution Christian fundamentalists that a "spirit of fear" can distort scripture and history through dangerously combining biblical literalism, apocalyptic timetables, demonization and oppressive prejudices, while Camp warns of the "very real danger that Christians could pick up some extra spiritual baggage" by credulously embracing conspiracy theories. They therefore call on Christians who indulge in conspiracism to repent.

=== Freemasonry ===

Freemasonry is one of the world's oldest secular fraternal organizations and arose in Great Britain during the 18th century. Over the years, several allegations and conspiracy theories have been directed towards Freemasonry, including the allegation that Freemasons have a hidden political agenda and are conspiring to bring about a New World Order, a world government organized according to Masonic principles or governed only by Freemasons.
The esoteric nature of Masonic symbolism and rites led to Freemasons first being accused of secretly practicing Satanism in the late 18th century. The original allegation of a conspiracy within Freemasonry to subvert religions and governments to take over the world traces back to Scottish author John Robison, whose reactionary conspiracy theories crossed the Atlantic and influenced outbreaks of Protestant anti-Masonry in the United States during the 19th century. In the 1890s, French writer Léo Taxil wrote a series of pamphlets and books denouncing Freemasonry and charging its lodges with worshiping Lucifer as the Supreme Being and Great Architect of the Universe. Despite the fact that Taxil admitted that his claims were all a hoax, they were and still are believed and repeated by numerous conspiracy theorists and had a huge influence on subsequent anti-Masonic claims about Freemasonry. Some conspiracy theorists eventually speculated that some Founding Fathers of the United States, such as George Washington and Benjamin Franklin, had Masonic sacred geometric designs interwoven into American society, particularly in the Great Seal of the United States, the United States one-dollar bill, the architecture of National Mall landmarks and the streets and highways of Washington, D.C., as part of a master plan to create the first "Masonic government" as a model for the coming New World Order. Freemasons rebut these claims of a Masonic conspiracy. Freemasonry, which promotes rationalism, places no power in occult symbols themselves, and it is not a part of its principles to view the drawing of symbols, no matter how large, as an act of consolidating or controlling power. Furthermore, there is no published information establishing the Masonic membership of the men responsible for the design of the Great Seal.
While conspiracy theorists assert that there are elements of Masonic influence on the Great Seal of the United States and that these elements were intentionally or unintentionally used because the creators were familiar with the symbols, in fact, the all-seeing Eye of Providence and the unfinished pyramid were symbols used as much outside Masonic lodges as within them in the late 18th century. Therefore, the designers were drawing from common esoteric symbols. The Latin phrase "novus ordo seclorum", appearing on the reverse side of the Great Seal since 1782 and the back of the one-dollar bill since 1935, translates to "New Order of the Ages", and alludes to the beginning of an era where the United States of America is an independent nation-state; conspiracy theorists often mistranslate it as "New World Order". Although the European continental branch of Freemasonry has organizations that allow political discussion within their Masonic Lodges, Masonic researcher Trevor W. McKeown argues that the accusations ignore several facts. Firstly, the many Grand Lodges are independent and sovereign, meaning they act independently and do not have a common agenda. The points of belief of the various lodges often differ. Secondly, famous Freemasons have always held views that span the political spectrum and show no particular pattern or preference. As such, the term "Masonic government" is erroneous; there is no consensus among Freemasons about what an ideal government would look like.

=== Illuminati ===

The Order of the Illuminati was an Enlightenment-age secret society founded by university professor Adam Weishaupt on 1 May 1776, in Upper Bavaria, Germany. The movement consisted of advocates of freethought, secularism, liberalism, republicanism, and gender equality, recruited from the German Masonic Lodges, who sought to teach rationalism through mystery schools.
In 1785, the order was infiltrated, broken up, and suppressed by the government agents of Charles Theodore, Elector of Bavaria, in his preemptive campaign to neutralize the threat of secret societies ever becoming hotbeds of conspiracies to overthrow the Bavarian monarchy and its state religion, Roman Catholicism. There is no evidence that the Bavarian Illuminati survived its suppression in 1785. In the late 18th century, reactionary conspiracy theorists, such as Scottish physicist John Robison and French Jesuit priest Augustin Barruel, began speculating that the Illuminati had survived their suppression and become the masterminds behind the French Revolution and the Reign of Terror. The Illuminati were accused of being subversives who were attempting to secretly orchestrate a revolutionary wave in Europe and the rest of the world by spreading the most radical ideas and movements of the Enlightenment—anti-clericalism, anti-monarchism, and anti-patriarchalism— which the accusers feared would lead to the destruction of the natural order of things. During the 19th century, fear of an Illuminati conspiracy was a real concern of the European ruling classes, and their oppressive reactions to this unfounded fear provoked in 1848 the very revolutions they sought to prevent. During the interwar period of the 20th century, fascist propagandists, such as British revisionist historian Nesta Helen Webster and American socialite Edith Starr Miller, not only popularized the myth of an Illuminati conspiracy but claimed that it was a subversive secret society which served the Jewish elites that supposedly propped up both finance capitalism and Soviet communism to divide and rule the world. 
American evangelist Gerald Burton Winrod and other conspiracy theorists within the fundamentalist Christian movement in the United States—which emerged in the 1910s as a backlash against the principles of Enlightenment secular humanism, modernism, and liberalism—became the main channel of dissemination of Illuminati conspiracy theories in the U.S. Right-wing populists, such as members of the John Birch Society, subsequently began speculating that some collegiate fraternities (Skull and Bones), gentlemen's clubs (Bohemian Club), and think tanks (Council on Foreign Relations, Trilateral Commission) of the American upper class are front organizations of the Illuminati, which they accuse of plotting to create a New World Order through a one-world government. The Illuminatus! Trilogy, a series of three satirical novels by American writers Robert Shea and Robert Anton Wilson, first published in 1975, which attributed the alleged major cover-ups of the era – such as who shot John F. Kennedy – to the Illuminati, was extremely influential in popularizing the myth of an Illuminati superconspiracy from the 1970s onward.

=== The Protocols of the Elders of Zion ===

The Protocols of the Elders of Zion is an antisemitic canard, originally published in Russian in 1903, alleging a Judeo-Masonic conspiracy to achieve world domination. The text purports to be the minutes of the secret meetings of a cabal of Jewish masterminds, which has co-opted Freemasonry and is plotting to rule the world on behalf of all Jews because they believe themselves to be the chosen people of God. The Protocols incorporate many of the core conspiracist themes outlined in the Robison and Barruel attacks on the Freemasons and overlay them with antisemitic allegations about anti-Tsarist movements in Russia. The Protocols reflect themes similar to more general critiques of Enlightenment liberalism by conservative aristocrats who support monarchies and state religions.
The interpretation intended by the publication of The Protocols is that if one peels away the layers of the Masonic conspiracy, past the Illuminati, one finds the rotten Jewish core. Numerous polemicists, such as Irish journalist Philip Graves in a 1921 article in The Times, and British academic Norman Cohn in his 1967 book Warrant for Genocide, have proven The Protocols to be both a hoax and a clear case of plagiarism. There is general agreement that Russian-French writer and political activist Matvei Golovinski fabricated the text for Okhrana, the secret police of the Russian Empire, as a work of counter-revolutionary propaganda prior to the 1905 Russian Revolution, by plagiarizing, almost word for word in some passages, from The Dialogue in Hell Between Machiavelli and Montesquieu, a 19th-century satire against Napoleon III of France written by French political satirist and Legitimist militant Maurice Joly. Responsible for feeding much of the antisemitic and anti-Masonic mass hysteria of the twentieth century, The Protocols has been influential in the development of some conspiracy theories, including some New World Order theories, and repeatedly appears in certain contemporary conspiracy literature. For example, the authors of the controversial 1982 book The Holy Blood and the Holy Grail concluded that The Protocols was the most persuasive piece of evidence for the existence and activities of the Priory of Sion. They speculated that this secret society was working behind the scenes to establish a theocratic "United States of Europe". Politically and religiously unified through the imperial cult of a Merovingian Great Monarch—supposedly descended from a Jesus bloodline—who occupies both the throne of Europe and the Holy See, this "Holy European Empire" would become the hyperpower of the 21st century.
Although the Priory of Sion itself has been exhaustively debunked by journalists and scholars as a hoax, some apocalyptic millenarian Christian eschatologists who believe The Protocols is authentic became convinced that the Priory of Sion was a fulfillment of prophecies found in the Book of Revelation and further proof of an anti-Christian conspiracy of epic proportions signaling the imminence of a New World Order. Skeptics argue that the current gambit of contemporary conspiracy theorists who use The Protocols is to claim that they "really" come from some group other than the Jews, such as fallen angels or alien invaders. Although it is hard to determine whether the conspiracy-minded actually believe this or are simply trying to sanitize a discredited text, skeptics argue that it does not make much difference, since they leave the actual, antisemitic text unchanged, giving The Protocols credibility and circulation.

=== Round table ===

During the second half of Britain's "imperial century" between 1815 and 1914, English-born South African businessman, mining magnate, and politician Cecil Rhodes advocated the British Empire reannexing the United States of America and reforming itself into an "Imperial Federation" to bring about a hyperpower and lasting world peace.
In his first will, written in 1877 at the age of 23, he expressed his wish to fund a secret society (known as the Society of the Elect) that would advance this goal: To and for the establishment, promotion and development of a Secret Society, the true aim and object whereof shall be for the extension of British rule throughout the world, the perfecting of a system of emigration from the United Kingdom, and of colonisation by British subjects of all lands where the means of livelihood are attainable by energy, labour and enterprise, and especially the occupation by British settlers of the entire Continent of Africa, the Holy Land, the Valley of the Euphrates, the Islands of Cyprus and Candia [Crete], the whole of South America, the Islands of the Pacific not heretofore possessed by Great Britain, the whole of the Malay Archipelago, the seaboard of China and Japan, the ultimate recovery of the United States of America as an integral part of the British Empire, the inauguration of a system of Colonial representation in the Imperial Parliament which may tend to weld together the disjointed members of the Empire and, finally, the foundation of so great a Power as to render wars impossible, and promote the best interests of humanity. In 1890, thirteen years after "his now-famous will," Rhodes elaborated on the same idea: establishment of "England everywhere," which would "ultimately lead to the cessation of all wars, and one language throughout the world." "The only thing feasible to carry out this idea is a secret society gradually absorbing the wealth of the world ["and human minds of the higher-order"] to be devoted to such an object." Rhodes also concentrated on the Rhodes Scholarship, which had British statesman Alfred Milner as one of its trustees. 
Established in 1902, the trust fund originally aimed to foster peace among the great powers by creating a sense of fraternity and a shared world view among future British, American, and German leaders by enabling them to study for free at the University of Oxford. Milner and British official Lionel George Curtis were the architects of the Round Table movement, a network of organizations promoting closer union between Britain and its self-governing colonies. To this end, Curtis founded the Royal Institute of International Affairs in June 1919 and, with his 1938 book The Commonwealth of God, began advocating for the creation of an imperial federation that eventually reannexes the U.S., which would be presented to Protestant churches as being the work of the Christian God to elicit their support. The Commonwealth of Nations was created in 1949, but it would only be a free association of independent states rather than the powerful imperial federation imagined by Rhodes, Milner, and Curtis. The Council on Foreign Relations began in 1917 with a group of New York academics who were asked by President Woodrow Wilson to offer options for the foreign policy of the United States in the interwar period. Originally envisioned as a group of American and British scholars and diplomats, some of whom belonged to the Round Table movement, it was a subsequent group of 108 New York financiers, manufacturers, and international lawyers organized in June 1918 by Nobel Peace Prize recipient and U.S. secretary of state Elihu Root that became the Council on Foreign Relations on 29 July 1921. The first of the council's projects was a quarterly journal launched in September 1922, called Foreign Affairs. The Trilateral Commission was founded in July 1973, at the initiative of American banker David Rockefeller, who was chairman of the Council on Foreign Relations at that time.
It is a private organization established to foster closer cooperation among the United States, Europe, and Japan. The Trilateral Commission is widely seen as a counterpart to the Council on Foreign Relations. In the 1960s, right-wing populist individuals and groups with a paleoconservative worldview, such as members of the John Birch Society, were the first to combine and spread a business nationalist critique of corporate internationalists networked through think tanks such as the Council on Foreign Relations with a grand conspiracy theory casting them as front organizations for the Round Table of the "Anglo-American Establishment", which are financed by an "international banking cabal" that has supposedly been plotting from the late 19th century on to impose an oligarchic new world order through a global financial system. Anti-globalist conspiracy theorists therefore fear that international bankers are planning to eventually subvert the independence of the U.S. by subordinating national sovereignty to a strengthened Bank for International Settlements. The research findings of historian Carroll Quigley, author of the 1966 book Tragedy and Hope, are taken by both conspiracy theorists of the American Old Right (W. Cleon Skousen) and New Left (Carl Oglesby) to substantiate this view, even though Quigley argued that the Establishment is not involved in a plot to implement a one-world government but rather British and American benevolent imperialism driven by the mutual interests of economic elites in the United Kingdom and the United States. Quigley also argued that, although the Round Table still exists today, its position in influencing the policies of world leaders has been much reduced from its heyday during World War I and slowly waned after the end of World War II and the Suez Crisis. Today the Round Table is largely a ginger group, designed to consider and gradually influence the policies of the Commonwealth of Nations, but faces strong opposition. 
Furthermore, in American society after 1965, the problem, according to Quigley, was that no elite was in charge and acting responsibly. Larry McDonald, the second president of the John Birch Society and a conservative Democratic member of the United States House of Representatives who represented the 7th congressional district of Georgia, wrote a foreword for Allen's 1976 book The Rockefeller File, wherein he claimed that the Rockefellers and their allies were driven by a desire to create a one-world government that combined "super-capitalism" with communism and would be fully under their control. He saw a conspiracy plot that was "international in scope, generations old in planning, and incredibly evil in intent." In his 2002 autobiography Memoirs, David Rockefeller wrote: For more than a century, ideological extremists at either end of the political spectrum have seized upon well-publicized incidents ... to attack the Rockefeller family for the inordinate influence they claim we wield over American political and economic institutions. Some even believe we are part of a secret cabal working against the best interests of the United States, characterizing my family and me as 'internationalists' and conspiring with others around the world to build a more integrated global political and economic structure—one world if you will. If that's the charge, I stand guilty, and I am proud of it. Barkun argues that this statement is partly facetious (the claim of "conspiracy" and "treason") and partly serious (the desire to encourage trilateral cooperation among the U.S., Europe, and Japan, for example), an ideal that used to be a hallmark of the internationalist wing of the Republican Party (known as "Rockefeller Republicans" in honor of Nelson Rockefeller) when there was an internationalist wing.
The statement, however, is taken at face value and widely cited by conspiracy theorists as proof that the Council on Foreign Relations uses its role as the brain trust of American presidents, senators, and representatives to manipulate them into supporting a New World Order in the form of a one-world government. In a 13 November 2007 interview with Canadian journalist Benjamin Fulford, Rockefeller countered that he felt no need for a world government and wished for the world's governments to work together and collaborate. He also stated that it seemed neither likely nor desirable to have only one elected government rule worldwide. He criticized accusations of him being "ruler of the world" as nonsensical. Some American social critics, such as Laurence H. Shoup, argue that the Council on Foreign Relations is an "imperial brain trust" which has, for decades, played a central behind-the-scenes role in shaping U.S. foreign policy choices for the post-World War II international order and the Cold War by determining which options show up on the agenda and which do not even make it to the table; others, such as G. William Domhoff, argue that it is in fact a mere policy discussion forum which provides the business input to U.S. foreign policy planning. Domhoff argues that "[i]t has nearly 3,000 members, far too many for secret plans to be kept within the group. All the council does is sponsor discussion groups, debates, and speakers. As far as being secretive, it issues annual reports and allows access to its historical archives." However, all these critics agree that "[h]istorical studies of the CFR show that it has a very different role in the overall power structure than what is claimed by conspiracy theorists." === The Open Conspiracy === In his 1928 book The Open Conspiracy, British writer and futurist H. G. Wells promoted cosmopolitanism and offered blueprints for a world revolution and World Brain to establish a technocratic world state and planned economy.
Wells warned, however, in his 1940 book The New World Order that: ... when the struggle seems to be drifting definitely towards a world social democracy, there may still be very great delays and disappointments before it becomes an efficient and beneficent world system. Countless people ... will hate the new world order, be rendered unhappy by the frustration of their passions and ambitions through its advent and will die protesting against it. When we attempt to evaluate its promise, we have to bear in mind the distress of a generation or so of malcontents, many of them quite gallant and graceful-looking people. Wells's books were influential in giving a second meaning to the term "new world order", one that came to be used both by supporters of state socialism and by their anti-communist opponents. However, despite the popularity and notoriety of his ideas, Wells failed to exert a deeper and more lasting influence because he was unable to concentrate his energies on a direct appeal to the intelligentsias who would, ultimately, have to coordinate the Wellsian new world order. === New Age === British neo-Theosophical occultist Alice Bailey, one of the founders of the so-called New Age movement, prophesied in 1940 the eventual victory of the Allies of World War II over the Axis powers (which occurred in 1945) and the establishment by the Allies of a political and religious New World Order. She saw a federal world government as the culmination of Wells's Open Conspiracy but argued favorably that it would be synarchist, guided by the Masters of the Ancient Wisdom, intent on preparing humanity for the mystical second coming of Christ and the dawning of the Age of Aquarius.
According to Bailey, a group of ascended masters called the Great White Brotherhood works on the "inner planes" to oversee the transition to the New World Order. For now, the members of this Spiritual Hierarchy are known only to a few occult scientists, with whom they communicate telepathically; but as the need for their personal involvement in the plan increases, there will be an "Externalization of the Hierarchy" and everyone will know of their presence on Earth. Bailey's writings, along with American writer Marilyn Ferguson's 1980 book The Aquarian Conspiracy, contributed to conspiracy theorists of the Christian right viewing the New Age movement as the "false religion" that would supersede Christianity in a New World Order. Skeptics argue that the term "New Age movement" is a misnomer, generally used by conspiracy theorists as a catch-all rubric for any new religious movement that is not fundamentalist Christian. By this logic, anything that is not Christian is by definition actively and willfully anti-Christian. Paradoxically, since the first decade of the 21st century, New World Order conspiracism has increasingly been embraced and propagandized by New Age occultists, who are bored by rationalism and drawn to stigmatized knowledge—such as alternative medicine, astrology, quantum mysticism, spiritualism, and theosophy. Thus, New Age conspiracy theorists, such as the makers of documentary films like Esoteric Agenda, claim that globalists who plot on behalf of the New World Order are simply misusing occultism for Machiavellian ends, such as adopting 21 December 2012 as the exact date for the establishment of the New World Order to take advantage of the growing 2012 phenomenon, which has its origins in the fringe Mayanist theories of New Age writers José Argüelles, Terence McKenna, and Daniel Pinchbeck. Skeptics argue that the connection between conspiracy theorists and occultists follows from their common fallacious premises.
First, any widely accepted belief must necessarily be false. Second, stigmatized knowledge—what the Establishment spurns—must be true. The result is a large, self-referential network in which, for example, some UFO religionists promote anti-Jewish phobias while some antisemites practice Peruvian shamanism. === Fourth Reich === Conspiracy theorists often use the term "Fourth Reich" simply as a pejorative synonym for the "New World Order" to imply that its state ideology and government will be similar to Germany's Third Reich. Conspiracy theorists, such as American writer Jim Marrs, claim that some ex-Nazis who survived the fall of the Greater German Reich, along with sympathizers in the United States and elsewhere, given haven by organizations like ODESSA and Die Spinne, have been working behind the scenes since the end of World War II to enact at least some principles of Nazism (e.g., militarism, imperialism, widespread spying on citizens, corporatism, the use of propaganda to manufacture a national consensus) in culture, government, and business worldwide, but primarily in the U.S. They cite the influence of ex-Nazi scientists brought in under Operation Paperclip to help advance aerospace manufacturing in the U.S. with technological principles from Nazi UFOs, and the acquisition and creation of conglomerates by ex-Nazis and their sympathizers after the war, in both Europe and the U.S. This neo-Nazi conspiracy is said to be animated by an "Iron Dream" in which the American Empire, having thwarted the Judeo-Masonic conspiracy and overthrown its Zionist Occupation Government, gradually establishes a Fourth Reich known as the "Western Imperium"—a pan-Aryan world empire modeled after Adolf Hitler's New Order—which reverses the "decline of the West" and ushers in a golden age of white supremacy.
Skeptics argue that conspiracy theorists grossly overestimate the influence of ex-Nazis and neo-Nazis on American society and point out that political repression at home and imperialism abroad have a long history in the United States that predates the 20th century. Political theorist Sheldon Wolin has expressed concern that the twin forces of democratic deficit and superpower status have paved the way in the U.S. for the emergence of an inverted totalitarianism which contradicts many principles of Nazism. === Alien invasion === Since the late 1970s, extraterrestrials from other habitable planets or parallel dimensions (such as "Greys") and intraterrestrials from Hollow Earth (such as "Reptilians") have been included in the New World Order conspiracy, in more or less dominant roles, as in the theories put forward by American writers Stan Deyo and Milton William Cooper, and British writer David Icke. The common theme in these conspiracy theories is that aliens have been among us for decades, centuries, or millennia, but that a government cover-up enforced by "Men in Black" has shielded the public from knowledge of a secret alien invasion. Motivated by speciesism and imperialism, these aliens have been and are secretly manipulating developments and changes in human society to more efficiently control and exploit human beings. In some theories, alien infiltrators have shapeshifted into human form and move freely throughout human society, even to the point of taking control of command positions in governmental, corporate, and religious institutions, and are now in the final stages of their plan to take over the world.
A mythical covert government agency of the United States code-named Majestic 12 is often imagined to be the shadow government which collaborates with the alien occupation and permits alien abductions in exchange for assistance in the development and testing of military "flying saucers" at Area 51, so that the United States armed forces can achieve full-spectrum dominance. Those who adhere to the psychosocial hypothesis for unidentified flying objects argue that the convergence of New World Order conspiracy theory and UFO conspiracy theory is a product not only of the era's widespread mistrust of governments and the popularity of the extraterrestrial hypothesis for UFOs but also of the far right and ufologists joining forces. Barkun notes that the only positive side to this development is that, if conspirators plotting to rule the world are believed to be aliens, traditional human scapegoats (Freemasons, Illuminati, Jews, etc.) are downgraded or exonerated. === Brave New World === Antiscience and neo-Luddite conspiracy theorists emphasize technology forecasting in their New World Order conspiracy theories. They speculate that the global power elite are reactionary modernists pursuing a transhumanist plan to develop and use human enhancement technologies to become a "posthuman ruling caste", while change accelerates toward a technological singularity—a theorized future point of discontinuity when events will accelerate at such a pace that normal unenhanced humans will be unable to predict or even understand the rapid changes occurring in the world around them. Conspiracy theorists fear the outcome will either be the emergence of a Brave New World-like dystopia—a "Brave New World Order"—or the extinction of the human species.
Democratic transhumanists, such as American sociologist James Hughes, counter that many influential members of the United States establishment are bioconservatives strongly opposed to human enhancement, as demonstrated by President Bush's Council on Bioethics's proposed international treaty prohibiting human cloning and germline engineering. Furthermore, he argues that conspiracy theorists underestimate how fringe the transhumanist movement really is. == Postulated implementations == Just as there are several overlapping or conflicting theories among conspiracists about the nature of the New World Order, so there are several beliefs about how its architects and planners will implement it: === Gradualism === Conspiracy theorists generally speculate that the New World Order is being implemented gradually, citing the formation of the U.S. Federal Reserve System in 1913; the League of Nations in 1919; the International Monetary Fund in 1944; the United Nations in 1945; the World Bank in 1945; the World Health Organization in 1948; the European Union in 1993; the World Trade Organization in 1995; the euro in 1999; the African Union in 2002; and the Union of South American Nations in 2008 as major milestones. An increasingly popular conspiracy theory among American right-wing populists is that the hypothetical North American Union and the amero currency, proposed by the Council on Foreign Relations and its counterparts in Mexico and Canada, will be the next milestone in the implementation of the New World Order. The theory holds that a group of shadowy and mostly nameless international elites is planning to replace the federal government of the United States with a transnational government.
Therefore, conspiracy theorists believe the borders between Mexico, Canada, and the United States are in the process of being erased, covertly, by a group of globalists whose ultimate goal is to replace national governments in Washington, D.C., Ottawa, and Mexico City with a European-style political union and a bloated E.U.-style bureaucracy. Skeptics argue that the North American Union exists only as a proposal contained in one of a thousand academic and policy papers published each year that advocate all manner of idealistic but ultimately unrealistic approaches to social, economic, and political problems. Most of these are passed around in their circles and eventually filed away and forgotten by junior staffers in congressional offices. However, some of these papers become touchstones for the conspiracy-minded and form the basis of all kinds of unfounded xenophobic fears, especially during times of economic anxiety. For example, in March 2009, due to the 2008 financial crisis, the People's Republic of China and the Russian Federation pressed for urgent consideration of a new international reserve currency and the United Nations Conference on Trade and Development proposed greatly expanding the I.M.F.'s special drawing rights. Conspiracy theorists fear these proposals are a call for the U.S. to adopt a single global currency for a New World Order. Judging that both national governments and global institutions have proven ineffective in addressing global problems that go beyond the capacity of individual nation-states to solve, some political scientists critical of New World Order conspiracism, such as Mark C. Partridge, argue that regionalism will be the major force in the coming decades, pockets of power around regional centers: Western Europe around Brussels, the Western Hemisphere around Washington, D.C., East Asia around Beijing, and Eastern Europe around Moscow. 
As such, the E.U., the Shanghai Cooperation Organisation, and the G-20 will likely become more influential as time progresses. The question then is not whether global governance is gradually emerging, but rather how will these regional powers interact with one another. === Coup d'état === American right-wing populist conspiracy theorists, especially those who joined the militia movement in the United States, speculate that the New World Order will be implemented through a dramatic coup d'état by a "secret team", using black helicopters, in the U.S. and other nation-states to bring about a totalitarian world government controlled by the United Nations and enforced by troops of foreign U.N. peacekeepers. Following the Rex 84 and Operation Garden Plot plans, this military coup would involve the suspension of the Constitution, the imposition of martial law, and the appointment of military commanders to head state and local governments and to detain dissidents. These conspiracy theorists, who are all strong believers in a right to keep and bear arms, are extremely fearful that the passing of any gun control legislation will be later followed by the abolition of personal gun ownership and a campaign of gun confiscation, and that the refugee camps of emergency management agencies such as FEMA will be used for the internment of suspected subversives, making little effort to distinguish true threats to the New World Order from pacifist dissidents. Before 2000, some survivalists believed this process would be set in motion by the predicted Y2K problem causing societal collapse. 
Since many left-wing and right-wing conspiracy theorists believe that the 11 September attacks were a false flag operation carried out by the United States intelligence community, as part of a strategy of tension to justify political repression at home and preemptive war abroad, they have become convinced that a more catastrophic terrorist incident will be responsible for triggering Executive Directive 51 to complete the transition to a police state. Skeptics argue that unfounded fears about an imminent or eventual gun ban, military coup, internment, or U.N. invasion and occupation are rooted in the siege mentality of the American militia movement but also in an apocalyptic millenarianism which provides a basic narrative within the political right in the U.S., claiming that the idealized society (i.e., constitutional republic, Jeffersonian democracy, "Christian nation", "white nation") is thwarted by subversive conspiracies of liberal secular humanists who want "Big Government" and globalists who plot on behalf of the New World Order. === Mass surveillance === Conspiracy theorists concerned with surveillance abuse believe that the New World Order is being implemented by the cult of intelligence at the core of the surveillance-industrial complex through mass surveillance and the use of Social Security numbers, the bar-coding of retail goods with Universal Product Code markings, and, most recently, RFID tagging by microchip implants.
Claiming that corporations and government are planning to track every move of consumers and citizens with RFID as the latest step toward a 1984-like surveillance state, consumer privacy advocates such as Katherine Albrecht and Liz McIntyre have become Christian conspiracy theorists who believe spychips must be resisted. They argue that modern database and communications technologies, coupled with point-of-sale data-capture equipment and sophisticated ID and authentication systems, now make it possible to require a biometrically associated number or mark to make purchases. They fear that the ability to implement such a system closely resembles the Number of the Beast prophesied in the Book of Revelation. In January 2002, the Information Awareness Office (IAO) was established by the Defense Advanced Research Projects Agency (DARPA) to bring together several DARPA projects focused on applying information technology to counter asymmetric threats to national security. Following public criticism that the development and deployment of these technologies could potentially lead to a mass surveillance system, the IAO was defunded by the United States Congress in 2003. The second source of controversy involved the IAO's original logo, which depicted the "all-seeing" Eye of Providence atop a pyramid looking down over the globe, accompanied by the Latin phrase scientia est potentia (knowledge is power). Although DARPA eventually removed the logo from its website, it left a lasting impression on privacy advocates. It also inflamed conspiracy theorists, who misinterpret the "eye and pyramid" as the Masonic symbol of the Illuminati, an 18th-century secret society they speculate continues to exist and is plotting on behalf of a New World Order.
American historian Richard Landes, who specialized in the history of apocalypticism and was co-founder and director of the Center for Millennial Studies at Boston University, argues that new and emerging technologies often trigger alarmism among millenarians. Even the introduction of Gutenberg's printing press in 1436 caused waves of apocalyptic thinking. The Year 2000 problem, bar codes, and Social Security numbers all triggered end-time warnings which either proved to be false or were no longer taken seriously once the public became accustomed to these technological changes. Civil libertarians argue that the privatization of surveillance and the rise of the surveillance-industrial complex in the United States does raise legitimate concerns about the erosion of privacy. However, skeptics of mass surveillance conspiracism caution that such concerns should be disentangled from secular paranoia about Big Brother or religious hysteria about the Antichrist. === Occultism === Conspiracy theorists of the Christian right, starting with British revisionist historian Nesta Helen Webster, believe there is an ancient occult conspiracy—started by the first mystagogues of Gnosticism and perpetuated by their alleged esoteric successors, such as the Kabbalists, Cathars, Knights Templar, Hermeticists, Rosicrucians, Freemasons, and, ultimately, the Illuminati—which seeks to subvert the Judeo-Christian foundations of the Western world and implement the New World Order through a one-world religion that prepares the masses to embrace the imperial cult of the Antichrist. More broadly, they speculate that globalists who plot on behalf of a New World Order are directed by occult agencies of some sort: unknown superiors, spiritual hierarchies, demons, fallen angels or Lucifer. 
They believe that these conspirators use the power of occult sciences (numerology), symbols (Eye of Providence), rituals (Masonic degrees), monuments (National Mall landmarks), buildings (Manitoba Legislative Building) and facilities (Denver International Airport) to advance their plot to rule the world. For example, in June 1979, an unknown benefactor under the pseudonym "R. C. Christian" had a huge granite megalith built in the U.S. state of Georgia, which acts like a compass, calendar, and clock. A message comprising ten guides is inscribed on the occult structure in many languages to serve as instructions for survivors of a doomsday event to establish a more enlightened and sustainable civilization than the destroyed one. The "Georgia Guidestones" have subsequently become a spiritual and political Rorschach test onto which any number of ideas can be imposed. Some New Agers and neo-pagans revere them as a ley-line power nexus, while a few conspiracy theorists are convinced that they are engraved with the New World Order's anti-Christian "Ten Commandments." Should the Guidestones survive for centuries as their creators intended, many more meanings could arise, equally unrelated to the designers' original intention. Skeptics argue that the demonization of Western esotericism by conspiracy theorists is rooted in religious intolerance but also in the same moral panics that fueled the witch trials of the early modern period and the satanic ritual abuse allegations in the United States. === Population control === Conspiracy theorists believe that the New World Order will also be implemented through human population control to more easily monitor and control the movement of individuals.
The means range from stopping the growth of human societies through reproductive health and family planning programs, which promote abstinence, contraception, and abortion, to intentionally reducing the bulk of the world population through genocides by fomenting unnecessary wars, through plagues by engineering emergent viruses and tainting vaccines, and through environmental disasters by controlling the weather (HAARP, chemtrails), etc. Conspiracy theorists argue that globalists plotting on behalf of a New World Order are neo-Malthusians who engage in overpopulation and climate change alarmism to create public support for coercive population control and ultimately world government. United Nations Agenda 21 is condemned for "reconcentrating" people into urban areas and depopulating rural ones, even generating a dystopian novel by Glenn Beck in which single-family homes are a distant memory. Skeptics argue that fears of population control can be traced back to the traumatic legacy of the eugenics movement's "war against the weak" in the United States during the first decades of the 20th century, but also to the Second Red Scare in the U.S. during the late 1940s and 1950s, and to a lesser extent the 1960s, when activists on the far right of American politics routinely opposed public health programs, notably water fluoridation, mass vaccination, and mental health services, by asserting they were all part of a far-reaching plot to impose a socialist or communist regime. Their views were influenced by opposition to a number of major social and political changes that had happened in recent years: the growth of internationalism, particularly the United Nations and its programs; the introduction of social welfare provisions, particularly the various programs established by the New Deal; and government efforts to reduce inequalities in the social structure of the U.S.
Opposition towards mass vaccination in particular got significant attention in the late 2010s, so much so that the World Health Organization listed vaccine hesitancy as one of the top ten global health threats of 2019. By this time, people who refused vaccination, or refused to allow their children to be vaccinated, were known colloquially as "anti-vaxxers", though those citing the New World Order conspiracy theory or resistance to a perceived population control plan as a reason to refuse vaccination were few and far between. === Mind control === Social critics accuse governments, corporations, and the mass media of being involved in the manufacturing of a national consensus and, paradoxically, a culture of fear due to the potential for increased social control that a mistrustful and mutually fearing population might offer to those in power. The worst fear of some conspiracy theorists, however, is that the New World Order will be implemented through the use of mind control—a broad range of tactics able to subvert an individual's control of their own thinking, behavior, emotions, or decisions. These tactics are said to include everything from Manchurian candidate-style brainwashing of sleeper agents (Project MKULTRA, "Project Monarch") to engineering psychological operations (water fluoridation, subliminal advertising, "Silent Sound Spread Spectrum", MEDUSA) and parapsychological operations (Stargate Project) to influence the masses. The concept of wearing a tin foil hat for protection from such threats has become a popular stereotype and term of derision; the phrase serves as a byword for paranoia and is associated with conspiracy theorists. Skeptics argue that the paranoia behind a conspiracy theorist's obsession with mind control, population control, occultism, surveillance abuse, Big Business, Big Government, and globalization arises from a combination of two factors: the person 1) holds strong individualist values and 2) lacks power.
The first attribute refers to people who care deeply about an individual's right to make their own choices and direct their own lives without interference or obligations to a larger system (like the government), but combine this with a sense of powerlessness in one's own life. One gets what some psychologists call "agency panic," intense anxiety about an apparent loss of autonomy to outside forces or regulators. When fervent individualists feel that they cannot exercise their independence, they experience a crisis and assume that larger forces are to blame for usurping this freedom. == Alleged conspirators == According to Domhoff, many people seem to believe that the United States is ruled from behind the scenes by a conspiratorial elite with secret desires, i.e., by a small, secretive group that wants to change the government system or put the country under the control of a world government. In the past, the conspirators were usually said to be crypto-communists who were intent upon bringing the United States under a common world government with the Soviet Union, but the dissolution of the USSR in 1991 undercut that theory. Domhoff notes that most conspiracy theorists changed their focus to the United Nations as the likely controlling force in a New World Order, an idea which is undermined by the powerlessness of the U.N. and the unwillingness of even moderates within the American Establishment to give it anything but a limited role. Although skeptical of New World Order conspiracism, political scientist David Rothkopf argues, in the 2008 book Superclass: The Global Power Elite and the World They Are Making, that the world population of 6 billion people is governed by an elite of 6,000 individuals. Until the late 20th century, governments of the great powers provided most of the superclass, accompanied by a few heads of international movements (i.e., the Pope of the Catholic Church) and entrepreneurs (Rothschilds, Rockefellers). 
According to Rothkopf, in the early 21st century, economic clout—fueled by the explosive expansion of international trade, travel, and communication—rules; the nation-state's power has diminished, shrinking politicians to minority power-broker status; and leaders in international business, finance, and the defense industry not only dominate the superclass, but also move freely into high positions in their nations' governments and back to private life, largely beyond the notice of elected legislatures (including the U.S. Congress), which remain abysmally ignorant of affairs beyond their borders. He asserts that the superclass's disproportionate influence over national policy is constructive but always self-interested, and that across the world, few object to corruption and oppressive governments provided they can do business in these countries. Viewing the history of the world as the history of warfare between secret societies, conspiracy theorists go further than Rothkopf, and other scholars who have studied the global power elite, by claiming that established upper-class families with "old money" who founded and finance the Bilderberg Group, Bohemian Club, Club of Rome, Council on Foreign Relations, Rhodes Trust, Skull and Bones, Trilateral Commission, and similar think tanks and private clubs are illuminated conspirators plotting to impose a totalitarian New World Order—the implementation of an authoritarian world government controlled by the United Nations and a global central bank, which maintains political power through the financialization of the economy, regulation and restriction of speech through the concentration of media ownership, mass surveillance, widespread use of state terrorism, and an all-encompassing propaganda that creates a cult of personality around a puppet world leader and ideologizes world government as the culmination of history's progress.
== Criticism == Skeptics of New World Order conspiracy theories accuse their proponents of indulging in the furtive fallacy, a belief that significant facts of history are necessarily sinister; conspiracism, a world view that centrally places conspiracy theories in the unfolding of history, rather than social and economic forces; and fusion paranoia, a promiscuous absorption of fears from any source whatsoever. Marxists, who are skeptical of right-wing populist conspiracy theories, also accuse the global power elite of not having the best interests of all at heart, and many intergovernmental organizations of suffering from a democratic deficit, but they argue that the superclass are plutocrats only interested in brazenly imposing a neoliberal or neoconservative new world order—the implementation of global capitalism through economic and military coercion to protect the interests of transnational corporations—which systematically undermines the possibility of international socialism. Arguing that the world is in the middle of a transition from the American Empire to the rule of a global ruling class that has emerged from within the American Empire, they point out that right-wing populist conspiracy theorists, blinded by their anti-communism, fail to see that what they demonize as the "New World Order" is, ironically, the highest stage of the very capitalist economic system they defend. Domhoff, a professor of psychology and sociology who studies theories of power, wrote a 2005 essay entitled There Are No Conspiracies. He says that for this theory to be true, it would require several "wealthy and highly educated people" to do things that don't "fit with what we know about power structures". Claims that this will happen go back decades and have always been proved wrong. Partridge, a contributing editor to the global affairs magazine Diplomatic Courier, wrote a 2008 article entitled One World Government: Conspiracy Theory or Inevitable Future?
He says that if anything, nationalism, which is the opposite of a global government, is rising. He also says that attempts at creating global governments or global agreements "have been categorical failures". Although some cultural critics see superconspiracy theories about a New World Order as "postmodern metanarratives" that may be politically empowering, a way of giving ordinary people a narrative structure with which to question what they see around them, skeptics argue that conspiracism leads people into cynicism, convoluted thinking, and a tendency to feel the situation is hopeless even as they denounce the alleged conspirators. Alexander Zaitchik from the Southern Poverty Law Center wrote a report titled "'Patriot' Paranoia: A Look at the Top Ten Conspiracy Theories", in which he personally condemns such conspiracy theories as an effort of the radical right to undermine society. Concerned that the improvisational millennialism of most conspiracy theories about a New World Order might motivate lone wolves to engage in leaderless resistance leading to domestic terrorist incidents like the Oklahoma City bombing, Barkun writes that "the danger lies less in such beliefs themselves ... than in the behavior they might stimulate or justify" and warns "should they believe that the prophesied evil day had in fact arrived, their behavior would become far more difficult to predict." Warning of the threat to American democracy posed by right-wing populist movements led by demagogues who mobilize support for mob rule or even a fascist revolution by exploiting the fear of conspiracies, Berlet writes that right-wing populist movements can cause serious damage to a society because they often popularize xenophobia, authoritarianism, scapegoating, and conspiracism.
This can lure mainstream politicians to adopt these themes to attract voters, legitimize acts of discrimination (or even violence), and open the door for revolutionary right-wing populist movements, such as fascism, to recruit from the reformist populist movements. Criticisms of New World Order conspiracy theorists also come from within their own community. Despite believing themselves to be "freedom fighters", many right-wing populist conspiracy theorists hold views that are incompatible with their professed libertarianism, such as Christian dominionism, authoritarian ultranationalism, white supremacy and eliminationism. == See also == Anti-globalization movement Brainwashing Climate change denial Criticisms of globalization Zionist Occupation Government conspiracy theory == References == == Further reading == The following is a list of non-self-published non-fiction books that discuss New World Order conspiracy theories. Carr, William Guy (1954). Pawns in the Game. Legion for the Survival of Freedom, an affiliate of the Institute for Historical Review. ISBN 0-911038-29-9. Still, William T. (1990). New World Order: The Ancient Plan of Secret Societies. Huntington House Publishers. ISBN 0-910311-64-1. Cooper, Milton William (1991). Behold a Pale Horse. Light Technology Publications. ISBN 0-929385-22-5. Kah, Gary H. (1991). En Route to Global Occupation. Huntington House Publishers. ISBN 0-910311-97-8. Martin, Malachi (1991). Keys of This Blood: Pope John Paul II Versus Russia and the West for Control of the New World Order. Simon & Schuster. ISBN 0-671-74723-1. Robertson, Pat (1992). The New World Order. W Publishing Group. ISBN 0-8499-3394-3. Wardner, James (1994) [1993]. The Planned Destruction of America. Longwood Communications. ISBN 0-9632190-5-7. Keith, Jim (1995). Black Helicopters over America: Strikeforce for the New World Order. Illuminet Press. ISBN 1-881532-05-4. Cuddy, Dennis Laurence (1999) [1994].
Secret Records Revealed: The Men, The Money and The Methods Behind the New World Order. Hearthstone Publishing, Ltd. ISBN 1-57558-031-4. Marrs, Jim (2001). Rule by Secrecy: The Hidden History That Connects the Trilateral Commission, the Freemasons, and the Great Pyramids. HarperCollins. ISBN 0-06-093184-1. Lina, Jüri (2004). Architects of Deception. Referent Publishing. ASIN B0017YZELI.
Wikipedia/New_World_Order_conspiracy_theory
Misinformation related to immunization and the use of vaccines circulates in mass media and social media despite the fact that there is no serious hesitancy or debate within mainstream medical and scientific circles about the benefits of vaccination. Unsubstantiated safety concerns related to vaccines are often presented on the Internet as being scientific information. A large proportion of internet sources on the topic are inaccurate, which can lead people searching for information to form misconceptions relating to vaccines. Although opposition to vaccination has existed for centuries, the internet and social media have recently facilitated the spread of vaccine-related misinformation. False information and conspiracy theories have been intentionally propagated by members of the general public and by celebrities. Active disinformation campaigns by foreign actors are related to increases in negative discussions online and decreases in vaccination use over time. Misinformation related to vaccination leads to vaccine hesitancy, which fuels disease outbreaks. As of 2019, prior to the COVID-19 pandemic, vaccine hesitancy was considered one of the top 10 threats to global health by the World Health Organization. == Extent == A survey by the Royal Society for Public Health found that 50% of the parents of children under the age of five regularly encountered misinformation related to vaccination on social media. On Twitter, bots masquerading as legitimate users were found creating the false impression that nearly equal numbers of individuals are on both sides of the debate, thus spreading misleading information related to vaccination and vaccine safety. The accounts created by bots used additional compelling stories related to anti-vaccination as clickbait to drive up their revenue and expose users to malware.
A study revealed that Michael Manoel Chaves, an ex-paramedic who was sacked by the NHS for gross misconduct after stealing from two patients he was treating, is involved with the anti-vaccine community. Those drawn to this community are often individuals previously interested in alternative medicine or conspiracy theories. Another study showed that a predisposition to believe in conspiracy theories was negatively correlated with individuals' intention to get vaccinated. Spreading vaccine misinformation on social media can also bring financial rewards through donations or fundraising for anti-vaccination causes. == List of popular misinformation == The World Health Organization has classified vaccine-related misinformation into five topic areas. These are: threat of disease (vaccine-preventable diseases are harmless), trust (questioning the trustworthiness of healthcare authorities who administer vaccines), alternative methods (such as alternative medicine to replace vaccination), effectiveness (vaccines do not work) and safety (vaccines have more risks than benefits). === Vaccination causes idiopathic conditions === False: Vaccines cause autism: The established scientific consensus is that there is no link between vaccines and autism. No ingredients in vaccines, including thiomersal, have been found to cause autism. The incorrect claim that vaccines cause autism dates to a paper published in 1998 that has since been retracted. In the late 1990s, a physician at the Royal Free Hospital named Andrew Wakefield published an article claiming to have found an explanation for autism. He first reported a relationship between measles virus and colonic lesions in Crohn's disease, which was soon disproved. He next hypothesized that the combined MMR vaccine against measles, mumps, and rubella triggered colonic lesions that disrupted the colon's permeability, causing neurotoxic proteins to enter the bloodstream, eventually reach the brain, and result in autistic symptoms.
The article was partially retracted by The Lancet on March 6, 2004, after journalist Brian Deer raised issues including the possibility of severe research misconduct, conflict of interest and probable falsehood. The paper was fully retracted on February 2, 2010, following an investigation of the flawed study by Britain's General Medical Council, which supported those concerns. The General Medical Council took disciplinary action against Wakefield on May 24, 2010, striking him off the medical register and revoking his right to practice medicine. There are some indications that people with autism may also tend to have gastrointestinal disorders, such as an unusually shaped intestinal tract and alterations of the gut microbiota. However, multiple large-scale studies of more than half a million children have been carried out without finding a causal link between MMR vaccines and autism. False: Vaccines can cause the same disease that one is vaccinated against: A vaccine causing full-blown disease is extremely unlikely (with the sole exception of the oral polio vaccine, which has been withdrawn in many countries as a result). In traditional vaccines, the virus is attenuated (weakened), so it is not possible to contract the disease, while newer technologies such as mRNA vaccines do not contain the virus at all. False: Vaccines can cause harmful side effects and even death: Vaccines are very safe. Most adverse events after vaccination are mild and temporary, such as a sore arm or mild fever, and can be controlled by taking paracetamol after vaccination. False: Vaccines can cause infertility: There is no supporting evidence or data that any vaccines have a negative impact on women's fertility. In 2020, as COVID-19 numbers rose and vaccinations started to roll out, misinformation that vaccines cause infertility began to circulate.
The false narrative was that mRNA vaccine-induced antibodies, which act against the SARS-CoV-2 spike protein, could also attack the placental protein syncytin-1, and that this could cause infertility. There is no evidence to support this. A joint statement of the American College of Obstetricians and Gynecologists, the American Society for Reproductive Medicine, and the Society for Maternal-Fetal Medicine clearly states "that there is no evidence that the vaccine can lead to loss of fertility". There are numerous studies and surveys that purport to show an association between vaccines and a range of conditions, from ear infections and asthma to ADHD and autism; however, most of the studies have been retracted or are unpublished, and the surveys are not peer-reviewed. One of the studies in question has been criticised for calculating only an unadjusted observational association (as opposed to a correlation or causation). === Alternative remedies to vaccination === Responding to misinformation, some may resort to complementary or alternative medicine as an alternative to vaccination. Those who believe in this narrative view vaccines as 'toxic and adulterating' while seeing alternative 'natural' methods as safe and effective. Some of the misinformation circulating around alternative remedies to vaccination includes: False: Eating yoghurt cures human papillomavirus: Eating any natural product does not prevent or cure HPV. False: Homeopathy can be used as an alternative to protect against measles: Homeopathy has been shown to be ineffective at preventing measles. False: Quercetin, zinc, vitamin D, and other nutritional supplements can protect from/treat COVID-19: None of the above can prevent or treat COVID-19. False: Nosodes are an alternative to vaccines: There is no evidence supporting nosodes' effectiveness in preventing or treating infectious diseases.
=== Vaccination as genocide === Misinformation that forced vaccination could be used to "depopulate" the earth circulated in 2011 by misquoting Bill Gates. There is misinformation implying that vaccines (particularly mRNA vaccines) could alter DNA in the nucleus. mRNA in the cytosol is very rapidly degraded before it would have time to gain entry into the cell nucleus. (mRNA vaccines must be stored at very low temperatures to prevent mRNA degradation.) A retrovirus carries single-stranded RNA (just as an mRNA vaccine contains single-stranded RNA) that enters the cell nucleus and uses reverse transcriptase to make DNA from the RNA in the cell nucleus. A retrovirus has mechanisms to be imported into the nucleus, but other mRNA lacks these mechanisms. Once inside the nucleus, creation of DNA from RNA cannot occur without a primer, which accompanies a retrovirus but which would not exist for other mRNA even if it were placed in the nucleus. Thus, mRNA vaccines cannot alter DNA because they cannot enter the nucleus and because they have no primer to activate reverse transcriptase. === Vaccine components contain forbidden additives === Anti-vaxxers emphasize that components of vaccines such as thiomersal and aluminium are capable of causing health hazards. Thiomersal is a harmless component of some vaccines, used to maintain their sterility, and there are no known adverse effects due to it. Aluminium is included in some vaccines as an adjuvant, and it has low toxicity even in large amounts. Formaldehyde included in some vaccines is present in negligibly low quantities and is harmless. Narratives that COVID-19 vaccines contain haram products were circulated in Muslim communities. === Vaccines are part of a governmental/pharmaceutical conspiracy === The Big Pharma conspiracy theory, that pharmaceutical companies operate for sinister purposes and against the public good, has been used in the context of vaccination.
The theory states that vaccines contain unusual substances and are made only to increase profit. === Vaccine preventable diseases are harmless === There is a common misconception that vaccine-preventable diseases such as measles are harmless. However, measles remains a serious disease, and can cause severe complications or even death. Vaccination is the only way to protect against measles. === Personal anecdotes about harmed individuals === Personal anecdotes and sometimes false stories are circulated about vaccination. Misinformation has spread claiming that people died due to COVID-19 vaccination. There are individuals who perpetuate harmful mistruths about vaccination and falsified links between vaccines and autism. Through the spread of false media reports, people are led to believe that vaccinations are a leading cause of autism, when in fact this is far from the truth. For one, autism arises during fetal development, not after the mother has given birth (Rodier, P. M. 2000). However, there are contributing factors that can influence where a child may be placed on the spectrum. These include the mother taking medication during pregnancy that should not be taken while pregnant, genetics, the environment, as well as metabolic disorders and epigenetic mechanisms (Manzi, B. et al. 2008). Individuals who refuse vaccination because they believe autism is a harmful and negative disorder actually cause more harm to themselves and others by putting themselves at risk of exposure to diseases and infections that can harm their bodies. Moreover, when infected, they can transfer the disease to a person who is immunocompromised. This not only harms themselves but can contribute to the spread of viral infections with harmful long-term effects that can potentially result in death.
All in all, of the many experiments performed on the links between vaccinations and autism, none has demonstrated a link between autism and vaccinations. === Other conspiracy theories === Other conspiracy theories circulated on social media have included false notions such as: False: Polio is not a real disease and the symptoms are actually due to DDT poisoning: The first major documented polio outbreak in the United States occurred in 1894 in Vermont. In the early 20th century, a polio epidemic started in the West, causing 6,000 deaths and leaving 27,000 people paralyzed. In the mid-1950s, Jonas Salk's polio vaccine put an end to the epidemic and saved millions of lives. The incorrect theory that polio was related to pesticide poisoning predates the discovery of the polio vaccine. It was proposed in 1952 by Dr. Ralph R. Scobey in an article in the Archives of Pediatrics. Scobey argued that there were similarities between the symptoms of polio and various types of poisoning, and suggested that polio outbreaks might be more likely to occur during the summer and be related to consumption of fresh fruit and vegetables. While pesticides such as DDT are dangerous, as was shown by Rachel Carson in Silent Spring in 1962, they are not dangerous in the way that Scobey believed them to be, as a cause of polio. Studies have clearly demonstrated causal relationships showing that polio is caused by a virus. Vaccines have proven effective in preventing the disease and eliminating wild poliovirus in most parts of the world. False: The COVID-19 vaccines contain injectable microchips to identify and track people: This conspiracy theory started circulating in 2020, claiming the COVID-19 pandemic was a cover for a plan to implant trackable microchips and that Bill Gates, co-founder of Microsoft, was behind it. A YouGov poll conducted in 2020 suggested that 28% of Americans believe this conspiracy theory.
The origin of the theory is a long-term effort by the Bill and Melinda Gates Foundation to sponsor research on vaccinating people by pricking the skin with an array of many sharp microneedles coated with a vaccine, along with some fluorescent ink. The needles were made of silicon using technology similar to that used to make integrated circuits. Any piece of silicon produced by this technology is called a "chip", whether it is an integrated circuit, a MEMS device, or something else; the theory thus arose from confusion between different meanings of the word "chip". In the series of research papers, the chip is simply pressed against the skin with a finger so that the needles prick the skin; the vaccine coating and fluorescent ink are transferred from the needles into the skin, and the chip itself is then disposed of. The ink is meant to leave a tattoo that can be visualized by irradiating the dye with light of certain wavelengths, making it possible to check whether the tattoo was made. This is useful in contexts where vaccination is compulsory and lower-cost, more secure alternatives, such as database lookups against ID cards or biometrics, are infeasible due to a lack of infrastructure such as a power grid or Internet connectivity. So the chip is neither meant to be implanted, nor could it physically fit into a syringe needle, as the conspiracy theory suggests. == Impact == Fueled by misinformation, anti-vaccination activism is on the rise on social media and in many countries. Research has shown that viewing a website containing vaccine misinformation for 5–10 minutes decreases a person's intention to vaccinate. A 2020 study found that "large proportions of the content about vaccines on popular social media sites are anti-vaccination messages."
It further found that there is a significant relationship between joining vaccine-hesitant groups on social media and openly casting doubt in public about vaccine safety, as well as a substantial relationship between foreign disinformation campaigns and declining vaccination coverage. In 2003, rumors about polio vaccines intensified vaccine hesitancy in Nigeria and led to a five-fold increase in the number of polio cases in the country over three years. A 2021 study found that misinformation about COVID-19 vaccines on social media "induced a decline in intent [to vaccinate] of 6.2 percentage points in the [United Kingdom] and 6.4 percentage points in the [United States] among those who said they would definitely accept a vaccine". Social media is again the leading platform for the rapid spread of vaccine misinformation during a pandemic. For example, a 2020 study of public opinion about the Chinese domestic COVID-19 vaccines then in development found that around one-fifth of the vaccine-related posts on Weibo claimed that the COVID-19 vaccines were generally overpriced, even though they were later administered entirely free of charge. Many people in China also hold the belief that inactivated vaccines are safer than the newly developed mRNA vaccines against SARS-CoV-2. The cause of this might be a combination of national pride and a lack of vaccine literacy. In general, misinformation related to the COVID-19 vaccine reduced public confidence. Public acceptance of Chinese domestic COVID-19 vaccines dropped significantly due to concerns about the possible high cost. An online survey in China showed only 28.7% of the participants expressed definite interest in getting the vaccine. Most people (54.6%) held some hesitancy toward the vaccine.
== Measures against misinformation == === Communication === After repeated exposure to misinformation, for example through social media, individuals might hold misinformed mental models of the function, risk, and purpose of vaccines. The longer an individual holds misinformation, the more staunchly rooted it becomes in their mental model, making its correction and retraction all the more difficult. Over time, these models may become integral to a vaccine-hesitant individual's worldview. People are likely to filter any new information they receive to fit their preexisting worldview, and corrective vaccine facts are no exception to this motivated reasoning. Thus, by the time vaccine-hesitant individuals arrive at the doctor's office, healthcare workers face an uphill battle. If they seek to change minds and maintain herd immunity against preventable diseases, they must do more than simply present facts about vaccines. Providers need communication strategies that effectively change minds and behavior. Communication strategies to counter vaccine misinformation and effectively improve the intention to vaccinate include communicating the scientific consensus that vaccines are safe and effective, using humour to dispel vaccine myths, and providing warnings about vaccine misinformation. By comparison, debunking vaccine misinformation and providing vaccine education materials are less effective in tackling misinformation. Scare tactics and failure to acknowledge uncertainty are not effective, and can even backfire and worsen the intention to vaccinate. Research shows that science communicators should directly counter misinformation because of its negative influence on the silent audience who observe the vaccine debate but do not engage in it. Refutations of vaccine-related misinformation should be straightforward in order to avoid emphasizing the misinformation. It is useful to pair scientific evidence with stories that connect to the belief and value system of the audience.
Interventions for parents/caregivers who make decisions about their children's vaccination are vital. Given the complexity of this problem, effective evidence-based strategies have yet to be identified. Although many wish to provide families with as much corrective information as possible, this often has unintended consequences. One study in 2013 tested four separate interventions to correct MMR vaccine misinformation and promote parental behavioral change: (1) Provide information explaining lack of evidence that MMR causes autism. (2) Present textual information about the dangers of measles, mumps, and rubella. (3) Show images of children with measles, mumps and rubella. (4) Provide a dramatic written narrative about an infant who became deathly ill from measles. Before and after each intervention, researchers measured parents' belief in the vaccine/autism misperception, their intent to vaccinate future children, and their general risk perception of the vaccine. They found that none of the interventions increased parental intent to vaccinate. Instead, the first intervention (1) reduced misperceptions about autism, but still decreased parents' intent to vaccinate future children. Notably, this effect was significant among parents who were already the most vaccine-hesitant. This shows that corrective information may backfire. Motivated reasoning could be the mechanism behind this dynamic – no matter how many facts are provided, parents still sift through them to selectively find those that support their worldview. While the corrective information can have an effect on a specific belief, ultimately vaccine-hesitant parents often use this additional information to strengthen their original behavioral intent. Interventions three and four increased the vaccine/autism misperception and increased belief in serious vaccine side effects, respectively. 
This can be attributed to a potential danger-priming effect: when pushed into a fearful state, parents misattribute this fear to the vaccine itself, rather than the diseases it prevents. In all cases, the facts included had little effect, if not a counterproductive one, on future behaviors. This work has important implications for future research. First, the study's findings revealed a disparity between beliefs and intentions: even as specific misperceptions are corrected, behavior may not change. Since reaching herd immunity for preventable diseases requires promoting a behavior, vaccination, it is important for future research to measure behavioral intent, rather than just beliefs. Second, it is imperative for all health messaging to be tested before its widespread use. Society does not necessarily know the behavioral impacts of communication interventions; they may have unintended consequences on different groups. In the case of correcting vaccine misinformation and changing vaccination behaviors, much more research is still needed to identify effective communication strategies. Several governmental agencies, such as the Centers for Disease Control and Prevention (CDC) in the United States and the National Health Service (NHS) in the United Kingdom, have dedicated webpages for addressing vaccine-related misinformation. === Social media === Pinterest was one of the first social media platforms to surface only trustworthy information from reliable sources in vaccine-related searches, beginning in 2019. In 2020, Facebook announced that it would no longer allow anti-vaccination advertisements on its platform. Facebook also said it would elevate posts from the World Health Organization, UNICEF and other NGOs, in order to increase immunization rates through public health campaigns. In April 2022, Meta announced that its collaboration with UNICEF had reached more than 150 million people with information about the COVID-19 vaccine via online outreach campaigns in several countries.
Twitter announced that it would put a warning label on tweets containing disputed or unsubstantiated rumors about vaccination and require users to remove tweets that spread false information about vaccines. TikTok announced that it would start directing people to official health sources when they search for vaccine-related information. By December 2020, YouTube had removed more than 700,000 videos containing misinformation related to COVID-19. === Vaccine-preventable diseases have been eradicated === Vaccination has enabled the reduction of most vaccine-preventable diseases (e.g. wild polio has been eliminated in every country except Afghanistan and Pakistan). However, some are still prevalent and even cause epidemics in some parts of the world. If the affected population is not protected by vaccination, the disease can quickly spread from country to country. Vaccines not only protect individuals, but also lead to herd immunity if a sufficient number of people in the population have taken the vaccine. Eradication is the permanent elimination of an infectious disease worldwide through deliberate efforts, rendering further intervention measures unnecessary. To date, the only disease that has been successfully eradicated is smallpox. Poliomyelitis was targeted for eradication by the year 2000, and significant progress was made towards this goal, with the Western Hemisphere being declared polio-free and over a year having passed without any reported cases in the Western Pacific Region of the World Health Organization. An examination of the technical feasibility of eradicating other diseases preventable by vaccines currently available in the United States suggests that measles, hepatitis B, mumps, rubella, and possibly Haemophilus influenzae type b are potential candidates for eradication. From a practical standpoint, measles appears to be the most likely candidate for the next eradication effort.
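The herd-immunity point above can be made concrete with the classic epidemiological rule of thumb: for a disease with basic reproduction number R0, roughly 1 − 1/R0 of the population must be immune to block sustained transmission. A minimal sketch (the R0 values below are illustrative assumptions, not figures from this article):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune, per the 1 - 1/R0 rule."""
    if r0 <= 1:
        # With R0 at or below 1 an outbreak dies out on its own.
        return 0.0
    return 1.0 - 1.0 / r0

# Assumed illustrative R0 values for two vaccine-preventable diseases.
for disease, r0 in [("measles", 15.0), ("polio", 6.0)]:
    share = herd_immunity_threshold(r0)
    print(f"{disease}: at least {share:.0%} of the population must be immune")
```

This simple model illustrates why highly contagious diseases such as measles demand very high vaccination coverage, and why even modest misinformation-driven drops in uptake can reopen the door to outbreaks.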
Despite the challenges, eradication represents the ultimate achievement in sustainability and social justice, and even if eradication is not possible, significant improvements in control can still be made with existing vaccines and new and improved vaccines may offer further possibilities in the future. == See also == COVID-19 vaccine misinformation and hesitancy == References == == External links ==
Wikipedia/Vaccine_misinformation
An ideograph or virtue word is a word frequently used in political discourse that uses an abstract concept to develop support for political positions. Such words are usually terms that do not have a clear definition but are used to give the impression of a clear meaning. An ideograph in rhetoric often exists as a building block or simply one term or short phrase that summarizes the orientation or attitude of an ideology. Such examples notably include <liberty>, <freedom>, <democracy> and <rights>. Rhetorical critics use chevrons or angle brackets (<>) to mark off ideographs. The term ideograph was coined by rhetorical scholar and critic Michael Calvin McGee (1980) to describe the use of particular words and phrases as political language in a way that captures (as well as creates or reinforces) particular ideological positions. McGee sees the ideograph as a way of understanding how specific, concrete instances of political discourse relate to the more abstract idea of political ideology. Robertson defines ideographs as "political slogans or labels that encapsulate ideology in political discourse." Meanwhile, Celeste Condit and John Lucaites, influenced by McGee, explain, "Ideographs represent in condensed form the normative, collective commitments of the members of a public, and they typically appear in public argumentation as the necessary motivations or justifications for action performed in the name of the public." Ideographs are common in advertising and political discourse. == Definition == McGee uses the term in his seminal article "The 'Ideograph': A Link Between Rhetoric and Ideology" which appeared in the Quarterly Journal of Speech in 1980. He begins his essay by defining the practice of ideology as the practice of political language in specific contexts—actual discursive acts by individual speakers and writers. The question this raises is how this practice of ideology creates social control.
McGee's answer to this is to say that "political language which manifests ideology seems characterized by slogans, a vocabulary of 'ideographs' easily mistaken for the technical terminology of political philosophy." He goes on to offer his definition of "ideograph": "an ideograph is an ordinary-language term found in political discourse. It is a high order abstraction representing commitment to a particular but equivocal and ill-defined normative goal." An ideograph, then, is not just any particular word or phrase used in political discourse, but one of a particular subset of terms that are often invoked in political discourse but which do not have a clear, univocal definition. Despite this, in their use, ideographs are often invoked precisely to give the sense of a clearly understood and shared meaning. This potency makes them the primary tools for shaping public decisions. It is in this role as the vocabulary for public values and decision-making that they are linked to ideology. == Examples == There is no absolute litmus test for what terms are or are not ideographs. Rather, this is a judgment that must be made through the study of specific examples of discourse. However, McGee (and others who have followed him) have identified several examples of ideographs or virtue words in Western liberal political discourse, such as <liberty>, <property>, <freedom of speech>, <religion>, and <equality>. In each case, the term does not have a specific referent. Rather, each term refers to an abstraction which may have many different meanings depending on its context. It is their mutability across circumstances that gives the terms such rhetorical power. If the definition of a term such as <equality> can be stretched to include a particular act or condition, then public support for that act or condition is likely to be stronger than it was previously.
By encapsulating values which are perceived to be widely shared by the community, but which are in fact highly abstract and defined in very different ways by individuals, ideographs provide a potent persuasive tool for the political speaker. McGee offers the example of Richard Nixon's attempt to defend his decision not to turn over documents to Congress during the Watergate scandal by invoking "the principle of confidentiality." Recognizing that his refusal to submit to Congress could be seen as a violation of the "rule of law", Nixon pitted "the principle of confidentiality" against the "rule of law," despite the fact that these two ideographs would, in the abstract, not likely be seen as in conflict with one another. Nixon, in an attempt to expand the understanding of "the principle of confidentiality" to cover his own specific refusal to cooperate with Congress, used the abstractness of the term to his benefit, claiming that the right to confidentiality was the more central term. While the term has remained mostly in this sphere of academic rhetorical criticism, some political consultants and practitioners are becoming savvy to this art. Ideographs appear in advertising and political campaigns regularly, and are crucial to helping the public understand what is really being asked of them. For example, "equality" is a term commonly used in political discourse and rarely defined. It can refer to a situation in which all people have the same opportunities, or a condition in which social resources are distributed uniformly to different individuals and groups. The former is the more commonly used definition in US history, according to Condit and Lucaites, although in a socialist or left-leaning political state, the term may refer foremost to the distribution of social resources. Condit and Lucaites depict the racial facet of equality as the dominant meaning in an American context of political discourse since 1865. Another important ideograph used specifically by U.S.
presidents Barack Obama and George W. Bush after the 9/11 attacks is <terrorism>. The term does not have a clear or specific definition, but when applied in the fear-stricken country after the devastating attacks of 2001, it held significant weight and meaning for Americans all across the country. Kelly Long explores Obama's discourse on the <War on Terror> and states that "by developing an ideological justification for the conflicts that the United States was involved in at the time, Obama remedied much of the damage done by the Bush administration". Obama justified the <War on Terror> by addressing the nation and saying that in order to protect the <rule of law> and <democratic values>, we must fight against <terrorism>. Obama used this term to his advantage and made <terrorism> appear to be a common enemy, with fighting back as the common cause. This use of the ideograph unified the country, creating a sense of identity for American citizens, "defining what the nation stands for and against. The term divides those who are civilized from those who are uncivilized, those who defend economic freedom from those who would attack America’s way of life and those who support democracy from those who would disrupt it". Marouf Hasian discusses how key ideographs representative of a society's commitments change over time, particularly in the name of <liberty>, <equality>, or <privacy>, as epitomized in eugenics. From the 1900s to the 1930s, Americans justified the restriction of reproductive rights based on medical, social, economic, and political considerations, but were appalled when the Nazis used some of the same arguments in their creation of the "perfect race". While rhetorical critics identify these terms as ideographs, political leaders viewed each other's terms as "glittering generalities," as Lincoln first characterized his opponent's words.
In addition to practitioners, corporate marketing and political consulting use key terms in this way, concentrating on the image and branding of terms. For example, Frank Luntz tests audience reaction to certain words or phrases using dial technology, a mechanism which instantaneously shows moment-by-moment reactions to speeches or presentations. This research has been extremely beneficial to his clients, as they can use ideographs as "trigger words" in an advertising campaign. == Importance == There are three primary ways in which the concept of the ideograph is important to rhetorical critics. First, it suggests a way of studying political ideology using concrete instances of language use. By showing how looking at specific uses of key words and phrases in political language reveals underlying ideological commitments, McGee offers a concrete method for understanding the highly abstract concept of ideology. Second, the definition of the ideograph makes clear that the rhetorical study of a term is different from a legal, historical, or etymological study of a term. Unlike other perspectives that focus on how a term has changed over time, a rhetorical study of a term focuses on the forces involved in the creation of these meanings. In short, a rhetorical study of a term is the study of the use of that term in practice. This leads to a third key aspect of what the concept of the ideograph offers to rhetorical critics. McGee notes that the study of a term must not, and should not, be limited to its use in "formal discourse." Instead, the critic is much more likely to gain a better understanding of an ideograph by looking at how it is used and depicted in movies, plays, and songs, as well as how it is presented in educational texts aimed at children. This moves the study of ideology beyond the limits of social philosophy or even political discourse as traditionally conceived (i.e., "great speeches by great men").
== Cultural variability == "An ideograph is a culturally biased, abstract word or phrase drawn from ordinary language, which serves a constitutional value for a historically situated collectivity." There exists a culturally specific understanding in each culture about what an ideograph means. Ideographs in rhetoric are culturally specific but recur inter-culturally, meaning that the same ideograph can be used and interpreted differently across cultures. The idea may differ from culture to culture, but some aspects may nonetheless be shared by one or more cultures. For example, the concept of femininity exists cross-culturally to define ideas about women, yet one can expect these ideas to vary from culture to culture. == Critical use == At the end of his essay defining the ideograph, McGee says that “A complete description of an ideology . . . will consist of (1) the isolation of a society’s ideographs, (2) the exposure and analysis of the diachronic structure of every ideograph, and (3) characterization of synchronic relationships among all the ideographs in a particular context.” Such an exhaustive study of any ideology has yet to materialize, but many scholars have made use of the ideograph as a tool for understanding both specific rhetorical situations and a broader scope of ideological history. As a teacher, McGee himself made use of the ideograph as a tool for structuring the study of the rise of liberalism in British public address, focusing on ideographs such as <property>, <patriarchy>, <religion>, and <liberty>. Other scholars have made a study of specific uses of ideographs such as <family values> and <equality>. Some critics have gone beyond the idea that an ideograph must be a verbal symbol and have expanded the notion to include photographs and objects represented in mass media. == See also == Essentially contested concept Loaded language Propaganda == References == == Further reading == Pineda, R. D., & Sowards, S.
K. (2007). Flag waving as visual argument: 2006 immigration demonstrations and cultural citizenship. Argumentation & Advocacy, 43(3/4), 164–174. Potter, J. E. (2014). Brown-skinned outlaws: An ideographic analysis of "illegal(s)". Communication, Culture & Critique, 7(2), 228–245.
Wikipedia/Ideograph_(rhetoric)
Misinformation related to birth control pertains to incorrect or misleading information surrounding birth control and its medical, legal and societal implications. Much of this misinformation concerns contraceptive methods and claims that have no basis in science. Belief in this misinformation can deter people from using effective solutions in favor of solutions that are entirely ineffective and, in some cases, harmful to health. == Commonly propagated misinformation == === Misconceptions and myths around negative side effects of birth control === The myth that birth control increases the risk of transmitting a sexually transmitted infection: Some forms of birth control, namely condoms and dental dams, can prevent the transmission of STIs by providing a barrier against skin-to-skin contact and fluid exchange. Some forms of hormonal birth control, such as the pill, prevent pregnancy but do not prevent the spread of STIs. The pill has been shown preliminarily to increase the chances of certain types of STI transmission slightly, while lowering the risk of spread of others. The myth that taking birth control pills negatively impacts future fertility: Oral contraceptives prevent pregnancy temporarily but have not been shown to significantly impact future fertility. The body's hormonal balance typically returns to regular, pre-pill levels within a few cycles after discontinuing the pill. The myth that IUDs can cause pelvic inflammatory disease: The risk of developing pelvic inflammatory disease following the insertion of an IUD has been proven to be very low. IUDs are considered amongst the safest and most effective forms of birth control. The myth that birth control negatively impacts libido and sexual attraction: While hormonal birth control such as the pill may impact libido, most people taking birth control won't experience any change. For the small percentage that do, some may experience decreased libido, while others may experience increased libido.
The misconception that male birth control causes exploding testicles. The misconception that contraception causes blood clots and death: While birth control pills, namely those with estrogen, have been shown to increase the risk of blood clots, this increase is small—at most 10 in 10,000 people per year develop blood clots from being on birth control pills. Women with a history of blood clots are at much higher risk and should consult a doctor. The myth that use of birth control affects future fertility: Research shows that both long- and short-term use of birth control pills does not affect the future ability to have children. Individuals with IUDs can get pregnant after the IUD is removed. Fertility declines with age, and use of birth control methods has no significant effect on it. The misconception that birth control increases the risk of cancer: Birth control pills actually decrease the chance of getting endometrial, ovarian and colon cancer. The misconception that all birth control pills can cause stroke or blood clots: Individuals with uncontrolled high blood pressure, lupus, migraines with aura or depression needing further monitoring are not given birth control pills containing combination hormones because these increase their chances of getting a stroke or blood clots elsewhere in the body. In that case, an IUD is a better choice. The myth that birth control pills cause weight gain: Studies have shown that the effect of birth control pills on weight gain is relatively small or nil. Birth control pills can cause water retention. The progesterone components of the pill can make some people more hungry, leading to weight gain. === Misconceptions and myths around a woman's menstrual cycle / maternity and birth control === The misconception that the woman is only fertile one day of the month: While women's cycles are generally regular, hormones involved in the menstrual cycle can be impacted by a number of factors, including medication and stress.
This can cause ovulation to happen on a different day than expected, or on more than one day per month, resulting in fertile days that a woman may not anticipate even if actively tracking her cycle. The myth that women can't get pregnant right after their period: Although ovulation typically occurs around 10–16 days before the next period, various factors can cause early ovulation in any given cycle; some women also have naturally short ovulation cycles, making them more likely to get pregnant closer to the end of their period. The myth that active breastfeeding prevents pregnancy: Breastfeeding is shown to help prevent ovulation in certain circumstances: the baby is younger than 6 months old; breastfeeding happens at least every four hours during the day and every six hours overnight; and the woman is currently not having her period. While strictly meeting these conditions yields success rates similar to hormonal birth control, the conditions are difficult to maintain and may not be practical for many women. === Misconceptions and myths around sexual practices and birth control === The myth that if the woman doesn't orgasm, she can't get pregnant: Women ovulate each month as part of their regular menstrual cycle. Pregnancy occurs when a sperm fertilizes an egg that has been released in ovulation. This happens regardless of whether a woman has an orgasm. The misconception that if the woman douches after sex, she won't get pregnant: Douching is washing out the vagina with fluids. It does not prevent pregnancy and is often advised against more generally by doctors. The myth that the woman won't get pregnant if she has sex standing up or is on top: Sperm is able to move up the cervical canal regardless of the position the woman is in during sex. The myth that plastic wrap or a balloon are acceptable condom substitutes: Plastic wrap and balloons, besides not being designed for the shape of a penis, are not durable and may not serve as an effective barrier.
The myth that the woman won't get pregnant if the man pulls out before ejaculation: While more effective than no birth control at all, pulling out before ejaculation is less effective than other birth control methods—approximately one in five people who rely on this method will get pregnant. The myth that the woman can't get pregnant if it is her first time having sex: The number of times you've had sex has no bearing on whether or not you become pregnant—you can become pregnant the first time you have sex. The myth that the woman won't get pregnant if she takes a shower or urinates after sex: While urinating after sex may help prevent infections like UTIs, it will not prevent pregnancy. === Misconceptions around vaccines and infertility === There have been a number of myths falsely associating vaccines with infertility, from vaccines for tetanus to COVID-19 to smallpox. However, no vaccines have been found to be associated with infertility. === Plants erroneously believed to prevent pregnancy === Several plants are erroneously believed to prevent pregnancy, such as smartweed, wild yam, pennyroyal, black cohosh, angelica, papaya, neem, asafoetida, figs and ginger. None of these plants has been found to be effective for contraception. === Misconceptions and myths around how birth control works === The myth that birth control causes abortions: This misconception has been documented as blocking access to birth control. Birth control prevents pregnancy; it does not interrupt it. However, between 2022 and 2024, a number of lawmakers across the United States publicized claims that IUDs and morning-after pills cause abortions and should therefore not be funded by taxpayer money. In 2022, a group of conservative, anti-abortion Missouri lawmakers attempted to stop their state's Medicaid from paying for emergency contraceptives and IUDs on these grounds.
In February 2024, Oklahoma lawmakers proposed a bill to ban the morning-after pill and some IUDs. In May 2024, Virginia Governor Glenn Youngkin vetoed a bill protecting access to contraception, citing that some Virginians believed contraception causes abortions and thus protecting access would intrude on their religious freedoms. These false assertions and the publicity surrounding them have amounted to barriers to access to birth control: in the 13 states in the United States that had total abortion bans at the end of 2024, many women believe they can no longer access some forms of birth control. A survey in 2023 found that almost half of women in states where abortion is fully banned believe Plan B is illegal in their states. The myth that birth control pills are effective immediately after taking the initial dose(s): Different types of pills have different windows for when efficacy begins, though none is immediately effective after the initial dose. The misconception that IUDs cannot or should not be used prior to having had a baby: IUDs are completely safe to use even in individuals who have never been pregnant before. The myth that antibiotics impact the efficacy of birth control pills: Except for the tuberculosis drug rifampicin, antibiotics generally do not decrease the efficacy of birth control pills. == References ==
Wikipedia/Misinformation_related_to_birth_control
A strategy of tension (Italian: strategia della tensione) is a political policy wherein violent struggle is encouraged rather than suppressed. The purpose is to create a general feeling of insecurity in the population and make people seek security in a strong government. The strategy of tension is most closely identified with the Years of Lead in Italy from 1968 to 1982, wherein far-left Marxist groups, far-right neo-fascist extra-parliamentary groups and state intelligence agencies performed bombings, kidnappings, arsons, and murders. Some historians and activists have accused NATO of allowing and sanctioning such terrorism, through projects such as Operation Gladio, although this is disputed by other historians and denied by the intelligence agencies involved. Other cases where writers have alleged a strategy of tension include the deep state in Turkey from the 1970s–1990s, the war veterans and ZANU–PF in Zimbabwe which coordinated the farm invasions of 2000, the DRS security agency in Algeria from 1991 to 1999, and the Belgian State Security Service during the Belgian terrorist crisis of 1982–1986. According to the sociologist Franco Ferraresi, the term "strategy of tension" was first used in an article on the Piazza Fontana bombing in The Observer newspaper, published on 14 December 1969. Neal Ascherson, one of those responsible for that article, later clarified that the expression had been suggested to him by the journalists Antonio Gambino and Claudio Risé, both of L'Espresso, who had been in conversation with him in the days immediately following the explosion of the Piazza Fontana bomb. 
== Alleged examples == === United Kingdom === During the sectarian 40-year conflict in Northern Ireland known as The Troubles, there were allegations of significant state collusion between paramilitaries and the UK Government. === Italy === From 1968 to 1982, Italy suffered numerous terrorist attacks by both the left and the right, which were often followed by government round-ups and mass arrests. Allegations, especially made by adherents of the Italian Communist Party (PCI), are that the government trumped up and intentionally allowed the attacks of communist radicals, or even carried out false flag operations in their name, as an excuse to arrest other communists, and allowed the attacks of far-right paramilitary organizations as an extrajudicial way to silence enemies. Various parliamentary committees were held to investigate and prosecute these crimes in the 1990s. A 1995 report from the Left Democrats (a merger of former center-left parties and the PCI) to a subcommittee of the Italian Parliament stated that a "strategy of tension" had been supported by the United States to "stop the PCI, and to a certain degree also the PSI, from reaching executive power in the country". Aldo Giannuli, a historian who worked as a consultant to the parliamentary terrorism commission, wrote that he considered the Left Democrats' report as dictated primarily by domestic political considerations rather than historical ones: "Since they have been in power the Left Democrats have given us very little help in gaining access to security service archives," he said. "This is a falsely courageous report." Giannuli did, however, decry the fact that many more leftist terrorists were prosecuted and convicted than rightist terrorists. Swiss academic Daniele Ganser wrote NATO's Secret Armies, a 2004 book that alleged direct NATO support for far-right terrorists in Italy as part of its "strategy of tension".
Ganser also alleges that Operation Gladio, an effort to organize stay-behind guerrillas and resistance in the event of a communist takeover of Italy by the Eastern Bloc, continued into the 1970s and supplied the far-right neo-fascist movements with weapons. Ganser's conclusions have been disputed; most notably, Ganser heavily cites the document US Army Field Manual 30-31B, which the US State Department claims is a 1976 Soviet hoax meant to discredit the US, while others, such as Ray S. Cline, have claimed it is likely authentic, and Licio Gelli claimed it was in fact given to him by the CIA. In a 1992 BBC documentary on Gladio titled Operation GLADIO, the neo-fascist terrorist Vincenzo Vinciguerra reported that the stay-behind armies really did pursue this strategy, stating that the state needed those terrorist attacks so that the population would willingly turn to the state and ask for security. == See also == Agent provocateur Culture of fear False flag Operation Gladio Years of Lead (Italy) == References == == External links == The Strategy of Tension on libcom.org
Wikipedia/Strategy_of_tension
Lewis's trilemma is an apologetic argument traditionally used to argue for the divinity of Jesus by postulating that the only alternatives were that he was evil or mad. One version was popularised by University of Oxford literary scholar and writer C. S. Lewis in a BBC radio talk and in his writings. It is sometimes described as the "Lunatic, Liar, or Lord", or "Mad, Bad, or God" argument. It takes the form of a trilemma — a choice among three options, each of which is in some way difficult to accept. A form of the argument can be found as early as 1846, and many other versions of the argument preceded Lewis's formulation in the 1940s. The argument has played an important part in Christian apologetics. Criticisms of the argument have included that it relies on the assumption that Jesus claimed to be God, something that most biblical scholars do not believe to be true, and that it is logically unsound since it presents an incomplete set of options. == History == This argument has been used in various forms throughout church history. It was used by the American preacher Mark Hopkins in Lectures on the Evidences of Christianity (1846), a book based on lectures delivered in 1844. Another early use of this approach was by the Scottish preacher "Rabbi" John Duncan (1796–1870), around 1859–1860. He stated: "Christ either deceived mankind by conscious fraud, or He was Himself deluded and self-deceived, or He was Divine. There is no getting out of this trilemma. It is inexorable." J. Gresham Machen used a similar line of argument in the fifth chapter of his famous work Christianity and Liberalism (1923). There, Machen says: "The real trouble is that the lofty claim of Jesus, if ... the claim was unjustified, places a moral stain upon Jesus' character. What shall be thought of a human being who lapsed so far from the path of humility and sanity as to believe the eternal destinies of the world were committed into his hands?
The truth is that if Jesus be merely an example, he is not a worthy example for he claimed to be far more." Others who used this approach included N. P. Williams, R. A. Torrey (1856–1928), and W. E. Biederwolf (1867–1939). The writer G. K. Chesterton used something similar to the trilemma in his book, The Everlasting Man (1925), which Lewis cited in 1962 as the second book that most influenced him. == Lewis's formulation == Lewis was an Oxford medieval literature scholar, popular writer, Christian apologist, and former atheist. He used the argument outlined below in a series of BBC radio talks later published as the book Mere Christianity. There, he states: "I am trying here to prevent anyone saying the really foolish thing that people often say about Him: I'm ready to accept Jesus as a great moral teacher, but I don't accept his claim to be God. That is the one thing we must not say. A man who was merely a man and said the sort of things Jesus said would not be a great moral teacher. He would either be a lunatic—on the level with the man who says he is a poached egg—or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God, or else a madman or something worse. You can shut him up for a fool, you can spit at him and kill him as a demon or you can fall at his feet and call him Lord and God, but let us not come with any patronizing nonsense about his being a great human teacher. He has not left that open to us. He did not intend to. ... Now it seems to me obvious that He was neither a lunatic nor a fiend: and consequently, however strange or terrifying or unlikely it may seem, I have to accept the view that He was and is God." Lewis, who had spoken extensively on Christianity to Royal Air Force personnel, was aware that many ordinary people did not believe Jesus was God but saw him rather as "a 'great human teacher' who was deified by his superstitious followers"; his argument is intended to overcome this. 
It is based on a traditional assumption that, in his words and deeds, Jesus was asserting a claim to be God. For example, in Mere Christianity, Lewis refers to what he says are Jesus's claims: to have authority to forgive sins—behaving as if "He was the party chiefly concerned, the person chiefly offended in all offences"; to have always existed; and to intend to come back to judge the world at the end of time. Lewis implies that these amount to a claim to be God and argues that they logically exclude the possibility that Jesus was merely "a great moral teacher", because he believes no ordinary human making such claims could possibly be rationally or morally reliable. Elsewhere, he refers to this argument as "the aut Deus aut malus homo" ("either God or a bad man"), a reference to an earlier version of the argument used by Henry Parry Liddon in his 1866 Bampton Lectures, in which Liddon argued for the divinity of Jesus based on a number of grounds, including the claims he believed Jesus made. === In Narnia === A version of this argument appears in Lewis's fantasy novel The Lion, the Witch and the Wardrobe. When Lucy and Edmund return from Narnia (her second visit and his first), Edmund tells Peter and Susan that he was playing along with Lucy and pretending they went to Narnia. Peter and Susan believe Edmund and are worried that Lucy might be mentally ill, so they seek out the Professor whose house they are living in. After listening to them explain the situation and asking them some questions, he responds: "'Logic!' said the Professor half to himself. 'Why don't they teach logic at these schools? There are only three possibilities. Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious she is not mad.
For the moment then, and unless any further evidence turns up, we must assume she is telling the truth.'" == Influence == === Christian === The trilemma has continued to be used in Christian apologetics since Lewis, notably by writers like Josh McDowell. Philosopher Peter Kreeft describes the trilemma as "the most important argument in Christian apologetics", and it forms a major part of the first talk in the Alpha Course and the book based on it, Questions of Life by Nicky Gumbel, an English Anglican priest. Ronald Reagan used this argument in 1978, in a written reply to a liberal Methodist minister who said that he did not believe Jesus was the son of God. A variant has also been quoted by Bono. The Lewis version was cited by Charles Colson as the basis of his conversion to Christianity. Stephen Davis, a supporter of Lewis and of this argument, argues that it can show belief in the incarnation as rational. The biblical scholar Bruce M. Metzger argued: "It has often been pointed out that Jesus' claim to be the only Son of God is either true or false. If it is false, he either knew the claim was false or he did not know that it was false. In the former case (2) he was a liar; in the latter case (3) he was a lunatic. No other conclusion beside these three is possible." === Non-Christian === The atheist writer Christopher Hitchens accepts Lewis's analysis of the options but reaches the opposite conclusion that Jesus was not good. He writes: "I am bound to say that Lewis is more honest here. Absent a direct line to the Almighty and a conviction that the last days are upon us, how is it 'moral' ... to claim a monopoly on access to heaven, or to threaten waverers with everlasting fire, let alone to condemn fig trees and persuade devils to infest the bodies of pigs? Such a person if not divine would be a sorcerer and a fanatic." 
== Criticism == Writing of the argument's "almost total absence from discussions about the status of Jesus by professional theologians and biblical scholars", Stephen T. Davis comments that it is "often severely criticized, both by people who do and by people who do not believe in the divinity of Jesus". === Jesus' claims to divinity === The argument relies on the assumption that Jesus claimed to be God, something that most biblical scholars and historians of the period do not believe to be true. A frequent criticism is that Lewis's trilemma depends on the veracity of the scriptural accounts of Jesus's statements and miracles. The trilemma rests on the interpretation of New Testament authors' depiction of the life of Jesus; a widespread objection is that the statements by Jesus recorded in the Gospels are being misinterpreted, and do not constitute claims to divinity. According to the biblical scholar Bart D. Ehrman, it is historically inaccurate that Jesus called himself God, so Lewis's premise of accepting that very claim is problematic. Ehrman stated that it is a mere legend that the historical Jesus called himself God, and that this was unknown to Lewis since he never was a professional Bible scholar. In Honest to God, John A. T. Robinson, then Bishop of Woolwich, criticizes Lewis's approach, questioning the idea that Jesus intended to claim divinity: "It is, indeed, an open question whether Jesus claimed to be Son of God, let alone God". John Hick, writing in 1993, argued that this "once popular form of apologetic" was ruled out by changes in New Testament studies, citing "broad agreement" that scholars do not today support the view that Jesus claimed to be God, quoting as examples Michael Ramsey (1980), C. F. D. Moule (1977), James Dunn (1980), Brian Hebblethwaite (1985), and David Brown (1985). 
Larry Hurtado, who argues that the followers of Jesus within a very short period developed an exceedingly high level of devotional reverence to Jesus, at the same time says that there is no evidence that Jesus himself demanded or received such cultic reverence. According to Gerd Lüdemann, the broad consensus among modern New Testament scholars is that the proclamation of the divinity of Jesus was a development within the earliest Christian communities. === Unsound logical form === Another criticism raised is that Lewis is creating a false trilemma by insisting that only three options are possible. Craig A. Evans writes that the "liar, lunatic, Lord" trilemma "makes for good alliteration, maybe even good rhetoric, but it is faulty logic". He proceeds to list several other alternatives: Jesus was Israel's messiah, simply a great prophet, or we do not really know who or what he was because the New Testament sources portray him inaccurately. Philosopher and theologian William Lane Craig also believes that the trilemma is an unsound argument for Christianity. Craig gives several other logically possible alternatives: Jesus' claims as to his divinity were merely good-faith mistakes resulting from his sincere efforts at reasoning, Jesus was deluded with respect to the specific issue of his own divinity while his faculties of moral reasoning remained intact, or Jesus did not understand the claims he made about himself as amounting to a claim to divinity. Philosopher John Beversluis comments that Lewis "deprives his readers of numerous alternate interpretations of Jesus that carry with them no such odious implications". Paul E. Little, in his 1967 work Know Why You Believe, expanded the argument into a tetralemma ("Lord, Liar, Lunatic or Legend"). 
This has also been done by Peter Kreeft and Ronald Tacelli, both Saint John's Seminary professors of philosophy at Boston College, who have also suggested a pentalemma, accommodating the option that Jesus was a guru, who believed himself to be God in the sense that everything is divine. ==== Lewis's response to the possibility that the Gospels are legends ==== Lewis used his own literary expertise in a 1950 essay, "What Are We to Make of Jesus?", to disagree with the possibility that the Gospels are legends. There, Lewis writes: "Now, as a literary historian, I am perfectly convinced that whatever else the Gospels are they are not legends. I have read a great deal of legend and I am quite clear that they are not the same sort of thing. They are not artistic enough to be legends. From an imaginative point of view they are clumsy, they don't work up to things properly. Most of the life of Jesus is totally unknown to us, as is the life of anyone else who lived at that time, and no people building up a legend would allow that to be so. Apart from bits of the Platonic dialogues, there is no conversation that I know of in ancient literature like the Fourth Gospel. There is nothing, even in modern literature, until about a hundred years ago when the realistic novel came into existence." === Apologetic method === Writing from a presuppositional perspective, Richard L. Pratt Jr. has criticized the trilemma as expanded by Paul E. Little ("Lord, Liar, Lunatic or Legend") as being too reliant on human reason: "Instead of insisting on the necessity of repentance and faith as the ground for true knowledge, Little acts as if the unbeliever needs merely to be logical about Jesus' claims in order to arrive at the truth." == See also == Christological argument Christology Historicity of the Bible List of Jewish messiah claimants Mental health of Jesus Pious fraud Rejection of Jesus == References ==
Wikipedia/Lewis's_trilemma
Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. Algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, to generate an endless stream of media aimed at keeping users engaged. Through echo chamber channels, consumers are driven toward greater polarization by their media preferences and self-confirmation. Algorithmic radicalization remains a controversial phenomenon, as it is often not in the best interest of social media companies to remove echo chamber channels. To what extent recommender algorithms are actually responsible for radicalization remains disputed; studies have found contradictory results as to whether algorithms have promoted extremist content. == Social media echo chambers and filter bubbles == Social media platforms learn the interests and likes of each user and tailor the feed accordingly to keep them engaged and scrolling; the resulting personalized information environment is known as a filter bubble. An echo chamber is formed when users come across beliefs that magnify or reinforce their thoughts and form a group of like-minded users in a closed system. Echo chambers spread information without any opposing beliefs and can possibly lead to confirmation bias. According to group polarization theory, an echo chamber can potentially lead users and groups towards more extreme radicalized positions. According to the National Library of Medicine, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates." == By site == === 4chan === On May 14, 2022, 18-year-old Payton S. Gendron carried out a mass shooting in Buffalo, New York.
The shooter stated in his manifesto that the internet was the source of his radical beliefs: "There was little to no influence on my personal beliefs by people I met in person." Around March 19, 2024, a New York state judge ruled Reddit and YouTube must face lawsuits in connection with the mass shooting over accusations that they played a role in the radicalization of the shooter. === Facebook === Facebook's algorithm focuses on recommending content that makes the user want to interact. It ranks content by prioritizing popular posts by friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests, which can sometimes lead users towards an echo chamber of troublesome content. Users can find the list of interests the algorithm uses by going to the "Your ad Preferences" page. According to a Pew Research study, 74% of Facebook users did not know that list existed until they were directed towards that page in the study. It is also relatively common for Facebook to assign political labels to its users. In recent years, Facebook has started using artificial intelligence to change the content users see in their feed and what is recommended to them. A document known as The Facebook Files revealed that the company's AI systems prioritize user engagement over everything else, and that controlling those systems has proven difficult. In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral", concluding that in order to reach maximum profits, optimization for engagement is necessary. To increase engagement, the algorithms have learned that hate, misinformation, and politics are instrumental for app activity. As referenced in the memo, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."
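Facebook's real ranking system is proprietary, but the behavior described in this section, where posts by friends, viral posts, and incendiary material are all boosted because they predict engagement, can be illustrated with a toy scoring function. This is a hypothetical sketch only: the Post fields, weights, and function names are invented for illustration and are not Facebook's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_is_friend: bool  # friends' posts are prioritized
    shares: int             # proxy for virality
    incendiary: float       # 0..1, proxy for divisive content

def engagement_score(post: Post,
                     w_friend: float = 2.0,
                     w_viral: float = 0.01,
                     w_incendiary: float = 3.0) -> float:
    """Toy engagement score: friend posts, viral posts, and incendiary
    material all raise the score, mirroring the leaked memo's claim that
    incendiary material is boosted because it keeps users engaged."""
    score = w_friend if post.author_is_friend else 0.0
    score += w_viral * post.shares
    score += w_incendiary * post.incendiary
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement appears first in the feed
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("friend's vacation photos", True, 10, 0.0),
    Post("viral outrage thread", False, 500, 0.9),
    Post("local news item", False, 50, 0.2),
])
print([p.text for p in feed])
```

With these illustrative weights, the divisive viral post outranks even a friend's post, showing in miniature how optimizing purely for predicted engagement can surface incendiary content ahead of benign content.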
According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories." === YouTube === YouTube has been around since 2005 and has more than 2.5 billion monthly users. YouTube's content discovery systems focus on the user's personal activity (watched, favorites, likes) to direct them to recommended content. YouTube's algorithm is responsible for roughly 70% of users' recommended videos and is what drives people to watch certain content. According to a 2022 study by the Mozilla Foundation, users have little power to keep unsolicited videos out of their suggested content, including videos featuring hate speech, livestreams, etc. YouTube has been identified as an influential platform for spreading radicalized content. Al-Qaeda and similar extremist groups have been linked to using YouTube for recruitment videos and engaging with international media outlets. A research study published in the journal American Behavioral Scientist investigated "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process". The study found that YouTube's recommendations of extremist content are influenced by the presence of radical keywords in a video's title. In February 2023, in the case of Gonzalez v. Google, the question at hand was whether Google, the parent company of YouTube, is protected from lawsuits claiming that the site's algorithms aided terrorists by recommending ISIS videos to users. Section 230 generally protects online platforms from civil liability for the content posted by their users.
Multiple studies have found little to no evidence to suggest that YouTube's algorithms direct attention towards far-right content to those not already engaged with it. === TikTok === TikTok is an app that recommends videos to a user's 'For You Page' (FYP), making every user's page different. With the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time based on users' previous interactions on the app. Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions with the algorithm. Various extremist groups, including jihadist organizations, have utilized TikTok to disseminate propaganda, recruit followers, and incite violence. The platform's algorithm, which recommends content based on user engagement, can expose users to extremist content that aligns with their interests or interactions. In 2022, TikTok's head of US Security issued a statement that "81,518,334 videos were removed globally between April – June for violating our Community Guidelines or Terms of Service" to cut back on hate speech, harassment, and misinformation. Studies have noted instances where individuals were radicalized through content encountered on TikTok. For example, in early 2023, Austrian authorities thwarted a plot against an LGBTQ+ pride parade that involved two teenagers and a 20-year-old who were inspired by jihadist content on TikTok. The youngest suspect, 14 years old, had been exposed to videos created by Islamist influencers glorifying jihad. These videos led him to further engagement with similar content, eventually resulting in his involvement in planning an attack. Another case involved the arrest of several teenagers in Vienna, Austria, in 2024, who were planning to carry out a terrorist attack at a Taylor Swift concert.
The investigation revealed that some of the suspects had been radicalized online, with TikTok being one of the platforms used to disseminate extremist content that influenced their beliefs and actions. == Self-radicalization == The U.S. Department of Justice defines 'lone-wolf' (self) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization". Through social media outlets on the internet, lone-wolf terrorism has been on the rise, and it has been linked to algorithmic radicalization. In online echo chambers, viewpoints typically seen as radical are accepted and quickly adopted by other extremists, and forums, group chats, and social media reinforce these beliefs. == References in media == === The Social Dilemma === The Social Dilemma is a 2020 docudrama about how the algorithms behind social media enable addiction and can manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses buzzwords such as 'echo chambers' and 'fake news' to argue that psychological manipulation on social media leads to political manipulation. In the film, the character Ben falls deeper into social media addiction as the algorithm finds that his social media page has a 62.3% chance of long-term engagement. This leads to more videos in Ben's recommended feed, and he eventually becomes more immersed in propaganda and conspiracy theories, becoming more polarized with each video. == Proposed solutions == === Weakening Section 230 protections === In the Communications Decency Act, Section 230 states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".
Section 230 protects platforms from liability for, and from being sued over, third-party content, such as illegal activity by a user. However, critics argue that this approach reduces a company's incentive to remove harmful content or misinformation, and this loophole has allowed social media companies to maximize profits through pushing radical content without legal risks. This claim has itself been criticized by proponents of Section 230, as prior to its passing, courts had ruled in Stratton Oakmont, Inc. v. Prodigy Services Co. that moderation in any capacity introduces a liability to content providers as "publishers" of the content they chose to leave up. Lawmakers have drafted legislation that would weaken or remove Section 230 protections over algorithmic content. House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021 as H.R. 5596. The bill died in committee, but it would have removed Section 230 protections for service providers related to personalized recommendation algorithms that present content to users if those algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury. == See also == == References ==
Wikipedia/Algorithmic_radicalization
The Euthyphro dilemma is found in Plato's dialogue Euthyphro, in which Socrates asks Euthyphro, "Is the pious (τὸ ὅσιον) loved by the gods because it is pious, or is it pious because it is loved by the gods?" (10a) Although it was originally applied to the ancient Greek pantheon, the dilemma has implications for modern monotheistic religions. Gottfried Leibniz asked whether the good and just "is good and just because God wills it or whether God wills it because it is good and just". Ever since Plato's original discussion, this question has presented a problem for some theists, though others have thought it a false dilemma, and it continues to be an object of theological and philosophical discussion today. == The dilemma == Socrates and Euthyphro discuss the nature of piety in Plato's Euthyphro. Euthyphro proposes (6e) that the pious (τὸ ὅσιον) is the same thing as that which is loved by the gods (τὸ θεοφιλές), but Socrates finds a problem with this proposal: the gods may disagree among themselves (7e). Euthyphro then revises his definition, so that piety is only that which is loved by all of the gods unanimously (9e). At this point the dilemma surfaces. Socrates asks whether the gods love the pious because it is the pious, or whether the pious is pious only because it is loved by the gods (10a). Socrates and Euthyphro both contemplate the first option: surely the gods love the pious because it is the pious. But this means, Socrates argues, that we are forced to reject the second option: the fact that the gods love something cannot explain why the pious is the pious (10d). Socrates points out that if both options were true, they would yield a vicious circle, with the gods loving the pious because it is the pious, and the pious being the pious because the gods love it. And this, in turn, Socrates argues, means that the pious is not the same as the god-beloved, for what makes the pious the pious is not what makes the god-beloved the god-beloved.
After all, what makes the god-beloved the god-beloved is that the gods love it, whereas what makes the pious the pious is something else (9d-11a). Thus Euthyphro's theory does not give us the very nature of the pious, but at most a quality of the pious (11ab). The dilemma can be modified to apply to philosophical theism, where it is still the object of theological and philosophical discussion, largely within the Christian, Jewish, and Islamic traditions. As German philosopher and mathematician Gottfried Leibniz presented this version of the dilemma: "It is generally agreed that whatever God wills is good and just. But there remains the question whether it is good and just because God wills it or whether God wills it because it is good and just; in other words, whether justice and goodness are arbitrary or whether they belong to the necessary and eternal truths about the nature of things." Many philosophers and theologians have addressed the Euthyphro dilemma since the time of Plato, though not always with reference to the Platonic dialogue. According to scholar Terence Irwin, the issue and its connection with Plato was revived by Ralph Cudworth and Samuel Clarke in the 17th and 18th centuries. More recently, it has received a great deal of attention from contemporary philosophers working in metaethics and the philosophy of religion. Philosophers and theologians aiming to defend theism against the threat of the dilemma have developed a variety of responses. == Solution: God commands it because it is right == === Supporters === The first horn of the dilemma (i.e. that which is right is commanded by God because it is right) goes by a variety of names, including intellectualism, rationalism, realism, naturalism, and objectivism. Roughly, it is the view that there are independent moral standards: some actions are right or wrong in themselves, independent of God's commands. This is the view accepted by Socrates and Euthyphro in Plato's dialogue. 
The Mu'tazilah school of Islamic theology also defended the view (with, for example, Nazzam maintaining that God is powerless to engage in injustice or lying), as did the Islamic philosopher Averroes. Thomas Aquinas never explicitly addresses the Euthyphro dilemma, but Aquinas scholars often put him on this side of the issue. Aquinas draws a distinction between what is good or evil in itself and what is good or evil because of God's commands, with unchangeable moral standards forming the bulk of natural law. Thus he contends that not even God can change the Ten Commandments (adding, however, that God can change what individuals deserve in particular cases, in what might look like special dispensations to murder or steal). Among later Scholastics, Gabriel Vásquez is particularly clear-cut about obligations existing prior to anyone's will, even God's. Modern natural law theory saw Grotius and Leibniz also putting morality prior to God's will, comparing moral truths to unchangeable mathematical truths, and engaging voluntarists like Pufendorf in philosophical controversy. Cambridge Platonists like Benjamin Whichcote and Ralph Cudworth mounted seminal attacks on voluntarist theories, paving the way for the later rationalist metaethics of Samuel Clarke and Richard Price; what emerged was a view on which eternal moral standards, though dependent on God in some way, exist independently of God's will and prior to God's commands. Contemporary philosophers of religion who embrace this horn of the Euthyphro dilemma include Richard Swinburne and T. J. Mawson (though see below for complications).
18th-century philosopher Richard Price, who takes the first horn and thus sees morality as "necessary and immutable", sets out the objection as follows: "It may seem that this is setting up something distinct from God, which is independent of him, and equally eternal and necessary." Omnipotence: These moral standards would limit God's power: not even God could oppose them by commanding what is evil and thereby making it good. This point was influential in Islamic theology: "In relation to God, objective values appeared as a limiting factor to His power to do as He wills... Ash'ari got rid of the whole problem by denying the existence of objective values which might act as a standard for God's action." Similar concerns drove the medieval voluntarists Duns Scotus and William of Ockham. As modern philosopher Richard Swinburne puts the point, this horn "seems to place a restriction on God's power if he cannot make any action which he chooses obligatory... [and also] it seems to limit what God can command us to do. God, if he is to be God, cannot command us to do what, independently of his will, is wrong." Freedom of the will: Moreover, these moral standards would limit God's freedom of will: God could not command anything opposed to them, and perhaps would have no choice but to command in accordance with them. As Mark Murphy puts the point, "if moral requirements existed prior to God's willing them, requirements that an impeccable God could not violate, God's liberty would be compromised." Morality without God: If there are moral standards independent of God, then morality would retain its authority even if God did not exist. 
This conclusion was explicitly (and notoriously) drawn by early modern political theorist Hugo Grotius: "What we have been saying [about the natural law] would have a degree of validity even if we should concede that which cannot be conceded without the utmost wickedness, that there is no God, or that the affairs of men are of no concern to him." On such a view, God is no longer a "law-giver" but at most a "law-transmitter" who plays no vital role in the foundations of morality. Nontheists have capitalized on this point, largely as a way of disarming moral arguments for God's existence: if morality does not depend on God in the first place, such arguments stumble at the starting gate. === Responses to criticisms === Contemporary philosophers Joshua Hoffman and Gary S. Rosenkrantz take the first horn of the dilemma, branding divine command theory a "subjective theory of value" that makes morality arbitrary. They accept a theory of morality on which, "right and wrong, good and bad, are in a sense independent of what anyone believes, wants, or prefers." They do not address the problems mentioned above with the first horn, but do consider a related problem concerning God's omnipotence: namely, that it might be handicapped by his inability to bring about what is independently evil. To this they reply that God is omnipotent, even though there are states of affairs he cannot bring about: omnipotence is a matter of "maximal power", not an ability to bring about all possible states of affairs. And supposing that it is impossible for God not to exist, then since there cannot be more than one omnipotent being, it is therefore impossible for any being to have more power than God (e.g., a being who is omnipotent but not omnibenevolent). Thus God's omnipotence remains intact. Richard Swinburne and T. J. Mawson have a slightly more complicated view. They both take the first horn of the dilemma when it comes to necessary moral truths.
But divine commands are not totally irrelevant, for God and his will can still effect contingent moral truths. On the one hand, the most fundamental moral truths hold true regardless of whether God exists or what God has commanded: "Genocide and torturing children are wrong and would remain so whatever commands any person issued." This is because, according to Swinburne, such truths are true as a matter of logical necessity: like the laws of logic, one cannot deny them without contradiction. This parallel offers a solution to the aforementioned problems of God's sovereignty, omnipotence, and freedom: namely, that these necessary truths of morality pose no more of a threat than the laws of logic. On the other hand, there is still an important role for God's will. First, there are some divine commands that can directly create moral obligations: e.g., the command to worship on Sundays instead of on Tuesdays. Notably, not even these commands, for which Swinburne and Mawson take the second horn of the dilemma, have ultimate, underived authority. Rather, they create obligations only because of God's role as creator and sustainer and indeed owner of the universe, together with the necessary moral truth that we owe some limited consideration to benefactors and owners. Second, God can make an indirect moral difference by deciding what sort of universe to create. For example, whether a public policy is morally good might indirectly depend on God's creative acts: the policy's goodness or badness might depend on its effects, and those effects would in turn depend on the sort of universe God has decided to create. == Solution: It is right because God commands it == === Supporters === The second horn of the dilemma (i.e. that which is right is right because it is commanded by God) is sometimes known as divine command theory or voluntarism. Roughly, it is the view that there are no moral standards other than God's will: without God's commands, nothing would be right or wrong. 
This view was partially defended by Duns Scotus, who argued that not all Ten Commandments belong to the Natural Law in the strictest sense. Scotus held that while our duties to God (the first three commandments, traditionally thought of as the First Tablet) are self-evident, true by definition, and unchangeable even by God, our duties to others (found on the second tablet) were arbitrarily willed by God and are within his power to revoke and replace (although, the third commandment, to honour the Sabbath and keep it holy, has a little of both, as we are absolutely obliged to render worship to God, but there is no obligation in natural law to do it on this day or that). Scotus does note, however, that the last seven commandments "are highly consonant with [the natural law], though they do not follow necessarily from first practical principles that are known in virtue of their terms and are necessarily known by any intellect [that understands their terms]. And it is certain that all the precepts of the second table belong to the natural law in this second way, since their rectitude is highly consonant with first practical principles that are known necessarily". Scotus justifies this position with the example of a peaceful society, noting that the possession of private property is not necessary to have a peaceful society, but that "those of weak character" would be more easily made peaceful with private property than without. William of Ockham went further, contending that (since there is no contradiction in it) God could command us not to love God and even to hate God. Later Scholastics like Pierre D'Ailly and his student Jean de Gerson explicitly confronted the Euthyphro dilemma, taking the voluntarist position that God does not "command good actions because they are good or prohibit evil ones because they are evil; but... these are therefore good because they are commanded and evil because prohibited."
Protestant reformers Martin Luther and John Calvin both stressed the absolute sovereignty of God's will, with Luther writing that "for [God's] will there is no cause or reason that can be laid down as a rule or measure for it", and Calvin writing that "everything which [God] wills must be held to be righteous by the mere fact of his willing it." The voluntarist emphasis on God's absolute power was carried further by Descartes, who notoriously held that God had freely created the eternal truths of logic and mathematics, and that God was therefore capable of giving circles unequal radii, giving triangles internal angles that sum to something other than 180 degrees, and even making contradictions true. Descartes explicitly seconded Ockham: "why should [God] not have been able to give this command [i.e., the command to hate God] to one of his creatures?" Thomas Hobbes notoriously reduced the justice of God to "irresistible power" (drawing the complaint of Bishop Bramhall that this "overturns... all law"). And William Paley held that all moral obligations bottom out in the self-interested "urge" to avoid Hell and enter Heaven by acting in accord with God's commands. Islam's Ash'arite theologians, al-Ghazali foremost among them, embraced voluntarism: scholar George Hourani writes that the view "was probably more prominent and widespread in Islam than in any other civilization." Wittgenstein said that of "the two interpretations of the Essence of the Good", that which holds that "the Good is good, in virtue of the fact that God wills it" is "the deeper", while that which holds that "God wills the good, because it is good" is "the shallow, rationalistic one, in that it behaves 'as though' that which is good could be given some further foundation". Today, divine command theory is defended by many philosophers of religion, though typically in a restricted form (see below).
=== Criticisms === This horn of the dilemma also faces several problems: No reasons for morality: If there is no moral standard other than God's will, then God's commands are arbitrary (i.e., based on pure whimsy or caprice). This would mean that morality is ultimately not based on reasons: "if theological voluntarism is true, then God's commands/intentions must be arbitrary; [but] it cannot be that morality could wholly depend on something arbitrary... [for] when we say that some moral state of affairs obtains, we take it that there is a reason for that moral state of affairs obtaining rather than another." And as Michael J. Murray and Michael Rea put it, this would also "cas[t] doubt on the notion that morality is genuinely objective." An additional problem is that it is difficult to explain how true moral actions can exist if one acts only out of fear of God or in an attempt to be rewarded by him. No reasons for God: This arbitrariness would also jeopardize God's status as a wise and rational being, one who always acts on good reasons. As Leibniz writes: "Where will be his justice and his wisdom if he has only a certain despotic power, if arbitrary will takes the place of reasonableness, and if in accord with the definition of tyrants, justice consists in that which is pleasing to the most powerful? Besides it seems that every act of willing supposes some reason for the willing and this reason, of course, must precede the act." Anything goes: This arbitrariness would also mean that anything could become good, and anything could become bad, merely upon God's command. Thus if God commanded us "to gratuitously inflict pain on each other" or to engage in "cruelty for its own sake" or to hold an "annual sacrifice of randomly selected ten-year-olds in a particularly gruesome ritual that involves excruciating and prolonged suffering for its victims", then we would be morally obligated to do so. 
As 17th-century philosopher Ralph Cudworth put it: "nothing can be imagined so grossly wicked, or so foully unjust or dishonest, but if it were supposed to be commanded by this omnipotent Deity, must needs upon that hypothesis forthwith become holy, just, and righteous." Moral contingency: If morality depends on the perfectly free will of God, morality would lose its necessity: "If nothing prevents God from loving things that are different from what God actually loves, then goodness can change from world to world or time to time. This is obviously objectionable to those who believe that claims about morality are, if true, necessarily true." In other words, no action is necessarily moral: any right action could have easily been wrong, if God had so decided, and an action which is right today could easily become wrong tomorrow, if God so decides. Indeed, some have argued that divine command theory is incompatible with ordinary conceptions of moral supervenience. Why do God's commands obligate?: Mere commands do not create obligations unless the commander has some commanding authority. But this commanding authority cannot itself be based on those very commands (i.e., a command to obey commands), otherwise a vicious circle results. So, in order for God's commands to obligate us, he must derive commanding authority from some source other than his own will. As Cudworth put it: "For it was never heard of, that any one founded all his authority of commanding others, and others [sic] obligation or duty to obey his commands, in a law of his own making, that men should be required, obliged, or bound to obey him. Wherefore since the thing willed in all laws is not that men should be bound or obliged to obey; this thing cannot be the product of the meer [sic] will of the commander, but it must proceed from something else; namely, the right or authority of the commander." To avoid the circle, one might say our obligation comes from gratitude to God for creating us. 
But this presupposes some sort of independent moral standard obligating us to be grateful to our benefactors. As 18th-century philosopher Francis Hutcheson writes: "Is the Reason exciting to concur with the Deity this, 'The Deity is our Benefactor?' Then what Reason excites to concur with Benefactors?" Or finally, one might resort to Hobbes's view: "The right of nature whereby God reigneth over men, and punisheth those that break his laws, is to be derived, not from his creating them (as if he required obedience, as of gratitude for his benefits), but from his irresistible power." In other words, might makes right. God's goodness: If all goodness is a matter of God's will, then what shall become of God's goodness? Thus William P. Alston writes, "since the standards of moral goodness are set by divine commands, to say that God is morally good is just to say that he obeys his own commands... that God practises what he preaches, whatever that might be;" Hutcheson deems such a view "an insignificant tautology, amounting to no more than this, 'That God wills what he wills.'" Alternatively, as Leibniz puts it, divine command theorists "deprive God of the designation good: for what cause could one have to praise him for what he does, if in doing something quite different he would have done equally well?" A related point is raised by C. S. Lewis: "if good is to be defined as what God commands, then the goodness of God Himself is emptied of meaning and the commands of an omnipotent fiend would have the same claim on us as those of the 'righteous Lord.'" Or again Leibniz: "this opinion would hardly distinguish God from the devil." That is, since divine command theory trivializes God's goodness, it is incapable of explaining the difference between God and an all-powerful demon. 
The is-ought problem and the naturalistic fallacy: According to David Hume, it is hard to see how moral propositions featuring the relation ought could ever be deduced from ordinary is propositions, such as "the being of a God." Divine command theory is thus guilty of deducing moral oughts from ordinary ises about God's commands. In a similar vein, G. E. Moore argued (with his open question argument) that the notion good is indefinable, and any attempts to analyze it in naturalistic or metaphysical terms are guilty of the so-called "naturalistic fallacy." This would block any theory which analyzes morality in terms of God's will: and indeed, in a later discussion of divine command theory, Moore concluded that "when we assert any action to be right or wrong, we are not merely making an assertion about the attitude of mind towards it of any being or set of beings whatever." No morality without God: If all morality is a matter of God's will, then if God does not exist, there is no morality. This is the thought captured in the slogan (often attributed to Dostoevsky) "If God does not exist, everything is permitted." Divine command theorists disagree over whether this is a problem for their view or a virtue of their view. Many argue that morality does indeed require God's existence, and that this is in fact a problem for atheism. But divine command theorist Robert Merrihew Adams contends that this idea ("that no actions would be ethically wrong if there were not a loving God") is one that "will seem (at least initially) implausible to many", and that his theory must "dispel [an] air of paradox." == Solution: Restricted divine command theory == One common response to the Euthyphro dilemma centers on a distinction between value and obligation. Obligation, which concerns rightness and wrongness (or what is required, forbidden, or permissible), is given a voluntarist treatment. But value, which concerns goodness and badness, is treated as independent of divine commands. 
The result is a restricted divine command theory that applies only to a specific region of morality: the deontic region of obligation. This response is found in Francisco Suárez's discussion of natural law and voluntarism in De legibus and has been prominent in contemporary philosophy of religion, appearing in the work of Robert M. Adams, Philip L. Quinn, and William P. Alston. A significant attraction of such a view is that, since it allows for a non-voluntarist treatment of goodness and badness, and therefore of God's own moral attributes, some of the aforementioned problems with voluntarism can perhaps be answered. God's commands are not arbitrary: there are reasons which guide his commands, based ultimately on this goodness and badness. God could not issue horrible commands: God's own essential goodness or loving character would keep him from issuing any unsuitable commands. Our obligation to obey God's commands does not result in circular reasoning; it might instead be based on a gratitude whose appropriateness is itself independent of divine commands. These proposed solutions are controversial, and some steer the view back into problems associated with the first horn. One problem remains for such views: if God's own essential goodness does not depend on divine commands, then the question becomes what it does depend on; perhaps something other than God. Here the restricted divine command theory is commonly combined with a view reminiscent of Plato: God is identical to the ultimate standard for goodness. Alston offers the analogy of the standard meter bar in France. Something is a meter long inasmuch as it is the same length as the standard meter bar, and likewise, something is good inasmuch as it approximates God.
If one asks why God is identified as the ultimate standard for goodness, Alston replies that this is "the end of the line," with no further explanation available, but adds that this is no more arbitrary than a view that invokes a fundamental moral standard. On this view, then, even though goodness is independent of God's will, it still depends on God, and thus God's sovereignty remains intact. This solution has been criticized by Wes Morriston. If we identify the ultimate standard for goodness with God's nature, then it seems we are identifying it with certain properties of God (e.g., being loving, being just). If so, then the dilemma resurfaces: God is either good because he has those properties, or those properties are good because God has them. Nevertheless, Morriston concludes that the appeal to God's essential goodness is the divine-command theorist's best bet. To produce a satisfying result, however, it would have to give an account of God's goodness that does not trivialize it and does not make God subject to an independent standard of goodness. Moral philosopher Peter Singer, disputing the perspective that "God is good" and could never advocate something like torture, states that those who propose this are "caught in a trap of their own making, for what can they possibly mean by the assertion that God is good? That God is approved of by God?" == Solution: False dilemma == Augustine, Anselm, and Aquinas all wrote about the problems raised by the Euthyphro dilemma, although, like William James and Wittgenstein later, they did not mention it by name. As philosopher and Anselm scholar Katherin A. Rogers observes, many contemporary philosophers of religion suppose that there are true propositions which exist as platonic abstracta independently of God. Among these are propositions constituting a moral order, to which God must conform in order to be good. 
Classical Judaeo-Christian theism, however, rejects such a view as inconsistent with God's omnipotence, which requires that God and what he has made is all that there is. "The classical tradition," Rogers notes, "also steers clear of the other horn of the Euthyphro dilemma, divine command theory." From a classical theistic perspective, therefore, the Euthyphro dilemma is false. As Rogers puts it, "Anselm, like Augustine before him and Aquinas later, rejects both horns of the Euthyphro dilemma. God neither conforms to nor invents the moral order. Rather His very nature is the standard for value." Another criticism, raised by Peter Geach, is that the dilemma implies one must search for a definition that fits piety rather than work backwards from acts already judged pious (that is, one must know what piety is before one can list acts which are pious). It also implies that something cannot be pious if it is only intended to serve the gods without actually fulfilling any useful purpose. === Jewish thought === The basis of the false dilemma response—God's nature is the standard for value—predates the dilemma itself, appearing first in the thought of the eighth-century BC Hebrew prophets, Amos, Hosea, Micah and Isaiah. (Amos lived some three centuries before Socrates and two before Thales, traditionally regarded as the first Greek philosopher.) "Their message," writes British scholar Norman H. Snaith, "is recognized by all as marking a considerable advance on all previous ideas," not least in its "special consideration for the poor and down-trodden." As Snaith observes, tsedeq, the Hebrew word for righteousness, "actually stands for the establishment of God's will in the land." This includes justice, but goes beyond it, "because God's will is wider than justice. He has a particular regard for the helpless ones on earth." Tsedeq "is the norm by which all must be judged" and it "depends entirely upon the Nature of God." Hebrew has few abstract nouns. 
What the Greeks thought of as ideas or abstractions, the Hebrews thought of as activities. In contrast to the Greek dikaiosune (justice) of the philosophers, tsedeq is not an idea abstracted from this world of affairs. As Snaith writes: Tsedeq is something that happens here, and can be seen, and recognized, and known. It follows, therefore, that when the Hebrew thought of tsedeq (righteousness), he did not think of Righteousness in general, or of Righteousness as an Idea. On the contrary, he thought of a particular righteous act, an action, concrete, capable of exact description, fixed in time and space.... If the word had anything like a general meaning for him, then it was as it was represented by a whole series of events, the sum-total of a number of particular happenings. The Hebrew stance on what came to be called the problem of universals, as on much else, was very different from that of Plato and precluded anything like the Euthyphro dilemma. This has not changed. In 2005, Jonathan Sacks wrote, "In Judaism, the Euthyphro dilemma does not exist." Jewish philosophers Avi Sagi and Daniel Statman criticized the Euthyphro dilemma as "misleading" because "it is not exhaustive": it leaves out a third option, namely that God "acts only out of His nature." === Thomas Aquinas === In Aquinas' view, to speak of abstractions not only as existent, but as more perfect exemplars than fully designated particulars, is to put a premium on generality and vagueness. On this analysis, the abstract "good" in the first horn of the Euthyphro dilemma is an unnecessary obfuscation. Aquinas frequently quoted with approval Aristotle's definition, "Good is what all desire." As he clarified, "When we say that good is what all desire, it is not to be understood that every kind of good thing is desired by all, but that whatever is desired has the nature of good." In other words, even those who desire evil desire it "only under the aspect of good," i.e., of what is desirable. 
The difference between desiring good and desiring evil is that in the former, will and reason are in harmony, whereas in the latter, they are in discord. Aquinas's discussion of sin provides a good point of entry to his philosophical explanation of why the nature of God is the standard for value. "Every sin," he writes, "consists in the longing for a passing [i.e., ultimately unreal or false] good." Thus, "in a certain sense it is true what Socrates says, namely that no one sins with full knowledge." "No sin in the will happens without an ignorance of the understanding." God, however, has full knowledge (omniscience) and therefore by definition (that of Socrates, Plato, and Aristotle as well as Aquinas) can never will anything other than what is good. It has been claimed – for instance, by Nicolai Hartmann, who wrote: "There is no freedom for the good that would not be at the same time freedom for evil" – that this would limit God's freedom, and therefore his omnipotence. Josef Pieper, however, replies that such arguments rest upon an impermissibly anthropomorphic conception of God. In the case of humans, as Aquinas says, to be able to sin is indeed a consequence, or even a sign, of freedom (quodam libertatis signum). Humans, in other words, are not puppets manipulated by God so that they always do what is right. However, "it does not belong to the essence of the free will to be able to decide for evil." "To will evil is neither freedom nor a part of freedom." It is precisely humans' creatureliness – that is, their not being God and therefore omniscient – that makes them capable of sinning. Consequently, writes Pieper, "the inability to sin should be looked on as the very signature of a higher freedom – contrary to the usual way of conceiving the issue." Pieper concludes: "Only the will [i.e., God's] can be the right standard of its own willing and must will what is right necessarily, from within itself, and always. 
A deviation from the norm would not even be thinkable. And obviously only the absolute divine will is the right standard of its own act" – and consequently of all human acts. Thus the second horn of the Euthyphro dilemma, divine command theory, is also disposed of. Thomist philosopher Edward Feser writes, "Divine simplicity [entails] that God's will just is God's goodness which just is His immutable and necessary existence. That means that what is objectively good and what God wills for us as morally obligatory are really the same thing considered under different descriptions, and that neither could have been other than they are. There can be no question then, either of God's having arbitrarily commanded something different for us (torturing babies for fun, or whatever) or of there being a standard of goodness apart from Him. Again, the Euthyphro dilemma is a false one; the third option that it fails to consider is that what is morally obligatory is what God commands in accordance with a non-arbitrary and unchanging standard of goodness that is not independent of Him... He is not under the moral law precisely because He is the moral law." === William James === William James, in his essay "The Moral Philosopher and the Moral Life", dismisses the first horn of the Euthyphro dilemma and stays clear of the second. He writes: "Our ordinary attitude of regarding ourselves as subject to an overarching system of moral relations, true 'in themselves,' is ... either an out-and-out superstition, or else it must be treated as a merely provisional abstraction from that real Thinker ... to whom the existence of the universe is due." Moral obligations are created by "personal demands," whether these demands come from the weakest creatures, from the most insignificant persons, or from God. It follows that "ethics have as genuine a foothold in a universe where the highest consciousness is human, as in a universe where there is a God as well." 
However, whether "the purely human system" works "as well as the other is a different question." For James, the deepest practical difference in the moral life is between what he calls "the easy-going and the strenuous mood." In a purely human moral system, it is hard to rise above the easy-going mood, since the thinker's "various ideals, known to him to be mere preferences of his own, are too nearly of the same denominational value; he can play fast and loose with them at will. This too is why, in a merely human world without a God, the appeal to our moral energy falls short of its maximum stimulating power." Our attitude is "entirely different" in a world where there are none but "finite demanders" from that in a world where there is also "an infinite demander." This is because "the stable and systematic moral universe for which the ethical philosopher asks is fully possible only in a world where there is a divine thinker with all-enveloping demands", for in that case, "actualized in his thought already must be that ethical philosophy which we seek as the pattern which our own must evermore approach." Even though "exactly what the thought of this infinite thinker may be is hidden from us", our postulation of him serves "to let loose in us the strenuous mood" and confront us with an existential "challenge" in which "our total character and personal genius ... are on trial; and if we invoke any so-called philosophy, our choice and use of that also are but revelations of our personal aptitude or incapacity for moral life. From this unsparing practical ordeal no professor's lectures and no array of books can save us." In the words of Richard M. Gale, "God inspires us to lead the morally strenuous life in virtue of our conceiving of him as unsurpassably good. This supplies James with an adequate answer to the underlying question of the Euthyphro." 
== Other formulations == === In philosophical atheism === Alexander Rosenberg uses a version of the Euthyphro dilemma to argue that objective morality cannot exist and hence that an acceptance of moral nihilism is warranted. He asks: is objective morality correct because evolution discovered it, or did evolution discover objective morality because it is correct? If the first horn of the dilemma is true, then our current morality is objectively correct only by accident, because if evolution had given us another type of morality, that would have been objectively correct instead. If the second horn of the dilemma is true, then one must account for how the random process of evolution managed to select only for objectively correct moral traits while ignoring the wrong ones. Given the knowledge that evolution has given us tendencies to be xenophobic and sexist, it is mistaken to claim that evolution has selected only for objective morality, as evidently it did not. Because neither horn of the dilemma gives an adequate account of how the evolutionary process instantiated objective morality in humans, a position of moral nihilism is warranted. === In American legal thinking === Yale Law School Professor Myres S. McDougal, formerly a classicist, later a scholar of property law, posed the question: "Do we protect it because it's a property right, or is it a property right because we protect it?" The dilemma has also been restated in legal terms by Geoffrey Hodgson, who asked: "Does a state make a law because it is a customary rule, or does law become a customary rule because it is approved by the state?" 
== See also == Appeal to authority – Fallacy in which validity is determined based on an authority's credence Deontology – Class of ethical theories Divine simplicity – View of God without parts or features Ethical dilemma – Type of dilemma in philosophy Ethics in the Bible == Notes == == References == Adams, Robert Merrihew (1973). "A Modified Divine Command Theory of Ethical Wrongness". In Gene Outka; John P. Reeder (eds.). Religion and Morality: A Collection of Essays. Anchor. Adams, Robert Merrihew (1979). "Divine Command Metaethics Modified Again". Journal of Religious Ethics. 7 (1): 66–79. Adams, Robert Merrihew (1999). Finite and Infinite Goods: A Framework for Ethics. New York: Oxford University Press. ISBN 978-0-19-515371-2. Alston, William P. (1990). "Some suggestions for divine command theorists". In Michael Beaty (ed.). Christian Theism and the Problems of Philosophy. University of Notre Dame Press. pp. 303–26. Alston, William P. (2002). "What Euthyphro should have said". In William Lane Craig (ed.). Philosophy of Religion: A Reader and Guide. Rutgers University Press. ISBN 978-0813531212. Aquinas, Thomas (1265–1274). Summa Theologica. Calvin, John (1536). Institutes of the Christian Religion. Chandler, John (1985). "Divine command theories and the appeal to love". American Philosophical Quarterly. 22 (3): 231–239. JSTOR 20014101. Cross, Richard (1999). Duns Scotus. ISBN 978-0195125535. Cudworth, Ralph (1731). A Treatise concerning eternal and immutable morality. London: Printed for James and John Knapton ... Descartes, René (1985). John Cottingham; Dugald Murdoch; Robert Stoothoff (eds.). The Philosophical Writings of Descartes. Doomen, Jasper (2011). "Religion's Appeal". Philosophy and Theology. 23 (1): 133–148. doi:10.5840/philtheol20112316. Frank, Richard M. (1994). Al-Ghazali and the Asharite School. Duke University Press. ISBN 978-0822314271. Gale, Richard M. (1999). 
The Divided Self of William James. Cambridge University Press. ISBN 978-0-521-64269-9. Gill, Michael (1999). "The Religious Rationalism of Benjamin Whichcote". Journal of the History of Philosophy. 37 (2): 271–300. doi:10.1353/hph.2008.0832. S2CID 54190387. Gill, Michael (2011). British Moralists on Human Nature and the Birth of Secular Ethics. Cambridge University Press. ISBN 978-0521184403. Grotius, Hugo (2005) [1625]. Richard Tuck (ed.). The Rights of War and Peace. Liberty Fund. ISBN 9780865974364. Haldane, John (1989). "Realism and voluntarism in medieval ethics". Journal of Medical Ethics. 15 (1): 39–44. doi:10.1136/jme.15.1.39. JSTOR 27716767. PMC 1375762. PMID 2926786. Head, Ronan (9 July 2010). "Missing the point about atrocities in the Bible". Church Times. Hobbes, Thomas. Leviathan. Hoffman, Joshua; Rosenkrantz, Gary S. (2002). The Divine Attributes. doi:10.1002/9780470693438. ISBN 978-1892941008. S2CID 55213987. Hourani, George (1960). "Two Theories of Value in Medieval Islam" (PDF). Muslim World. 50 (4): 269–278. doi:10.1111/j.1478-1913.1960.tb01091.x. hdl:2027.42/74937. Hourani, George (1962). "Averroes on Good and Evil". Studia Islamica. 16 (16): 13–40. doi:10.2307/1595117. JSTOR 1595117. Hume, David (1739). A Treatise of Human Nature. CreateSpace Independent Publishing Platform. ISBN 978-1479321728. Hutcheson, Francis (1738). An Inquiry into the Original of Our Ideas of Beauty and Virtue; In Two Treatises. London: Printed for D. Midwinter, A. Bettersworth, and C. Hitch ... Hutcheson, Francis (1742). Illustrations on the Moral Sense. ISBN 978-0674443266. Irwin, Terence (2006). "Socrates and Euthyphro: The argument and its revival". In Lindsay Judson; V. Karasmanēs (eds.). Remembering Socrates: Philosophical Essays. Oxford University Press. Irwin, Terence (2007). The Development of Ethics. Oxford University Press. ISBN 978-0199693856. 
James, William (1891). "The Moral Philosopher and the Moral Life". International Journal of Ethics. 1 (3): 330–354. doi:10.1086/intejethi.1.3.2375309. JSTOR 2375309. Janik, Allan; Toulmin, Stephen (1973). Wittgenstein's Vienna. New York: Simon & Schuster. ISBN 978-0-671-21725-9. Klagge, James C. (1984). "An alleged difficulty concerning moral properties". Mind. 93 (371): 370–380. doi:10.1093/mind/xciii.371.370. JSTOR 2254416. Kretzmann, Norman (1999). "Abraham, Isaac, and Euthyphro: God and the basis of morality". In Eleonore Stump; Michael J. Murray (eds.). Philosophy of Religion: The Big Questions. Oxford: Blackwell. ISBN 978-0-631-20604-0. Leibniz, Gottfried (1686). Discourse on Metaphysics. Leibniz, Gottfried (1989) [1702(?)]. "Reflections on the Common Concept of Justice". In Leroy Loemker (ed.). Leibniz: Philosophical Papers and Letters. Dordrecht: Kluwer. pp. 561–573. ISBN 978-9027706935. Leibniz, Gottfried (1706). "Opinion on the Principles of Pufendorf". In Riley (ed.). Leibniz: Political Writings. Cambridge University Press. pp. 64–75. Leibniz, Gottfried (1710). Théodicée. Lewis, C. S. (1967) [1943]. "The Poison of Subjectivism". Christian Reflections. Luther, Martin (1525). On the Bondage of the Will. Mackie, J. L. (1980). Hume's Moral Theory. Routledge. ISBN 978-0415104364. Mawson, T. J. (2008). "The Euthyphro Dilemma". Think. 7 (20): 25–33. doi:10.1017/S1477175608000171. S2CID 170806539. McInerny, Ralph (1982). St. Thomas Aquinas. University of Notre Dame Press. ISBN 978-0-268-01707-1. Moore, G. E. (1903). Principia Ethica. Moore, G. E. (1912). Ethics. Morriston, Wes (2001). "Must there be a standard of moral goodness apart from God". Philosophia Christi. 2. 3 (1): 127–138. doi:10.5840/pc2001318. Morriston, Wes (2009). "What if God commanded something terrible? A worry for divine-command meta-ethics". Religious Studies. 45 (3): 249–267. doi:10.1017/S0034412509990011. JSTOR 27750017. S2CID 55530483. Murphy, Mark (2012). "Theological Voluntarism". 
In Edward N. Zalta (ed.). Theological Voluntarism. The Stanford Encyclopedia of Philosophy (Fall 2012 ed.). Murray, Michael J.; Rea, Michael (2008). An Introduction to the Philosophy of Religion. Cambridge: Cambridge University Press. ISBN 978-0521619554. Oppy, Graham (2009). Arguing about Gods. Cambridge University Press. ISBN 978-0521122641. Osborne, Thomas M. Jr. (2005). "Ockham as a divine-command theorist". Religious Studies. 41 (1): 1–22. doi:10.1017/S0034412504007218. JSTOR 20008568. S2CID 170351380. Pieper, Josef (2001). The Concept of Sin. Translated by Edward T. Oakes. South Bend, Indiana: St Augustine's Press. ISBN 978-1-890318-07-9. Pink, Thomas (2005). "Action, Will and Law in Late Scholasticism". Action, Will, and Law in Late Scholasticism. The New Synthese Historical Library. Vol. 57. pp. 31–50. doi:10.1007/1-4020-3001-0_3. ISBN 978-1-4020-3000-0. Price, Richard (1769). A Review of the Principal Questions of Morals. London: Printed for T. Cadell. Quinn, Philip (2007). "Theological Voluntarism". In David Copp (ed.). The Oxford Handbook of Ethical Theory. doi:10.1093/oxfordhb/9780195325911.003.0003. Rogers, Katherin A. (2000). "Divine Goodness". Perfect Being Theology. Edinburgh University Press. ISBN 978-0-7486-1012-9. Rogers, Katherin A. (2008). Anselm on Freedom. Oxford University Press. ISBN 978-0-19-923167-6. Sacks, Jonathan (2005). To Heal a Fractured World: The Ethics of Responsibility. New York: Schocken Books. ISBN 978-0-8052-1196-2. Sagi, Avi; Statman, Daniel (1995). Religion and Morality. Amsterdam: Rodopi. ISBN 978-90-5183-838-1. Singer, Peter (1993). Practical Ethics (3d ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-43971-8. Shaw, Joseph (2002). "Divine commands at the foundations of morality". Canadian Journal of Philosophy. 32 (3): 419–439. doi:10.1080/00455091.2002.10716525. JSTOR 40232157. S2CID 170616382. Snaith, Norman H. (1983) [1944]. The Distinctive Ideas of the Old Testament. 
London: Epworth Press. ISBN 978-0-7162-0392-6. Suárez, Francisco (1872). Tractatus de legibus ac deo legislatore: in decem libros distributus. ex typis Fibrenianis. Swinburne, Richard (1974). "Duty and the Will of God". Canadian Journal of Philosophy. 4 (2): 213–227. doi:10.1080/00455091.1974.10716933. JSTOR 40230500. S2CID 159730360. Swinburne, Richard (1993). The Coherence of Theism. Clarendon Press. ISBN 978-0198240709. Swinburne, Richard (2008). "God and morality". Think. 7 (20): 7–15. doi:10.1017/S1477175608000158. S2CID 170918784. Wainwright, William (2005). Religion and Morality. Ashgate. ISBN 978-0754616320. Wierenga, Edward (1983). "A defensible divine command theory". Noûs. 17 (3): 387–407. doi:10.2307/2215256. JSTOR 2215256. Williams, Thomas (2013). "John Duns Scotus". In Edward N. Zalta (ed.). John Duns Scotus. The Stanford Encyclopedia of Philosophy (Summer 2013 ed.). Williams, Thomas, ed. (2002). The Cambridge Companion to Duns Scotus. Cambridge University Press. ISBN 978-0521635639. Wolfson, Harry (1976). The Philosophy of the Kalam. Harvard University Press. ISBN 978-0674665804. Zagzebski, Linda (2004). Divine Motivation Theory. Cambridge University Press. ISBN 978-0521535762. == Further reading == Jan Aertsen Medieval philosophy and the transcendentals: the case of Thomas Aquinas (2004: New York, Brill) ISBN 90-04-10585-9 John M. Frame Euthyphro, Hume, and the Biblical God retrieved February 13, 2007 Paul Helm [ed.] Divine Commands and Morality (1981: Oxford, Oxford University Press) ISBN 0-19-875049-8 Plato Euthyphro (any edition; the Penguin version can be found in The Last Days of Socrates ISBN 0-14-044037-2) == External links == Euthyphro by Plato from Project Gutenberg
The East StratCom Task Force (ESCTF or ESTF) is a part of the European External Action Service, focused on "effective communication" and promotion of European Union activities in Eastern Europe (including Armenia, Azerbaijan, Belarus, Georgia, Moldova, and Ukraine) and beyond, including Russia itself. The task force's flagship project is EUvsDisinfo, a database of articles and media which the organization considers to provide false, distorted or partial information. == History and mission == The ESCTF was created pursuant to the conclusions of the European Council meeting of 19 and 20 March 2015, which cited the "need to challenge Russia's ongoing disinformation campaigns". Initially, it relied on donations from European countries and consisted of ten people, of whom only one (a former Czech journalist) worked full time. Funding from the EU budget began in 2018. The East StratCom Task Force is intended to communicate about issues where EU strategic communication needs to be improved or where the EU is subject to disinformation campaigns. Its products are put at the disposal of the EU's political leadership, press services, EU delegations and EU member states, and are intended for the general public. The group is tasked with developing communication campaigns targeting key audiences and focused on specific issues of relevance to those audiences, including local issues. The actions of the ESTF build on existing work and are coherent with wider EU communication efforts, including activities of the EU institutions and EU member states. The ESTF is one of several organizations with the purpose of opposing propaganda that attempts to undermine the norms and collective identity of the European Union, particularly propaganda from Russia. Its motto "Question even more" is a response to RT's "Question more". == Products and activity == The team's communications products are mainly focused on the countries of the Eastern Neighbourhood and produced in the local languages of those countries. 
They are disseminated via the social media channels of the EU Delegations in the region, and are also carried on television and via other media and public events. In addition, the Task Force, in cooperation with the European Commission, led the EU's six-month Eastern Partnership communications campaign culminating in the November 2017 Eastern Partnership Summit in Brussels. The team's main product to raise awareness of disinformation is the weekly Disinformation Review (in English and Russian), launched in November 2015. The goal is to provide data for analysts, journalists and officials dealing with this issue. The Disinformation Review also brings the latest news and analyses of what the task force labels as "pro-Kremlin disinformation". The full record of the Task Force's work on disinformation is available on its website EUvsDisinfo.eu, in English, Russian, and German. The team also runs the European External Action Service's Russian-language website, as well as Twitter and Facebook accounts. These communicate primarily about the EU's foreign policy by publishing information about EU activities, as well as EU statements and press releases of particular relevance to the Eastern Neighbourhood. Most of the organization's efforts are devoted to providing information support on issues related to the Russo-Ukrainian War. One of the ESTF's main challenges has been described as distinguishing disinformation from legitimate dissent. The ESCTF has documented numerous examples of propaganda and disinformation published by Russian media. Between 2015 and 2016, EUvsDisinfo registered 1,992 confirmed disinformation cases, 36 per week on average. Between November 2015 and August 2019, the project identified more than 6,000 cases of disinformation. Among the most common topics was migration. As of March 2024, 943 cases of disinformation were related to the COVID-19 pandemic and 2,855 cases to the war in Ukraine. 
== Reception == EU Member State Governments have strongly supported the Task Force since its inception and provide the majority of its staff. The European Parliament has consistently supported the Task Force and called for adequate staffing and resourcing. An EP preparatory action for 2018 – "StratCom Plus" – allocated €1.1m for the team to focus on countering disinformation about the EU more systematically. Pavel Telička, Vice-President of the European Parliament: "I place great value in the fact that Europe has experts who address Russia's ongoing disinformation campaigns (…). The quality and the substance of their work is outstanding. Their work is valuable as it indicated the alarming nature of our European security". Keir Giles of Chatham House, on the East StratCom Task Force: "a critically important capability, ESTF has quite a high credit among experts". European Security Union Commissioner Julian King noted that the East StratCom Task Force "gathered more than 3,500 examples of pro-Kremlin disinformation contradicting publicly available facts repeated on many languages on many occasions"; "It also launched a Russian language service from Brussels, providing updates and fact-based background information about the Union for RU language journalists. The aim is to increase visibility and more accurate representation of EU policies in the Russian language media. It produces a weekly Disinformation review. Their Twitter account ensures that the Task Force's products reach up to 2 million people per month, in addition to their regular briefings. This work is very important". Rebecca Harms, Member of the European Parliament (MEP) from Germany and member of the Greens group: "It's important to have this StratCom, but its interaction with national bodies is not strong enough". Former Danish Foreign Minister Uffe Ellemann-Jensen said: "They provide an excellent instrument. We would of course not be able to do it in other ways". 
Former Czech Prime Minister Bohuslav Sobotka: "This team is capable of generating quality results". Edward Lucas, vice president at the Center for European Policy Analysis (CEPA), said that the East StratCom's Disinformation Review is "the best weekly bulletin on Russian propaganda in the West", second only to a similar Ukrainian project, StopFake, which he considers "the gold standard". According to The New York Times, East StratCom serves as "Europe's front line against this onslaught of fake news". The Canadian magazine Maclean's: "As for who first noticed that Moscow was gunning for Freeland, that's something that has yet to show up in any banner Canadian headlines. It was the European Union's East StratCom Task Force, a unit of the External Action Service (the EU's foreign ministry and diplomatic branch). The Task Force was set up in March 2015 as a kind of early warning system to detect incoming Kremlin disinformation campaigns". The Brussels-based online newspaper EUobserver: "Eight member states have urged the EU's foreign service to significantly expand its work on countering Russian propaganda. They said in a letter to EU head of foreign affairs Federica Mogherini that "in the face of unabated third party disinformation campaigns … we see an urgent need to further enhance the EU's StratCom capabilities (…) East StratCom circulates online notes that debunk Russian disinformation and has attracted 30,000 followers to its Twitter account. It also promotes positive coverage of the EU in former Soviet states". EUobserver also wrote: "Its Disinformation Review, a weekly newsletter, and its daily tweets and infographics, should be in the laptops and phones of all MEPs and senior EU officials". Lawfare wrote of the team: "The task force has made some meaningful contributions to the efforts to counter disinformation warfare.
Over the course of its operation, East StratCom has identified over 3,500 disinformation cases (…) These statistics highlight the global nature of the problem, and the benefit of having a body working on disinformation beyond a single country's borders. East StratCom's supranational view also allows it to provide valuable insights into the broader strategy and goals of pro-Kremlin disinformation operations because it can see them as a cohesive whole, rather than isolated incidents in individual countries." === Criticism === The ESTF has been described as "possibly the most widely recognised, and criticised, anti-disinformation unit set up to handle Russian disinformation." In 2020, The New York Times wrote that the ESTF "is unique because its biggest supporters — countries in Central and Eastern Europe with a history of Communist influence — are also among its loudest critics. They say the task force has been underfunded and undersupported and should be more ambitious." The Danish newspaper Politiken criticized East StratCom for writing that Russian-backed militants were fighting in Ukraine at the Battle of Avdiivka. The newspaper said that the ESTF had only used Ukrainian sources in its review, and claimed that one of those sources (the Ukrainian website Inform Napalm) was linked to the "controversial and secretive" Ukrainian website Myrotvorets. In 2018, it was found that the ESTF's database of news articles containing disinformation had incorrectly included three articles from Dutch news outlets, in part due to a translation error. The outlets (GeenStijl, The Post Online and De Gelderlander) sued the EU for libel. On 6 March 2018, the Dutch Parliament passed a motion advocating that the EU remove the ESTF's funding. In response, the ESTF removed the articles from its database and changed the language it uses when describing outlets that it identifies as publishing disinformation.
On 9 March, the Dutch Minister of the Interior, who had previously opposed closing EUvsDisinfo, said that the government would make a case in the European Union for closing it. Professor Wouter Hins of Leiden University acknowledged that EUvsDisinfo had made a mistake, but argued that it should not be closed: "The idea that the government should then shut up is rather unworldly". On 13 March, the three Dutch media outlets withdrew their case. == See also == Counterpropaganda Fact checking Fake news Russia–European Union relations == References == == External links == Official website European External Action Service - Russian language site
Wikipedia/East_StratCom_Task_Force
In many countries, a variety of unfounded conspiracy theories and other misinformation about COVID-19 vaccines have spread based on misunderstood or misrepresented science, religion, and law. These have included exaggerated claims about side effects, misrepresentations about how the immune system works and about when and how COVID-19 vaccines are made, a story about COVID-19 being spread by 5G, and other false or distorted information. This misinformation, some of it created by anti-vaccination activists, has proliferated and may have made many people averse to vaccination. This has led governments and private organizations around the world to introduce measures to incentivize or coerce vaccination, such as lotteries, mandates, and free entry to events, which has in turn led to further misinformation about the legality and effect of these measures themselves. Critics of vaccine mandates have argued that such requirements infringe on individual medical choice and personal autonomy, further fueling debate about the measures' legality and effectiveness. In the US, some prominent biomedical scientists who publicly advocate vaccination have been attacked and threatened in emails and on social media by anti-vaccination activists. == Misinformation == Various false theories have spread in different parts of the world regarding the COVID-19 vaccines. === COVID-19 and variant related claims === ==== Prevalent COVID-19 skepticism ==== Prior to the vaccine launch, many citizens expressed skepticism that COVID-19 was a serious disease, or that their countries had any significant number of cases, during 2020 and 2021. This skepticism, pushed by the late President of Tanzania, John Pombe Magufuli, is seen as a leading reason for vaccine hesitancy within the country.
Magufuli declared Tanzania COVID-19 free in mid-2020 and pushed herbal remedies, prayer, and steam inhalation as remedies for COVID-19. ==== Delta variant and vaccines ==== As the delta variant of COVID-19 began to spread globally, disinformation campaigns seized on the idea that COVID-19 vaccines had caused the delta variant, despite the fact that the vaccines cannot replicate the virus. A French virologist likewise falsely claimed that antibodies from vaccines had created and strengthened COVID-19 variants, through a previously debunked theory of antibody-dependent enhancement. A related debunked theory, out of India, claimed that COVID-19 vaccines were lowering people's ability to withstand new variants instead of boosting immunity. The website Natural News published an article in July 2021 claiming that CDC director Rochelle Walensky admitted that COVID-19 vaccines do not protect against the delta variant and that vaccinated people could be superspreaders due to having a higher viral load. Walensky actually said in a press briefing that vaccinated and unvaccinated people could have "similarly high" viral loads when infected with the delta variant, but did not say that vaccinated people had higher viral loads or were "super-spreaders". She also stated that the vaccine "continues to prevent severe illness, hospitalization, and death", even against the delta variant. A July 2021 study in the New England Journal of Medicine reported that the Pfizer–BioNTech COVID-19 vaccine was 88 percent effective in preventing symptomatic infections caused by the delta variant. === Organized crime === ==== Fake vaccines ==== In July 2021, Indian police arrested 14 people for administering doses of fake salt-water vaccines instead of the Oxford–AstraZeneca COVID-19 vaccine at nearly a dozen private vaccination sites in Mumbai. The organizers, including medical professionals, charged between $10 and $17 for each dose, and more than 2,600 people paid to receive the vaccine.
Interpol issued a global alert in December 2020 to law enforcement agencies in its member countries to be on the lookout for organized crime networks targeting COVID-19 vaccines, both physically and online. The WHO also released a warning in March 2021 after many ministries of health and regulatory agencies received suspicious offers to supply vaccines. They also noted that some doses of the vaccines were being offered on the dark web priced between $500 and $750, but there was no way to verify the distribution pipeline. ==== Fake vaccination cards ==== In the United States, there was a surge of individuals looking to purchase fake vaccination cards, alter medical records to show vaccination, or create fake vaccination cards to sell. In Hawaii, a vacationer was arrested after it was discovered she had a fake vaccination card, a California doctor was arrested for falsifying patients' vaccination records, and three state troopers in Vermont were arrested for helping create false cards. In August 2021, US Customs and Border Protection agents seized 121 packages with more than 3,000 fake vaccination cards that had been shipped from Shenzhen to be distributed in the US. Check Point research released in August 2021 showed that fake vaccination cards were being sold via messaging apps, priced between $100 and $120 a card. Interpol announced that it was seeing a direct correlation between countries requiring negative COVID-19 tests for entry and an increase in fake vaccination cards. === Medical claims === ==== Claims of inefficacy ==== Recurrent claims, based on misinterpretation of statistical data, have been made regarding the efficacy of COVID-19 vaccines.
A frequent fallacy was to conclude that vaccines were ineffective (or of low effectiveness) from the apparently high proportion of vaccinated patients among COVID-19-related hospitalisations and deaths, without taking into account the high proportion of vaccinated people in the general population, thus committing the base rate fallacy; or without taking into account the tendency of people at higher risk of severe COVID-19 to be vaccinated first, thus ignoring the Yule–Simpson effect. For example, if 90% of a population is vaccinated and the vaccine prevents 90% of severe cases, vaccinated people can still account for nearly half of hospitalisations (in relative terms, 0.9 × 0.1 = 0.09 for the vaccinated versus 0.1 × 1 = 0.1 for the unvaccinated), even though each vaccinated person is far less likely to be hospitalised. In the United Kingdom, a report from the Scientific Pandemic Influenza Group on Modelling (SPI-M), published in March 2021, predicted that 60% of hospitalisations and 70% of deaths would be among people who had received two doses of the vaccine, despite the vaccine remaining highly effective. The report stated: "This (modelling) is not the result of vaccines being ineffective, merely uptake being so high". Multiple studies have confirmed the effectiveness of a booster dose given on top of the two normal doses of the Pfizer–BioNTech COVID-19 vaccine. There is evidence that those who have received a booster dose experience reduced severity of infection, in addition to a reduced likelihood of developing COVID-19 to begin with. On 17 January 2023, Ron DeSantis claimed, "Almost every study now has said with these new boosters, you're more likely to get infected with the bivalent booster," but PolitiFact rated that claim False, noting that, on the contrary, a "study found that the bivalent booster is 30% effective in preventing infection from the virus." ==== mRNA vaccines are not vaccines ==== Financial analyst and self-help entrepreneur David Martin claimed that mRNA vaccines do not fit the U.S. Centers for Disease Control and Prevention's (CDC) or the U.S. Food and Drug Administration's (FDA) definitions of a vaccine because they do not prevent transmission of SARS-CoV-2, the virus that causes COVID-19.
While research has been ongoing to evaluate the effect of vaccination on SARS-CoV-2 transmission, neither the CDC nor the FDA stipulates that vaccines must stop transmission of a virus; both state that a vaccine is a product that stimulates the immune system to produce immunity to an infectious agent. ==== Altering human DNA ==== The use of mRNA-based vaccines for COVID-19 has been the basis of misinformation circulated on social media wrongly claiming that the use of RNA somehow alters a person's DNA. The DNA alteration conspiracy theory was cited by a Wisconsin hospital pharmacist who deliberately removed 57 vaccine vials from cold storage in December 2020 and was subsequently charged with felony reckless endangerment and criminal damage to property by Ozaukee County prosecutors. mRNA in the cytosol is very rapidly degraded before it would have time to gain entry into the cell nucleus (mRNA vaccines must be stored at very low temperature to prevent mRNA degradation). A retrovirus can be single-stranded RNA (just as the SARS-CoV-2 vaccine mRNA is single-stranded) that enters the cell nucleus and uses reverse transcriptase to make DNA from the RNA in the cell nucleus. A retrovirus has mechanisms to be imported into the nucleus, but other mRNAs lack these mechanisms. Once inside the nucleus, creation of DNA from RNA cannot occur without a primer, which accompanies a retrovirus but which would not exist for other mRNA if placed in the nucleus. Thus, mRNA vaccines cannot alter DNA because they cannot enter the nucleus, and because they have no primer to activate reverse transcriptase. Because of misinformation suggesting that COVID-19 vaccines might alter DNA, some academics insisted, to prevent the spread of this misinformation, that mRNA vaccines were not a "gene therapy", but others said that mRNA vaccines were a gene therapy because they introduce genetic material into cells.
==== Reproductive health ==== In a December 2020 petition to the European Medicines Agency, German physician Wolfgang Wodarg and British researcher Michael Yeadon suggested, without evidence, that mRNA vaccines could cause infertility in women by targeting the syncytin-1 protein necessary for placenta formation. Their petition to halt vaccine trials soon began circulating on social media. A survey of young women in the United Kingdom later found that more than a quarter would refuse COVID-19 vaccines out of concerns for their effects on fertility. A study in Andrologia found that Google searches relating to a supposed link between vaccination against COVID-19 and adverse effects on fertility increased following the Emergency Use Authorization of COVID vaccines in the United States, indicating that concerns about alleged impacts on fertility are a major contributor to vaccine hesitancy. Syncytin-1 and the SARS-CoV-2 spike protein targeted by the vaccines are largely dissimilar, sharing a sequence of only four amino acids out of several hundred. A study conducted on 44 rats injected with the Pfizer–BioNTech COVID-19 vaccine at doses over 300 times the human dose by body weight and 44 rats injected with placebo found no statistically significant evidence of any adverse effects on the fertility of female rats or on the health of the offspring of rats (the 3% lower pregnancy rate found in the vaccine group was not statistically significant). David Gorski wrote on Science-Based Medicine that Wodarg and Yeadon were "stoking real fear [...] based on speculative nonsense". False claims that a vaccinated person could "shed" SARS-CoV-2 spike proteins, allegedly causing menstrual irregularities or other harmful effects on the reproductive health of non-vaccinated women who are in proximity to them, such as miscarriage, were cited by the Centner Academy, a private school in Miami, which announced it would not employ teachers who received the COVID-19 vaccine. 
Other businesses refused to serve vaccinated customers, citing concerns that vaccinated people could shed the virus. Some promoters of this claim have recommended the use of face masks and social distancing to protect themselves from those who have been vaccinated. Gynecologist and medical columnist Jen Gunter stated that none of the vaccines currently approved in the United States "can possibly affect a person who has not been vaccinated, and this includes their menstruation, fertility, and pregnancy". ==== Risk of diseases ==== ==== Bell's palsy ==== In late 2020, claims circulated on social media that the Pfizer–BioNTech COVID-19 vaccine caused Bell's palsy in trial participants. Several pictures which had originally been published prior to 2020 accompanied these posts and were falsely presented as showing these participants. During the trial, four of the 22,000 trial participants did develop Bell's palsy. The FDA observed that the "frequency of reported Bell's palsy in the vaccine group is consistent with the expected background rate in the general population". Debate is still ongoing about whether or not there is a causal link between any of the major COVID-19 vaccines and Bell's palsy. However, experts agree that even if an association exists, it occurs extremely rarely and the effect is small (~10 cases per 100,000, versus 3–7 cases per 100,000 in a typical pre-pandemic year). Bell's palsy is usually temporary and is known to occur following many vaccines. ==== Blood clots ==== Videos posted to Facebook and Instagram have claimed without evidence that 62 percent of people given an mRNA vaccine develop blood clots, and that Pfizer's COVID-19 vaccine causes blood to clot "in a minute or two".
Studies have found possible causal links between the AstraZeneca and Janssen COVID-19 vaccines and a rare clotting disorder known as thrombosis with thrombocytopenia syndrome (TTS), but the risk is low for most people, with 47 confirmed reports of the condition out of more than 15 million recipients of the Janssen vaccine in the United States as of October 2021. A 2021 study published in the British Medical Journal suggested that SARS-CoV-2 infection is approximately 200 times more likely to cause blood clots in patients than the AstraZeneca vaccine. ==== Cancer ==== The website Natural News has published claims that mRNA vaccines for COVID-19 can cause cancer by inactivating tumor-suppressing proteins. This claim was based on a misrepresentation of a 2018 study at Memorial Sloan Kettering Cancer Center (MSKCC), which did not involve the mRNA used in vaccines. The study found that transcription errors in certain mRNA molecules could disrupt production of tumor-suppressing proteins. However, mRNA used in vaccines is made artificially, and poses no risk of transcription errors once made. ==== Prion disease ==== A widely reposted 2021 Facebook post claiming that the mRNA vaccines against COVID-19 could cause prion diseases was based on a paper by J. Bart Classen. The paper was published in Microbiology and Infectious Diseases, whose publisher, Scivision Publishers, is included in Beall's list of publishers of predatory journals. Classen's only published evidence for his claim was a brief summary of an "unspecified analysis of the Pfizer/BioNTech COVID-19 vaccine", according to NewsGuard. Vincent Racaniello, professor of microbiology and immunology at Columbia University, described the claim as "completely wrong". Previous mRNA vaccines have been tested in humans, and were not found to cause prion disease. The mRNA contained in the vaccine is degraded within a few days of entering the cells of a person receiving it and does not accumulate in the brain. The U.S. 
Alzheimer's Association has stated that currently available COVID-19 vaccines are safe for persons with Alzheimer's disease and other forms of dementia. ==== Polio vaccine as a claimed COVID-19 carrier ==== Social media posts in Cameroon pushed a conspiracy theory that polio vaccines contained COVID-19, further complicating polio eradication beyond the logistical and funding difficulties created by the COVID-19 pandemic. ==== Antibody-dependent enhancement ==== Antibody-dependent enhancement (ADE) is a phenomenon in which a person with antibodies against one virus (i.e. from infection or vaccination) can develop worse disease when infected by a second, closely related virus, due to a unique and rare reaction with proteins on the surface of the second virus. ADE has been observed in vitro and in animal studies with many different viruses that do not display ADE in humans. Researchers acknowledge that "Fundamentally, this question should be asked of all vaccine candidates under development, despite the rarity of the phenomenon." Prior to the pandemic, ADE was observed in animal studies of laboratory rodents with vaccines for SARS-CoV, the virus that causes severe acute respiratory syndrome (SARS). However, as of 27 January 2022, there had been no observed incidents with vaccines for COVID-19 in trials with nonhuman primates, in clinical trials with humans, or following the widespread use of approved vaccines. Molecular simulations indicate that ADE might play a role in new strains such as delta, but not in the strains that the vaccines were originally designed for. Anti-vaccination activists cited ADE as a reason to avoid vaccination against COVID-19. ==== Vaccines contain aborted fetal tissue ==== In November 2020, claims circulated on the web that the Oxford–AstraZeneca COVID-19 vaccine contained tissue from aborted fetuses.
While it is true that cell lines derived from a fetus aborted in 1970 play a role in the vaccine development process, the molecules for the vaccine are separated from the resulting cell debris. Several other COVID-19 vaccine candidates use fetal cell lines descended from fetuses aborted between 1972 and 1985. No fetal tissue is present in these vaccines. ==== Spike protein cytotoxicity ==== In 2021, anti-vaccination misinformation circulated on social media saying that SARS-CoV-2 spike proteins were "very dangerous" and "cytotoxic". At that time, all COVID-19 vaccines approved for emergency use contained either mRNA or mRNA precursors for the production of the spike protein. This mRNA consists of instructions which, when processed in cells, cause production of spike proteins, which trigger an adaptive immune response in a safe and effective manner. ==== Acquired immunodeficiency syndrome ==== In October 2021, the website The Exposé used data published by the UK Health Security Agency (UKHSA), which misleadingly indicated that COVID-19 infection rates were higher among fully vaccinated than unvaccinated people, to falsely claim that the COVID-19 vaccines were not only ineffective but were also causing vaccinated people to develop AIDS "much faster than anticipated". The website's claims were cited in a speech by Brazilian president Jair Bolsonaro. The video of Bolsonaro's speech was removed from Facebook, Instagram and YouTube for violating their policies regarding COVID-19 vaccines. In January 2022, The Exposé promoted a conspiracy theory claiming that Germans fully vaccinated against COVID-19 "[would] have full blown Covid-19 vaccine induced acquired immunodeficiency syndrome (AIDS) by the end of [the month]."
==== Vaccines as a cause of death ==== ===== United States ===== Claims have been made that data from the United States Department of Health and Human Services's Vaccine Adverse Event Reporting System (VAERS) reveal a hidden toll of COVID-19 vaccine-related deaths. This claim has been debunked as a misleading misrepresentation by anti-vaccine sources. VAERS is known to report and store co-occurring health events with no proof of causation, including suicides, mechanical incidents (e.g., car accidents), and natural deaths from chronic disease or old age, among others. The websites Medalerts.org, run by the National Vaccine Information Center, a leading anti-vaccine organization, and OpenVAERS have been linked to this misinformation. Comparative studies of VAERS, which look at relative reporting rates, have found that the data do not support these claims. A 2021 transparency report from Facebook found that the most popular shared link in the United States from January to March was an article from the South Florida Sun-Sentinel about a doctor's death two weeks after getting a COVID-19 vaccine. The medical examiner later found no evidence of a link to the vaccine, but the article was promoted and twisted by anti-vaccine groups to raise doubt about vaccine safety. Anti-vaccine activists Robert F. Kennedy Jr. and Del Bigtree have suggested without evidence that the death of Baseball Hall of Fame member Hank Aaron was caused by receiving the COVID-19 vaccine. Aaron's death was reported as being due to natural causes, and medical officials did not believe the COVID-19 vaccine had any adverse effect on his health. On 7 October 2022, Florida Surgeon General Dr. Joseph Ladapo issued a press release discouraging men aged 18 to 39 from taking the COVID-19 vaccine, citing a study by the Florida Department of Health that concluded vaccinated men in that age group had an 84% increased likelihood of dying from heart problems.
The study was neither peer-reviewed nor published in a scientific journal, and its authors, source of funding, and methods of analysis were not disclosed. The study faced extensive criticism, including that it misrepresented data, that the time frame for examining deaths was too long, that it lacked transparency, and that it ignored the efficacy and safety of the vaccines. Steve Kirsch, an entrepreneur who promotes COVID-19 vaccine misinformation, cited the study as proof that mRNA vaccines are fatal to children. A study published in JAMA showed an increased risk for myocarditis within seven days of vaccination. The group with the most recorded cases (males aged 16 to 17) had 106 cases per million doses, though the actual incidence is likely higher due to overall underreporting. 96% of patients were hospitalized, but most cases were mild and patients typically experienced symptomatic recovery by discharge. ===== Taiwan ===== The Falun Gong-affiliated news channel New Tang Dynasty Television spread misrepresentations of Taiwan's VAERS surveillance data to suggest that COVID-19 vaccines, including the Taiwanese-developed Medigen vaccine, killed more people than the virus. ===== Other countries ===== Similar misrepresentation of known "deaths after vaccination" as "deaths due to vaccination" has been reported in various countries, including Italy, Austria, South Korea, Germany, Spain, the United States, Norway, Belgium, Peru, and Canada. These have been debunked as misrepresentations of the cases and data. ==== Vaccine contains tracking agent ==== In November 2021, a White House correspondent for the conservative outlet Newsmax falsely tweeted that the Moderna vaccine contained luciferase "so that you can be tracked." ==== Vaccine 'reversal' and detox ==== In November 2021, erroneous claims arose that a "detox bath" of Epsom salt, borax and bentonite clay could remove the effects of the vaccine.
In fact, a rapid review of the literature shows that no known mechanism exists for removing a vaccine from a vaccinated person. ==== Approved vaccines "not available" in the United States ==== Under U.S. FDA regulations, a product approved under an Emergency Use Authorization (EUA) is considered "legally distinct" from a product that has received full approval by the FDA. Aside from differences in naming and labeling to account for its approval, and increased FDA oversight over its production, there are no differences in formulation between the EUA and approved versions of a vaccine, and the two are considered interchangeable once approved. For example, the Pfizer vaccine has been labeled as "Pfizer–BioNTech COVID-19 Vaccine" since distribution began, but was assigned the United States Adopted Name "Comirnaty" upon its approval. Some anti-vaccine advocates have made claims surrounding scenarios where this distinction is allegedly applicable; claims have been made that no FDA-approved vaccine is "available" in the United States because doses labeled as "Pfizer–BioNTech COVID-19 Vaccine", rather than "Comirnaty", were still being distributed. This claim was cited by a group of Louisiana Republican lawmakers, by Senator Ron Johnson, and in a lawsuit filed by the First Liberty Institute against a COVID-19 vaccine mandate implemented by the U.S. military. In the latter case, the plaintiffs claimed that the mandate applied specifically to Comirnaty only, and not to the "experimental" Pfizer–BioNTech COVID-19 Vaccine. Another claim was that the approved version does not share the same liability protection as the version produced under an EUA. Under the Public Readiness and Emergency Preparedness (PREP) Act, individuals are eligible for compensation via the Countermeasures Injury Compensation Program (CICP) for severe outcomes or death caused by COVID-19 countermeasures such as vaccines. This applies generally to all COVID-19 vaccines, including those not yet given formal approval.
==== Vaccines as an "operating system" ==== This claim is based on a statement on the Moderna website that likens mRNA vaccines to operating systems as an analogy; the statement does not literally say that the vaccines are operating systems. ==== Department of Defense disinformation campaign ==== A Reuters investigation found that the United States Department of Defense (DoD), in retaliation for China's attempts to blame the United States for the pandemic, undertook a disinformation campaign in the Philippines, later expanded to Central Asia and the Middle East, which sought to discredit China, in particular its Sinovac vaccine. The campaign was described as "payback" for COVID-19 disinformation by China directed against the U.S. and an effort to counter China's vaccine diplomacy. The campaign ran from 2020 to 2021 and was overseen by Special Operations Command Pacific as well as the United States Central Command. Military personnel at MacDill Air Force Base in Florida operated phony social media accounts, some of which were more than five years old according to Reuters. During the COVID-19 pandemic, they disseminated the hashtag #ChinaIsTheVirus and posts claiming that the Sinovac vaccine contained gelatin from pork and was therefore haram, or forbidden under Islamic law. US diplomats aware of the campaign were against the idea, but they were overruled by the military, which also asked tech companies not to take down the content after it was discovered by Facebook and X. A retrospective review by the DoD subsequently uncovered other social and political messaging that was "many leagues away" from acceptable military objectives. The primary defence contractor on the project was General Dynamics IT, which received $493 million for its role.
=== Socially based claims === ==== Claims about a vaccine before one existed ==== Multiple social media posts promoted a conspiracy theory claiming that, in the early stages of the pandemic, the virus was known and a vaccine was already available. PolitiFact and FactCheck.org noted that no vaccine existed for COVID-19 at that point. The patents cited by various social media posts reference existing patents for genetic sequences and vaccines for other strains of coronavirus, such as the SARS coronavirus. The WHO reported that as of 5 February 2020, despite news reports of "breakthrough drugs" being discovered, there were no treatments known to be effective, and that antibiotics and herbal remedies were not useful. On Facebook, a widely shared post claimed in April 2020 that seven Senegalese children had died because they had received a COVID-19 vaccine. No such vaccine existed, although some were in clinical trials at that time. ==== Magnetization ==== Some social media users have falsely asserted that COVID-19 vaccines cause people to become magnetized such that metal objects stick to their bodies. Video clips of people showing magnets sticking to the injection site have been spread on social media platforms such as Instagram, Facebook, Twitter, YouTube, and TikTok, claiming that vaccination implants a microchip in people's arms. Called by Republicans as an expert witness before a June 2021 hearing of the Ohio House Health Committee, anti-vaccine activist Sherri Tenpenny promoted the false claim, adding, "There's been people who have long suspected that there's been some sort of an interface, yet to be defined interface, between what's being injected in these shots and all of the 5G towers." 5G-compatible chips are about 13 times too large to fit through the needles used to administer COVID-19 vaccines, whose internal diameter is between 0.26 and 0.41 millimeters. Most microchips do not contain ferromagnetic components, being made mostly of silicon.
It is possible for smooth objects such as magnets to stick to one's skin if the skin is slightly oily. No COVID-19 vaccines authorized for use in the U.S. or Europe contain magnetic or metal ingredients or microchips. Instead, the vaccines contain proteins, lipids, water, salts, and pH buffers. ==== Disappearing needles ==== Twitter and YouTube users circulated video clips purporting to show that vaccine injections given to health care workers were staged for the press using syringes with "disappearing needles". The syringes used were actually safety syringes, which automatically retract the needle once the vaccine is injected in order to reduce accidental needlestick injuries to nurses and lab workers. ==== Political divides and distrust in government ==== Discourses against COVID vaccines became part of QAnon's set of beliefs, as adherents used the pandemic to promote the conspiracy theory. In 2021, Romana Didulo, a QAnon-affiliated Canadian conspiracy theorist calling herself the "Queen of Canada", caused her online followers to harass Canadian businesses and public authorities with demands that they cease all measures related to combating the pandemic. She was apprehended in late November after calling on her 73,000 Telegram followers to "shoot to kill" all healthcare workers administering COVID-19 vaccines. Anti-government groups such as sovereign citizens and freemen on the land also took part in the anti-vaccine movement. During lockdowns in Bulgaria, residents of many Roma neighborhoods claimed that they were subjected to lockdowns without proper explanation, even though infection levels in other parts of the country were higher than in their neighborhoods. These communities already held a distrust of institutions and the government, and the lockdowns created an even more strained relationship and a deeper lack of trust. 
In France, Florian Philippot and Nicolas Dupont-Aignan, right-wing candidates in the 2022 presidential election, have both cast doubts on the vaccine's effectiveness and safety. === Government investigations === In December 2022, vaccine-skeptical Florida Governor Ron DeSantis requested the impaneling of a grand jury to "investigate criminal or wrongful activity in Florida relating to the development, promotion, and distribution of vaccines purported to prevent COVID-19 infection, symptoms, and transmission", specifically mentioning statements made by drug manufacturers and federal officials. == Vaccine hesitancy == === Concerns about menstrual irregularities === Concerns that COVID-19 vaccination causes menstrual irregularities have led to vaccine hesitancy. A meta-analysis from 2023 indicated that COVID-19 vaccination can lead to menstrual irregularities but that more studies are required to establish a causal relationship. === Pregnancy and vaccine hesitancy === A 2022 meta-analysis on COVID-19 vaccines and pregnancy found that pregnant people were less likely to get vaccinations compared with non-pregnant cohorts. Factors associated with lower uptake of vaccination during pregnancy included younger age, lower education, lower socioeconomic status, and lack of adherence to influenza vaccination recommendations. One study in the analysis found varying influence of education and influenza vaccination history depending on race, suggesting that lived experiences with systemic racism may have an effect on vaccine hesitancy in pregnancy. === Hong Kong === In Hong Kong, the lower perceived risk of catching COVID-19 when it was under control, misinformation about the vaccines' side effects and efficacy, as well as political events and distrust of the HKSAR government, contributed to a low rate of vaccination. To some extent, similar complacency occurred in Taiwan, Macau, and mainland China. 
Many Hongkongers felt that the government was actively pushing the SinoVac vaccine despite its lower efficacy compared with BioNTech and AstraZeneca. Some older residents believed the BioNTech vaccine led to severe side effects. Officials also stated that people with "uncontrolled severe chronic diseases" should not receive the SinoVac vaccine and urged those who weren't sure to consult with their doctors first. Conspiracy theories about the government spread as well due to a packaging issue with the BioNTech vaccine. Skepticism of Western and preventive medicine further contributed to the hesitancy. Towards the end of May 2021, about 19% of Hongkongers had received their first dose and 13.8% their second. By 1 January 2022, 62% of the population was fully vaccinated, but as of 7 February, only 33% of those aged 80 or older had received one dose. As Omicron subvariants spread across the city, a study showed that 15% of those aged 80 or older who weren't immunized at all died after contracting the disease, compared with 3% of those who got two SinoVac shots and 1.5% of those who received two BioNTech doses. === United States === In the United States, COVID-19 vaccine hesitancy varies largely by region; however, regardless of region, medical professionals are vaccinated at higher rates than the general public. Estimates from two surveys were that 67% or 80% of people in the U.S. would accept a new vaccination against COVID-19, with wide disparity by education level, employment status, ethnicity, and geography. A US study conducted in January 2021 found that trust in science and scientists was strongly correlated with likelihood to get vaccinated for COVID-19 among those who had not already gotten vaccinated. In March 2021, 19% of US adults claimed to have been vaccinated, while 50% announced plans to get vaccinated. A 2022 study found a link between online COVID-19 misinformation and early vaccine hesitancy and refusal. 
Despite a strong association between vaccine hesitancy and Republican vote share at the US county and state levels, the authors found that the associations between vaccine outcomes and misinformation remained significant when accounting for political, demographic, and socioeconomic factors. In the United States, vaccine hesitancy could be seen in certain social groups due to a lack of trusted medical sources, traumatic past experiences with medical care, and widespread conspiracy theories. Distrust can be seen in the African American population, where many see the history in the United States of using African Americans as experimental subjects, such as in the Tuskegee experiments and the work of J. Marion Sims, as a basis to refuse the vaccine. According to The New York Times, only 28 percent of Black New Yorkers ages 18 to 44 years were fully vaccinated as of August 2021, compared with 48 percent of Latino residents and 52 percent of White residents in that age group. Interviewees cited mistrust of the government, personal experiences of medical racism, and historical medical experimentation on Black people such as the Tuskegee Syphilis Study as reasons for their reluctance to be vaccinated. A professor from the University of Warsaw in Poland claimed that her research found that medical mistrust was higher in nations that had experienced Soviet-style communism in the past, and that vaccine hesitancy could be seen if the countries introduced compulsory vaccination regulations. Medical mistrust is also seen in Russia, where one person described a lack of understanding of what the vaccine is and claimed that if there were more statistics and research about Sputnik V and other Russian-made vaccines, people would be more "loyal". She also stated that there was mistrust over the lack of consistent medical information about the vaccine coming from many sources, including the authorities of the region. 
According to prominent biomedical researcher Peter Hotez, he and other scientists who publicly defend vaccines have been attacked on social media, harassed with threatening emails, intimidated, and confronted physically by opponents of vaccination. He further attributes the increase in aggressiveness of the anti-vaccination movement to the influence of the extreme wing of the Republican Party. Hotez estimates that roughly 200,000 preventable deaths from COVID-19, mainly among Republicans, occurred in the US because of refusal to be vaccinated. A 2023 study published in the Journal of the American Medical Association found "evidence of higher excess mortality for Republican voters compared with Democratic voters in Florida and Ohio after, but not before, COVID-19 vaccines were available to all adults in the US". == Countermeasures == === COVID-19 passes === Some countries are using vaccination tracking systems, apps, or passports that are labeled as passes to allow individuals certain freedoms. In France, every adult must present a "pass sanitaire" before entering specific locations such as restaurants, cafes, museums, and sports stadiums after a new law was passed in July 2021. Italy reported a 40% increase in the number of people who received the first dose of the vaccine after a governmental decree in September 2021 requiring a health pass for all workers in either the public or private sector starting in October 2021. Similar passes have been put into effect in countries such as Slovenia and Greece. Lithuania introduced vaccination certificates that citizens 12 years of age and older must show to enter most public indoor spaces. === Encouragement by public figures and celebrities === Many public figures and celebrities have publicly declared that they have been vaccinated against COVID-19 and encouraged people to get vaccinated. Many have made video recordings or otherwise documented their vaccination. 
They do this partly to counteract vaccine hesitancy and COVID-19 vaccine conspiracy theories. ==== Politicians ==== Many current and former heads of state and government ministers have released photographs of their vaccinations, encouraging others to be vaccinated, including Kyriakos Mitsotakis, Zdravko Marić, Olivier Véran, Mike Pence, Joe Biden, Barack Obama, George W. Bush, Bill Clinton, the Dalai Lama, Narendra Modi, Justin Trudeau, Alexandria Ocasio-Cortez, Nancy Pelosi and Kamala Harris. Elizabeth II and Prince Philip announced they had received the vaccine, breaking from the protocol of keeping the British royal family's health private. Pope Francis and Pope Emeritus Benedict both announced they had been vaccinated. In a call-in television special, President Vladimir Putin told listeners that he had received the Sputnik V vaccine and stressed that all the vaccines were safe. ==== Media personalities ==== Dolly Parton recorded herself getting vaccinated with the Moderna vaccine she helped fund; she encouraged people to get vaccinated and created a new version of her song "Jolene" called "Vaccine". Several other musicians, including Patti Smith, Yo-Yo Ma, Carole King, Tony Bennett, Mavis Staples, Brian Wilson, Joel Grey, Loretta Lynn, Willie Nelson, and Paul Stanley, have released photographs of themselves being vaccinated and encouraged others to do so. Grey stated "I got the vaccine because I want to be safe. We've lost so many people to COVID. I've lost a few friends. It's heartbreaking. Frightening." Many actors including Amy Schumer, Rosario Dawson, Arsenio Hall, Danny Trejo, Mandy Patinkin, Samuel L. 
Jackson, Arnold Schwarzenegger, Sharon Stone, Kate Mulgrew, Jeff Goldblum, Jane Fonda, Anthony Hopkins, Bette Midler, Kim Cattrall, Isabella Rossellini, Christie Brinkley, Cameran Eubanks, Hugh Bonneville, Alan Alda, David Harbour, Sean Penn, Amanda Kloots, Ian McKellen and Patrick Stewart have released photographs of themselves getting vaccinated and encouraged others to do the same. Judi Dench and Joan Collins announced they had been vaccinated. Other TV personalities such as Martha Stewart, Jonathan Van Ness, Al Roker and Dan Rather released photographs of themselves getting vaccinated and encouraged others to do the same. Stephen Fry also shared a photograph of himself being vaccinated; he wrote, "It's a wonderful moment, but you feel that it's not only helpful for your own health, but you know that you're likely to be less contagious if you yourself happen to carry it ... It's a symbol of being part of society, part of the group that we all want to protect each other and get this thing over and done with." Sir David Attenborough announced that he had been vaccinated. Dutch TV personality Beau van Erven Dorens got his vaccination on live TV in his late-night talk show on 3 June 2021. ==== Athletes ==== Magic Johnson and Kareem Abdul-Jabbar released photographs of themselves getting vaccinated and encouraged others to do the same; Abdul-Jabbar said, "We have to find new ways to keep each other safe." ==== Specific communities ==== Romesh Ranganathan, Meera Syal, Adil Ray, Sadiq Khan and others produced a video specifically encouraging ethnic minority communities in the UK to be vaccinated, including addressing conspiracy theories, stating "there is no scientific evidence to suggest it will work differently on people from ethnic minorities and that it does not include pork or any material of fetal or animal origin." Oprah Winfrey and Whoopi Goldberg have spoken about being vaccinated and encouraged other black Americans to be so. 
Stephanie Elam volunteered for a COVID-19 vaccine trial, stating "a large part of the reason why I wanted to volunteer for this COVID-19 vaccine research – more Black people and more people of color need to be part of these trials so more diverse populations can reap the benefits of this medical research." === Experiences of prior hesitant individuals and others === Many news articles, TV interviews and posts on social media appeared in 2021 highlighting the anger of individuals whose children or immunocompromised family members caught COVID-19, or of people who were vaccine hesitant and later tested positive. The Chief Medical Officer for England, Prof. Chris Whitty, tweeted in September 2021 that "The majority of our hospitalised Covid patients are unvaccinated and regret delaying their vaccines", with about 60% of all hospitalisations due to COVID-19 in the UK being of unvaccinated individuals. While some cases have opened up more discussion about the vaccine and the effects of the disease, some individuals remain hesitant about the vaccination process, while others have expressed regret for not advocating for the vaccine or a determination to get vaccinated. === Targeted lockdowns and fines === Austria and Germany both announced in late 2021 that they would introduce lockdowns for only unvaccinated citizens. These targeted lockdowns faced criticism from various quarters, including from the far-right Freedom Party, which labeled the measures as creating a group of second-class citizens. Additionally, thousands protested in Vienna against the vaccine mandate, expressing concerns over personal freedoms and governmental overreach. In Greece, those who refused to get vaccinated and were above the age of 60 were fined 100 euros a month, with the payments put towards a hospital service fund. 
In Singapore, all citizens who chose not to get vaccinated were required to pay their medical bills in full if they tested positive and received hospital care, while in Ukraine all teachers and government officials who remained unvaccinated were placed on unpaid leave, and restaurants, shopping malls and fitness centers must have 100% of their employees vaccinated to operate. === Vaccine lotteries and benefits === The Kremlin announced in 2021 that it was supporting a lottery that would give 1,000 chosen vaccinated individuals the equivalent of $1,350. The Mayor of Moscow also announced that the city would give away five cars every week to vaccinated residents. In the United States, many states such as Alaska, Pennsylvania, and Ohio, along with cities and universities, offered scholarships, money, and physical items in lotteries. These benefits had varying success in raising vaccination numbers. In July 2021, the Polish government launched the National Vaccination Programme Lottery to encourage vaccinations against COVID-19. It was open to people aged 18 years and over who had completed the COVID-19 vaccination programme and had registered for the lottery by 30 September 2021. The final prize draw took place on 6 October 2021, and there were two cash prizes of PLN 1 million (US$264,000) and two Toyota C-HR cars to be won. First Capital Bank, based in Malawi, issued a statement that it would give annual performance bonuses only to vaccinated employees. === Vaccine mandates === In France, since September 2021, all health care workers must have received at least one dose of the vaccine to continue working, with any resisters suspended without pay. Military members and firefighters had been subject to the same mandate earlier in the year. In November 2021, Austria announced that it would introduce a nationwide vaccine mandate. 
In the United States, many businesses, schools and universities, healthcare providers, and governmental and state departments have enacted vaccine mandates. While many of the mandates allowed a person to opt out for medical or religious reasons and be regularly tested, the federal mandate signed in September 2021 did not include these options. The federal mandate was eventually struck down. Some of the mandates were focused only on specific groups; Rutgers University, for example, mandated the vaccine only for students and health-care and public-safety employees. The mandates have seen pushback, with a New York judge temporarily blocking one for healthcare workers who claimed they could not opt out for religious reasons, and Arizona Attorney General Mark Brnovich suing the Biden administration over its vaccine mandate for federal employees and private businesses with over 100 employees. Additional pushback on vaccine mandates was seen at local levels, with at least one sheriff's department in California announcing it would not enforce any vaccine mandates as "the last line of defense from tyrannical government overreach", while others have seen mass resignations. == See also == COVID-19 misinformation Vaccine misinformation Died Suddenly, an anti-vaccine documentary that promotes false claims about COVID-19 vaccines and the Great Reset conspiracy Tobacco industry playbook Operation Denver == Explanatory notes == == References ==
Wikipedia/COVID-19_vaccine_misinformation_and_hesitancy
A conspiracy theory is an explanation for an event or situation that asserts the existence of a conspiracy (generally by powerful sinister groups, often political in motivation), when other explanations are more probable. The term generally has a negative connotation, implying that the appeal of a conspiracy theory is based in prejudice, emotional conviction, or insufficient evidence. A conspiracy theory is distinct from a conspiracy; it refers to a hypothesized conspiracy with specific characteristics, including but not limited to opposition to the mainstream consensus among those who are qualified to evaluate its accuracy, such as scientists or historians. Conspiracy theories tend to be internally consistent and correlate with each other; they are generally designed to resist falsification either by evidence against them or a lack of evidence for them. They are reinforced by circular reasoning: both evidence against the conspiracy and absence of evidence for it are misinterpreted as evidence of its truth. Stephan Lewandowsky observes "This interpretation relies on the notion that, the stronger the evidence against a conspiracy, the more the conspirators must want people to believe their version of events." As a consequence, the conspiracy becomes a matter of faith rather than something that can be proven or disproven. Studies have linked belief in conspiracy theories to distrust of authority and political cynicism. Some researchers suggest that conspiracist ideation—belief in conspiracy theories—may be psychologically harmful or pathological. Such belief is correlated with psychological projection, paranoia, and Machiavellianism. Psychologists usually attribute belief in conspiracy theories to a number of psychopathological conditions such as paranoia, schizotypy, narcissism, and insecure attachment, or to a form of cognitive bias called "illusory pattern perception". 
It has also been linked with the so-called Dark triad personality types, whose common feature is lack of empathy. However, a 2020 review article found that most cognitive scientists view conspiracy theorizing as typically nonpathological, given that unfounded belief in conspiracy is common across both historical and contemporary cultures, and may arise from innate human tendencies towards gossip, group cohesion, and religion. One historical review of conspiracy theories concluded that "Evidence suggests that the aversive feelings that people experience when in crisis—fear, uncertainty, and the feeling of being out of control—stimulate a motivation to make sense of the situation, increasing the likelihood of perceiving conspiracies in social situations." Historically, conspiracy theories have been closely linked to prejudice, propaganda, witch hunts, wars, and genocides. They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh and Anders Breivik, as well as by governments such as Nazi Germany, the Soviet Union, and Turkey. AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS. QAnon and denialism about the 2020 United States presidential election results led to the January 6 United States Capitol attack, and belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine, at a time when three million people in the country were suffering from hunger. Conspiracy theories are a significant obstacle to improvements in public health, encouraging opposition to such public health measures as vaccination and water fluoridation. They have been linked to outbreaks of vaccine-preventable diseases. 
Other effects of conspiracy theories include reduced trust in scientific evidence, radicalization and ideological reinforcement of extremist groups, and negative consequences for the economy. Conspiracy theories once limited to fringe audiences have become commonplace in mass media, the Internet, and social media, emerging as a cultural phenomenon of the late 20th and early 21st centuries. They are widespread around the world and are often commonly believed, some even held by the majority of the population. Interventions to reduce the occurrence of conspiracy beliefs include maintaining an open society, encouraging people to use analytical thinking, and reducing feelings of uncertainty, anxiety, or powerlessness. == Origin and usage == The Oxford English Dictionary defines conspiracy theory as "the theory that an event or phenomenon occurs as a result of a conspiracy between interested parties; spec. a belief that some covert but influential agency (typically political in motivation and oppressive in intent) is responsible for an unexplained event". It cites a 1909 article in The American Historical Review as the earliest usage example, although it also appeared in print for several decades before. The earliest known usage was by the American author Charles Astor Bristed, in a letter to the editor published in The New York Times on 11 January 1863. He used it to refer to claims that British aristocrats were intentionally weakening the United States during the American Civil War in order to advance their financial interests. England has had quite enough to do in Europe and Asia, without going out of her way to meddle with America. It was a physical and moral impossibility that she could be carrying on a gigantic conspiracy against us. 
But our masses, having only a rough general knowledge of foreign affairs, and not unnaturally somewhat exaggerating the space which we occupy in the world's eye, do not appreciate the complications which rendered such a conspiracy impossible. They only look at the sudden right-about-face movement of the English Press and public, which is most readily accounted for on the conspiracy theory. The term is also used as a way to discredit dissenting analyses. Robert Blaskiewicz comments that examples of the term were used as early as the nineteenth century and states that its usage has always been derogatory. According to a study by Andrew McKenzie-McHarg, in contrast, in the nineteenth century the term conspiracy theory simply "suggests a plausible postulate of a conspiracy" and "did not, at this stage, carry any connotations, either negative or positive", though sometimes a postulate so-labeled was criticized. The author and activist George Monbiot argued that the terms "conspiracy theory" and "conspiracy theorist" are misleading, as conspiracies truly exist and theories are "rational explanations subject to disproof". Instead, he proposed the terms "conspiracy fiction" and "conspiracy fantasist". === Alleged CIA origins === The term "conspiracy theory" is itself the subject of a conspiracy theory, which posits that the term was popularized by the CIA in order to discredit conspiratorial believers, particularly critics of the Warren Commission, by making them a target of ridicule. In his 2013 book Conspiracy Theory in America, the political scientist Lance deHaven-Smith wrote that the term entered everyday language in the United States after 1964, the year in which the Warren Commission published its findings on the assassination of John F. Kennedy, with The New York Times running five stories that year using the term. 
Whether the CIA was responsible for popularising the term "conspiracy theory" was analyzed by Michael Butter, a Professor of American Literary and Cultural History at the University of Tübingen. Butter wrote in 2020 that the CIA document Concerning Criticism of the Warren Report, which proponents of the theory use as evidence of CIA motive and intention, does not contain the phrase "conspiracy theory" in the singular, and only uses the term "conspiracy theories" once, in the sentence: "Conspiracy theories have frequently thrown suspicion on our organisation [sic], for example, by falsely alleging that Lee Harvey Oswald worked for us." == Difference from conspiracy == A conspiracy theory is not simply a conspiracy, which refers to any covert plan involving two or more people. In contrast, the term "conspiracy theory" refers to hypothesized conspiracies that have specific characteristics. For example, conspiracist beliefs invariably oppose the mainstream consensus among those people who are qualified to evaluate their accuracy, such as scientists or historians. Conspiracy theorists see themselves as having privileged access to socially persecuted knowledge or a stigmatized mode of thought that separates them from the masses who believe the official account. Michael Barkun describes a conspiracy theory as a "template imposed upon the world to give the appearance of order to events". Real conspiracies, even very simple ones, are difficult to conceal and routinely experience unexpected problems. In contrast, conspiracy theories suggest that conspiracies are unrealistically successful and that groups of conspirators, such as bureaucracies, can act with near-perfect competence and secrecy. The causes of events or situations are simplified to exclude complex or interacting factors, as well as the role of chance and unintended consequences. Nearly all observations are explained as having been deliberately planned by the alleged conspirators. 
In conspiracy theories, the conspirators are usually claimed to be acting with extreme malice. As described by Robert Brotherton: The malevolent intent assumed by most conspiracy theories goes far beyond everyday plots borne out of self-interest, corruption, cruelty, and criminality. The postulated conspirators are not merely people with selfish agendas or differing values. Rather, conspiracy theories postulate a black-and-white world in which good is struggling against evil. The general public is cast as the victim of organised persecution, and the motives of the alleged conspirators often verge on pure maniacal evil. At the very least, the conspirators are said to have an almost inhuman disregard for the basic liberty and well-being of the general population. More grandiose conspiracy theories portray the conspirators as being Evil Incarnate: of having caused all the ills from which we suffer, committing abominable acts of unthinkable cruelty on a routine basis, and striving ultimately to subvert or destroy everything we hold dear. == Examples == A conspiracy theory may take any matter as its subject, but certain subjects attract greater interest than others. Favored subjects include famous deaths and assassinations, morally dubious government activities, suppressed technologies, and "false flag" terrorism. Among the longest-standing and most widely recognized conspiracy theories are notions concerning the assassination of John F. Kennedy, the 1969 Apollo Moon landings, and the 9/11 terrorist attacks, as well as numerous theories pertaining to alleged plots for world domination by various groups, both real and imaginary. == Popularity == Conspiracy beliefs are widespread around the world. 
In rural Africa, common targets of conspiracy theorizing include societal elites, enemy tribes, and the Western world, with conspirators often alleged to enact their plans via sorcery or witchcraft; one common belief identifies modern technology as itself being a form of sorcery, created with the goal of harming or controlling the people. In China, one widely published conspiracy theory claims that a number of events including the rise of Hitler, the 1997 Asian financial crisis, and climate change were planned by the Rothschild family, which may have led to effects on discussions about China's currency policy. Conspiracy theories once limited to fringe audiences have become commonplace in mass media, contributing to conspiracism emerging as a cultural phenomenon in the United States of the late 20th and early 21st centuries. The general predisposition to believe conspiracy theories cuts across partisan and ideological lines. Conspiratorial thinking is correlated with antigovernmental orientations and a low sense of political efficacy, with conspiracy believers perceiving a governmental threat to individual rights and displaying a deep skepticism that who one votes for really matters. Conspiracy theories are often commonly believed, some even being held by the majority of the population. A broad cross-section of Americans today gives credence to at least some conspiracy theories. For instance, a study conducted in 2016 found that 10% of Americans think the chemtrail conspiracy theory is "completely true" and 20–30% think it is "somewhat true". This puts "the equivalent of 120 million Americans in the 'chemtrails are real' camp". Belief in conspiracy theories has therefore become a topic of interest for sociologists, psychologists and experts in folklore. Conspiracy theories are widely present on the Web in the form of blogs and YouTube videos, as well as on social media. 
Whether the Web has increased the prevalence of conspiracy theories or not is an open research question. The presence and representation of conspiracy theories in search engine results has been monitored and studied, showing significant variation across different topics, and a general absence of reputable, high-quality links in the results. One conspiracy theory that propagated through former US President Barack Obama's time in office claimed that he was born in Kenya, instead of Hawaii where he was actually born. Former governor of Arkansas and political opponent of Obama Mike Huckabee made headlines in 2011 when he, among other members of Republican leadership, continued to question Obama's citizenship status. == Types == A conspiracy theory can be local or international, focused on single events or covering multiple incidents and entire countries, regions and periods of history. According to Russell Muirhead and Nancy Rosenblum, historically, traditional conspiracism has entailed a "theory", but over time, "conspiracy" and "theory" have become decoupled, as modern conspiracism is often without any kind of theory behind it. === Walker's five kinds === Jesse Walker (2013) has identified five kinds of conspiracy theories: The "Enemy Outside" refers to theories based on figures alleged to be scheming against a community from without. The "Enemy Within" finds the conspirators lurking inside the nation, indistinguishable from ordinary citizens. The "Enemy Above" involves powerful people manipulating events for their own gain. The "Enemy Below" features the lower classes working to overturn the social order. The "Benevolent Conspiracies" are angelic forces that work behind the scenes to improve the world and help people. === Barkun's three types === Michael Barkun has identified three classifications of conspiracy theory: Event conspiracy theories. This refers to limited and well-defined events. 
Examples may include such conspiracy theories as those concerning the Kennedy assassination, 9/11, and the spread of AIDS. Systemic conspiracy theories. The conspiracy is believed to have broad goals, usually conceived as securing control of a country, a region, or even the entire world. The goals are sweeping, whilst the conspiratorial machinery is generally simple: a single, evil organization implements a plan to infiltrate and subvert existing institutions. This is a common scenario in conspiracy theories that focus on the alleged machinations of Jews, Freemasons, Communism, or the Catholic Church. Superconspiracy theories. For Barkun, such theories link multiple alleged conspiracies together hierarchically. At the summit is a distant but all-powerful evil force. His cited examples are the ideas of David Icke and Milton William Cooper. === Rothbard: shallow vs. deep === Murray Rothbard argues in favor of a model that contrasts "deep" conspiracy theories to "shallow" ones. According to Rothbard, a "shallow" theorist observes an event and asks Cui bono? ("Who benefits?"), jumping to the conclusion that a posited beneficiary is responsible for covertly influencing events. On the other hand, the "deep" conspiracy theorist begins with a hunch and then seeks out evidence. Rothbard describes this latter activity as a matter of confirming with certain facts one's initial paranoia. == Lack of evidence == Belief in conspiracy theories is generally based not on evidence but on the faith of the believer. Noam Chomsky contrasts conspiracy theory to institutional analysis, which focuses mainly on the public, long-term behavior of publicly known institutions, as recorded in, for example, scholarly documents or mainstream media reports. Conspiracy theory conversely posits the existence of secretive coalitions of individuals and speculates on their alleged activities. Belief in conspiracy theories is associated with biases in reasoning, such as the conjunction fallacy. 
Clare Birchall at King's College London describes conspiracy theory as a "form of popular knowledge or interpretation". The use of the word 'knowledge' here suggests ways in which conspiracy theory may be considered in relation to legitimate modes of knowing. The relationship between legitimate and illegitimate knowledge, Birchall claims, is closer than common dismissals of conspiracy theory contend. Theories involving multiple conspirators that are proven to be correct, such as the Watergate scandal, are usually referred to as investigative journalism or historical analysis rather than conspiracy theory. Bjerg (2016) writes: "the way we normally use the term conspiracy theory excludes instances where the theory has been generally accepted as true. The Watergate scandal serves as the standard reference." By contrast, the term "Watergate conspiracy theory" is used to refer to a variety of hypotheses in which those convicted in the conspiracy were in fact the victims of a deeper conspiracy. There are also attempts to analyze the theory of conspiracy theories (conspiracy theory theory) to ensure that the term "conspiracy theory" is used to refer to narratives that have been debunked by experts, rather than as a generalized dismissal. == Rhetoric == Conspiracy theory rhetoric exploits several important cognitive biases, including proportionality bias, attribution bias, and confirmation bias. Their arguments often take the form of asking reasonable questions, but without providing an answer based on strong evidence. Conspiracy theories are most successful when proponents can gather followers from the general public, such as in politics, religion and journalism. These proponents may not necessarily believe the conspiracy theory; instead, they may just use it in an attempt to gain public approval. Conspiratorial claims can act as a successful rhetorical strategy to convince a portion of the public via appeal to emotion. 
Conspiracy theories typically justify themselves by focusing on gaps or ambiguities in knowledge, and then arguing that the true explanation for this must be a conspiracy. In contrast, any evidence that directly supports their claims is generally of low quality. For example, conspiracy theories are often dependent on eyewitness testimony, despite its unreliability, while disregarding objective analyses of the evidence. Conspiracy theories cannot be falsified and are reinforced by fallacious arguments. In particular, the logical fallacy of circular reasoning is used by conspiracy theorists: both evidence against the conspiracy and an absence of evidence for it are re-interpreted as evidence of its truth, whereby the conspiracy becomes a matter of faith rather than something that can be proved or disproved. The epistemic strategy of conspiracy theories has been called "cascade logic": each time new evidence becomes available, a conspiracy theory is able to dismiss it by claiming that even more people must be part of the cover-up. Any information that contradicts the conspiracy theory is suggested to be disinformation by the alleged conspiracy. Similarly, the continued lack of evidence directly supporting conspiracist claims is portrayed as confirming the existence of a conspiracy of silence; the fact that other people have not found or exposed any conspiracy is taken as evidence that those people are part of the plot, rather than considering that it may be because no conspiracy exists. This strategy lets conspiracy theories insulate themselves from neutral analyses of the evidence, and makes them resistant to questioning or correction, which is called "epistemic self-insulation". Conspiracy theorists often take advantage of false balance in the media. 
They may claim to be presenting a legitimate alternative viewpoint that deserves equal time to argue its case; for example, this strategy has been used by the Teach the Controversy campaign to promote intelligent design, which often claims that there is a conspiracy of scientists suppressing their views. If they successfully find a platform to present their views in a debate format, they focus on using rhetorical ad hominems and attacking perceived flaws in the mainstream account, while avoiding any discussion of the shortcomings in their own position. The typical approach of conspiracy theories is to challenge any action or statement from authorities, using even the most tenuous justifications. Responses are then assessed using a double standard, where failing to provide an immediate response to the satisfaction of the conspiracy theorist will be claimed to prove a conspiracy. Any minor errors in the response are heavily emphasized, while deficiencies in the arguments of other proponents are generally excused. In science, conspiracists may suggest that a scientific theory can be disproven by a single perceived deficiency, even though such events are extremely rare. In addition, both disregarding the claims and attempting to address them will be interpreted as proof of a conspiracy. Other conspiracist arguments may not be scientific; for example, in response to the IPCC Second Assessment Report in 1996, much of the opposition centered on promoting a procedural objection to the report's creation. Specifically, it was claimed that part of the procedure reflected a conspiracy to silence dissenters, which served as motivation for opponents of the report and successfully redirected a significant amount of the public discussion away from the science. == Consequences == Historically, conspiracy theories have been closely linked to prejudice, witch hunts, wars, and genocides. 
They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh, Anders Breivik and Brenton Tarrant, as well as by governments such as Nazi Germany and the Soviet Union. AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS, while belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine, at a time when 3 million people in the country were suffering from hunger. Conspiracy theories are a significant obstacle to improvements in public health. People who believe in health-related conspiracy theories are less likely to follow medical advice, and more likely to use alternative medicine instead. Conspiratorial anti-vaccination beliefs, such as conspiracy theories about pharmaceutical companies, can result in reduced vaccination rates and have been linked to outbreaks of vaccine-preventable diseases. Health-related conspiracy theories often inspire resistance to water fluoridation, and contributed to the impact of the Lancet MMR autism fraud. Conspiracy theories are a fundamental component of a wide range of radicalized and extremist groups, where they may play an important role in reinforcing the ideology and psychology of their members as well as further radicalizing their beliefs. These conspiracy theories often share common themes, even among groups that would otherwise be fundamentally opposed, such as the antisemitic conspiracy theories found among political extremists on both the far right and far left. More generally, belief in conspiracy theories is associated with holding extreme and uncompromising viewpoints, and may help people in maintaining those viewpoints. 
While conspiracy theories are not always present in extremist groups, and do not always lead to violence when they are, they can make the group more extreme, provide an enemy to direct hatred towards, and isolate members from the rest of society. Conspiracy theories are most likely to inspire violence when they call for urgent action, appeal to prejudices, or demonize and scapegoat enemies. Conspiracy theorizing in the workplace can also have economic consequences. For example, it leads to lower job satisfaction and lower commitment, resulting in workers being more likely to leave their jobs. Comparisons have also been made with the effects of workplace rumors, which share some characteristics with conspiracy theories and result in both decreased productivity and increased stress. Subsequent effects on managers include reduced profits, reduced trust from employees, and damage to the company's image. Conspiracy theories can divert attention from important social, political, and scientific issues. In addition, they have been used to discredit scientific evidence to the general public or in a legal context. Conspiratorial strategies also share characteristics with those used by lawyers who are attempting to discredit expert testimony, such as claiming that the experts have ulterior motives in testifying, or attempting to find someone who will provide statements to imply that expert opinion is more divided than it actually is. It is possible that conspiracy theories may also produce some compensatory benefits to society in certain situations. For example, they may help people identify governmental deceptions, particularly in repressive societies, and encourage government transparency. However, real conspiracies are normally revealed by people working within the system, such as whistleblowers and journalists, and most of the effort spent by conspiracy theorists is inherently misdirected. 
The most dangerous conspiracy theories are likely to be those that incite violence, scapegoat disadvantaged groups, or spread misinformation about important societal issues. == Interventions == === Target audience === Strategies to address conspiracy theories have been divided into two categories based on whether the target audience is the conspiracy theorists or the general public. These strategies have been described as reducing either the supply or the demand for conspiracy theories. Both approaches can be used at the same time, although there may be issues of limited resources, or if arguments are used which may appeal to one audience at the expense of the other. Brief scientific literacy interventions, particularly those focusing on critical thinking skills, can effectively undermine conspiracy beliefs and related behaviors. Research led by Penn State scholars, published in the Journal of Consumer Research, found that enhancing scientific knowledge and reasoning through short interventions, such as videos explaining concepts like correlation and causation, reduces the endorsement of conspiracy theories. These interventions were most effective against conspiracy theories based on faulty reasoning and were successful even among groups prone to conspiracy beliefs. The studies, involving over 2,700 participants, highlight the importance of educational interventions in mitigating conspiracy beliefs, especially when timed to influence critical decision-making. ==== General public ==== People who feel empowered are more resistant to conspiracy theories. Methods to promote empowerment include encouraging people to use analytical thinking, priming people to think of situations where they are in control, and ensuring that decisions by society and government are seen to follow procedural fairness (the use of fair decision-making procedures). 
Methods of refutation which have shown effectiveness in various circumstances include: providing facts that demonstrate the conspiracy theory is false, attempting to discredit the source, explaining how the logic is invalid or misleading, and providing links to fact-checking websites. It can also be effective to use these strategies in advance, informing people that they could encounter misleading information in the future, and why the information should be rejected (also called inoculation or prebunking). While it has been suggested that discussing conspiracy theories can raise their profile and make them seem more legitimate to the public, the discussion can put people on guard instead as long as it is sufficiently persuasive. Other approaches to reduce the appeal of conspiracy theories in general among the public may be based in the emotional and social nature of conspiratorial beliefs. For example, interventions that promote analytical thinking in the general public are likely to be effective. Another approach is to intervene in ways that decrease negative emotions, and specifically to improve feelings of personal hope and empowerment. ==== Conspiracy theorists ==== It is much more difficult to convince people who already believe in conspiracy theories. Conspiracist belief systems are not based on external evidence, but instead use circular logic where every belief is supported by other conspiracist beliefs. In addition, conspiracy theories have a "self-sealing" nature, in which the types of arguments used to support them make them resistant to questioning from others. Characteristics of successful strategies for reaching conspiracy theorists have been divided into several broad categories: 1) Arguments can be presented by "trusted messengers", such as people who were formerly members of an extremist group. 
2) Since conspiracy theorists think of themselves as people who value critical thinking, this can be affirmed and then redirected to encourage being more critical when analyzing the conspiracy theory. 3) Approaches demonstrate empathy, and are based on building understanding together, which is supported by modeling open-mindedness in order to encourage the conspiracy theorists to do likewise. 4) The conspiracy theories are not attacked with ridicule or aggressive deconstruction, and interactions are not treated like an argument to be won; this approach can work with the general public, but among conspiracy theorists it may simply be rejected. Interventions that reduce feelings of uncertainty, anxiety, or powerlessness result in a reduction in conspiracy beliefs. Other possible strategies to mitigate the effect of conspiracy theories include education, media literacy, and increasing governmental openness and transparency. Due to the relationship between conspiracy theories and political extremism, the academic literature on deradicalization is also important. One approach describes conspiracy theories as resulting from a "crippled epistemology", in which a person encounters or accepts very few relevant sources of information. A conspiracy theory is more likely to appear justified to people with a limited "informational environment" who only encounter misleading information. These people may be "epistemologically isolated" in self-enclosed networks. From the perspective of people within these networks, disconnected from the information available to the rest of society, believing in conspiracy theories may appear to be justified. In these cases, the solution would be to break the group's informational isolation. === Reducing transmission === Public exposure to conspiracy theories can be reduced by interventions that reduce their ability to spread, such as by encouraging people to reflect before sharing a news story. 
Researchers Carlos Diaz Ruiz and Tomas Nilsson have proposed technical and rhetorical interventions to counter the spread of conspiracy theories on social media. === Government policies === The primary defense against conspiracy theories is to maintain an open society, in which many sources of reliable information are available, and government sources are known to be credible rather than propaganda. Additionally, independent nongovernmental organizations are able to correct misinformation without requiring people to trust the government. The absence of civil rights and civil liberties reduces the number of information sources available to the population, which may lead people to support conspiracy theories. Since the credibility of conspiracy theories can be increased if governments act dishonestly or otherwise engage in objectionable actions, avoiding such actions is also a relevant strategy. Joseph Pierre has said that mistrust in authoritative institutions is the core component underlying many conspiracy theories and that this mistrust creates an epistemic vacuum and makes individuals searching for answers vulnerable to misinformation. Therefore, one possible solution is offering consumers a seat at the table to mend their mistrust in institutions. Regarding the challenges of this approach, Pierre has said, "The challenge with acknowledging areas of uncertainty within a public sphere is that doing so can be weaponized to reinforce a post-truth view of the world in which everything is debatable, and any counter-position is just as valid. Although I like to think of myself as a middle of the road kind of individual, it is important to keep in mind that the truth does not always lie in the middle of a debate, whether we are talking about climate change, vaccines, or antipsychotic medications." 
Researchers have recommended that public policies should take into account the possibility of conspiracy theories relating to any policy or policy area, and prepare to combat them in advance. Conspiracy theories have suddenly arisen in the context of policy issues as disparate as land-use laws and bicycle-sharing programs. In the case of public communications by government officials, factors that improve the effectiveness of communication include using clear and simple messages, and using messengers who are trusted by the target population. Government information about conspiracy theories is more likely to be believed if the messenger is perceived as being part of someone's in-group. Official representatives may be more effective if they share characteristics with the target groups, such as ethnicity. In addition, when the government communicates with citizens to combat conspiracy theories, online methods are more efficient compared to other methods such as print publications. This also promotes transparency, can improve a message's perceived trustworthiness, and is more effective at reaching underrepresented demographics. However, as of 2019, many governmental websites do not take full advantage of the available information-sharing opportunities. Similarly, social media accounts need to be used effectively in order to achieve meaningful communication with the public, such as by responding to requests that citizens send to those accounts. Other steps include adapting messages to the communication styles used on the social media platform in question, and promoting a culture of openness. Since mixed messaging can support conspiracy theories, it is also important to avoid conflicting accounts, such as by ensuring the accuracy of messages on the social media accounts of individual members of the organization. === Public health campaigns === Successful methods for dispelling conspiracy theories have been studied in the context of public health campaigns. 
A key characteristic of communication strategies to address medical conspiracy theories is the use of techniques that rely less on emotional appeals. It is more effective to use methods that encourage people to process information rationally. The use of visual aids is also an essential part of these strategies. Since conspiracy theories are based on intuitive thinking, and visual information processing relies on intuition, visual aids are able to compete directly for the public's attention. In public health campaigns, information retention by the public is highest for loss-framed messages that include more extreme outcomes. However, excessively appealing to catastrophic scenarios (e.g. low vaccination rates causing an epidemic) may provoke anxiety, which is associated with conspiracism and could increase belief in conspiracy theories instead. Scare tactics have sometimes had mixed results, but are generally considered ineffective. An example of this is the use of images that showcase disturbing health outcomes, such as the impact of smoking on dental health. One possible explanation is that information processed via the fear response is typically not evaluated rationally, which may prevent the message from being linked to the desired behaviors. A particularly important technique is the use of focus groups to understand exactly what people believe, and the reasons they give for those beliefs. This allows messaging to focus on the specific concerns that people identify, and on topics that are easily misinterpreted by the public, since these are factors which conspiracy theories can take advantage of. In addition, discussions with focus groups and observations of the group dynamics can indicate which anti-conspiracist ideas are most likely to spread. 
Interventions that address medical conspiracy theories by reducing powerlessness include emphasizing the principle of informed consent, giving patients all the relevant information without imposing decisions on them, to ensure that they have a sense of control. Improving access to healthcare also reduces medical conspiracism. However, doing so by political efforts can also fuel additional conspiracy theories, which occurred with the Affordable Care Act (Obamacare) in the United States. Another successful strategy is to require people to watch a short video when they fulfil requirements such as registration for school or a driver's license, which has been demonstrated to improve vaccination rates and signups for organ donation. Another approach is based on viewing conspiracy theories as narratives which express personal and cultural values, making them less susceptible to straightforward factual corrections, and more effectively addressed by counter-narratives. Counter-narratives can be more engaging and memorable than simple corrections, and can be adapted to the specific values held by individuals and cultures. These narratives may depict personal experiences, or alternatively they can be cultural narratives. In the context of vaccination, examples of cultural narratives include stories about scientific breakthroughs, about the world before vaccinations, or about heroic and altruistic researchers. The themes to be addressed would be those that could be exploited by conspiracy theories to increase vaccine hesitancy, such as perceptions of vaccine risk, lack of patient empowerment, and lack of trust in medical authorities. === Backfire effects === It has been suggested that directly countering misinformation can be counterproductive. For example, since conspiracy theories can reinterpret disconfirming information as part of their narrative, refuting a claim can result in accidentally reinforcing it, which is referred to as a "backfire effect". 
In addition, publishing criticism of conspiracy theories can result in legitimizing them. In this context, possible interventions include carefully selecting which conspiracy theories to refute, requesting additional analyses from independent observers, and introducing cognitive diversity into conspiratorial communities by undermining their poor epistemology. Any legitimization effect might also be reduced by responding to more conspiracy theories rather than fewer. There are psychological mechanisms by which backfire effects could potentially occur, but the evidence on this topic is mixed, and backfire effects are very rare in practice. A 2020 review of the scientific literature on backfire effects found that there have been widespread failures to replicate their existence, even under conditions that would be theoretically favorable to observing them. Due to the lack of reproducibility, as of 2020 most researchers believe that backfire effects are either unlikely to occur on the broader population level, or they only occur in very specific circumstances, or they do not exist. Brendan Nyhan, one of the researchers who initially proposed the occurrence of backfire effects, wrote in 2021 that the persistence of misinformation is most likely due to other factors. In general, people do reject conspiracy theories when they learn about their contradictions and lack of evidence. For most people, corrections and fact-checking are very unlikely to have a negative impact, and there is no specific group of people in which backfire effects have been consistently observed. Presenting people with factual corrections, or highlighting the logical contradictions in conspiracy theories, has been demonstrated to have a positive effect in many circumstances. For example, this has been studied in the case of informing believers in 9/11 conspiracy theories about statements by actual experts and witnesses. 
One possibility is that criticism is most likely to backfire if it challenges someone's worldview or identity. This suggests that an effective approach may be to provide criticism while avoiding such challenges. == Psychology == The widespread belief in conspiracy theories has become a topic of interest for sociologists, psychologists, and experts in folklore since at least the 1960s, when a number of conspiracy theories arose regarding the assassination of U.S. President John F. Kennedy. Sociologist Türkay Salim Nefes underlines the political nature of conspiracy theories. He suggests that one of the most important characteristics of these accounts is their attempt to unveil the "real but hidden" power relations in social groups. The term "conspiracism" was popularized by academic Frank P. Mintz in the 1980s. According to Mintz, conspiracism denotes "belief in the primacy of conspiracies in the unfolding of history": 4  Conspiracism serves the needs of diverse political and social groups in America and elsewhere. It identifies elites, blames them for economic and social catastrophes, and assumes that things will be better once popular action can remove them from positions of power. As such, conspiracy theories do not typify a particular epoch or ideology.: 199  Research suggests, on a psychological level, conspiracist ideation—belief in conspiracy theories—can be harmful or pathological, and is highly correlated with psychological projection, as well as with paranoia, which is predicted by the degree of a person's Machiavellianism. The propensity to believe in conspiracy theories is strongly associated with the mental health disorder of schizotypy. Conspiracy theories once limited to fringe audiences have become commonplace in mass media, emerging as a cultural phenomenon of the late 20th and early 21st centuries. 
Exposure to conspiracy theories in news media and popular entertainment increases receptiveness to conspiratorial ideas, and has also increased the social acceptability of fringe beliefs. Conspiracy theories often use complicated and detailed arguments, including ones that appear analytical or scientific. However, belief in conspiracy theories is primarily driven by emotion. One of the most widely confirmed facts about conspiracy theories is that belief in a single conspiracy theory is often associated with belief in other conspiracy theories. This even applies when the conspiracy theories directly contradict each other—e.g., believing that Osama bin Laden was already dead before his compound in Pakistan was attacked makes the same person more likely to believe that he is still alive. One conclusion from this finding is that the content of a conspiracist belief is less important than the idea of a coverup by the authorities. Analytical thinking aids in reducing belief in conspiracy theories, in part because it emphasizes rational and critical cognition. Some psychological scientists assert that explanations related to conspiracy theories can be, and often are, "internally consistent" with strong beliefs previously held prior to the event that sparked the belief in a conspiracy. People who believe in conspiracy theories tend to believe in other unsubstantiated claims, including pseudoscience and paranormal phenomena. === Attractions === Psychological motives for believing in conspiracy theories can be categorized as epistemic, existential, or social. These motives are particularly acute in vulnerable and disadvantaged populations. However, it does not appear that the beliefs help to address these motives; in fact, they may be self-defeating, acting to make the situation worse instead. 
For example, while conspiratorial beliefs can result from a perceived sense of powerlessness, exposure to conspiracy theories immediately suppresses personal feelings of autonomy and control. Furthermore, they also make people less likely to take actions that could improve their circumstances. This is additionally supported by the fact that conspiracy theories have a number of disadvantageous attributes. For example, they promote a hostile and distrustful view of other people and groups allegedly acting based on antisocial and cynical motivations. This is expected to lead to increased social alienation and anomie and reduced social capital. Similarly, they depict the public as ignorant and powerless against the alleged conspirators, with important aspects of society determined by malevolent forces, a viewpoint that is likely to be disempowering. Each person may endorse conspiracy theories for one of many different reasons. The most consistently demonstrated characteristics of people who find conspiracy theories appealing are a feeling of alienation, unhappiness or dissatisfaction with their situation, an unconventional worldview, and a sense of disempowerment. While various aspects of personality affect susceptibility to conspiracy theories, none of the Big Five personality traits are associated with conspiracy beliefs. The political scientist Michael Barkun, discussing the usage of "conspiracy theory" in contemporary American culture, holds that this term is used for a belief that explains an event as the result of a secret plot by exceptionally powerful and cunning conspirators to achieve a malevolent end. According to Barkun, the appeal of conspiracism is threefold: First, conspiracy theories claim to explain what institutional analysis cannot. They appear to make sense out of a world that is otherwise confusing. Second, they do so in an appealingly simple way, by dividing the world sharply between the forces of light, and the forces of darkness. 
They trace all evil back to a single source, the conspirators and their agents. Third, conspiracy theories are often presented as special, secret knowledge unknown or unappreciated by others. For conspiracy theorists, the masses are a brainwashed herd, while the conspiracy theorists in the know can congratulate themselves on penetrating the plotters' deceptions. This third point is supported by the research of Roland Imhoff, professor of social psychology at the Johannes Gutenberg University Mainz. His research suggests that the smaller the minority believing in a specific theory, the more attractive it is to conspiracy theorists. Humanistic psychologists argue that even if a posited cabal behind an alleged conspiracy is almost always perceived as hostile, there often remains an element of reassurance for theorists. This is because it is a consolation to imagine that difficulties in human affairs are created by humans and so remain within human control. If a cabal can be implicated, there may be a hope of breaking its power or of joining it. Belief in the power of a cabal is an implicit assertion of human dignity—an unconscious affirmation that man is responsible for his own destiny. People formulate conspiracy theories to explain, for example, power relations in social groups and the perceived existence of evil forces. Proposed psychological origins of conspiracy theorising include projection; the personal need to explain "a significant event [with] a significant cause;" and the product of various kinds and stages of thought disorder, such as paranoid disposition, ranging in severity to diagnosable mental illnesses. Some people prefer socio-political explanations over the insecurity of encountering random, unpredictable, or otherwise inexplicable events.
According to Berlet and Lyons, "Conspiracism is a particular narrative form of scapegoating that frames demonized enemies as part of a vast insidious plot against the common good, while it valorizes the scapegoater as a hero for sounding the alarm". === Causes === Some psychologists believe that a search for meaning is common in conspiracism. Once a conspiracy belief has taken hold, confirmation bias and avoidance of cognitive dissonance may reinforce it. When a conspiracy theory has become embedded within a social group, communal reinforcement may also play a part. Inquiry into possible motives behind the acceptance of irrational conspiracy theories has linked these beliefs to distress resulting from an event that occurred, such as the events of 9/11. Additional research suggests that "delusional ideation" is the trait most likely to indicate a stronger belief in conspiracy theories. Research also shows an increased attachment to these irrational beliefs leads to a decreased desire for civic engagement. Belief in conspiracy theories is correlated with low intelligence, lower analytical thinking, anxiety disorders, paranoia, and authoritarian beliefs. Professor Quassim Cassam argues that conspiracy theorists hold their beliefs due to flaws in their thinking and, more precisely, their intellectual character. He cites philosopher Linda Trinkaus Zagzebski and her book Virtues of the Mind in outlining intellectual virtues (such as humility, caution, and carefulness) and intellectual vices (such as gullibility, carelessness, and closed-mindedness). Whereas intellectual virtues help in reaching sound conclusions, intellectual vices "impede effective and responsible inquiry", meaning that those prone to believing in conspiracy theories possess certain vices while lacking necessary virtues. Some researchers have suggested that conspiracy theories could be partially caused by the human brain's mechanisms for detecting dangerous coalitions.
Such a mechanism could have been helpful in the small-scale environment humanity evolved in, but is mismatched to a modern, complex society and can thus "misfire", perceiving conspiracies where none exist. ==== Projection ==== Some historians have argued that psychological projection is prevalent amongst conspiracy theorists. According to the argument, this projection is manifested in the form of attributing undesirable characteristics of the self to the conspirators. Historian Richard Hofstadter stated that: This enemy seems on many counts a projection of the self; both the ideal and the unacceptable aspects of the self are attributed to him. A fundamental paradox of the paranoid style is the imitation of the enemy. The enemy, for example, may be the cosmopolitan intellectual, but the paranoid will outdo him in the apparatus of scholarship, even of pedantry. ... The Ku Klux Klan imitated Catholicism to the point of donning priestly vestments, developing an elaborate ritual and an equally elaborate hierarchy. The John Birch Society emulates Communist cells and quasi-secret operation through "front" groups, and preaches a ruthless prosecution of the ideological war along lines very similar to those it finds in the Communist enemy. Spokesmen of the various fundamentalist anti-Communist "crusades" openly express their admiration for the dedication, discipline, and strategic ingenuity the Communist cause calls forth. Hofstadter also noted that "sexual freedom" is a vice frequently attributed to the conspiracist's target group, noting that "very often the fantasies of true believers reveal strong sadomasochistic outlets, vividly expressed, for example, in the delight of anti-Masons with the cruelty of Masonic punishments". ==== Physiology ==== Marcel Danesi suggests that people who believe conspiracy theories have difficulty rethinking situations. He argues that exposure to those theories causes neural pathways to become more rigid and less subject to change.
Initial susceptibility to believing these theories' lies, dehumanizing language, and metaphors leads to the acceptance of larger and more extensive theories because the hardened neural pathways are already present. Repetition of the "facts" of conspiracy theories and their connected lies simply reinforces the rigidity of those pathways. Thus, conspiracy theories and dehumanizing lies are not mere hyperbole; they can actually change the way people think: Unfortunately, research into this brain wiring also shows that once people begin to believe lies, they are unlikely to change their minds even when confronted with evidence that contradicts their beliefs. It is a form of brainwashing. Once the brain has carved out a well-worn path of believing deceit, it is even harder to step out of that path – which is how fanatics are born. Instead, these people will seek out information that confirms their beliefs, avoid anything that is in conflict with them, or even turn the contrasting information on its head, so as to make it fit their beliefs. People with strong convictions will have a hard time changing their minds, given how embedded a lie becomes in the mind. In fact, there are scientists and scholars still studying the best tools and tricks to combat lies with some combination of brain training and linguistic awareness. == Sociology == In addition to psychological factors such as conspiracist ideation, sociological factors also help account for who believes in which conspiracy theories. Such theories tend to get more traction among election losers in society, for example, and the emphasis on conspiracy theories by elites and leaders tends to increase belief among followers with higher levels of conspiracy thinking. Christopher Hitchens described conspiracy theories as the "exhaust fumes of democracy": the unavoidable result of a large amount of information circulating among a large number of people.
Conspiracy theories may be emotionally satisfying, as they assign blame to a group to which the theorist does not belong and, thus, absolve the theorist of moral or political responsibility in society. Likewise, Roger Cohen writing for The New York Times has said that, "captive minds; ... resort to conspiracy theory because it is the ultimate refuge of the powerless. If you cannot change your own life, it must be that some greater force controls the world." Sociological historian Holger Herwig found in studying German explanations for the origins of World War I, "Those events that are most important are hardest to understand because they attract the greatest attention from myth makers and charlatans." Justin Fox of Time magazine argues that Wall Street traders are among the most conspiracy-minded group of people, and ascribes this to the reality of some financial market conspiracies, and to the ability of conspiracy theories to provide necessary orientation in the market's day-to-day movements. === Influence of critical theory === Bruno Latour notes that the language and intellectual tactics of critical theory have been appropriated by those he describes as conspiracy theorists, including climate-change denialists and the 9/11 Truth movement: "Maybe I am taking conspiracy theories too seriously, but I am worried to detect, in those mad mixtures of knee-jerk disbelief, punctilious demands for proofs, and free use of powerful explanation from the social neverland, many of the weapons of social critique." === Fusion paranoia === Michael Kelly, a Washington Post journalist and critic of anti-war movements on both the left and right, coined the term "fusion paranoia" to refer to a political convergence of left-wing and right-wing activists around anti-war issues and civil liberties, which he said were motivated by a shared belief in conspiracism or shared anti-government views. 
Barkun has adopted this term to refer to how the synthesis of paranoid conspiracy theories, which were once limited to American fringe audiences, has given them mass appeal and enabled them to become commonplace in mass media, thereby inaugurating an unrivaled period of people actively preparing for apocalyptic or millenarian scenarios in the United States of the late 20th and early 21st centuries. Barkun notes the occurrence of lone-wolf conflicts with law enforcement acting as a proxy for threatening the established political powers. == Viability == As evidence that undermines an alleged conspiracy grows, the number of alleged conspirators also grows in the minds of conspiracy theorists. This is because of an assumption that the alleged conspirators often have competing interests. For example, if Republican President George W. Bush is allegedly responsible for the 9/11 terrorist attacks, and the Democratic party did not pursue exposing this alleged plot, that must mean that both the Democratic and Republican parties are conspirators in the alleged plot. It also assumes that the alleged conspirators are so competent that they can fool the entire world, but so incompetent that even the unskilled conspiracy theorists can find mistakes they make that prove the fraud. At some point, the number of alleged conspirators, combined with the contradictions within the alleged conspirators' interests and competence, becomes so great that maintaining the theory becomes an obvious exercise in absurdity. The physicist David Robert Grimes estimated the time it would take for a conspiracy to be exposed based on the number of people involved. His calculations used data from the PRISM surveillance program, the Tuskegee syphilis experiment, and the FBI forensic scandal. 
Grimes estimated that: A Moon landing hoax would require the involvement of 411,000 people and would be exposed within 3.68 years; Climate-change fraud would require a minimum of 29,083 people (published climate scientists only) and would be exposed within 26.77 years, or up to 405,000 people, in which case it would be exposed within 3.70 years; A vaccination conspiracy would require a minimum of 22,000 people (without drug companies) and would be exposed within at least 3.15 years and at most 34.78 years depending on the number involved; A conspiracy to suppress a cure for cancer would require 714,000 people and would be exposed within 3.17 years. Grimes's study did not consider exposure by sources outside of the alleged conspiracy. It only considered exposure from within the alleged conspiracy through whistleblowers or through incompetence. Subsequent comments on the PubPeer website point out that these calculations must exclude successful conspiracies since, by definition, we don't know about them, and are wrong by an order of magnitude about Bletchley Park, which remained a secret far longer than Grimes' calculations predicted. == Terminology == The term "truth seeker" is adopted by some conspiracy theorists when describing themselves on social media. Conspiracy theorists are often referred to derogatorily as "cookers" in Australia. The term "cooker" is also loosely associated with the far right. == Politics == The philosopher Karl Popper described the central problem of conspiracy theories as a form of fundamental attribution error, where every event is generally perceived as being intentional and planned, greatly underestimating the effects of randomness and unintended consequences. In his book The Open Society and Its Enemies, he used the term "the conspiracy theory of society" to denote the idea that social phenomena such as "war, unemployment, poverty, shortages ... [are] the result of direct design by some powerful individuals and groups". 
Popper argued that totalitarianism was founded on conspiracy theories which drew on imaginary plots which were driven by paranoid scenarios predicated on tribalism, chauvinism, or racism. He also noted that conspirators very rarely achieved their goal. Historically, real conspiracies have usually had little effect on history and have had unforeseen consequences for the conspirators, in contrast to conspiracy theories, which often posit grand, sinister organizations or world-changing events, the evidence for which has been erased or obscured. As described by Bruce Cumings, history is instead "moved by the broad forces and large structures of human collectivities". === Arab world === Conspiracy theories are a prevalent feature of Arab culture and politics. Variants include conspiracies involving colonialism, Zionism, superpowers, oil, and the war on terrorism, which is often referred to in Arab media as a "war against Islam". For example, The Protocols of the Elders of Zion, an infamous hoax document purporting to be a Jewish plan for world domination, is commonly read and promoted in the Muslim world. Roger Cohen has suggested that the popularity of conspiracy theories in the Arab world is "the ultimate refuge of the powerless". Al-Mumin Said has noted the danger of such theories, for they "keep us not only from the truth but also from confronting our faults and problems". Osama bin Laden and Ayman al-Zawahiri used conspiracy theories about the United States to gain support for al-Qaeda in the Arab world, and as rhetoric to distinguish themselves from similar groups, although they may not have believed the conspiratorial claims themselves. === Turkey === Conspiracy theories are a prevalent feature of culture and politics in Turkey. Conspiracism is an important phenomenon in understanding Turkish politics. 
This is explained by a desire to "make up for our lost Ottoman grandeur", the humiliation of perceiving Turkey as part of "the malfunctioning half" of the world, and a "low level of media literacy among the Turkish population." There is a wide variety of conspiracy theories, including the Judeo-Masonic conspiracy theory, the international Jewish conspiracy theory, and the war against Islam conspiracy theory. For example, Islamists, dissatisfied with the modernist and secularist reforms that took place throughout the history of the Ottoman Empire and the Turkish Republic, have put forward many conspiracy theories to defame the Treaty of Lausanne, an important peace treaty for the country, and the republic's founder Kemal Atatürk. Another example is the Sèvres syndrome, a reference to the Treaty of Sèvres of 1920, a popular belief in Turkey that dangerous internal and external enemies, especially the West, are "conspiring to weaken and carve up the Turkish Republic". === United States === The historian Richard Hofstadter addressed the role of paranoia and conspiracism throughout U.S. history in his 1964 essay "The Paranoid Style in American Politics". Bernard Bailyn's classic The Ideological Origins of the American Revolution (1967) notes that a similar phenomenon could be found in North America during the time preceding the American Revolution. Conspiracism labels people's attitudes as well as the type of conspiracy theory that is more global and historical in proportion. Harry G. West and others have noted that while conspiracy theorists may often be dismissed as a fringe minority, certain evidence suggests that a broad cross-section of the U.S. public believes in conspiracy theories. West also compares those theories to hypernationalism and religious fundamentalism. Theologian Robert Jewett and philosopher John Shelton Lawrence attribute the enduring popularity of conspiracy theories in the U.S. to the Cold War, McCarthyism, and counterculture rejection of authority.
They state that among both the left-wing and right-wing, there remains a willingness to use real events, such as Soviet plots, inconsistencies in the Warren Report, and the 9/11 attacks, to support the existence of unverified and ongoing large-scale conspiracies. In his studies of "American political demonology", historian Michael Paul Rogin likewise analyzed this paranoid style of politics that has occurred throughout American history. Conspiracy theories frequently identify an imaginary subversive group that is supposedly attacking the nation, and call for the government and allied forces to engage in harsh extra-legal repression of the threatening subversives. Rogin cites examples from the Red Scares of 1919 to McCarthy's anti-communist campaign in the 1950s and, more recently, fears of immigrant hordes invading the US. Unlike Hofstadter, Rogin saw these "countersubversive" fears as frequently coming from those in power and dominant groups instead of from the dispossessed. Unlike Robert Jewett, Rogin blamed not the counterculture but America's dominant culture of liberal individualism, and the fears it stimulated, for the periodic eruption of irrational conspiracy theories. The Watergate scandal has also been used to bestow legitimacy on other conspiracy theories, with Richard Nixon himself commenting that it served as a "Rorschach ink blot" which invited others to fill in the underlying pattern. Historian Kathryn S. Olmsted cites three reasons why Americans are prone to believing in government conspiracy theories: Genuine government overreach and secrecy during the Cold War, such as Watergate, the Tuskegee syphilis experiment, Project MKUltra, and the CIA's assassination attempts on Fidel Castro in collaboration with mobsters. Precedent set by official government-sanctioned conspiracy theories for propaganda, such as claims of German infiltration of the U.S. during World War II or the debunked claim that Saddam Hussein played a role in the 9/11 attacks.
Distrust fostered by the government's spying on and harassment of dissenters, such as the Sedition Act of 1918, COINTELPRO, and as part of various Red Scares. Alex Jones referenced numerous conspiracy theories to convince his supporters to endorse Ron Paul over Mitt Romney in the 2012 Republican Party presidential primaries and Donald Trump over Hillary Clinton in the 2016 United States presidential election. Into the 2020s, the QAnon conspiracy theory alleges that Trump is fighting against a deep-state cabal of child sex-abusing and Satan-worshipping Democrats. == See also == Apophenia – Tendency to perceive connections between unrelated things Big lie – Propaganda technique Conspiracy fiction – Subgenre of thriller fiction Disinformation – Deliberately misleading information Fake news – False or misleading information presented as real Fringe theory – Idea which departs from accepted scholarship in the field Furtive fallacy – Informal fallacy of emphasis List of conspiracy theories Philosophy of conspiracy theories – Branch of academic study Propaganda – Communication used to influence opinion Pseudohistory – Pseudoscholarship that attempts to distort historical record == External links == Conspiracy Theories, Internet Encyclopedia of Philosophy
Wikipedia/Conspiracy_theory
Cartographic propaganda is a map created with the goal of achieving a result similar to that of traditional propaganda. The map can be outright falsified, or created using subjectivity with the goal of persuasion. The idea that maps are subjective is not new; cartographers refer to maps as a human-subjective product and some view cartography as an "industry, which packages and markets spatial knowledge" or as a communicative device distorted by human subjectivity. However, cartographic propaganda is widely successful because maps are often presented as a miniature model of reality, and it is a rare occurrence that a map is referred to as a distorted model, which sometimes can "lie" and contain items that are completely different from reality. Because the word propaganda has become a pejorative, it has been suggested that mapmaking of this kind should be described as "persuasive cartography", defined as maps intended primarily to influence opinions or beliefs – to send a message – rather than to communicate geographic information. == History == The T-O map is a historical example of cartographic propaganda during the Middle Ages. During the Renaissance maps became more widely used in general and their use began to take on a more cultural and political character, more similar to the cartographic propaganda that is seen today. This use was especially practiced in Italy, where the competition for resources between city states in the central and northern Italian heartlands led to a precocious awareness of the practical utility of maps for military and strategic purposes, as well as civilian uses such as the planning of forts, canals, and aqueducts. Since then, the use of cartographic propaganda has increased remarkably alongside the rise of the modern state. The interwar period in Germany fostered the development of cartographic propaganda. German propagandists discovered the advantages of cartography in the re-representation of reality.
For the Nazi regime, the most important goal in producing maps was their efficiency in providing communication between the ruler and the masses. The use of maps in this manner can be referred to as "suggestive cartography", as being capable of dynamic representations of power. This period of geopolitical cartographic development was a continuous process associated with the Nazis and World War II; the development of cartographic propaganda is closely related to the wider Nazi propaganda machine (Tyner 1974). There were three different categories of propaganda maps used by the Nazi propaganda machine: (1) maps illustrating the condition of Germany as a people and nation; (2) maps taking aim at the morale of the Allies via a mental offensive, including maps specifically designed to keep the U.S. neutral in the war by changing the perception of threats; and (3) maps as blueprints of the post-war world. During this period, this approach to cartography expanded to Italy, Spain, and Portugal as cartographers and propagandists found inspiration in the "positivistic trends of the German world". This more overt use of maps as propaganda continued into the Cold War period. Post-World War II U.S. cartographers modified projections to create a menacing image of the Soviet Union by making it appear larger and thus more threatening. This approach was also applied to other nearby communist countries, thereby accentuating the rise of communism as a whole. The April 1, 1946, issue of Time published a map entitled "Communist Contagion", which focused on the communist threat of the Soviet Union. In this map the strength of the Soviet Union was enhanced by a split-spherical presentation of Europe and Asia which made the Soviet Union seem larger as a result of the break in the center of the map.
Communist expansion was also emphasized in this map as it presented the Soviet Union in a vivid red color, a color commonly associated with danger (and communism as a whole), and categorized neighboring states in terms of the danger of contagion, using the language of disease (states were referred to as quarantined, infected or exposed, adding to the presentation of these countries as dangerous or threatening). More generally, during the Cold War period, small-scale maps served to make dangers appear menacing; some maps were made to make Vietnam appear close to Singapore and Australia, or Afghanistan close to the Indian Ocean. Similarly, maps illustrating rocket positions used a polar azimuthal projection with the North Pole at its center, which gave the map reader the perception that there existed a relatively small distance between the countries on opposing sides of the Cold War. == Methods == Scale, map projection, and symbolization are characteristics of cartography that can be selectively applied to transform a map into cartographic propaganda. === Scale and generalization === Scales are used to relate distance because maps are usually smaller than the area they represent. Because of the need for a scale, the cartographer often makes use of map generalization as a way to ensure clarity. The size of the scale affects the use of generalization; a smaller scale forces a higher level of generalization. There are two types of map generalization: geometric and content. The methods of geometric generalization are selection, simplification, displacement, smoothing, and enhancement. Content generalization promotes clarity of the purpose or meaning of a map by filtering out details irrelevant to the map's function or theme. Content generalization has two essential elements: selection and classification. Selection serves to suppress information and classification is the choice of relevant features.
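The "simplification" step of geometric generalization is commonly implemented with the Ramer–Douglas–Peucker algorithm, a standard line-simplification technique (the algorithm and the toy "coastline" coordinates below are illustrative assumptions, not taken from this article). Vertices that deviate from the trend line by less than a chosen tolerance are discarded, which is exactly the kind of threshold a cartographer can set selectively:

```python
import math

def _perp_dist(pt, a, b):
    """Perpendicular distance from pt to the infinite line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:          # degenerate chord: a == b
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, tolerance):
    """Ramer-Douglas-Peucker simplification: recursively keep only the
    vertices that deviate from the endpoint chord by more than `tolerance`."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the two endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tolerance:
        # Every intermediate vertex is insignificant at this tolerance.
        return [points[0], points[-1]]
    # Otherwise keep the farthest vertex and simplify each half.
    left = simplify(points[:idx + 1], tolerance)
    right = simplify(points[idx:], tolerance)
    return left[:-1] + right          # drop the duplicated split point

# A hypothetical 10-vertex "coastline"; a generous tolerance keeps only
# the vertices that define its overall shape.
coast = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
         (5, 7), (6, 8), (7, 9), (8, 9), (9, 9)]
print(simplify(coast, 1.0))
```

A larger tolerance suppresses more detail; because the threshold can be applied selectively, even a mechanically "honest" generalization routine can serve the persuasive ends described above.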
=== Map projection === Map projection is the method of presenting the curved, three-dimensional surface of the planet into a flat, two-dimensional plane. The flat map, even with a constant scale, stretches some distances and shortens others, and varies the scale from point to point. Choice of map projection affects the map's size, shape, distance and/or direction. Map projection has been used to create cartographic propaganda by making small areas bigger and large areas bigger still. Arno Peters' attack on the Mercator Projection in 1972 is an example of the subjectivity of map projection; Peters argued that it is an ethnocentric projection. === Symbolization === Symbols are used in maps to complement map scale and projection by making visible the features, places, and other locational information represented on a map. Because map symbolization describes and differentiates features and places, "map symbols serve as a geographic code for storing and retrieving data in a two-dimensional geographic framework." Map symbolization tells the map reader what is relevant and what is not. As a result, the selection of symbols can be done subjectively and with a propagandistic intent. == Historical themes == The map is a symbol of the state and has thus been used throughout history as a symbol of power and nationhood. As a symbol the map has served many purposes of the state including the exertion of rule, legitimation of rule, assertion of national unity, and was even used for the mobilization of war. === Exerting imperial rule in medieval and renaissance Europe === Cartographic propaganda in medieval Europe spoke to the emotions rather than to reason and often reflected the prestige of empires. The Fra Mauro World Map (1450) was intended for display in Venice and shows the Portuguese discoveries in Africa and emphasizes the feats of Marco Polo. 
The Honourable East India Company commissioned a copy in 1804, implying that the company was following in the footsteps of the Portuguese empire. "The Americas" (1562) was created by Diego Gutiérrez and serves as a powerful celebration of Spain's New World Empire. In this map, King Philip II is shown riding the turbulent Atlantic Ocean on a chariot; this illustration is reminiscent of the Roman god Neptune. References like this were intended to strengthen Spain's image in Europe and its claim to the Americas. European rulers often tried to intimidate visiting envoys by displaying maps of their own lands and forts, with the implication that the ambassador's nation could be conquered as well. For example, in 1527, during festivities for the French ambassador in England, maps depicting aerial views of French towns being successfully besieged by the English decorated the walls of a Greenwich pavilion specially built for the ambassador's visit. === Legitimizing colonial rule === European colonial powers used the map as an intellectual tool to legitimize territorial conquest. Ramsay Muir compiled a selection of imperial triumphs for display in his Cambridge Modern Historical Atlas (Cambridge, 1912). Maps during the colonial period were also used to organize and rank the rest of the world according to the European powers. Edward Quin used color to depict civilization in Historical Atlas in a Series of Maps of the World (London, 1830). In the introduction of the atlas Quin wrote, "we have covered alike in all the periods with a flat olive shading ... barbarous and uncivilized countries such as the interior of Africa at the present moment." === Asserting national unity === A single overview map of an entire country serves as an assertion of national unity. The national atlas commissioned during the rule of Elizabeth I bound together maps of the various English counties and asserted their unity under Elizabeth's rule.
A few decades later, Henry IV of France celebrated the reunification of his kingdom through the creation of the atlas, "Le theatre francoys". The atlas includes an impressive engraving proclaiming the glory of king and kingdom. === Political use in the 19th and 20th centuries === In the later nineteenth and twentieth centuries the political potential of cartographic shapes came to be recognized more widely, and maps began to be used for more blatantly propagandistic purposes. Maps and globes can be used as symbols for abstract ideas because they are familiar to the masses and harbor emotive connotations. Maps are often incorporated as an emblematic element in a larger design or are used to provide the visual framework on which a scenario is played out. Fred W. Rose created two propaganda posters depicting the British general election in 1880 in which he used the map of the British Isles, "Comic Map of the British Isles indicating the Political Situation in 1880" and "The overthrow of His Imperial Majesty King Jingo I: A Map of the Political Situation in 1880 by Nemesis". He was also the creator of the 1899 "Angling in troubled waters". Henri Dron used the figure of the world map in the 1869 propaganda poster, "L'Europe des Points Noirs". === Coaxing during World War I and II === Cartographic propaganda during WW I and WW II was used to polarize states along wartime lines by appealing to the masses. Fred Rose's "Serio-comic war map for the year 1877" portrayed the Russian Empire as an octopus stretching out its tentacles vying for control in Europe and was intended to solicit distrust of the Russian Empire within Europe. This concept was used again in 1917 during WW I, when France commissioned a map which portrayed Prussia as the octopus.
The octopus appeared again in 1942 as (Vichy) France intended to sustain its citizens' morale and cast Winston Churchill as the octopus, a demonic green-faced, red-lipped, cigar-smoking creature attempting to seize Africa and the Middle East. == Targets == Political persuasion often concerns territorial claims, nationalities, national pride, borders, strategic positions, conquests, attacks, troop movements, defenses, spheres of influence, regional inequality, etc. The goal of cartographic propaganda is to mold the map's message by emphasizing supporting features while suppressing contradictory information. Successful cartographic propaganda is geared toward an audience. === Political leadership === Before the U.S. had entered WW II, U.S. President Franklin D. Roosevelt came to possess a German map of Central and South America that depicted all Latin American republics reduced to "five vassal states ... bringing the whole continent under their [Nazi] domination." FDR viewed this as an open threat to "our great life line, the Panama Canal" and took it to mean that "the Nazi design is not only against South America, but against the U.S. as well." This map was undoubtedly propaganda, yet its target audience could have been either the German or the American public. The map was first discovered by the British and then brought to the attention of FDR. Although Berlin claimed that it was a forgery, the origin of the map is still unknown. Some Nazi maps were commissioned as an attempt to divert neutral countries' sympathy away from the Allies. The Nazi map, "A Study in Empires", compared the size of Germany (264,300 sq. mi) to that of the British Empire (13,320,854 sq. mi) to argue that Germany could not possibly be an aggressor as her size was far smaller than that of the Allied nations. The Nazi regime also used maps to persuade the United States to remain neutral during WW II by flattering both isolationism and Monroe Doctrine militarism.
"Spheres of Influence", created and published in 1941, uses bold lines traced around sections of the globe to send a clear message to Americans: stay in your own hemisphere and out of Europe. === Military leadership === Cartographic propaganda can be used to mislead the enemy and its military by distorting maps and the information they contain that is used in military strategic planning. In 1958 the Soviet Union launched the Soviet Map Distortion Policy, which resulted in the thinning and distortion of detail in all unclassified maps. Then in 1988 the Soviet Union’s chief cartographer, Viktor R. Yashchenko, admitted that Soviet maps had been faked for nearly 50 years. The Soviet Union had deliberately falsified virtually all public maps of the country, misplacing streets, distorting boundaries, and omitting geographical features, under orders administered by the Soviet secret police. Western experts said the maps were distorted out of fear of aerial bombing or foreign intelligence operations. === Referendums === Maps are often used to persuade the electorate to vote in a particular direction in referendums and are most effective when portraying highly emotive issues. A recent example is the map produced by the Vote Leave campaign for Brexit, which aimed to persuade voters of the vulnerability of the UK to uncontrolled immigration from the Middle East under a scenario of further EU expansion. Graphical devices, such as bold red arrows suggesting a threat of invasion, communicated a sense of fear and supported the theme of taking back control of borders. === The masses === Cartographic propaganda during the Cold War often appealed to the fear of the masses. During the Cold War period, maps of "us" versus "them" were drawn to emphasize the threat represented by the USSR and its allies. R.M. Chapin Jr. created the map "Europe From Moscow" in 1952.
The map was drawn from a different perspective, from Moscow looking onward toward Europe, which made it easy for the map reader to imagine (red) armies sweeping across Western Europe. === Classrooms === Adolf Hitler's schoolroom map of "Deutschland" in 1935 presented all the German-speaking areas surrounding Germany without borders, claiming them as part of the Reich. This gave the impression that the Reich extended over Austria and the German-speaking areas in Poland, Czechoslovakia, and even France. M. Tomasik created the "Pictorial Map of European Russia" (published in Warsaw in 1896 and 1903), which evoked a Utopian image of Russia. The map was intended for display in Polish schools and was meant to appeal directly to the emotions of teachers and (through them) to those they taught. The map illustrated Russia as a nation rich in natural resources and failed to mention the famine that had occurred only five years earlier (1891-5), during which half a million people had died. The map also communicated the message of Russian unity; the nation's provinces were shown linked together by a new rail network and contributing to the nation's well-being. === Border disputes === The intentional misrepresentation of national boundaries by nations in border disputes is sometimes called "cartographic aggression". For instance, in the absence of treaties or agreed boundaries in the Sino-Indian border dispute, both China and India issued official maps displaying borders beyond what each nation controlled in the lead-up to the 1962 Sino-Indian War. Libyan maps were issued from around 1969 showing the Aouzou Strip, then contested with Chad, as part of Libya. The dispute, which led to long-drawn desultory warfare between the two countries, was settled by the International Court of Justice in 1994, which awarded the entire area to Chad.
In the build-up to the invasion of Kuwait, Iraqi maps issued around 1990 showed Kuwait as a province of Iraq. In late 2012, China began issuing passports that display a map showing Aksai Chin, parts of Arunachal Pradesh, and disputed sections of the South China Sea as part of China. In response, immigration officials in India, Vietnam, and the Philippines adopted a policy of inserting their own forms and maps into the travel documents of Chinese visitors. == See also == Cartographic censorship Fantasy map Mainland Satellite map images with missing or unclear data Bielefeld conspiracy, a humorous urban legend
Wikipedia/Cartographic_propaganda
Anti-vaccine activism, which collectively constitutes the "anti-vax" movement, is a set of organized activities expressing opposition to vaccination; these collaborating networks have often sought to increase vaccine hesitancy by disseminating vaccine misinformation and forms of active disinformation. As a social movement, it has utilized multiple tools both within traditional news media and through various forms of online communication. Activists have primarily (though far from entirely) focused on issues surrounding children, with vaccination of the young receiving pushback, and they have sought to expand beyond niche subgroups into national political debates. Ideas that would eventually coalesce into anti-vaccine activism have existed for longer than vaccines themselves. Various myths and conspiracy theories (alongside outright disinformation and misinformation) have been spread by the anti-vaccination movement and fringe doctors. These have been spread in ways that have significantly increased vaccine hesitancy and altered public policy around ethical, legal, and medical matters related to vaccines. However, no serious hesitancy or debate about the benefits of vaccination exists within mainstream medical circles. The scientific consensus in favor of vaccines is "clear and unambiguous". At the same time, however, the anti-vax movement has partially succeeded in distorting common understandings of science in popular culture. == Strategies and tactics == === Arguments used === In a 2002 paper in the British Medical Journal, two medical historians suggested that the arguments made against the safety and effectiveness of vaccines in the late 20th century are similar to those of the early anti-vaccinationists.
Both the 19th and 20th century arguments included "vaccine safety issues, vaccine failures, infringement of personal liberty, and an unholy alliance between the medical establishment and the government to reap huge profits for the medical establishment at the expense of the public." However, the authors only considered the use of "newspaper articles and letters, books, journals, and pamphlets to warn against the dangers of vaccination", and did not address the impact of the internet. Comments on YouTube videos during the COVID-19 pandemic clustered similarly around "concerns about side-effects, effectiveness, and lack of trust in corporations and government". === Misrepresentation === In some instances, anti-vaccine organizations have used names intended to sound non-partisan on the issue: e.g. National Vaccine Information Center (USA), Vaccination Risk Awareness Network (Canada), Australian Vaccination Network. In November 2013 the Australian Vaccination Network was ordered by the New South Wales Administrative Decisions Tribunal to change their name so that consumers are aware of the anti-vaccination nature of the group. Lateline reported that former AVN president Meryl Dorey "claimed she was a victim of hate groups and vested interests" in response to the ruling. === Information quality === Although physicians and nurses are still rated as the most trusted source for vaccine information, some vaccine-hesitant individuals report being more comfortable discussing vaccines with providers of complementary and alternative medical (CAM) treatments. With the rise of the internet, many people have turned online for medical information. In some instances, anti-vaccine activists seek to steer people away from vaccination and health-care providers and towards alternative medicines sold by certain activists. Anti-vaccination writings on the internet have been argued to be characterized by a number of differences from medical and scientific literature. 
These include:
Promiscuous copying and reduplication.
Ignoring corrections, even when an initial report or data point is shown to be false.
Lack of references, and difficulty in checking sources and claims.
Personal attacks on individual doctors.
A high degree of interlinkage between sites.
Dishonest or fallacious arguments.
For example, a 2020 study examined Instagram posts related to the HPV vaccine, which can prevent some types of cancer. Anti-vaccine posts were more likely than pro-vaccine posts to be sent by non-healthcare individuals, to include personal narratives, and to reference other Instagram users, links, or reposts. Anti-vaccine posts were also more likely to involve concealment or distortion, particularly conspiracy theories and unsubstantiated claims. In total, 72.3% of anti-vaccine posts made inaccurate claims, including exaggerating the risks of vaccines and minimizing the risks of disease. === Disinformation tactics === A number of specific disinformation tactics have been noted in anti-vaccination messaging, including:
Asserting that the existence of the 1986 National Childhood Vaccine Injury Act implies that the risk of injury from vaccines is high, rather than very low.
Claiming to be unable to access clinical trial data.
Conspiracy theories alleging lies, trickery, cover-ups, and secret knowledge.
Messages crafted for psychological appeal rather than truthfulness.
Fake experts.
Impossible expectations: claiming that anything less than 100% certainty in a scientific claim implies doubt, and that doubt means there is no consensus.
Selective use and interpretation of evidence ("cherry-picking"): using obscure or debunked sources while ignoring counter-evidence and scientific consensus.
Shifting hypotheses: continually introducing new theories about vaccines being harmful, and moving to new claims when existing ones are shown to be false.
Misrepresentation, false logic, and illogical analogies.
Personal attacks on critics, ranging from online criticism, public revelation of personal details, and threats, to offline activities such as legal actions, targeting of employers, and violence.
Targeting China's vaccine: during the COVID-19 pandemic, in retaliation for China's attempts to blame the United States for the pandemic, the Pentagon targeted China's Sinovac COVID-19 vaccine by spreading anti-vaccine misinformation in the Philippines.
=== Economics of vaccine disinformation === Information is more likely to be believed after repeated exposure. Disinformers use this illusory truth effect as a tactic, repeating false information to make it feel familiar and influence belief. Anti-vaccine activists have leveraged social media to develop interconnected networks of influencers that shape people's opinions, recruit allies, influence policy, and monetize vaccine-related disinformation. In 2022, the Journal of Communication published a study of the political economy underlying vaccine disinformation. Researchers identified 59 English-language "actors" that provided "almost exclusively anti-vaccination publications". Their websites monetized disinformation through appeals for donations, sales of content-based media and other merchandise, third-party advertising, and membership fees. Some maintained a group of linked websites, attracting visitors with one site and appealing for money and selling merchandise on others. Their activities to gain attention and obtain funding displayed a "hybrid monetization strategy": they attracted attention by combining eye-catching aspects of "junk news" with online celebrity promotion, while at the same time developing campaign-specific communities to publicize and legitimize their position, in the manner of radical social movements.
=== Misrepresentation of the Vaccine Adverse Event Reporting System === In the United States, the Vaccine Adverse Event Reporting System (VAERS) is used to gather information on potential vaccine adverse reactions, but is susceptible to unverified reports, misattribution, underreporting, and inconsistent data quality. Raw, unverified data from VAERS has often been used by the anti-vaccine community to justify misinformation regarding the safety of vaccines; it is generally not possible to find out from VAERS data if a vaccine caused an adverse event, or how common the event might be. === Legal action === After Republicans gained a majority in the U.S. House of Representatives in January 2023, the House Judiciary Committee used legal action to oppose both disinformation research and government involvement in fighting disinformation. One of the projects targeted was the Virality Project, which has examined the spread of false claims about vaccines. The House Judiciary Committee sent letters, subpoenas, and threats of legal action to researchers, demanding notes, emails and other records from researchers and even student interns, dating back to 2015. Institutions subjected to such inquiries included the Stanford Internet Observatory at Stanford University, the University of Washington, the Atlantic Council's Digital Forensic Research Lab and the social media analytics firm Graphika. Researchers emphasized that they have academic freedom to study disinformation as well as freedom of speech to report their results. Despite conservative claims that the government acted to censor speech online, "no evidence has emerged that government officials coerced the companies to take action against accounts". The actions of the House Judiciary Committee have been described as an "attempt to chill research,” creating a "chilling effect" through increased time demands, legal costs and online harassment of researchers. 
=== Harassment === Persons undertaking efforts to counter vaccine misinformation, including public health experts who use social media, have been targeted for harassment by anti-vaccine activists such as blogger Paul Thacker. For example, Slovakian physician Vladimír Krčméry was a prominent member of the government advisory team during the COVID-19 pandemic in Slovakia, and was the first person in that country to receive a COVID-19 vaccine. Due to his prominent role in the vaccination campaign, Krčméry and his family became a target of anti-vaccine activists, who physically threatened him and his family. In June 2023, Texas-based physician and researcher Peter Hotez tweeted his concerns about Robert F. Kennedy Jr. sharing misinformation about vaccines on Joe Rogan's podcast. Rogan, Kennedy, and Twitter owner Elon Musk asked Hotez to participate in a debate on the podcast. Upon declining the invitation, Hotez was harassed by their fans, with anti-vaccine activist Alex Rosen confronting him at his home. In his book The Deadly Rise of Anti-science: A Scientist's Warning, Hotez describes how he and other scientists who publicly defend vaccines have been attacked on social media, harassed with threatening emails, intimidated, and confronted physically by opponents of vaccination. He attributes the increase in aggressiveness of the anti-vaccination movement to the influence of the extreme wing of the Republican Party. Hotez estimates that roughly 200,000 preventable deaths from COVID-19, mainly among Republicans, occurred in the US because of refusal to be vaccinated. At the extreme end, opposition to vaccination has resulted in substantial violence against vaccinators. In Pakistan, "more than 200 polio team workers have lost their lives" (team members include not only vaccinators but police and security personnel) from "targeted killing and terrorism" while working on polio vaccination campaigns. 
== Countering anti-vaccine activism == Various efforts have been suggested and undertaken to address concerns about vaccines and counter anti-vaccine disinformation. Efforts include social media advertising campaigns by public health organizations in support of public health goals. Best practices for combating vaccine mis- and disinformation include addressing issues openly, clearly identifying areas of scientific consensus and areas of uncertainty, and being sensitive to the cultural and religious values of communities. In countering anti-vaccine disinformation, both factual and emotional aspects need to be addressed. Whether people will update a mistaken belief is complicated and involves psychological factors and social goals as well as accuracy of information. There is some evidence that both debunking and "pre-bunking" of disinformation can be effective, at least in the short term. Elements that may help to correct inaccurate information include: warning people before they are exposed to misinformation; high perceived credibility of message sources; affirmations of identity and social norms; graphical presentation; and focusing attention on clear core messages. Alternative explanations of a situation need to fit plausibly into the original scenario and ideally indicate why the incorrect explanation was previously thought to be correct. The cultivation of critical thinking, health and science awareness, and media literacy skills is recommended to help people more critically assess the credibility of the information they see. People who seek out multiple reputable news sources at local and national levels are more likely to detect disinformation than those who rely on few sources from a particular viewpoint. Particularly on social media, readers are advised to beware of sensational headlines that appeal to emotion, to fact-check information broadly (not just through their usual sources), and to consider the possible agendas or conflicts of interest of those relaying information.
=== Operation of social media === Other suggestions for countering anti-vaccine activism focus on changing the operation of social media platforms. Interventions such as accuracy nudges and source labeling change the context in which information is presented. For example, correct information can be directly presented to counter disinformation. Other possibilities include flagging or removing misleading information on social media platforms. Research suggests that a majority of individuals in the United States would support the removal of harmful misinformation posts and the suspension of accounts. This position is less popular with Republicans than Democrats. While private entities like Facebook, Twitter and Telegram could legally establish guidelines for moderation of information and disinformation on their platforms (subject to local and international laws), such companies do not have strong incentives to control disinformation or to self-regulate. Algorithms that are used to maximize user engagement and profits can lead to unbalanced, poorly sourced, and actively misleading information. Criticized for its role in vaccine hesitancy, Facebook announced in March 2019 that it would provide users with "authoritative information" on the topic of vaccines. Facebook introduced several policies intended to reduce the impact of anti-vaccine content, without actually removing it. These included reducing the ranking of anti-vaccine sources in searches and not recommending them; rejecting ads and targeted advertising that contained vaccine misinformation; and using banners to present vaccine information from authoritative sources. A study examined the six months before and after the policy changes. It found a moderate but significant decrease in the number of likes for anti-vaccine posts following the policy changes. Likes of pro-vaccine posts were unchanged. Facebook has been criticized for not being more aggressive in countering disinformation.
In response to efforts to police misinformation, anti-vaccine communities on social media have adopted coded language to refer to vaccinated persons and the vaccines themselves. Supply-side interventions reduce circulation of misinformation directly at its sources through actions such as application of social media policies, regulation, and legislation. A study published in the journal Vaccine examined advertisements posted in the three months prior to Facebook's 2019 policy changes. It found that 54% of the anti-vaccine advertisements on Facebook were placed by just two organizations funded by well-known anti-vaccination activists: the Children's Health Defense / World Mercury Project, chaired by Robert F. Kennedy Jr., and Stop Mandatory Vaccination, run by campaigner Larry Cook. The ads often linked to commercial products, such as natural remedies and books. Kennedy was suspended from Facebook in August 2022, but reinstated in June 2023. In 2023, however, state governments that were politically aligned with anti-vaccine activists successfully sought a preliminary injunction to prevent the Biden Administration from seeking to pressure social media companies into fighting misinformation. The order issued by the United States Court of Appeals for the Fifth Circuit "severely limits the ability of the White House, the surgeon general, [and] the Centers for Disease Control and Prevention... to communicate with social media companies about content related to Covid-19... that the government views as misinformation". In October 2023, this injunction was paused by the Supreme Court of the United States, pending further litigation. === Use of algorithms and data === Algorithms and user data can be used to identify selected subgroups who can then be provided with specialized content. This type of approach has been used both by anti-vaccine activists and by health providers who hope to counter vaccine-related disinformation.
For example, in the United States, the CDC's Social Vulnerability Index (SVI) has been used to identify communities that have traditionally been under-served or are at elevated risk for infection, morbidity, and mortality. Programs have been developed in such communities to address disinformation and vaccine hesitancy. === Community engagement === Steps have been taken to counter anti-vaccine messaging by directly engaging with communities. Outreach efforts include call centers and texting campaigns, partnering with local community leaders, and holding community-based vaccine clinics. Creating digital and science literacy resources and distributing them via schools, libraries, municipal offices, churches and other community groups can help to counter misinformation in under-resourced communities. The Black Doctors COVID-19 Consortium in Philadelphia is one example of a successful direct outreach initiative. Another is the New York State Vaccine Equity Task Force. In line with the Strategic Advisory Group of Experts (SAGE)'s 3C's model, outreach to communities has focused on addressing mistrust and increasing Confidence, providing information to improve risk assessment (Complacency), and improving access to COVID-19 vaccines (Convenience). It has been necessary to counter disinformation in all three areas. Recommendations for combating vaccine disinformation include increasing the presence of trusted health agencies and credible information on social media, partnering with social media platforms to promote evidence-based public health information, and identifying and responding to emerging concerns and disinformation campaigns. Networked communities of public health officials and other stakeholders, connecting with the public through a variety of credible and trusted messengers, are recommended. Sharing of messages through such networks could help to debunk and counter highly networked and coordinated disinformation attacks. 
A networked community approach would differ from the current model of US public health communication, which tends to rely on a single credible messenger (e.g. Anthony Fauci) and is susceptible to disinformation attacks. To deal with disinformation, community networks would need to address issues of liberty and human rights as well as vaccine safety, effectiveness and access. Networks could also help to show support for those attacked by anti-vaccine activists. == History == === 18th and 19th century === Ideas that would eventually coalesce into anti-vaccine activism have existed for longer than vaccines themselves. Some philosophical approaches (e.g. homeopathy, vitalism) are incompatible with the microbiological paradigm that explains how the immune system and vaccines work. Vaccine hesitancy and anti-vaccine activism exist within a broader context that involves cultural tradition, religious belief, approaches to health and disease, and political affiliation. Opposition to variolation for smallpox (a predecessor to vaccination) was organized as early as the 1720s around the premise that the practice was unnatural and an attempt to thwart divine judgment. Religious arguments against inoculation, the earliest arguments against vaccination, were soon advanced. For example, in a 1722 sermon entitled "The Dangerous and Sinful Practice of Inoculation", the English theologian Reverend Edmund Massey argued that diseases are sent by God to punish sin and that any attempt to prevent smallpox via inoculation is a "diabolical operation". It was customary at the time for popular preachers to publish sermons, which reached a wide audience. This was the case with Massey, whose sermon reached North America, where there was early religious opposition, particularly by John Williams.
A greater source of opposition there was William Douglass, a medical graduate of Edinburgh University and a Fellow of the Royal Society, who had settled in Boston. Vaccination itself was invented by British physician Edward Jenner, who published his findings on the efficacy of the practice for smallpox in 1798. By 1801, the practice had been widely endorsed in the scientific community, and by several world leaders. Philadelphia physician John Redman Coxe, noting that even then false accounts were circulated of negative effects of vaccination, wrote, "Such are the falsehoods which impede the progress of the brightest discovery which has ever been made! But the contest is in vain! Time has drawn aside the veil which obstructed our knowledge of this invaluable blessing; and in the examples of the Emperor of Constantinople, of the Dowager Empress of Russia, and the King of Spain, we may date the downfall of further opposition." Coxe's expectation of an end to opposition to vaccination proved premature, and through much of the nineteenth century, the principles, practices and impact of vaccination were matters of active scientific debate. The principles behind vaccination were not clearly understood until the end of the nineteenth century. The importance of hygiene in the preparation, storage, and administration of vaccines was not always understood or practiced. Reliable statistics on vaccine efficacy and side effects were difficult to obtain before the 1930s. ==== Anti-Compulsory Vaccination League ==== In the United Kingdom, the Vaccination Act 1853 (16 & 17 Vict. c. 100) required that every child be vaccinated within three or four months of birth. It set a precedent for the state regulation of physical bodies, and was fiercely resisted. The following year, in 1854, John Gibbs published the first anti-compulsory-vaccination pamphlet, Our Medical Liberties.
By the 1860s, anti-vaccinationism in Britain was active in the working class, labor aristocracy, and lower middle class. It had become associated with alternative medicine and was part of a larger culture of social and political dissent that included both labor unions and religious dissenters. In June 1867, the publication "Human Nature" campaigned in the United Kingdom against "The Vaccination Humbug", reporting that many petitions had been presented to Parliament against Compulsory Vaccination for smallpox, including from parents who alleged that their children had died through the procedure, and complaining that these petitions had not been made public. The journal reported the formation of the Anti-Compulsory Vaccination League "To overthrow this huge piece of physiological absurdity and medical tyranny", and quoted Richard Gibbs (a cousin of John Gibbs) who ran the Free Hospital at the same address as stating "I believe we have hundreds of cases here, from being poisoned with vaccination, I deem incurable. One member of a family dating syphilitic symptoms from the time of vaccination, when all the other members of the family have been clear. We strongly advise parents to go to prison, rather than submit to have their helpless offspring inoculated with scrofula, syphilis, and mania". Notable members of the Anti-Compulsory Vaccination League included James Burns, George Dornbusch and Charles Thomas Pearce. After the death of Richard B. Gibbs in 1871, the Anti-Compulsory Vaccination League "languished" until 1876 when it was revived under the leadership of Mary Hume-Rothery and the Rev. W. Hume-Rothery. The Anti-Compulsory Vaccination League published the Occasional Circular which later merged into the National Anti-Compulsory Vaccination Reporter. 
==== Anti-Vaccination Society of America ==== In the United States, many states and local school boards established immunization requirements, beginning with a compulsory school vaccination law in Massachusetts in 1855. The Anti-Vaccination Society of America was founded in 1879, after a visit to the United States by the British anti-vaccine activist William Tebb, and opposed compulsory smallpox vaccination from the final decades of the 19th century through the 1910s. During this period, smallpox vaccination was the only form of vaccination that was widely practiced, and the society published a periodical opposing it, called Vaccination. A series of American legal cases, beginning in various states and culminating with that of Henning Jacobson of Massachusetts in 1905, upheld compulsory smallpox vaccination for the good of the public. The court ruled in Jacobson v. Massachusetts that "the liberty secured by the Constitution of the United States to every person within its jurisdiction does not import an absolute right in each person to be, at all times and in all circumstances, wholly freed from restraint. There are manifold restraints to which every person is necessarily subject for the common good". ==== London Society for the Abolition of Compulsory Vaccination ==== In 1880, William Tebb enlarged and reorganized the Anti-Compulsory Vaccination League in the UK with the formation of the London Society for the Abolition of Compulsory Vaccination, with William Young as secretary. The Vaccination Inquirer, established by Tebb in 1879, was adopted as the official organ of the Society. A series of fourteen "Vaccination Tracts" was begun by Young in 1877 and completed by Garth Wilkinson in 1879. William White was the first editor of the Vaccination Inquirer; after his death in 1885, he was succeeded by Alfred Milnes.
Frances Hoggan and her husband authored an article for the Vaccination Inquirer in September 1883 arguing against compulsory vaccination. The London Society focused on lobbying for parliamentary support in the 1880s and early 1890s. It gained the support of several members of the House of Commons, the most prominent of whom was Peter Alfred Taylor, the member for Leicester, a town described as the "Mecca of antivaccination". ==== The National Anti-Vaccination League ==== The UK movement grew, and as the London Society's influence overshadowed that of the Hume-Rotherys and it took the national lead, it was decided in February 1896 to re-form the Society as The National Anti-Vaccination League. Arthur Phelps was elected as president. In 1898, the league took on a school leaver named Lily Loat, who by 1909 had been elected the league's secretary. In 1906, George Bernard Shaw wrote a supportive letter to the National Anti-Vaccination League, equating methods of vaccination with "rubbing the contents of the dustpan into the wound". ==== Anti-Vaccination League of America ==== In 1908, the Anti-Vaccination League of America was created by Charles M. Higgins and the industrialist John Pitcairn Jr., with anti-vaccination campaigns focused on New York and Pennsylvania. Members were opposed to compulsory vaccination laws. Higgins was the League's chief spokesman and pamphleteer. The historian James Colgrove noted that Higgins "attempted to overturn the New York State's law mandating vaccination of students in public schools". The League should not be confused with the Anti-Vaccination Society of America, which was formed in 1879. Higgins was criticized by medical experts for spreading misinformation and ignoring facts as to the efficacy of vaccination. The League dissolved after the death of Higgins in 1929. === 20th century === Anti-vaccine activism ebbed for much of the twentieth century, but never completely vanished.
In the UK, the National Anti-Vaccination League continued to publish new issues of its journal until 1972, by which time the global campaign for smallpox eradication through vaccination had made the disease so uncommon that compulsory vaccination for smallpox was no longer required in the United Kingdom. New vaccines were developed and used against diseases such as diphtheria and whooping cough. In the UK, these were often introduced on a voluntary basis, without arousing the same kind of anti-vaccination response that had accompanied compulsory smallpox vaccination. In the United States, numerous measles outbreaks occurred in the 1960s and 1970s, and were shown to be more frequent in states that lacked mandatory vaccination requirements. This led to calls in the 1970s for a national-level vaccination requirement for children entering schools. Joseph A. Califano Jr. appealed to state governors, and by 1980, all 50 states legally required vaccination for school entrance. Many of these laws allowed exemptions in response to lobbying. In New York State, a 1967 law allowed exemptions from receiving polio vaccine for members of religious organizations such as Christian Scientists. === 21st century === ==== Lancet MMR autism fraud ==== Anti-vaccine activism in the 2000s regained prominence through exploratory research by Andrew Wakefield, based on 12 selected cases, which he used to claim a link between the MMR vaccine and autism. These claims were subsequently extensively investigated and found to be false, and the original study turned out to be based on faked data. The scientific consensus is that there is no link between the MMR vaccine and autism, and that the MMR vaccine's benefits in preventing measles, mumps, and rubella greatly outweigh its potential risks.
The idea of an autism link was first suggested in the early 1990s and came to public notice largely as a result of the 1998 Lancet MMR autism fraud, which Dennis K. Flaherty at the University of Charleston characterized as "perhaps the most damaging medical hoax of the last 100 years". The fraudulent research paper authored by Wakefield and published in The Lancet falsely claimed the vaccine was linked to colitis and autism spectrum disorders. The paper was retracted by The Lancet in 2010 but is still cited by anti-vaccine activists. The claims in the paper were widely reported, leading to a sharp drop in vaccination rates in the UK and Ireland. Promotion of the claimed link, which continued in anti-vaccination propaganda in the decades that followed despite being refuted, was estimated to have led to an increase in the incidence of measles and mumps, resulting in deaths and serious permanent injuries. Following the initial claims in 1998, multiple large epidemiological studies were undertaken. Reviews of the evidence by the Centers for Disease Control and Prevention, the American Academy of Pediatrics, the Institute of Medicine of the US National Academy of Sciences, the UK National Health Service, and the Cochrane Library all found no link between the MMR vaccine and autism. Physicians, medical journals, and editors have described Wakefield's actions as fraudulent and tied them to epidemics and deaths. An investigation by the journalist Brian Deer found that Wakefield, the author of the original research paper linking the vaccine to autism, had multiple undeclared conflicts of interest, had manipulated evidence, and had broken other ethical codes. After a subsequent 2.5-year investigation, the General Medical Council ruled that Wakefield had acted "dishonestly and irresponsibly" in conducting his research, had carried out unauthorized procedures for which he was not qualified, and had acted with "callous disregard" for the children involved.
Wakefield was found guilty by the General Medical Council of serious professional misconduct in May 2010, and was struck off the Medical Register, meaning he could no longer practise as a physician in the UK. The Lancet paper was partially retracted in 2004 and fully retracted in 2010, when the Lancet's editor-in-chief Richard Horton described it as "utterly false" and said that the journal had been deceived. In January 2011, Deer published a series of reports in the British Medical Journal, accompanied by a signed editorial which said of the journalist, "It has taken the diligent scepticism of one man, standing outside medicine and science, to show that the paper was in fact an elaborate fraud." Wakefield continues to promote anti-vaccine beliefs and conspiracy theories in the United States. In February 2015, Wakefield denied that he bore any responsibility for the measles epidemic that started at Disneyland among unvaccinated children that year. He also reaffirmed his discredited belief that "MMR contributes to the current autism epidemic". By that time, at least 166 measles cases had been reported. Paul Offit disagreed, saying that the outbreak was "directly related to Dr. Wakefield's theory". Wakefield and other anti-vaccine activists were active in the American-Somali community in Minnesota, where a drop in vaccination rates was followed in 2017 by the largest measles outbreak in the state in nearly 30 years. The anti-vaccination movement was historically apolitical, but in the 2010s and 2020s the movement in the United States has increasingly targeted conservatives. As measles outbreaks increased, so did calls to eliminate exemptions from vaccine administration. As of 2015, 19 American states, including California, had introduced legislation to eliminate exemptions or make them more difficult to obtain.
Concurrently, American anti-vaccine activists reached out to libertarian and right-leaning groups such as the Tea Party movement to broaden their base. While earlier anti-vaccination activists focused on the health impacts and safety of vaccines, recent themes increasingly involve philosophical arguments about liberty, medical freedom and parental rights. With the growing anti-vaccine movement from the 2010s onwards, the United States has seen a resurgence of certain vaccine-preventable diseases. The US came close to losing its measles elimination status as the number of measles cases continued to rise in the late 2010s, with a total of 17 outbreaks in 2018 and 465 cases in 2019 (as of April 4, 2019). ==== 2019 measles outbreaks ==== Vaccine hesitancy led to declining rates of vaccination for measles, culminating in the 2019–2020 measles outbreaks. The most significant of these in proportion to national population was the 2019 Samoa measles outbreak. In July 2018, two 12-month-old children died in Samoa after receiving incorrectly prepared MMR vaccinations. These two deaths were seized on by anti-vaccine groups and used to incite fear of vaccination on social media, causing the government to suspend its measles vaccination programme for ten months, despite advice from the WHO. The incident caused many Samoan residents to lose trust in the healthcare system. UNICEF and the World Health Organization estimate that the measles vaccination rate in Samoa fell from 74% in 2017 to 34% in 2018, similar to some of the poorest countries in Africa. In August 2019, an infected passenger on one of the more than 8,000 annual flights between New Zealand and Samoa probably brought the disease from Auckland to Upolu. A full outbreak of measles began on the island in October 2019 and continued for the next four months. As of January 6, 2020, there were over 5,700 cases of measles and 83 deaths, out of a Samoan population of 200,874.
Over three percent of the population were infected. The outbreak was attributed to the decline in vaccination rates, from 74% in 2017 to 31–34% in 2018, even though nearby islands had rates near 99%. The outbreak ultimately resulted in 83 deaths (a rate of 14.3 deaths per 1,000 infected) and 5,520 cases (2.75% of the population) of measles in Samoa. Sixty-one of the first 70 deaths were children aged four and under, and all but seven were under 15. After the outbreak, anti-vaxxers employed racist tropes and misinformation to attribute the scores of measles deaths to poverty and poor nutrition, or even to the vaccine itself, but these claims were discounted by the international emergency medical teams that arrived in November and December: there was no evidence of the acute malnutrition, clinical vitamin A deficiency, or immune deficiency claimed by various anti-vaxxers. ==== COVID-19 pandemic activism ==== During the COVID-19 pandemic, anti-vaccine activists undertook various efforts to hinder people who wanted to receive the vaccines, with such activities occurring in countries including Australia, Israel, the United Kingdom, and the United States. These included attempts to physically blockade vaccination sites and the making of false reservations for vaccination appointments to clog up vaccination booking systems. Protests were also organized by the activists to raise awareness for their cause. In some instances, anti-vaccine rhetoric has been traced to state-sponsored internet troll activities designed to create social dissension. Worldwide, foreign disinformation campaigns have been associated with declining vaccination rates in target countries. Anti-vaccine activism online, both before and during the pandemic, has been linked to extreme levels of falsehoods, rumors, hoaxes, and conspiracy theories. Anti-vaccine activists have falsely claimed in social media posts that numerous deaths or injuries were caused by reactions to vaccines.
In one highly publicized instance in early 2023, after the Buffalo Bills football player Damar Hamlin experienced an in-game episode of commotio cordis, there was an increase in rhetoric and disinformation from figures such as Charlie Kirk and Drew Pinsky, who made unfounded claims about Hamlin's cardiac arrest and COVID-19 vaccines. In another 2023 incident, the college basketball player Bronny James experienced cardiac arrest at the Galen Center at the University of Southern California, leading to assertions that this was a result of receiving a COVID-19 vaccine; it was later revealed that the episode had been caused by a congenital heart defect. Anti-vaccine activists also claimed that the Foo Fighters drummer Taylor Hawkins had died in 2022 from the COVID-19 vaccine, when his death was in fact attributed to a drug overdose. In December 2023, The New York Times published a detailed investigation of the distortion and misrepresentation of the circumstances surrounding the death of 24-year-old George Watts Jr. by Robert F. Kennedy Jr. and other anti-vaccine activists. Some unvaccinated persons opposed to COVID-19 vaccination began referring to themselves in social media groups as "purebloods", a term historically connoting racial purity. The prominent biomedical researcher Peter Hotez asserted that he and other American scientists who publicly defend vaccines have been attacked on social media, harassed with threatening emails, intimidated, and confronted physically by opponents of vaccination. He further attributes the increase in the aggressiveness of the anti-vaccination movement to the influence of the extreme wing of the Republican Party. Hotez estimates that roughly 200,000 preventable deaths from COVID-19, mainly among Republicans, occurred in the US because of refusal to be vaccinated.
A 2023 study published in the Journal of the American Medical Association found "evidence of higher excess mortality for Republican voters compared with Democratic voters in Florida and Ohio after, but not before, COVID-19 vaccines were available to all adults in the US". == See also == Anti-vaccinationism in chiropractic Big Pharma conspiracy theories COVID-19 vaccine misinformation and hesitancy Germ theory denialism List of anti-vaccination groups Oral polio AIDS hypothesis Vaccine misinformation Vaccines and autism MMR vaccine and autism Thiomersal and vaccines == References ==
Wikipedia/Anti-vaccine_activism
Lewis's trilemma is an apologetic argument traditionally used to argue for the divinity of Jesus by postulating that the only alternatives were that he was evil or mad. One version was popularised by University of Oxford literary scholar and writer C. S. Lewis in a BBC radio talk and in his writings. It is sometimes described as the "Lunatic, Liar, or Lord", or "Mad, Bad, or God" argument. It takes the form of a trilemma — a choice among three options, each of which is in some way difficult to accept. A form of the argument can be found as early as 1846, and many other versions of the argument preceded Lewis's formulation in the 1940s. The argument has played an important part in Christian apologetics. Criticisms of the argument have included that it relies on the assumption that Jesus claimed to be God, something that most biblical scholars do not believe to be true, and that it is logically unsound since it presents an incomplete set of options. == History == This argument has been used in various forms throughout church history. It was used by the American preacher Mark Hopkins in Lectures on the Evidences of Christianity (1846), a book based on lectures delivered in 1844. Another early use of this approach was by the Scottish preacher "Rabbi" John Duncan (1796–1870), around 1859–1860. He stated: "Christ either deceived mankind by conscious fraud, or He was Himself deluded and self-deceived, or He was Divine. There is no getting out of this trilemma. It is inexorable." J. Gresham Machen used a similar line of argument in the fifth chapter of his famous work Christianity and Liberalism (1923). There, Machen says: "The real trouble is that the lofty claim of Jesus, if ... the claim was unjustified, places a moral stain upon Jesus' character. What shall be thought of a human being who lapsed so far from the path of humility and sanity as to believe the eternal destinies of the world were committed into his hands?
The truth is that if Jesus be merely an example, he is not a worthy example for he claimed to be far more." Others who used this approach included N. P. Williams, R. A. Torrey (1856–1928), and W. E. Biederwolf (1867–1939). The writer G. K. Chesterton used something similar to the trilemma in his book, The Everlasting Man (1925), which Lewis cited in 1962 as the second book that most influenced him. == Lewis's formulation == Lewis was an Oxford medieval literature scholar, popular writer, Christian apologist, and former atheist. He used the argument outlined below in a series of BBC radio talks later published as the book Mere Christianity. There, he states: "I am trying here to prevent anyone saying the really foolish thing that people often say about Him: I'm ready to accept Jesus as a great moral teacher, but I don't accept his claim to be God. That is the one thing we must not say. A man who was merely a man and said the sort of things Jesus said would not be a great moral teacher. He would either be a lunatic—on the level with the man who says he is a poached egg—or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God, or else a madman or something worse. You can shut him up for a fool, you can spit at him and kill him as a demon or you can fall at his feet and call him Lord and God, but let us not come with any patronizing nonsense about his being a great human teacher. He has not left that open to us. He did not intend to. ... Now it seems to me obvious that He was neither a lunatic nor a fiend: and consequently, however strange or terrifying or unlikely it may seem, I have to accept the view that He was and is God." Lewis, who had spoken extensively on Christianity to Royal Air Force personnel, was aware that many ordinary people did not believe Jesus was God but saw him rather as "a 'great human teacher' who was deified by his superstitious followers"; his argument is intended to overcome this. 
It is based on a traditional assumption that, in his words and deeds, Jesus was asserting a claim to be God. For example, in Mere Christianity, Lewis refers to what he says are Jesus's claims: to have authority to forgive sins—behaving as if "He was the party chiefly concerned, the person chiefly offended in all offences" to have always existed; and to intend to come back to judge the world at the end of time. Lewis implies that these amount to a claim to be God and argues that they logically exclude the possibility that Jesus was merely "a great moral teacher", because he believes no ordinary human making such claims could possibly be rationally or morally reliable. Elsewhere, he refers to this argument as "the aut Deus aut malus homo" ("either God or a bad man"), a reference to an earlier version of the argument used by Henry Parry Liddon in his 1866 Bampton Lectures, in which Liddon argued for the divinity of Jesus based on a number of grounds, including the claims he believed Jesus made. === In Narnia === A version of this argument appears in Lewis's fantasy novel The Lion, the Witch and the Wardrobe. When Lucy and Edmund return from Narnia (her second visit and his first), Edmund tells Peter and Susan that he was playing along with Lucy and pretending they went to Narnia. Peter and Susan believe Edmund and are worried that Lucy might be mentally ill, so they seek out the Professor whose house they are living in. After listening to them explain the situation and asking them some questions, he responds: "'Logic!' said the Professor half to himself. 'Why don't they teach logic at these schools? There are only three possibilities. Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious she is not mad. 
For the moment then, and unless any further evidence turns up, we must assume she is telling the truth.'" == Influence == === Christian === The trilemma has continued to be used in Christian apologetics since Lewis, notably by writers like Josh McDowell. Philosopher Peter Kreeft describes the trilemma as "the most important argument in Christian apologetics", and it forms a major part of the first talk in the Alpha Course and the book based on it, Questions of Life by Nicky Gumbel, an English Anglican priest. Ronald Reagan used this argument in 1978, in a written reply to a liberal Methodist minister who said that he did not believe Jesus was the son of God. A variant has also been quoted by Bono. The Lewis version was cited by Charles Colson as the basis of his conversion to Christianity. Stephen Davis, a supporter of Lewis and of this argument, argues that it can show belief in the incarnation as rational. The biblical scholar Bruce M. Metzger argued: "It has often been pointed out that Jesus' claim to be the only Son of God is either true or false. If it is false, he either knew the claim was false or he did not know that it was false. In the former case (2) he was a liar; in the latter case (3) he was a lunatic. No other conclusion beside these three is possible." === Non-Christian === The atheist writer Christopher Hitchens accepts Lewis's analysis of the options but reaches the opposite conclusion that Jesus was not good. He writes: "I am bound to say that Lewis is more honest here. Absent a direct line to the Almighty and a conviction that the last days are upon us, how is it 'moral' ... to claim a monopoly on access to heaven, or to threaten waverers with everlasting fire, let alone to condemn fig trees and persuade devils to infest the bodies of pigs? Such a person if not divine would be a sorcerer and a fanatic." 
== Criticism == Writing of the argument's "almost total absence from discussions about the status of Jesus by professional theologians and biblical scholars", Stephen T. Davis comments that it is "often severely criticized, both by people who do and by people who do not believe in the divinity of Jesus". === Jesus' claims to divinity === The argument relies on the assumption that Jesus claimed to be God, something that most biblical scholars and historians of the period do not believe to be true. A frequent criticism is that Lewis's trilemma depends on the veracity of the scriptural accounts of Jesus's statements and miracles. The trilemma rests on the interpretation of New Testament authors' depiction of the life of Jesus; a widespread objection is that the statements by Jesus recorded in the Gospels are being misinterpreted, and do not constitute claims to divinity. According to the biblical scholar Bart D. Ehrman, it is historically inaccurate that Jesus called himself God, so Lewis's premise of accepting that very claim is problematic. Ehrman stated that it is a mere legend that the historical Jesus called himself God, and that this was unknown to Lewis since he never was a professional Bible scholar. In Honest to God, John A. T. Robinson, then Bishop of Woolwich, criticizes Lewis's approach, questioning the idea that Jesus intended to claim divinity: "It is, indeed, an open question whether Jesus claimed to be Son of God, let alone God". John Hick, writing in 1993, argued that this "once popular form of apologetic" was ruled out by changes in New Testament studies, citing "broad agreement" that scholars do not today support the view that Jesus claimed to be God, quoting as examples Michael Ramsey (1980), C. F. D. Moule (1977), James Dunn (1980), Brian Hebblethwaite (1985), and David Brown (1985). 
Larry Hurtado, who argues that the followers of Jesus within a very short period developed an exceedingly high level of devotional reverence to Jesus, at the same time says that there is no evidence that Jesus himself demanded or received such cultic reverence. According to Gerd Lüdemann, the broad consensus among modern New Testament scholars is that the proclamation of the divinity of Jesus was a development within the earliest Christian communities. === Unsound logical form === Another criticism raised is that Lewis is creating a false trilemma by insisting that only three options are possible. Craig A. Evans writes that the "liar, lunatic, Lord" trilemma "makes for good alliteration, maybe even good rhetoric, but it is faulty logic". He proceeds to list several other alternatives: Jesus was Israel's messiah, simply a great prophet, or we do not really know who or what he was because the New Testament sources portray him inaccurately. Philosopher and theologian William Lane Craig also believes that the trilemma is an unsound argument for Christianity. Craig gives several other logically possible alternatives: Jesus' claims as to his divinity were merely good-faith mistakes resulting from his sincere efforts at reasoning, Jesus was deluded with respect to the specific issue of his own divinity while his faculties of moral reasoning remained intact, or Jesus did not understand the claims he made about himself as amounting to a claim to divinity. Philosopher John Beversluis comments that Lewis "deprives his readers of numerous alternate interpretations of Jesus that carry with them no such odious implications". Paul E. Little, in his 1967 work Know Why You Believe, expanded the argument into a tetralemma ("Lord, Liar, Lunatic or Legend"). 
This has also been done by Peter Kreeft and Ronald Tacelli, both professors of philosophy at Boston College, who have also suggested a pentalemma, accommodating the option that Jesus was a guru who believed himself to be God in the sense that everything is divine. ==== Lewis's response to the possibility that the Gospels are legends ==== Lewis used his own literary expertise in a 1950 essay, "What Are We to Make of Jesus?", to disagree with the possibility that the Gospels are legends. There, Lewis writes: "Now, as a literary historian, I am perfectly convinced that whatever else the Gospels are they are not legends. I have read a great deal of legend and I am quite clear that they are not the same sort of thing. They are not artistic enough to be legends. From an imaginative point of view they are clumsy, they don't work up to things properly. Most of the life of Jesus is totally unknown to us, as is the life of anyone else who lived at that time, and no people building up a legend would allow that to be so. Apart from bits of the Platonic dialogues, there is no conversation that I know of in ancient literature like the Fourth Gospel. There is nothing, even in modern literature, until about a hundred years ago when the realistic novel came into existence." === Apologetic method === Writing from a presuppositional perspective, Richard L. Pratt Jr. has criticized the trilemma as expanded by Paul E. Little ("Lord, Liar, Lunatic or Legend") as being too reliant on human reason: "Instead of insisting on the necessity of repentance and faith as the ground for true knowledge, Little acts as if the unbeliever needs merely to be logical about Jesus' claims in order to arrive at the truth." == See also == Christological argument Christology Historicity of the Bible List of Jewish messiah claimants Mental health of Jesus Pious fraud Rejection of Jesus == References ==
Wikipedia/Lewis'_trilemma
Extensive investigation into vaccines and autism spectrum disorder has shown that there is no relationship between the two, causal or otherwise, and that vaccine ingredients do not cause autism. The American scientist Peter Hotez researched the growth of the false claim and concluded that its spread originated with Andrew Wakefield's fraudulent 1998 paper, and that no prior paper supports a link. Despite the scientific consensus for the absence of a relationship and the retracted paper, the anti-vaccination movement at large continues to promote theories linking the two. A developing tactic appears to be the "promotion of irrelevant research [as] an active aggregation of several questionable or peripherally related research studies in an attempt to justify the science underlying a questionable claim." The Centers for Disease Control and Prevention (CDC) published updated statistics on autism in children for the year 2020. In 2000, 1 in 150 children born in 1992 had been diagnosed with autism; in 2020, 1 in 36 children born in 2012 were diagnosed with autism. Anti-vaccination groups believe this rise to be due to the increased number of vaccines being given to children. Although there has been an increase in vaccines, there has also been an increase in autism screenings, and it is clear from the literature and the CDC that the increase in the number of children diagnosed with autism is due to the increase in ways to diagnose it. Celebrity and social media involvement seem to play a role in the anti-vaccine movement. == Autism screening history == In the early 2000s, evidence-based tools were being used for children as early as 36 months to help with the diagnosis of autism, and parents of children were able to identify signs of autism by the time the child turned 2. In 2001, the Modified Checklist for Autism in Toddlers (M-CHAT) came into use and could identify signs of autism in children at 24 months.
In 2006, the American Academy of Pediatrics recommended screening specifically for autism at a child's 18-month checkup, and later at the 24-month visit as well. As of May 2024, the CDC noted that healthcare workers, community members, and even schools can screen for autism. A diagnosis of autism made by a professional by the age of two is considered very reliable. == Claimed mechanisms == The claimed mechanisms have changed over time, in response to evidence refuting each in turn. Anti-vaccine groups claim that specific vaccine ingredients can cause autism. Some of the most frequently mentioned are thiomersal, aluminum adjuvants, and formaldehyde. === MMR vaccine === The idea of a link between the MMR vaccine and autism came to prominence after the publication of a paper by Andrew Wakefield and others in The Lancet in 1998. This paper, which was retracted in 2010 and whose publication led to Wakefield being struck off the United Kingdom medical register, has been described as "the most damaging medical hoax of the last 100 years". Wakefield's primary claim was that he had isolated evidence of vaccine-strain measles virus RNA in the intestines of autistic children, leading to a condition he termed autistic enterocolitis (a condition never recognised or adopted by the scientific community). This finding was later shown to be due to errors made by the laboratory where the polymerase chain reaction (PCR) tests were performed. In 2009, The Sunday Times reported that Wakefield had manipulated patient data and misreported results in his 1998 paper, thus falsifying a link with autism. A 2011 article in the British Medical Journal describes the way in which Wakefield manipulated the data in his study in order to arrive at his predetermined conclusion.
An accompanying editorial in the same journal described Wakefield's work as an "elaborate fraud" which led to lower vaccination rates, putting hundreds of thousands of children at risk and diverting funding and other resources from research into the true cause of autism. On 12 February 2009, a special court convened in the United States to review claims under its National Vaccine Injury Compensation Program ruled that parents of autistic children are not entitled to compensation in their contention that certain vaccines caused their children to develop autism. The Centers for Disease Control and Prevention (CDC), the Institute of Medicine (IOM) of the United States National Academy of Sciences, and the National Health Service have all concluded that there is no link between the MMR vaccine and autism. A systematic review by the Cochrane Library concluded that there is no credible link between the MMR vaccine and autism, that the MMR vaccine has prevented diseases that still carry a heavy burden of death and complications, that the lack of confidence in the MMR vaccine has damaged public health, and that the design and reporting of safety outcomes in MMR vaccine studies are largely inadequate. Further, an epidemiological study found that even among children at high risk for autism because of an older autistic sibling, receiving the MMR vaccine was not associated with autism or with any increased risk of an autism diagnosis. The claim that MMR vaccines cause autism is not isolated to the United States. A seven-year study was done in Denmark from 1991 to 1998 following children who received the MMR vaccine. When vaccinated children were compared with unvaccinated children, the relative risk of autism in the vaccinated group was 0.92, and the relative risk of other autistic-spectrum disorders was 0.83. The study concluded there was no association between the MMR vaccine and autism.
The result held regardless of the child's age at vaccination, the vaccination date, or the time elapsed since the vaccine was given. === Thiomersal === Thiomersal is an antifungal preservative used in small amounts in some multi-dose vaccines (where the same vial is opened and used for multiple patients) to prevent contamination of the vaccine. Thiomersal contains ethylmercury, a mercury compound which is related to, but significantly less toxic than, the neurotoxic pollutant methylmercury. Despite decades of safe use, public campaigns prompted the CDC and the American Academy of Pediatrics (AAP) to request vaccine makers to remove thiomersal from vaccines as quickly as possible on the precautionary principle. Thiomersal is now absent from all common United States and European Union vaccines, except for some preparations of influenza vaccine. (Trace amounts remain in some vaccines due to production processes, at an approximate maximum of 1 microgramme, around 15% of the average daily mercury intake in the US for adults and 2.5% of the daily level considered tolerable by the World Health Organization [WHO].) The action engendered concern that thiomersal could have been responsible for autism. The idea that thiomersal was a cause or trigger for autism is now considered disproven, as incidence rates for autism increased steadily even after thiomersal was removed from childhood vaccines. An association between autism and mercury poisoning is implausible because the symptoms of mercury poisoning are absent in autistic children and are inherently inconsistent with the behaviours and symptoms of autism. There is no accepted scientific evidence that exposure to thiomersal is a factor in causing autism. A study by the CDC exploring mercury poisoning in vaccines concluded that no signs of poisoning were present. Under the U.S.
Food and Drug Administration (FDA) Modernization Act (FDAMA) of 1997, the FDA conducted a comprehensive review of the use of thiomersal in childhood vaccines. Conducted in 1999, this review found no evidence of harm from the use of thiomersal as a vaccine preservative, other than local hypersensitivity reactions. Despite this, starting in 2000, parents in the United States pursued legal compensation from a federal fund arguing that thiomersal caused autism in their children. A 2004 Institute of Medicine (IOM) committee favored rejecting any causal relationship between autism and vaccines containing thiomersal, and rulings from the vaccine court in three test cases in 2010 established the precedent that thiomersal is not considered a cause of autism. === Aluminium adjuvants === As mercury compounds in vaccines have been definitively ruled out as a cause of autism, some anti-vaccine activists propose aluminium adjuvants as the cause of autism. Aluminium adjuvants stimulate immune receptors and cause a strengthened response to the antigen in a way that is natural to the body. Aluminium adjuvants can be used in the form of soluble aluminium salts, alumina, and aluminium hydroxide. There is no substantial scientific evidence that aluminium adjuvants are linked to autism. Studies confirming that aluminium adjuvants in vaccines are not dangerous found no traces of aluminium in children's hair or blood above the minimal risk level set by the Agency for Toxic Substances and Disease Registry. Anti-vaccination activists commonly cite a number of papers which claim that there is in fact a link. These are mainly published in predatory open access journals, where peer review is virtually non-existent. Work conducted by Christopher Shaw, Christopher Exley and Lucija Tomljenovic has been funded by the anti-vaccination Dwoskin Family Foundation. The work published by Shaw et al. has been discredited by the World Health Organization.
A review study published in the open access journal Toxics suggests a link between early aluminium adjuvant exposure and autism, and concludes that there is a lack of fundamental scientific data demonstrating that aluminium adjuvants are safe. === Formaldehyde === Formaldehyde is another claimed link between vaccines and autism. Although the claim still circulates, formaldehyde has long been used safely in diphtheria vaccines to detoxify the toxin produced by the bacteria used to make the vaccine. It is also used to inactivate viruses used in some vaccines. Formaldehyde occurs naturally in the body and the environment. The human body uses formaldehyde to build amino acids and to generate energy. Formaldehyde is ubiquitous in everyday life; it is found in preservatives, building materials, and many household products. There is no safety concern for formaldehyde in vaccines. The most serious known repercussion is cancer after exposure to high levels of formaldehyde in the air. The amount of formaldehyde in some vaccines is less than what the body naturally produces. === Vaccine overload === Following the belief that individual vaccines caused autism was the idea of vaccine overload, which claims that too many vaccines at once may overwhelm or weaken a child's immune system and lead to adverse effects. The Children's Hospital of Philadelphia Vaccine Education Center compiled a list of vaccines recommended to children throughout history. They found that from 1985 to 1994, the recommended number of vaccines totaled eight. The schedule for 2011 to 2020 revealed the recommended number of vaccines totaled fourteen. Vaccine overload became popular after the National Vaccine Injury Compensation Program in the United States accepted the case of nine-year-old Hannah Poling.
After showing signs of developmental delay as a toddler, Poling was diagnosed with encephalopathy caused by a mitochondrial enzyme deficit, which her family argued was triggered by multiple vaccines she received at nineteen months old. There have been multiple cases reported similar to this one, which led to the belief that vaccine overload caused autism. However, scientific studies show that vaccines do not overwhelm the immune system. In fact, conservative estimates predict that the immune system can respond to thousands of viruses simultaneously. Vaccines constitute only a tiny fraction of the pathogens a child naturally encounters in a typical year, and common fevers and middle ear infections pose a much greater challenge to the immune system than vaccines do. Other scientific findings support the idea that vaccinations, and even multiple concurrent vaccinations, do not weaken the immune system or compromise overall immunity, and no evidence has been found that autism has any immune-mediated pathophysiology. Vaccines recommended from 1985 to 1994 were diphtheria, tetanus, pertussis, measles, mumps, rubella, polio, Hib, and hepatitis B (added in 1991). Vaccines recommended from 2011 to 2020 were diphtheria, tetanus, pertussis, measles, mumps, rubella, polio, Hib, hepatitis B, varicella, hepatitis A, pneumococcal, influenza, and rotavirus. Diphtheria, tetanus, and pertussis were given together as the DTaP vaccine; measles, mumps, and rubella were given together as the MMR vaccine. == Celebrity involvement and social media == Some celebrities have spoken out on their views that autism is related to vaccination, including: Jenny McCarthy, Kristin Cavallari, Robert De Niro, Jim Carrey, Bill Maher, and Pete Evans. McCarthy, one of the most outspoken celebrities on the topic, has said her son Evan's autism diagnosis was a result of the MMR vaccine. She authored Louder than Words: A Mother's Journey in Healing Autism and co-authored Healing and Preventing Autism.
She was also president of Generation Rescue, a non-profit organisation that claimed vaccines made children autistic and promoted various unproven treatments. Generation Rescue ceased operations in December 2019. In a September 2015 U.S. presidential debate, Republican Party candidate and future United States President Donald Trump stated he knew of a 2-year-old child who had recently received a combined vaccine, developed a fever, and subsequently developed autism. Robert F. Kennedy, Jr. is one of the most notable proponents of the anti-vaccine movement. Kennedy published the book Thimerosal: Let the Science Speak: The Evidence Supporting the Immediate Removal of Mercury--A Known Neurotoxin--From Vaccines. He is also chairman of the board of Children's Health Defense, a group and website widely known for its anti-vaccination stance. A study of Facebook advertising compared anti-vaccine and pro-vaccine ads. It found that even though the two groups ran a similar number of ads, the median number of ads per buyer was higher for anti-vaccine advertisers. The study also revealed that anti-vaccine ads were targeted primarily at women and young adults likely to have children, while pro-vaccine ads were shown evenly across age groups. == Public opinion == In December 2020, a poll of 1,115 U.S. adults found 12% of respondents believed there is evidence vaccinations cause autism; 51% believed there is no evidence; and 37% did not know. An updated survey, conducted in March 2023, found that 72% of adults rate the MMR vaccine's health benefits as high or very high, and 64% rate its risk of side effects as low or very low. There has also been a drop since 2019 in the share of United States adults who believe students in schools should be fully vaccinated. The 2023 survey showed that only 70% of U.S.
adults agree that children should be vaccinated to attend school, while the share who believe it is a parent's right to choose whether their child is vaccinated for school rose to 28%. == Political support in the US == In March 2025, the U.S. Department of Health and Human Services, overseen by former chair of Children's Health Defense, Robert F. Kennedy Jr., hired vaccine critic David Geier to conduct a federal study on the long-debunked link; Geier holds no medical credentials. == References ==
Wikipedia/Vaccines_and_autism
Mill's methods are five methods of induction described by philosopher John Stuart Mill in his 1843 book A System of Logic. They are intended to establish a causal relationship between two or more groups of data, analyzing their respective differences and similarities. == The methods == === Direct method of agreement === If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon. For a property to be a necessary condition it must always be present if the effect is present. Since this is so, then we are interested in looking at cases where the effect is present and taking note of which properties, among those considered to be 'possible necessary conditions' are present and which are absent. Obviously, any properties which are absent when the effect is present cannot be necessary conditions for the effect. This method is also referred to more generally within comparative politics as the most different systems design. Symbolically, the method of agreement can be represented as:

A B C D occur together with w x y z
A E F G occur together with w t u v
——————————————————
Therefore A is the cause, or the effect, of w.

To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, the dependent variable in this case, is that they have a system of universal health care.
Comparing the factors known about the countries above, a comparative political scientist would conclude that the government sitting on the centre-left of the spectrum would be the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound; social democratic (centre-left) policies often include universal health care. === Method of difference === If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or cause, or an indispensable part of the cause, of the phenomenon. This method is also known more generally as the most similar systems design within comparative politics.

A B C D occur together with w x y z
B C D occur together with x y z
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of w.

As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable to be the status of each country as a former colony or not, with the dependent variable being support for anti-colonial initiatives. This is because, out of the two similar countries compared, the difference between the two is whether or not they were formerly a colony.
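The eliminative logic behind both methods reduces to set operations: agreement intersects the circumstances of the instances where the phenomenon occurs, while difference subtracts a matched negative instance from a positive one. A minimal sketch (the function names and instance data are illustrative, not Mill's):

```python
def method_of_agreement(positive_instances):
    """Return the circumstances common to every instance where the
    phenomenon occurs; only these can be its cause (or effect)."""
    common = set(positive_instances[0])
    for circumstances in positive_instances[1:]:
        common &= set(circumstances)
    return common

def method_of_difference(positive, negative):
    """Return the circumstances present only in the positive instance,
    assuming the two instances otherwise share every circumstance."""
    return set(positive) - set(negative)

# Mill's agreement schema: {A,B,C,D} with w and {A,E,F,G} with w point to A.
assert method_of_agreement([{"A", "B", "C", "D"}, {"A", "E", "F", "G"}]) == {"A"}

# Mill's difference schema: {A,B,C,D} with w, {B,C,D} without w point to A.
assert method_of_difference({"A", "B", "C", "D"}, {"B", "C", "D"}) == {"A"}
```

The same intersect-and-subtract pattern underlies the joint method described next, which applies both eliminations at once.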
This then explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony. === Indirect method of difference === If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon. Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement. Despite the name, it is weaker than the direct method of difference and does not include it. Symbolically, the Joint method of agreement and difference can be represented as:

A B C occur together with x y z
A D E occur together with x v w
F G occur with y w
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of x.

=== Method of residue === Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. If a range of factors are believed to cause a range of phenomena, and we have matched all the factors, except one, with all the phenomena, except one, then the remaining phenomenon can be attributed to the remaining factor. Symbolically, the Method of Residue can be represented as:

A B C occur together with x y z
B is known to be the cause of y
C is known to be the cause of z
——————————————————
Therefore A is the cause or effect of x.

=== Method of concomitant variations === Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation.
If across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing both salt and lead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead. Symbolically, the method of concomitant variation can be represented as (with ± representing a shift):

A B C occur together with x y z
A± B C results in x± y z
—————————————————————
Therefore A and x are causally connected.

Unlike the preceding four inductive methods, the method of concomitant variation doesn't involve the elimination of any circumstance. Changing the magnitude of one factor results in the change in the magnitude of another factor. == See also == Causal inference Controlled scientific experiments Baconian method Bayesian network Koch's postulates == References == == Further reading == Copi, Irving M.; Cohen, Carl (2001). Introduction to Logic. Prentice Hall. ISBN 978-0-13-033735-1. Ducheyne, Steffen (2008). "J.S. Mill's Canons of Induction: From true causes to provisional ones". History and Philosophy of Logic. 29 (4): 361–376. CiteSeerX 10.1.1.671.6256. doi:10.1080/01445340802164377. S2CID 170478055. Kreeft, Peter (2009). Socratic Logic, A Logic Text Using Socratic Method, Platonic Questions, and Aristotelian Principles. St. Augustine's Press, South Bend, Indiana. ISBN 978-1-890318-89-5. == External links == Causal Reasoning—Provides some examples Mill's methods for identifying causes—Provides some examples
Wikipedia/Mill's_methods
The deductive-nomological model (DN model) of scientific explanation, also known as Hempel's model, the Hempel–Oppenheim model, the Popper–Hempel model, or the covering law model, is a formal view of scientifically answering questions asking, "Why...?". The DN model poses scientific explanation as a deductive structure, one where truth of its premises entails truth of its conclusion, hinged on accurate prediction or postdiction of the phenomenon to be explained. Because of problems concerning humans' ability to define, discover, and know causality, causality was omitted in initial formulations of the DN model. Causality was thought to be incidentally approximated by realistic selection of premises that derive the phenomenon of interest from observed starting conditions plus general laws. Still, the DN model formally permitted causally irrelevant factors. Also, derivability from observations and laws sometimes yielded absurd answers. When logical empiricism fell out of favor in the 1960s, the DN model was widely seen as a flawed or greatly incomplete model of scientific explanation. Nonetheless, it remained an idealized version of scientific explanation, and one that was rather accurate when applied to modern physics. In the early 1980s, a revision to the DN model emphasized maximal specificity for relevance of the conditions and axioms stated. Together with Hempel's inductive-statistical model, the DN model forms scientific explanation's covering law model, which is also termed, from a critical angle, subsumption theory. == Form == The term deductive distinguishes the DN model's intended determinism from the probabilism of inductive inferences. The term nomological is derived from the Greek word νόμος or nomos, meaning "law". The DN model holds to a view of scientific explanation whose conditions of adequacy (CA)—semiformal but stated classically—are derivability (CA1), lawlikeness (CA2), empirical content (CA3), and truth (CA4).
In the DN model, a law axiomatizes an unrestricted generalization from antecedent A to consequent B by conditional proposition—If A, then B—and has testable empirical content. A law differs from mere true regularity—for instance, George always carries only $1 bills in his wallet—by supporting counterfactual claims and thus suggesting what must be true, while following from a scientific theory's axiomatic structure. The phenomenon to be explained is the explanandum—an event, law, or theory—whereas the premises to explain it are explanans, true or highly confirmed, containing at least one universal law, and entailing the explanandum. Thus, given the explanans as initial, specific conditions C1, C2, ... Cn plus general laws L1, L2, ... Ln, the phenomenon E as explanandum is a deductive consequence, thereby scientifically explained. == Roots == Aristotle's scientific explanation in Physics resembles the DN model, an idealized form of scientific explanation. The framework of Aristotelian physics—Aristotelian metaphysics—reflected the perspective of Aristotle, principally a biologist, who, amid living entities' undeniable purposiveness, formalized vitalism and teleology, an intrinsic morality in nature. With emergence of Copernicanism, however, Descartes introduced mechanical philosophy, then Newton rigorously posed lawlike explanation, both Descartes and especially Newton shunning teleology within natural philosophy. Around 1740, David Hume staked Hume's fork, highlighted the problem of induction, and found humans ignorant of either necessary or sufficient causality. Hume also highlighted the fact/value gap, as what is does not itself reveal what ought. Near 1780, countering Hume's ostensibly radical empiricism, Immanuel Kant highlighted extreme rationalism—as by Descartes or Spinoza—and sought middle ground.
Inferring the mind to arrange experience of the world into substance, space, and time, Kant placed the mind as part of the causal constellation of experience and thereby found Newton's theory of motion universally true, yet knowledge of things in themselves impossible. Safeguarding science, then, Kant paradoxically stripped it of scientific realism. Aborting Francis Bacon's inductivist mission to dissolve the veil of appearance to uncover the noumena—metaphysical view of nature's ultimate truths—Kant's transcendental idealism tasked science with simply modeling patterns of phenomena. Safeguarding metaphysics, too, it found the mind's constants holding also universal moral truths, and launched German idealism. Auguste Comte found the problem of induction rather irrelevant since enumerative induction is grounded on the empiricism available, while science's point is not metaphysical truth. Comte found human knowledge had evolved from theological to metaphysical to scientific—the ultimate stage—rejecting both theology and metaphysics as asking questions unanswerable and posing answers unverifiable. Comte in the 1830s expounded positivism—the first modern philosophy of science and simultaneously a political philosophy—rejecting conjectures about unobservables, thus rejecting search for causes. Positivism predicts observations, confirms the predictions, and states a law, thereupon applied to benefit human society. From late 19th century into the early 20th century, the influence of positivism spanned the globe. Meanwhile, evolutionary theory's natural selection brought the Copernican Revolution into biology and eventuated in the first conceptual alternative to vitalism and teleology. 
== Growth == Whereas Comtean positivism posed science as description, logical positivism emerged in the late 1920s and posed science as explanation, perhaps to better unify empirical sciences by covering not only fundamental science—that is, fundamental physics—but special sciences, too, such as biology, psychology, economics, and anthropology. After defeat of National Socialism with World War II's close in 1945, logical positivism shifted to a milder variant, logical empiricism. All variants of the movement, which lasted until 1965, are neopositivism, sharing the quest of verificationism. Neopositivists led emergence of the philosophy subdiscipline philosophy of science, researching such questions and aspects of scientific theory and knowledge. Scientific realism takes scientific theory's statements at face value, thus accorded either falsity or truth—probable or approximate or actual. Neopositivists held scientific antirealism as instrumentalism, holding scientific theory as simply a device to predict observations and their course, while statements on nature's unobservable aspects are elliptical for or metaphorical of its observable aspects, rather. DN model received its most detailed, influential statement by Carl G. Hempel, first in his 1942 article "The function of general laws in history", and more explicitly with Paul Oppenheim in their 1948 article "Studies in the logic of explanation". A leading logical empiricist, Hempel embraced the Humean empiricist view that humans observe sequence of sensory events, not cause and effect, as causal relations and causal mechanisms are unobservables. DN model bypasses causality beyond mere constant conjunction: first an event like A, then always an event like B. Hempel held natural laws—empirically confirmed regularities—as satisfactory, and if included realistically to approximate causality. In later articles, Hempel defended DN model and proposed probabilistic explanation by inductive-statistical model (IS model).
DN model and IS model—whereby the probability must be high, such as at least 50%—together form covering law model, as named by a critic, William Dray. Derivation of statistical laws from other statistical laws goes to the deductive-statistical model (DS model). Georg Henrik von Wright, another critic, named the totality subsumption theory. == Decline == Amid failure of neopositivism's fundamental tenets, Hempel in 1965 abandoned verificationism, signaling neopositivism's demise. From 1930 onward, Karl Popper attacked positivism, although, paradoxically, Popper was commonly mistaken for a positivist. Even Popper's 1934 book embraces DN model, widely accepted as the model of scientific explanation for as long as physics remained the model of science examined by philosophers of science. In the 1940s, filling the vast observational gap between cytology and biochemistry, cell biology arose and established existence of cell organelles besides the nucleus. Launched in the late 1930s, the molecular biology research program cracked a genetic code in the early 1960s and then converged with cell biology as cell and molecular biology, its breakthroughs and discoveries defying DN model by arriving in quest not of lawlike explanation but of causal mechanisms. Biology became a new model of science, while special sciences were no longer thought defective by lacking universal laws, as borne by physics. In 1948, when explicating DN model and stating scientific explanation's semiformal conditions of adequacy, Hempel and Oppenheim acknowledged redundancy of the third, empirical content, implied by the other three—derivability, lawlikeness, and truth. In the early 1980s, upon widespread view that causality ensures the explanans' relevance, Wesley Salmon called for returning cause to because, and along with James Fetzer helped replace CA3 empirical content with CA3' strict maximal specificity. 
Salmon introduced causal mechanical explanation, never clarifying how it proceeds, yet reviving philosophers' interest in such. Via shortcomings of Hempel's inductive-statistical model (IS model), Salmon introduced statistical-relevance model (SR model). Although DN model remained an idealized form of scientific explanation, especially in applied sciences, most philosophers of science consider DN model flawed by excluding many types of explanations generally accepted as scientific. == Strengths == As theory of knowledge, epistemology differs from ontology, which is a subbranch of metaphysics, theory of reality. Ontology proposes categories of being—what sorts of things exist—and so, although a scientific theory's ontological commitment can be modified in light of experience, an ontological commitment inevitably precedes empirical inquiry. Natural laws, so called, are statements of humans' observations, thus are epistemological—concerning human knowledge—the epistemic. Causal mechanisms and structures existing putatively independently of minds exist, or would exist, in the natural world's structure itself, and thus are ontological, the ontic. Blurring epistemic with ontic—as by incautiously presuming a natural law to refer to a causal mechanism, or to trace structures realistically during unobserved transitions, or to be true regularities always unvarying—tends to generate a category mistake. Discarding ontic commitments, including causality per se, DN model permits a theory's laws to be reduced to—that is, subsumed by—a more fundamental theory's laws. The higher theory's laws are explained in DN model by the lower theory's laws. 
Thus, the epistemic success of Newtonian theory's law of universal gravitation is reduced to—thus explained by—Albert Einstein's general theory of relativity, although Einstein's discards Newton's ontic claim that universal gravitation's epistemic success predicting Kepler's laws of planetary motion is through a causal mechanism of a straightly attractive force instantly traversing absolute space despite absolute time. Covering law model reflects neopositivism's vision of empirical science, a vision interpreting or presuming unity of science, whereby all empirical sciences are either fundamental science—that is, fundamental physics—or are special sciences, whether astrophysics, chemistry, biology, geology, psychology, economics, and so on. All special sciences would network via covering law model. And by stating boundary conditions while supplying bridge laws, any special law would reduce to a lower special law, ultimately reducing—theoretically although generally not practically—to fundamental science. (Boundary conditions are specified conditions whereby the phenomena of interest occur. Bridge laws translate terms in one science to terms in another science.) == Weaknesses == By DN model, if one asks, "Why is that shadow 20 feet long?", another can answer, "Because that flagpole is 15 feet tall, the Sun is at x angle, and laws of electromagnetism". Yet by problem of symmetry, if one instead asked, "Why is that flagpole 15 feet tall?", another could answer, "Because that shadow is 20 feet long, the Sun is at x angle, and laws of electromagnetism", likewise a deduction from observed conditions and scientific laws, but an answer clearly incorrect. By the problem of irrelevance, if one asks, "Why did that man not get pregnant?", one could in part answer, among the explanans, "Because he took birth control pills"—if he factually took them, and the law of their preventing pregnancy—as covering law model poses no restriction to bar that observation from the explanans. 
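The symmetry problem can be made concrete with the trigonometry the flagpole example leaves implicit: given the Sun's elevation angle, the same law licenses a deduction in either direction, so the DN schema cannot distinguish the genuine explanation (height explains shadow) from the reversed one. A numeric sketch using the example's figures (the function names are illustrative; the angle is simply whatever makes a 15-foot pole cast a 20-foot shadow):

```python
import math

# Sun's elevation angle consistent with a 15 ft pole and a 20 ft shadow.
angle = math.atan2(15.0, 20.0)  # about 0.6435 rad, i.e. roughly 36.87 degrees

def shadow_from_pole(height, angle):
    """Forward deduction: pole height plus the law of light propagation
    yields the shadow length — the intuitively explanatory direction."""
    return height / math.tan(angle)

def pole_from_shadow(shadow, angle):
    """Reversed deduction: shadow length plus the same law yields the pole
    height — equally valid under the DN schema, yet clearly not explanatory."""
    return shadow * math.tan(angle)

# Both deductions succeed, though only the first explains anything.
assert math.isclose(shadow_from_pole(15.0, angle), 20.0)
assert math.isclose(pole_from_shadow(20.0, angle), 15.0)
```

The formal apparatus is indifferent to the direction of derivation; the asymmetry lies in causal structure, which the DN model deliberately omits.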
Many philosophers have concluded that causality is integral to scientific explanation. The DN model offers a necessary condition of a causal explanation—successful prediction—but not sufficient conditions of causal explanation, as a universal regularity can include spurious relations or simple correlations, for instance Z always following Y, but not Z because of Y, instead Y and then Z as an effect of X. By relating the pressure and volume of a gas within a container at constant temperature, Boyle's law permits prediction of an unknown variable—volume or pressure—but does not explain why to expect that unless one adds, perhaps, the kinetic theory of gases. Scientific explanations increasingly pose not determinism's universal laws, but probabilism's chance, ceteris paribus laws. Smoking's contribution to lung cancer fails even the inductive-statistical model (IS model), which requires a probability over 0.5 (50%). (Probability standardly ranges from 0 (0%) to 1 (100%).) Epidemiology, an applied science that uses statistics in search of associations between events, cannot show causality, but it consistently found a higher incidence of lung cancer in smokers versus otherwise similar nonsmokers, although the proportion of smokers who develop lung cancer is modest. Versus nonsmokers, however, smokers as a group showed over 20 times the risk of lung cancer, and in conjunction with basic research, consensus followed that smoking had been scientifically explained as a cause of lung cancer, responsible for some cases that without smoking would not have occurred, a probabilistic counterfactual causality. == Covering action == Through lawlike explanation, fundamental physics—often perceived as fundamental science—has proceeded through intertheory relation and theory reduction, thereby resolving experimental paradoxes to great historical success, resembling the covering law model. 
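The Boyle's law example above, predictive but unexplanatory, can be sketched minimally (illustrative numbers, not from the source; constant temperature assumed):

```python
# Boyle's law: at constant temperature, P * V stays constant, so P1*V1 == P2*V2.
# The law predicts the unknown variable but is silent on why the relation holds.

def boyle_predict_volume(p1, v1, p2):
    """Return the volume V2 satisfying p1 * v1 == p2 * V2 (constant temperature)."""
    return p1 * v1 / p2

# Illustrative numbers: a gas at 100 kPa occupying 2.0 L, compressed to 200 kPa.
v2 = boyle_predict_volume(100.0, 2.0, 200.0)
print(v2)  # 1.0
```

The computation succeeds as prediction while carrying no account of molecular collisions; that account arrives only with the kinetic theory of gases.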
In the early 20th century, Ernst Mach as well as Wilhelm Ostwald had resisted Ludwig Boltzmann's reduction of thermodynamics—and thereby Boyle's law—to statistical mechanics partly because it rested on the kinetic theory of gases, hinging on the atomic/molecular theory of matter. Mach as well as Ostwald viewed matter as a variant of energy, and molecules as mathematical illusions, as even Boltzmann thought possible. In 1905, via statistical mechanics, Albert Einstein predicted the phenomenon of Brownian motion—unexplained since being reported in 1827 by botanist Robert Brown. Soon, most physicists accepted that atoms and molecules were unobservable yet real. Also in 1905, Einstein explained the electromagnetic field's energy as distributed in particles, an explanation doubted until it helped resolve atomic theory in the 1910s and 1920s. Meanwhile, all known physical phenomena were gravitational or electromagnetic, whose two theories misaligned. Yet belief in aether as the source of all physical phenomena was virtually unanimous. At experimental paradoxes, physicists modified the aether's hypothetical properties. Finding the luminiferous aether a useless hypothesis, Einstein in 1905 a priori unified all inertial reference frames to state the special principle of relativity, which, by omitting aether, converted space and time into relative phenomena whose relativity aligned electrodynamics with the Newtonian principle of Galilean relativity, or invariance. Originally epistemic or instrumental, this was interpreted as ontic or realist—that is, a causal mechanical explanation—and the principle became a theory, refuting Newtonian gravitation. By predictive success in 1919, general relativity apparently overthrew Newton's theory, a revolution in science resisted by many yet fulfilled around 1930. In 1925 and 1926, respectively, Werner Heisenberg and Erwin Schrödinger independently formalized quantum mechanics (QM). Despite clashing explanations, the two formalisms made identical predictions. 
Paul Dirac's 1928 model of the electron was formulated consistently with special relativity, launching QM into the first quantum field theory (QFT), quantum electrodynamics (QED). From it, Dirac interpreted and predicted the electron's antiparticle, soon discovered and termed the positron, but QED failed electrodynamics at high energies. Elsewhere and otherwise, the strong nuclear force and the weak nuclear force were discovered. In 1941, Richard Feynman introduced QM's path integral formalism, which, if taken toward interpretation as a causal mechanical model, clashes with Heisenberg's matrix formalism and with Schrödinger's wave formalism, although all three are empirically identical, sharing predictions. Next, working on QED, Feynman sought to model particles without fields and to find the vacuum truly empty. As each known fundamental force is apparently an effect of a field, Feynman failed. Louis de Broglie's wave-particle duality had rendered atomism—indivisible particles in a void—untenable and highlighted the very notion of discontinuous particles as self-contradictory. Meeting in 1947, Freeman Dyson, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga soon introduced renormalization, a procedure converting QED into physics' most predictively precise theory, one subsuming chemistry, optics, and statistical mechanics. QED thus won physicists' general acceptance. Paul Dirac criticized its need for renormalization as showing its unnaturalness and called for an aether. In 1947, Willis Lamb had found unexpected motion of electron orbitals, shifted since the vacuum is not truly empty. Yet emptiness was catchy, abolishing aether conceptually, and physics proceeded ostensibly without it, even suppressing it. Meanwhile, "sickened by untidy math, most philosophers of physics tend to neglect QED". Physicists have feared even mentioning aether, renamed vacuum, which—as such—is nonexistent. 
General philosophers of science commonly believe that aether, rather, is fictitious, "relegated to the dustbin of scientific history ever since" 1905 brought special relativity. Einstein was noncommittal about aether's nonexistence, simply saying it was superfluous. Abolishing Newtonian motion for electrodynamic primacy, however, Einstein inadvertently reinforced aether, and to explain motion he was led back to aether in general relativity. Yet resistance to relativity theory became associated with earlier theories of aether, whose word and concept became taboo. Einstein explained special relativity's compatibility with an aether, but Einstein's aether, too, was opposed. Objects became conceived as pinned directly on space and time by abstract geometric relations lacking a ghostly or fluid medium. By 1970, QED along with the weak nuclear field was reduced to electroweak theory (EWT), and the strong nuclear field was modeled as quantum chromodynamics (QCD). Comprising EWT, QCD, and the Higgs field, this Standard Model of particle physics is an "effective theory", not truly fundamental. As QCD's particles are considered nonexistent in the everyday world, QCD especially suggests an aether, routinely found by physics experiments to exist and to exhibit relativistic symmetry. Confirmation of the Higgs particle, modeled as a condensation within the Higgs field, corroborates aether, although physics need not state or even include aether. Organizing regularities of observations—as in the covering law model—physicists find superfluous the quest to discover aether. In 1905, from special relativity, Einstein deduced mass–energy equivalence: particles are variant forms of distributed energy, and particles colliding at vast speed experience that energy's transformation into mass, producing heavier particles, although physicists' talk promotes confusion. 
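The equivalence Einstein deduced, E = mc², can be sketched numerically (an illustrative calculation, not from the source; SI units assumed):

```python
# Mass-energy equivalence: E = m * c**2.
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def mass_to_energy(m_kg):
    """Joules of energy equivalent to a rest mass in kilograms."""
    return m_kg * C ** 2

def energy_to_mass(e_joules):
    """Kilograms of rest mass producible, in principle, from a given energy."""
    return e_joules / C ** 2

# One gram of rest mass corresponds to roughly 9e13 J; conversely, colliders
# concentrate kinetic energy so that heavier particles condense out of it.
print(mass_to_energy(0.001))
```

The two functions are inverses, which is the point of the prose: mass and distributed energy are interconvertible descriptions, not separate stuffs.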
As "the contemporary locus of metaphysical research", QFTs pose particles not as existing individually, but as excitation modes of fields, the particles and their masses being states of aether, apparently unifying all physical phenomena as the more fundamental causal reality, as long ago foreseen. Yet a quantum field is an intricate abstraction—a mathematical field—virtually inconceivable as a classical field's physical properties. Nature's deeper aspects, still unknown, might elude any possible field theory. Though discovery of causality is popularly thought to be science's aim, search for it was shunned by the Newtonian research program, even more Newtonian than was Isaac Newton. By now, most theoretical physicists infer that the four known fundamental interactions would reduce to superstring theory, whereby atoms and molecules, after all, are energy vibrations holding mathematical, geometric forms. Given the uncertainties of scientific realism, some conclude that the concept of causality raises the comprehensibility of scientific explanation and thus is key folk science, but compromises the precision of scientific explanation and is dropped as a science matures. Even epidemiology is maturing to heed the severe difficulties with presumptions about causality. The covering law model is among Carl G Hempel's admired contributions to philosophy of science. == See also == Types of inference: Deductive reasoning, Inductive reasoning, Abductive reasoning. Related subjects: Explanandum and explanans, Hypothetico-deductive model, Models of scientific inquiry, Philosophy of science, Scientific method. == Notes == == Sources == Avent, Ryan, "The Q&A: Brian Greene—life after the Higgs", The Economist blog: Babbage, 19 Jul 2012. Ayala, Francisco J & Theodosius G Dobzhansky, eds, Studies in the Philosophy of Biology: Reduction and Related Problems (Berkeley & Los Angeles: University of California Press, 1974). 
Bechtel, William, Discovering Cell Mechanisms: The Creation of Modern Cell Biology (New York: Cambridge University Press, 2006). Bechtel, William, Philosophy of Science: An Overview for Cognitive Science (Hillsdale, NJ: Lawrence Erlbaum Associates, 1988). Bem, Sacha & Huib L de Jong, Theoretical Issues in Psychology: An Introduction, 2nd edn (London: Sage Publications, 2006). Ben-Menahem, Yemima, Conventionalism: From Poincaré to Quine (Cambridge: Cambridge University Press, 2006). Blackmore, J T & R Itagaki, S Tanaka, eds, Ernst Mach's Vienna 1895–1930: Or Phenomenalism as Philosophy of Science (Dordrecht: Kluwer Academic Publishers, 2001). Boffetta, Paolo, "Causation in the presence of weak associations", Critical Reviews in Food Science and Nutrition, 2010 Dec; 50(s1):13–16. Bolotin, David, An Approach to Aristotle's Physics: With Particular Attention to the Role of His Manner of Writing (Albany: State University of New York Press, 1998). Bourdeau, Michel, "Auguste Comte", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Winter 2013 edn. Braibant, Sylvie & Giorgio Giacomelli, Maurizio Spurio, Particles and Fundamental Interactions: An Introduction to Particle Physics (Dordrecht, Heidelberg, London, New York: Springer, 2012). Buchen, Lizzie, "May 29, 1919: A major eclipse, relatively speaking", Wired, 29 May 2009. Buckle, Stephen, Hume's Enlightenment Tract: The Unity and Purpose of An Enquiry Concerning Human Understanding (New York: Oxford University Press, 2001). Burgess, Cliff & Guy Moore, The Standard Model: A Primer (New York: Cambridge University Press, 2007). Chakraborti, Chhanda, Logic: Informal, Symbolic and Inductive (New Delhi: Prentice-Hall of India, 2007). Chakravartty, Anjan, "Scientific realism", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Summer 2013 edn. Close, Frank, "Much ado about nothing", Nova: The Nature of Reality, PBS Online / WGBH Educational Foundation, 13 Jan 2012. Comte, Auguste, auth, J. H. 
Bridges, trans, A General View of Positivism (London: Trübner and Co, 1865) [English translation from French as Comte's 2nd edn in 1851, after the 1st edn in 1848]. Cordero, Alberto, ch 3 "Rejected posits, realism, and the history of science", pp. 23–32, in Henk W de Regt, Stephan Hartmann & Samir Okasha, eds, EPSA Philosophy of Science: Amsterdam 2009 (New York: Springer, 2012). Crelinsten, Jeffrey, Einstein's Jury: The Race to Test Relativity (Princeton: Princeton University Press, 2006). Cushing, James T, Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony (Chicago: University of Chicago Press, 1994). Einstein, Albert, "Ether and the theory of relativity", pp. 3–24, Sidelights on Relativity (London: Methuen, 1922), the English trans of Einstein, "Äther und Relativitätstheorie" (Berlin: Verlag Julius, 1920), based on Einstein's 5 May 1920 address at University of Leyden, and collected in Jürgen Renn, ed, The Genesis of General Relativity, Volume 3 (Dordrecht: Springer, 2007). Fetzer, James H, "Carl Hempel", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Spring 2013 edn. Fetzer, James H., ed, Science, Explanation, and Rationality: Aspects of the Philosophy of Carl G Hempel (New York: Oxford University Press, 2000). Feynman, Richard P., QED: The Strange Theory of Light and Matter, w/new intro by A Zee (Princeton: Princeton University Press, 2006). Flew, Antony G, A Dictionary of Philosophy, 2nd edn (New York: St Martin's Press, 1984), "Positivism", p. 283. Friedman, Michael, Reconsidering Logical Positivism (New York: Cambridge University Press, 1999). Gattei, Stefano, Karl Popper's Philosophy of Science: Rationality without Foundations (New York: Routledge, 2009), ch 2 "Science and philosophy". Grandy, David A., Everyday Quantum Reality (Bloomington, Indiana : Indiana University Press, 2010). 
Hacohen, Malachi H, Karl Popper—the Formative Years, 1902–1945: Politics and Philosophy in Interwar Vienna (Cambridge: Cambridge University Press, 2000). Jegerlehner, Fred, "The Standard Model as a low-energy effective theory: What is triggering the Higgs mechanism?", arXiv (High Energy Physics—Phenomenology):1304.7813, 11 May 2013 (last revised). Karhausen, Lucien R (2000). "Causation: The elusive grail of epidemiology". Medicine, Health Care and Philosophy. 3 (1): 59–67. doi:10.1023/A:1009970730507. PMID 11080970. S2CID 24260908. Kay, Lily E, Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology (New York: Oxford University Press, 1993). Khrennikov, K, ed, Proceedings of the Conference: Foundations of Probability and Physics (Singapore: World Scientific Publishing, 2001). Kuhlmann, Meinard, "Physicists debate whether the world is made of particles or fields—or something else entirely", Scientific American, 24 July 2013. Kundi, Michael (Jul 2006). "Causality and the interpretation of epidemiologic evidence". Environmental Health Perspectives. 114 (7): 969–74. doi:10.1289/ehp.8297. PMC 1513293. PMID 16835045. Laudan, Larry, ed, Mind and Medicine: Problems of Explanation and Evaluation in Psychiatry and the Biomedical Sciences (Berkeley, Los Angeles, London: University of California Press, 1983). Laughlin, Robert B, A Different Universe: Reinventing Physics from the Bottom Down (New York: Basic Books, 2005). Lodge, Oliver, "The ether of space: A physical conception", Scientific American Supplement, 1909 Mar 27; 67(1734):202–03. Manninen, Juha & Raimo Tuomela, eds, Essays on Explanation and Understanding: Studies in the Foundation of Humanities and Social Sciences (Dordrecht: D. Reidel, 1976). Melia, Fulvio, The Black Hole at the Center of Our Galaxy (Princeton: Princeton University Press, 2003). Montuschi, Eleonora, Objects in Social Science (London & New York: Continuum Books, 2003). 
Morange, Michel, trans by Michael Cobb, A History of Molecular Biology (Cambridge MA: Harvard University Press, 2000). Moyer, Donald F, "Revolution in science: The 1919 eclipse test of general relativity", in Arnold Perlmutter & Linda F Scott, eds, Studies in the Natural Sciences: On the Path of Einstein (New York: Springer, 1979). Newburgh, Ronald & Joseph Peidle, Wolfgang Rueckner, "Einstein, Perrin, and the reality of atoms: 1905 revisited", American Journal of Physics, 2006 June; 74(6):478−481. Norton, John D, "Causation as folk science", Philosopher's Imprint, 2003; 3(4), collected as ch 2 in Price & Corry, eds, Causation, Physics, and the Constitution of Reality (Oxford U P, 2007). Ohanian, Hans C, Einstein's Mistakes: The Human Failings of Genius (New York: W W Norton & Company, 2008). Okasha, Samir, Philosophy of Science: A Very Short Introduction (New York: Oxford University Press, 2002). O'Shaughnessy, John, Explaining Buyer Behavior: Central Concepts and Philosophy of Science Issues (New York: Oxford University Press, 1992). Parascandola, M; Weed, D L (Dec 2001). "Causation in epidemiology". Journal of Epidemiology and Community Health. 55 (12): 905–12. doi:10.1136/jech.55.12.905. PMC 1731812. PMID 11707485. Pigliucci, Massimo, Answers for Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life (New York: Basic Books, 2012). Popper, Karl, "Against big words", In Search of a Better World: Lectures and Essays from Thirty Years (New York: Routledge, 1996). Price, Huw & Richard Corry, eds, Causation, Physics, and the Constitution of Reality: Russell's Republic Revisited (New York: Oxford University Press, 2007). Redman, Deborah A, The Rise of Political Economy as a Science: Methodology and the Classical Economists (Cambridge MA: MIT Press, 1997). Reutlinger, Alexander & Gerhard Schurz, Andreas Hüttemann, "Ceteris paribus laws", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Spring 2011 edn. 
Riesselmann, Kurt, "Concept of ether in explaining forces", Inquiring Minds: Questions About Physics, US Department of Energy: Fermilab, 28 Nov 2008. Rothman, Kenneth J; Greenland, Sander (2005). "Causation and causal inference in epidemiology". American Journal of Public Health. 95 (Suppl 1): S144–50. doi:10.2105/AJPH.2004.059204. hdl:10.2105/AJPH.2004.059204. PMID 16030331. Rowlands, Peter, Oliver Lodge and the Liverpool Physical Society (Liverpool: Liverpool University Press, 1990). Sarkar, Sahotra & Jessica Pfeifer, eds, The Philosophy of Science: An Encyclopedia, Volume 1: A–M (New York: Routledge, 2006). Schwarz, John H (1998). "Recent developments in superstring theory". Proceedings of the National Academy of Sciences of the United States of America. 95 (6): 2750–7. Bibcode:1998PNAS...95.2750S. doi:10.1073/pnas.95.6.2750. PMC 19640. PMID 9501161. Schweber, Silvan S, QED and the Men who Made it: Dyson, Feynman, Schwinger, and Tomonaga (Princeton: Princeton University Press, 1994). Schliesser, Eric, "Hume's Newtonianism and anti-Newtonianism", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Winter 2008 edn. Spohn, Wolfgang, The Laws of Belief: Ranking Theory and Its Philosophical Applications (Oxford: Oxford University Press, 2012). Suppe, Frederick, ed, The Structure of Scientific Theories, 2nd edn (Urbana, Illinois: University of Illinois Press, 1977). Tavel, Morton, Contemporary Physics and the Limits of Knowledge (Piscataway, NJ: Rutgers University Press, 2002). Torretti, Roberto, The Philosophy of Physics (New York: Cambridge University Press, 1999). Vongehr, Sascha, "Higgs discovery rehabilitating despised Einstein Ether", Science 2.0: Alpha Meme website, 13 Dec 2011. Vongehr, Sascha, "Supporting abstract relational space-time as fundamental without doctrinism against emergence", arXiv (History and Philosophy of Physics):0912.3069, 2 Oct 2011 (last revised). 
von Wright, Georg Henrik, Explanation and Understanding (Ithaca, NY: Cornell University Press, 1971–2004). Wells, James D, Effective Theories in Physics: From Planetary Orbits to Elementary Particle Masses (Heidelberg, New York, Dordrecht, London: Springer, 2012). Wilczek, Frank, The Lightness of Being: Mass, Ether, and the Unification of Forces (New York: Basic Books, 2008). Whittaker, Edmund T, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century (London, New York, Bombay, Calcutta: Longmans, Green, and Co, 1910 / Dublin: Hodges, Figgis, & Co, 1910). Wilczek, Frank (Jan 1999). "The persistence of ether" (PDF). Physics Today. 52 (1): 11–13. Bibcode:1999PhT....52a..11W. doi:10.1063/1.882562. Wolfson, Richard, Simply Einstein: Relativity Demystified (New York: W W Norton & Co, 2003). Woodward, James, "Scientific explanation", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Winter 2011 edn. Wootton, David, ed, Modern Political Thought: Readings from Machiavelli to Nietzsche (Indianapolis: Hackett Publishing, 1996). == Further reading == Carl G. Hempel, Aspects of Scientific Explanation and other Essays in the Philosophy of Science (New York: Free Press, 1965). Randolph G. Mayes, "Theories of explanation", in Fieser Dowden, ed, Internet Encyclopedia of Philosophy, 2006. Ilkka Niiniluoto, "Covering law model", in Robert Audi, ed., The Cambridge Dictionary of Philosophy, 2nd edn (New York: Cambridge University Press, 1996). Wesley C. Salmon, Four Decades of Scientific Explanation (Minneapolis: University of Minnesota Press, 1990 / Pittsburgh: University of Pittsburgh Press, 2006).
Wikipedia/Covering_law_model
In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences. Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience. Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation. Empiricism, often invoked by natural scientists, holds that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method. == Etymology == The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived. == Background == A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. 
Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry. Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method". == History == === Early empiricism === Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty. The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. 
Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism. The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle, c. 350 BC: What the mind (nous) thinks must be in it in the same sense as letters are on a tablet (grammateion) which bears no actual writing (grammenon); this is just what happens in the case of the mind. (Aristotle, On the Soul, 3.4.430a1). Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses"). This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon." 
=== Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE) === During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi (c. 872 – c. 951 CE), developing into an elaborate theory by Avicenna (c.  980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur. In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding. A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. 
It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society. During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind. === Renaissance Italy === In the late renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings." Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Thomas Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi. The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c. 
1520 – 1591), father of Galileo and the inventor of monody, made use of the method in successfully solving musical problems, firstly, of tuning such as the relationship of pitch to string tension and mass in stringed instruments, and to volume of air in wind instruments; and secondly to composition, by his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. It is known that he was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Vincenzo, through his tuning research, found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed), and through this and other discoveries that demonstrated the fallibility of traditional authorities, a radically empirical attitude developed, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry. === British empiricism === British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. 
In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views. In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously credited with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are classified according to primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations.
According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes. A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism. Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition." And, I believe the most general and most popular explication of this matter, is to say [See Mr. 
Locke, chapter of power.], that finding from experience, that there are several new productions in matter, such as the motions and variations of body, and concluding that there must somewhere be a power capable of producing them, we arrive at last by this reasoning at the idea of power and efficacy. But to be convinced that this explication is more popular than philosophical, we need but reflect on two very obvious principles. First, That reason alone can never give rise to any original idea, and secondly, that reason, as distinguished from experience, can never make us conclude, that a cause or productive quality is absolutely requisite to every beginning of existence. Both these considerations have been sufficiently explained: and therefore shall not at present be any farther insisted on. Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations. Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. 
Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past. Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt. === Phenomenalism === Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences. Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. 
This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation". Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D. W. Hamlyn: [Mill] claimed that mathematical truths were merely very highly confirmed generalizations from experience; mathematical inference, generally conceived as deductive [and a priori] in nature, Mill set down as founded on induction. Thus, in Mill's philosophy there was no real place for knowledge based on relations of ideas. In his view logical and mathematical necessity is psychological; we are merely unable to conceive any other possibilities than those that logical and mathematical propositions assert. This is perhaps the most extreme version of empiricism known, but it has not found many defenders. Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism.
Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction. The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. 
If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man). === Logical empiricism === Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach. The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect, language that would be free of the ambiguities and deformations of natural language, which they saw as giving rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle".
Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems. In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists. === Pragmatism === In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". 
Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking. Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view. Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth". 
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception. Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. 
James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today. John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori. == External links == Fasko, Manuel; West, Peter. "British Empiricism". Internet Encyclopedia of Philosophy. Zalta, Edward N. (ed.). "Rationalism vs. Empiricism". Stanford Encyclopedia of Philosophy. Rationalism vs. Empiricism at the Indiana Philosophy Ontology Project Empiricism on In Our Time at the BBC Empiricist Man
Wikipedia/Empirical_limits_in_science
In philosophy of science, an observation is said to be "theory-laden" when shaped by the investigator's theoretical presuppositions. The thesis is chiefly associated with the late 1950s–early 1960s work of Norwood Russell Hanson, Thomas Kuhn, and Paul Feyerabend, though it was likely first put forth some 50 years earlier, at least implicitly, by Pierre Duhem. Semantic theory-ladenness refers to the impact of theoretical assumptions on the meaning of observational terms, while perceptual theory-ladenness refers to their impact on the perceptual experience itself. Theory-ladenness is also relevant for measurement outcomes: the data thus acquired may be said to be theory-laden since it is meaningless by itself unless interpreted as the outcome of the measurement processes involved. Theory-ladenness poses a problem for the confirmation of scientific theories since the observational evidence may already implicitly presuppose the thesis it is supposed to justify. This effect can present a challenge for reaching scientific consensus if the disagreeing parties make different observations due to their different theoretical backgrounds. == Forms == Two forms of theory-ladenness should be kept separate: (a) The semantic form: the meaning of observational terms is partially determined by theoretical presuppositions; (b) The perceptual form: the theories held by the investigator, at a very basic cognitive level, impinge on the perceptions of the investigator. The former may be referred to as semantic and the latter as perceptual theory-ladenness. In a book showing the theory-ladenness of psychiatric evidence, Massimiliano Aragona (Il mito dei fatti, 2009) distinguished three forms of theory-ladenness. The "weak form" was already affirmed by Karl Popper (it is weak because he maintains the idea of theoretical progress directed to the truth of scientific theories). The "strong" form was sustained by Kuhn and Feyerabend, with their notion of incommensurability.
However, Kuhn was a moderate relativist and maintained the Kantian view that although reality is not directly knowable, it manifests itself by "resisting" our interpretations. On the contrary, Feyerabend completely reversed the relationship between observations and theories, introducing an "extra-strong" form of theory-ladenness in which "anything goes". == Measurement outcomes == Van Fraassen distinguishes between observations, phenomena (observed entities) and appearances (the contents of measurement outcomes). An example of an appearance is the temperature of 38°C of a patient as measured using a thermometer. The number "38" is meaningless by itself unless we interpret it as the outcome of a measurement process. Such an interpretation implicitly assumes various other theses about how the thermometer was used, how thermometers work, etc. All appearances are theory-laden in this sense. But in many cases this does not pose serious practical problems as long as the presumed theses are either correct or only contain mistakes irrelevant to the intended application. == Problem of confirmation == Theory-ladenness is particularly relevant for the problem of confirmation of scientific theories. According to the scientific method, observational evidence is needed to develop scientific theories and to test their predictions. But if an observation is theory-laden then it already implicitly presumes various theses and therefore cannot act as a neutral arbiter between theories which affirm (or deny) the presumed theses. This is akin to the informal fallacy of begging the question. == Problem of scientific consensus == Theory-ladenness also poses problems for scientific consensus. Different researchers may initially hold different background beliefs. Ideally, the observations they make in the course of their research would enable each of them to discern which of these beliefs are false. So they would eventually reach an agreement on the central issues.
But their different background beliefs may cause them to make different observations despite the fact that both observe the same phenomena. In such a case the disagreement happens not just on the level of the supported theories but also on the level of the supporting observational evidence that is supposed to arbitrate between the theories. Under those circumstances, gathering more theory-laden evidence would only deepen the problem instead of solving it. The problem of unresolved disagreements is more severe in the social sciences and philosophy than in the natural sciences. For example, disagreements in ethics or in metaphysics often end in a clash of the brute intuitions which act as evidence for or against the competing theories. But it is an open question to what extent these disagreements are due to theory-ladenness or other factors. == See also == Confirmation holism Duhem–Quine thesis Observation Metaphysics of presence
Wikipedia/Theory-laden
In the history of physics, aether theories (or ether theories) proposed the existence of a medium, a space-filling substance or field as a transmission medium for the propagation of electromagnetic or gravitational forces. Since the development of special relativity, theories using a substantial aether fell out of use in modern physics, and are now replaced by more abstract models. This early modern aether has little in common with the aether of classical elements from which the name was borrowed. The assorted theories embody the various conceptions of this medium and substance. == Historical models == === Luminiferous aether === Isaac Newton suggests the existence of an aether in the Third Book of Opticks (1st ed. 1704; 2nd ed. 1718): "Doth not this aethereal medium in passing out of water, glass, crystal, and other compact and dense bodies in empty spaces, grow denser and denser by degrees, and by that means refract the rays of light not in a point, but by bending them gradually in curve lines? ...Is not this medium much rarer within the dense bodies of the Sun, stars, planets and comets, than in the empty celestial space between them? And in passing from them to great distances, doth it not grow denser and denser perpetually, and thereby cause the gravity of those great bodies towards one another, and of their parts towards the bodies; every body endeavouring to go from the denser parts of the medium towards the rarer?" In the 19th century, luminiferous aether (or ether), meaning light-bearing aether, was a theorized medium for the propagation of light. James Clerk Maxwell developed a model to explain electric and magnetic phenomena using the aether, a model that led to what are now called Maxwell's equations and the understanding that light is an electromagnetic wave. 
Later, a series of increasingly careful experiments were carried out in the late 1800s, including the Michelson–Morley experiment, to try to detect the motion of Earth through the aether, but no drag was detected. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Joseph Larmor discussed the aether in terms of a moving magnetic field caused by the acceleration of electrons. Hendrik Lorentz and George Francis FitzGerald offered, within the framework of Lorentz ether theory, an explanation of how the Michelson–Morley experiment could have failed to detect motion through the aether. However, the initial Lorentz theory predicted that motion through the aether would create a birefringence effect, which Rayleigh and Brace tested and failed to find (Experiments of Rayleigh and Brace). All of those results required the full application of the Lorentz transformation by Lorentz and Joseph Larmor in 1904. Summarizing the results of Michelson, Rayleigh and others, Hermann Weyl would later write that the aether had "betaken itself to the land of the shades in a final effort to elude the inquisitive search of the physicist". In addition to possessing more conceptual clarity, Albert Einstein's 1905 special theory of relativity could explain all of the experimental results without referring to an aether at all. This eventually led most physicists to conclude that the earlier notion of a luminiferous aether was not a useful concept. === Mechanical gravitational aether === From the 16th until the late 19th century, gravitational effects had also been modeled using an aether. In a note at the end of his work "A Dynamical Theory of the Electromagnetic Field", Maxwell discussed a model for gravity based on a medium similar to the one he used for the electromagnetic field. 
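The "null result" can be made concrete with a back-of-envelope calculation of the fringe shift a stationary-aether theory predicted for the 1887 Michelson–Morley apparatus. The formula and the specific numbers below are the commonly quoted textbook values, not taken from this article, and are illustrative only.

```python
# Predicted fringe shift under the stationary-aether hypothesis when the
# interferometer is rotated 90 degrees: delta_N ~ (2L / lambda) * (v / c)^2
L_arm = 11.0      # effective optical path length per arm, metres (1887 value)
lam = 500e-9      # wavelength of the light used, metres
v = 30_000.0      # Earth's orbital speed through the supposed aether, m/s
c = 3.0e8         # speed of light, m/s

delta_N = (2 * L_arm / lam) * (v / c) ** 2
print(f"predicted shift: {delta_N:.2f} fringe")  # about 0.4 fringe
```

The apparatus could resolve shifts of a few hundredths of a fringe, so the predicted effect of roughly 0.4 fringe was comfortably detectable; observing essentially none is what made the result so damaging to substantial-aether models.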
He concluded that the medium would have "an enormous intrinsic energy" and would necessarily have to be diminished in areas of mass. He could not "understand in what way a medium can possess such properties" so he did not pursue it further. The most well-known formulation is Le Sage's theory of gravitation, although variations on the idea were entertained by Isaac Newton, Bernhard Riemann, and Lord Kelvin. For example, Kelvin published a historical note on Le Sage's model in 1872, noting that Le Sage's proposal disagreed with conservation of energy. Kelvin suggested a possible way to salvage it using his vortex theory of the atom. That theory was extended by J. J. Thomson but ultimately abandoned as not productive. None of those concepts are considered to be viable by the scientific community today. == Non-standard interpretations in modern physics == === General relativity === Albert Einstein sometimes used the word aether for the gravitational field within general relativity, but the only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics. === Quantum vacuum === Quantum mechanics can be used to describe spacetime as being non-empty at extremely small scales, fluctuating and generating particle pairs that appear and disappear incredibly quickly. It has been suggested by some such as Paul Dirac that this quantum vacuum may be the equivalent in modern physics of a particulate aether.
However, Dirac's aether hypothesis was motivated by his dissatisfaction with quantum electrodynamics, and it never gained support from the mainstream scientific community. Physicist Robert B. Laughlin has suggested that the quantum vacuum could be viewed as a "relativistic ether". Paul Davies writes that while the quantum vacuum resembles in some ways the old concept of the aether, the two differ in a key respect: the quantum vacuum "has no privileged reference frame, no state of rest relative to which a material body could be said to move." === Pilot waves === Louis de Broglie stated, "Any particle, even isolated, has to be imagined as in continuous 'energetic contact' with a hidden medium." However, as de Broglie pointed out, this medium "could not serve as a universal reference medium, as this would be contrary to relativity theory." == Further reading == Darrigol, Olivier (2000). Electrodynamics from Ampère to Einstein. Oxford: Clarendon Press. ISBN 978-0-19-850594-5. Decaen, Christopher A. (July 2004). "Aristotle's Aether and Contemporary Science". The Thomist. 68 (3): 375–429. doi:10.1353/tho.2004.0015. S2CID 171374696. Archived from the original on 2012-03-05. Retrieved 2024-10-17. Epple, M. (1998). "Topology, Matter, and Space, I: Topological Notions in 19th-Century Natural Philosophy", Archive for History of Exact Sciences 52: 297–392. Harman, P. H. (1982). Energy, Force and Matter: The Conceptual Development of Nineteenth Century Physics. Cambridge: Cambridge University Press. ISBN 978-0-521-28812-5. Larmor, Joseph (1911). "Aether". Encyclopædia Britannica. Vol. 1 (11th ed.). pp. 292–297. Oliver Lodge, "Ether", Encyclopædia Britannica, Thirteenth Edition (1926). Mackay, John Sturgeon (1878). "Ether (2.)". Encyclopædia Britannica. Vol. VIII (9th ed.). pp. 655–658. Schaffner, Kenneth F. (1972). Nineteenth-century Aether Theories. Oxford: Pergamon Press. ISBN 978-0-08-015674-3. Whittaker, Edmund Taylor (1910).
A History of the Theories of Aether and Electricity (1st ed.). Dublin: Longman, Green and Co. "A Ridiculously Brief History of Electricity and Magnetism; Mostly from E. T. Whittaker's A History of the Theories of Aether and Electricity". (PDF format.)
Wikipedia/Aether_theory
Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences (French: Discours de la Méthode pour bien conduire sa raison, et chercher la vérité dans les sciences) is a philosophical and autobiographical treatise published by René Descartes in 1637. It is best known as the source of the famous quotation "Je pense, donc je suis" ("I think, therefore I am", or "I am thinking, therefore I exist"), which occurs in Part IV of the work. A similar argument, without this precise wording, is found in Meditations on First Philosophy (1641), and a Latin version of the same statement Cogito, ergo sum is found in Principles of Philosophy (1644). Discourse on the Method is one of the most influential works in the history of modern philosophy, and important to the development of natural sciences. In this work, Descartes tackles the problem of skepticism, which had previously been studied by other philosophers. While addressing some of his predecessors and contemporaries, Descartes modified their approach to account for a truth he found to be incontrovertible; he started his line of reasoning by doubting everything, so as to assess the world from a fresh perspective, clear of any preconceived notions. The book was originally published in Leiden, in the Netherlands. Later, it was translated into Latin and published in 1656 in Amsterdam. The book was intended as an introduction to three works: Dioptrique, Météores, and Géométrie. Géométrie contains Descartes's initial concepts that later developed into the Cartesian coordinate system. The text was written and published in French so as to reach a wider audience than Latin, the language in which most philosophical and scientific texts were written and published at that time, would have allowed. Most of Descartes' other works were written in Latin. 
Together with Meditations on First Philosophy, Principles of Philosophy and Rules for the Direction of the Mind, it forms the base of the epistemology known as Cartesianism. == Organization == The book is divided into six parts, described in the author's preface as: Various considerations touching the Sciences The principal rules of the Method which the Author has discovered Certain of the rules of Morals which he has deduced from this Method The reasonings by which he establishes the existence of God and of the Human Soul The order of the Physical questions which he has investigated, and, in particular, the explication of the motion of the heart and of some other difficulties pertaining to Medicine, as also the difference between the soul of man and that of the brutes What the Author believes to be required in order to greater advancement in the investigation of Nature than has yet been made, with the reasons that have induced him to write === Part I: Various scientific considerations === Descartes begins by allowing himself some wit: Good sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess. A similar observation can be found in Hobbes, when he writes about human abilities, specifically wisdom and "their own wit": "But this proveth rather that men are in that point equal, than unequal. For there is not ordinarily a greater sign of the equal distribution of anything than that every man is contented with his share," but also in Montaigne, whose formulation indicates that it was a commonplace at the time: "'Tis commonly said that the justest portion Nature has given us of her favors is that of sense; for there is no one who is not contented with his share."
Descartes continues with a warning: For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it. The greatest minds, as they are capable of the highest excellences, are open likewise to the greatest aberrations; and those who travel very slowly may yet make far greater progress, provided they keep always to the straight road, than those who, while they run, forsake it. Descartes describes his disappointment with his education: "[A]s soon as I had finished the entire course of study…I found myself involved in so many doubts and errors, that I was convinced I had advanced no farther…than the discovery at every turn of my own ignorance." He notes his special delight with mathematics, and contrasts its strong foundations to "the disquisitions of the ancient moralists [which are] towering and magnificent palaces with no better foundation than sand and mud." === Part II: Principal rules of the Method === Descartes was in Germany, attracted thither by the wars in that country, and describes his intent by a "building metaphor" (see also: Neurath's boat). He observes that buildings, cities or nations that have been planned by a single hand are more elegant and commodious than those that have grown organically. He resolves not to build on old foundations, nor to lean upon principles which he had taken on faith in his youth. Descartes seeks to ascertain the true method by which to arrive at the knowledge of whatever lies within the compass of his powers. He presents four precepts: The first was never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of doubt. The second, to divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution. 
The third, to conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence. And the last, in every case to make enumerations so complete, and reviews so general, that I might be assured that nothing was omitted. === Part III: Morals and Maxims of conducting the Method === Descartes uses the analogy of rebuilding a house from secure foundations, and extends the analogy to the idea of needing a temporary abode while his own house is being rebuilt. Descartes adopts the following "three or four" maxims in order to remain effective in the "real world" while experimenting with his method of radical doubt. They form a rudimentary belief system from which to act before his new system is fully developed: The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood; and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living. Be as firm and resolute in my actions as I was able. Endeavor always to conquer myself rather than fortune, and change my desires rather than the order of the world, and in general, accustom myself to the persuasion that, except our own thoughts, there is nothing absolutely in our power; so that when we have done our best in things external to us, our ill-success cannot possibly be failure on our part. Finally, Descartes states his resolute belief that there is no better use of his time than to cultivate his reason and to advance his knowledge of the truth according to his method. 
=== Part IV: Proof of God and the Soul === Applying the method to itself, Descartes challenges his own reasoning and reason itself. But Descartes believes three things are not susceptible to doubt and the three support each other to form a stable foundation for the method. He cannot doubt that something has to be there to do the doubting: I think, therefore I am. The method of doubt cannot doubt reason as it is based on reason itself. By reason there exists a God, and God is the guarantor that reason is not misguided. Descartes supplies three different proofs for the existence of God, including what is now referred to as the ontological proof of the existence of God. === Part V: Physics, the heart, and the soul of man and animals === Descartes briefly sketches how in an unpublished treatise (published posthumously as Le Monde) he had laid out his ideas regarding the laws of nature, the sun and stars, the moon as the cause of "ebb and flow" (meaning the tides), gravitation, light, and heat. Describing his work on light, he states: [I] expounded at considerable length what the nature of that light must be which is found in the sun and the stars, and how thence in an instant of time it traverses the immense spaces of the heavens. His work on such physico-mechanical laws is, however, framed as applying not to our world but to a theoretical "new world" created by God somewhere in the imaginary spaces [with] matter sufficient to compose ... [a "new world" in which He] ... agitate[d] variously and confusedly the different parts of this matter, so that there resulted a chaos as disordered as the poets ever feigned, and after that did nothing more than lend his ordinary concurrence to nature, and allow her to act in accordance with the laws which he had established. Descartes does this "to express my judgment regarding ... [his subjects] with greater freedom, without being necessitated to adopt or refute the opinions of the learned." 
(Descartes' hypothetical world would be a deistic universe.) He goes on to say that he "was not, however, disposed, from these circumstances, to conclude that this world had been created in the manner I described; for it is much more likely that God made it at the first such as it was to be." Despite this admission, it seems that Descartes' project for understanding the world was that of re-creating creation—a cosmological project which aimed, through Descartes' particular brand of experimental method, to show not merely the possibility of such a system, but to suggest that this way of looking at the world—one with (as Descartes saw it) no assumptions about God or nature—provided the only basis upon which he could see knowledge progressing (as he states in Book II). Thus, in Descartes' work, we can see some of the fundamental assumptions of modern cosmology in evidence—the project of examining the historical construction of the universe through a set of quantitative laws describing interactions which would allow the ordered present to be constructed from a chaotic past. He goes on to the motion of the blood in the heart and arteries, endorsing the findings of "a physician of England" about the circulation of blood, referring to William Harvey and his work De motu cordis in a marginal note. But then he disagrees strongly about the function of the heart as a pump, ascribing the motive power of the circulation to heat rather than muscular contraction. He describes that these motions seem to be totally independent of what we think, and concludes that our bodies are separate from our souls. He does not seem to distinguish between mind, spirit, and soul, all of which he identifies with our faculty for rational thinking. Hence the term "I think, therefore I am." All three of these words (particularly "mind" and "soul") can be signified by the single French term âme.
=== Part VI: Prerequisites for advancing the investigation of Nature === Descartes begins by obliquely referring to the recent trial of Galileo for heresy and the Church's condemnation of heliocentrism; he explains that for these reasons he has held back his own treatise from publication. However, he says, because people have begun to hear of his work, he is compelled to publish these small parts of it (that is, the Discourse, Dioptrique, Météores, and Géométrie) in order that people not wonder why he doesn't publish. The discourse ends with some discussion of scientific experimentation: Descartes believes that experimentation is indispensable, time-consuming, and yet not easily delegated to others. He exhorts the reader to investigate the claims laid out in Dioptrique, Météores, and Géométrie and communicate their findings or criticisms to his publisher; he commits to publishing any such queries he receives along with his answers. == Influencing future science == Skepticism had previously been discussed by philosophers such as Sextus Empiricus, Al-Kindi, Al-Ghazali, Francisco Sánchez and Michel de Montaigne. Descartes started his line of reasoning by doubting everything, so as to assess the world from a fresh perspective, clear of any preconceived notions or influences. This is summarized in the book's first precept to "never to accept anything for true which I did not clearly know to be such". This method of pro-foundational skepticism is considered to be the start of modern philosophy. == Quotations == "The most widely shared thing in the world is good sense, for everyone thinks he is so well provided with it that even those who are the most difficult to satisfy in everything else do not usually desire to have more good sense than they have. It is not likely that everyone is mistaken in this…" (part I, AT p. 1 sq.) 
"I know how very liable we are to delusion in what relates to ourselves; and also how much the judgments of our friends are to be suspected when given in our favor." (part I, AT p. 3) "… I believed that I had already given sufficient time to languages, and likewise to the reading of the writings of the ancients, to their histories and fables. For to hold converse with those of other ages and to travel, are almost the same thing." (part I, AT p. 6) "Of philosophy I will say nothing, except that when I saw that it had been cultivated for so many ages by the most distinguished men; and that yet there is not a single matter within its sphere which is still not in dispute and nothing, therefore, which is above doubt, I did not presume to anticipate that my success would be greater in it than that of others." (part I, AT p. 8) "… I entirely abandoned the study of letters, and resolved no longer to seek any other science than the knowledge of myself, or of the great book of the world.…" (part I, AT p. 9) "The first was to include nothing in my judgments than what presented itself to my mind so clearly and distinctly that I had no occasion to doubt it." (part II, AT p. 18) "… In what regards manners, everyone is so full of his own wisdom, that there might be as many reformers as heads.…" (part VI, AT p. 61) "… And although my speculations greatly please myself, I believe that others have theirs, which perhaps please them still more." (part VI, AT p. 61) == See also == Mathesis universalis Great Conversation == References == == External links == Descartes, René (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, plus la dioptrique, les météores et la géométrie (in French), BnF Gallica{{cite book}}: CS1 maint: postscript (link) Discourse on the Method at Project Gutenberg Discours de la Méthode at Project Gutenberg (édition Victor Cousin, Paris 1824) Discours de la méthode, par Adam et Tannery, Paris 1902. 
(academic standard edition of the original text, 1637), Pdf, 80 pages, 362 kB. Contains Discourse on the Method, slightly modified for easier reading Free audiobook at librivox.org or at audioofclassics
Wikipedia/Discourse_on_Method
Philosophy of Science is dedicated to the furthering of studies and free discussion from diverse standpoints in the philosophy of science. It is a peer-reviewed academic journal. == Official affiliations == In January 1934 Philosophy of Science announced itself as the chief external expression of the Philosophy of Science Association, which seems to have been the expectation of its founder, William Malisoff. The journal is currently the official journal of the Association, which Philipp Frank and C. West Churchman formally constituted in December 1947. == Publication history == Malisoff, who was independently wealthy, seems to have financed the launch of Philosophy of Science. Correspondingly he became its first editor. In the first issue he sought papers ranging from studies on "the analysis of meaning, definition, symbolism," in scientific theories to those on "the nature and formulation of theoretical principles" and "in the function and significance of science within various contexts." Its initial editorial board comprised Eric Temple Bell, Albert Blumberg, Rudolf Carnap, Morris Raphael Cohen, W.W. Cook, Herbert Feigl, Karl S. Lashley, Henry Margenau, Hermann Joseph Muller, Susan Stebbing, Dirk Jan Struik and Alexander Weinstein. C. West Churchman became the second editor of Philosophy of Science upon the untimely death of William Malisoff. Churchman resigned as editor in January 1957, after which Richard Rudner, his friend and former student took over. Subsequent editors include Kenneth F. Schaffner, the philosopher and medical doctor, and Philip Kitcher. Currently the editor-in-chief is James Owen Weatherall and the editorial office is hosted by the Department of Logic and Philosophy of Science at the University of California, Irvine. 
Philosophy of Science was originally published in 1934 by The Williams and Wilkins Company of Baltimore, Maryland, which upon the death of Malisoff offered to bear the occasional losses of the journal and to share fifty-fifty in its profits. It is currently published by Cambridge University Press on behalf of the Philosophy of Science Association. The journal contains essays, discussion articles, and book reviews in the field of the philosophy of science. Philosophy of Science was originally published quarterly. It is currently published four times a year, with a fifth issue each year containing proceedings from the biennial PSA meeting. == Abstracting and indexing == The services that abstract and index articles appearing in Philosophy of Science are listed on the Abstracting and Indexing page of the journal's website. The journal's current impact factor appears on its About The Journal page. == Landmark papers == Einstein, Albert (1934). "On the method of theoretical physics". Philosophy of Science. 1 (2): 163–169. Retrieved 27 March 2024. Rosenblueth, Arturo; Wiener, Norbert; Bigelow, Julian (1943). "Behavior, purpose and teleology". Philosophy of Science. 10 (1): 18–24. Retrieved 27 March 2024. == Further reading == 1944. Malisoff, W.H. Editorial: Philosophy of Science after ten years. Philosophy of Science. 11. 1–2. 1948. Frank, Philip and C. West Churchman. In Memoriam: Dr William M. Malisoff. Philosophy of Science. 15 (1). 1–3. 1984. Butts, Robert E. Philosophy of Science: 1934–1984. Philosophy of Science. 51 (1). 1–3. 2000. Solomon, Graham. The reception of German scientific philosophy in North America: 1930–1962. Chapter 13 in 'Witches, scientists, philosophers: Essays and lectures' by Robert E. Butts. Edited by Graham Solomon. Dordrecht: Springer. 2010. Douglas, Heather. Engagement for progress: applied philosophy of science in context. Synthese. 177. 317–335.
https://link.springer.com/article/10.1007/s11229-010-9787-2 2020. Malaterre, Christophe; Lareau, Francis; Pulizzotto, Davide and Jonathan St-Onge. 2020. Eight journals over eight decades: a computational topic-modeling approach to contemporary philosophy of science. Synthese. 199. 2883–2923. https://link.springer.com/article/10.1007/s11229-020-02915-6 2021. Dewulf, Fons. The institutional stabilization of philosophy of science and its withdrawal from social concerns after the Second World War. British Journal for the History of Philosophy. 29 (5). 935-953. https://www.tandfonline.com/doi/abs/10.1080/09608788.2020.1848794 == References == == External links == Official website Publisher website
Wikipedia/Philosophy_of_Science_(journal)
A formal system is an abstract structure and formalization of an axiomatic system used for deducing, using rules of inference, theorems from axioms. In 1921, David Hilbert proposed to use formal systems as the foundation of knowledge in mathematics. The term formalism is sometimes a rough synonym for formal system, but it also refers to a given style of notation, for example, Paul Dirac's bra–ket notation. == Concepts == A formal system has the following: Formal language, which is a set of well-formed formulas, which are strings of symbols from an alphabet, formed by a formal grammar (consisting of production rules or formation rules). Deductive system, deductive apparatus, or proof system, which has rules of inference that take axioms and infer theorems, both of which are part of the formal language. A formal system is said to be recursive (i.e. effective) or recursively enumerable if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, respectively. === Formal language === A formal language is a language that is defined by a formal system. Like languages in linguistics, formal languages generally have two aspects: the syntax is what the language looks like (more formally: the set of possible expressions that are valid utterances in the language) the semantics are what the utterances of the language mean (which is formalized in various ways, depending on the type of language in question) Usually only the syntax of a formal language is considered, via the notion of a formal grammar. The two main categories of formal grammar are that of generative grammars, which are sets of rules for how strings in a language can be written, and that of analytic grammars (or reductive grammars), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language.
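The generative and analytic views can be made concrete with a toy formal system. The sketch below (an illustration added here, not drawn from the article's sources) implements the MIU system from Hofstadter's Gödel, Escher, Bach: four production rules generate strings from the single axiom MI, and a breadth-first search over strings up to a chosen length bound decides membership analytically. The function names and the `max_len` cutoff are choices made for this sketch.

```python
from collections import deque

def miu_rules(s: str):
    """Yield every string derivable from s in one step of the MIU system."""
    if s.endswith("I"):          # Rule 1: xI -> xIU
        yield s + "U"
    if s.startswith("M"):        # Rule 2: Mx -> Mxx
        yield s + s[1:]
    for i in range(len(s) - 2):  # Rule 3: xIIIy -> xUy
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]
    for i in range(len(s) - 1):  # Rule 4: xUUy -> xy
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]

def derivable(target: str, axiom: str = "MI", max_len: int = 10) -> bool:
    """Breadth-first search of the strings generated from the axiom,
    pruning any string longer than max_len so the search terminates."""
    seen, queue = {axiom}, deque([axiom])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in miu_rules(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append(t)
    return False
```

For example, `derivable("MIU")` holds via Rule 1, and `derivable("MUI")` via MI, MII, MIIII, MUI; the famous string MU is never generated (the number of I's is always nonzero modulo 3), so the bounded search reports it underivable.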
=== Deductive system === A deductive system, also called a deductive apparatus, consists of the axioms (or axiom schemata) and rules of inference that can be used to derive theorems of the system. Such deductive systems preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth as opposed to falsehood. However, other modalities, such as justification or belief may be preserved instead. In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a logical consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system. The logical consequence (or entailment) of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for or even identified with a larger theory or field (e.g. Euclidean geometry) consistent with the usage in modern mathematics such as model theory. An example of a deductive system would be the rules of inference and axioms regarding equality used in first order logic. The two main types of deductive systems are proof systems and formal semantics. ==== Proof system ==== Formal proofs are sequences of well-formed formulas (or WFF for short) that might either be an axiom or be the product of applying an inference rule on previous WFFs in the proof sequence. The last WFF in the sequence is recognized as a theorem. Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all WFFs for which there is a proof. Thus all axioms are considered theorems. 
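The idea of a proof as a checkable sequence of WFFs can be sketched in a few lines. In this illustration (an assumption of this sketch, not a standard API), formulas are either atom strings or implication tuples ('->', antecedent, consequent), and the only inference rule is modus ponens; a proof is valid when every line is an axiom or follows from two earlier lines.

```python
def modus_ponens(premise, implication):
    """If implication is ('->', p, q) and premise equals p, return q."""
    if isinstance(implication, tuple) and implication[0] == "->" \
            and implication[1] == premise:
        return implication[2]
    return None

def check_proof(axioms, proof):
    """A proof is valid iff every line is an axiom or is produced by
    modus ponens from two earlier lines of the same proof."""
    for i, line in enumerate(proof):
        if line in axioms:
            continue
        earlier = proof[:i]
        if any(modus_ponens(p, q) == line for p in earlier for q in earlier):
            continue
        return False   # line is neither an axiom nor derivable here
    return True

# Hypothetical axioms and a five-line proof of "r":
axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
proof = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]
```

Here `check_proof(axioms, proof)` accepts, while `check_proof(axioms, ["p", "r"])` rejects, matching the point above that the set of theorems is exactly the set of WFFs admitting such a sequence.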
Unlike the grammar for WFFs, there is no guarantee that there will be a decision procedure for deciding whether a given WFF is a theorem or not. The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems. Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be a natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems. ==== Formal semantics of logical system ==== A logical system is a deductive system (most commonly first order logic) together with additional non-logical axioms. According to model theory, a logical system may be given interpretations which describe whether a given structure - the mapping of formulas to a particular meaning - satisfies a well-formed formula. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. A logical system is: Sound, if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system. Semantically complete, if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms. An example of a logical system is Peano arithmetic. The standard model of arithmetic sets the domain of discourse to be the nonnegative integers and gives the symbols their usual meaning. There are also non-standard models of arithmetic. 
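For the propositional fragment, the semantic notions above can be sketched directly: a model is a truth assignment, satisfaction is evaluated recursively, and entailment is checked by enumerating all assignments. Formulas are again encoded as atom strings or implication tuples ('->', a, b), an encoding assumed only for this illustration.

```python
from itertools import product

def atoms(f):
    """Collect the atomic propositions occurring in a formula."""
    if isinstance(f, str):
        return {f}
    _, a, b = f                  # f = ('->', antecedent, consequent)
    return atoms(a) | atoms(b)

def satisfies(model, f):
    """Does the model (a dict mapping atoms to bools) satisfy f?"""
    if isinstance(f, str):
        return model[f]
    _, a, b = f                  # material implication
    return (not satisfies(model, a)) or satisfies(model, b)

def entails(premises, conclusion):
    """Semantic entailment: every model satisfying all premises
    also satisfies the conclusion."""
    names = sorted(set().union(*map(atoms, premises)) | atoms(conclusion))
    for values in product([False, True], repeat=len(names)):
        model = dict(zip(names, values))
        if all(satisfies(model, p) for p in premises) \
                and not satisfies(model, conclusion):
            return False
    return True
```

Checking `entails(["p", ("->", "p", "q")], "q")` succeeds, a small semantic confirmation that modus ponens is sound in this fragment, whereas `entails([("->", "p", "q")], "p")` fails because the model with both atoms false satisfies the premise but not the conclusion.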
== History == Early logical systems include the Indian logic of Pāṇini, the syllogistic logic of Aristotle, the propositional logic of Stoicism, and the Chinese logic of Gongsun Long (c. 325–250 BCE). In more recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th-century Europe. David Hilbert instigated a formalist movement, known as Hilbert's program, as a proposed solution to the foundational crisis of mathematics; it was eventually tempered by Gödel's incompleteness theorems. The QED manifesto represented a subsequent, as yet unsuccessful, effort at formalization of known mathematics. == See also == List of formal systems Formal method – Mathematical program specifications Formal science – Study of abstract structures described by formal systems Logic translation – Translation of a text into a logical system Rewriting system – Replacing subterm in a formula with another term Substitution instance – Concept in logic Theory (mathematical logic) – Set of sentences in a formal language == References == == Sources == Hunter, Geoffrey (1996) [1971]. Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press (published 1973). ISBN 9780520023567. OCLC 36312727. == Further reading == Hofstadter, Douglas, 1979. Gödel, Escher, Bach: An Eternal Golden Braid. ISBN 978-0-465-02656-2. 777 pages. Kleene, Stephen C., 1967. Mathematical Logic. Reprinted by Dover, 2002. ISBN 0-486-42533-9. Smullyan, Raymond M., 1961. Theory of Formal Systems. Annals of Mathematics Studies. Princeton University Press. 156 pages. ISBN 0-691-08047-X. == External links == Media related to Formal systems at Wikimedia Commons Encyclopædia Britannica, Formal system definition, 2007.
Daniel Richardson, Formal systems, logic and semantics Formal System at PlanetMath. Encyclopedia of Mathematics, Formal system Peter Suber, Formal Systems and Machines: An Isomorphism Archived 2011-05-24 at the Wayback Machine, 1997. Ray Taol, Formal Systems What is a Formal System?: Some quotes from John Haugeland's `Artificial Intelligence: The Very Idea' (1985), pp. 48–64.
Wikipedia/Logical_calculus
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events. Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962). Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm. As one commentator summarizes: Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton's Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. 
Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest. == History == The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" (Revolution der Denkart) to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars. === Original usage === In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science into four stages: Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made. 
Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls “extraordinary research”, which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – “the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals” – as even more important to science than paradigm shifts. Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons for why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (i.e., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents for competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for future problems – things that neither the proposed paradigm nor the dominant paradigm are capable of solving currently. Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. 
Textbooks are written, obscuring the revolutionary process. == Features == === Paradigm shifts and progress === A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different. === Incommensurability === These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. Donald Davidson famously argued against this idea of conceptual relativism, claiming that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour. === Gradualism vs. sudden change === Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. 
In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science. Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. == Examples == === Natural sciences === Some of the "classical cases" of Kuhnian paradigm shifts in science are: 1543 – The transition in cosmology from a Ptolemaic to a Copernican model. 1543 – The acceptance of the work of Andreas Vesalius, whose work De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen. 1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics. 1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution. The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory. 1826 – The discovery of hyperbolic geometry. 1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, in direct contrast to catastrophism, the popular geological theory of the time, but also used geological evidence to argue that the Earth was far older than the previously accepted age of 6,000 years. 
1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection. 1880 – The germ theory of disease began overtaking Galen's miasma theory. 1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales. 1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime. 1919 – The transition between the worldview of Newtonian gravity and general relativity. 1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis. 1952 – Chemists Stanley Miller and Harold Urey performed an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life. 1964 – The discovery of cosmic microwave background radiation led to the Big Bang theory being accepted over the steady state theory in cosmology. 1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes. 1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed the early version of the currently accepted theory of planetary formation. 1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics. 1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability. 
=== Social sciences === In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences. The movement known as the cognitive revolution moved away from behaviourist approaches to psychology and toward the acceptance of cognition as central to studying human behavior. Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his colleagues, combated and debunked claims made by scholars at a time when scientific racism and eugenics were dominant in many universities and institutions dedicated to studying humans and society. Eventually anthropology adopted a holistic approach, studying humans through four subfields: archaeological, cultural, evolutionary, and linguistic anthropology. At the turn of the 20th century, sociologists, along with other social scientists, developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences. First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. 
The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE and is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family. The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because it greatly increased the time depth over which archaeologists could reliably date objects. Similarly, the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists. The Marginal Revolution, a development of economic theory in the late 19th century led by William Stanley Jevons in England, Carl Menger in Austria, and Léon Walras in Switzerland and France, explained economic behavior in terms of marginal utility. === Applied sciences === More recently, paradigm shifts are also recognisable in applied sciences: In medicine, the transition from "clinical judgment" to evidence-based medicine. In artificial intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed since 2010. == Other uses == The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing: M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. 
Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education. The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics. Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following distribution of those images. Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View. In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. 
It is referred to in several articles and books as abused and overused to the point of becoming meaningless. The concept of technological paradigms has been advanced, particularly by Giovanni Dosi. == Criticism == In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive. Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally are not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which many of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares. == See also == == References == === Citations === === Sources === Kuhn, Thomas (1970). The Structure of Scientific Revolutions (2nd, enlarged ed.). University of Chicago Press. ISBN 978-0-226-45804-5. == External links == The dictionary definition of paradigm shift at Wiktionary MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens. "Scientific Change". Internet Encyclopedia of Philosophy.
Wikipedia/Revolutionary_science
Sir Karl Raimund Popper (28 July 1902 – 17 September 1994) was an Austrian–British philosopher, academic and social commentator. One of the 20th century's most influential philosophers of science, Popper is known for his rejection of the classical inductivist views on the scientific method in favour of empirical falsification, and for founding the Department of Philosophy, Logic and Scientific Method at the London School of Economics (LSE). According to Popper, a theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can (and should) be scrutinised with decisive experiments. Popper was opposed to the classical justificationist account of knowledge, which he replaced with "the first non-justificational philosophy of criticism in the history of philosophy", namely critical rationalism. In political discourse, he is known for his vigorous defence of liberal democracy and the principles of social criticism that he believed made a flourishing open society possible. His political thought resides within the camp of Enlightenment rationalism and humanism. He was a dogged opponent of totalitarianism, nationalism, fascism, romanticism, collectivism, and other kinds of (in Popper's view) reactionary and irrational ideas, and identified modern liberal democracies as the best-to-date embodiment of an open society. == Life and career == === Family and training === Karl Popper was born in Vienna (then in Austria-Hungary) in 1902 to upper-middle-class parents. All of Popper's grandparents were assimilated Jews; the Popper family converted to Lutheranism before he was born and so he received a Lutheran baptism. His father, Simon Siegmund Carl Popper (1856–1932), was a lawyer from Bohemia and a doctor of law at the Vienna University. His mother, Jenny Schiff (1864–1938), was an accomplished pianist of Silesian and Hungarian descent. Popper's uncle was the Austrian philosopher Josef Popper-Lynkeus. 
After establishing themselves in Vienna, the Poppers made a rapid social climb in Viennese society, as Popper's father became a partner in the law firm of Vienna's liberal mayor Raimund Grübl, and after Grübl's death in 1898 took over the business. Popper received his middle name after Raimund Grübl. (In his autobiography, Popper erroneously recalls that Grübl's first name was Carl). His parents were close friends of Sigmund Freud's sister Rosa Graf. His father was a bibliophile who had 12,000–14,000 volumes in his personal library and took an interest in philosophy, the classics, and social and political issues. Popper inherited both the library and the disposition from him. Later, he would describe the atmosphere of his upbringing as having been "decidedly bookish". Popper left school at the age of 16 and attended lectures in mathematics, physics, philosophy, psychology and the history of music as a guest student at the University of Vienna. In 1919, Popper became attracted by Marxism and subsequently joined the Association of Socialist School Students. He also became a member of the Social Democratic Workers' Party of Austria, which was at that time a party that fully adopted Marxism. After the street battle in the Hörlgasse on 15 June 1919, when police shot eight of his unarmed party comrades, he turned away from what he saw as the philosopher Karl Marx's historical materialism, abandoned the ideology, and remained a supporter of social liberalism throughout his life. Popper worked in street construction for a short time but was unable to cope with the heavy labour. Continuing to attend university as a guest student, he started an apprenticeship as a cabinetmaker, which he completed as a journeyman. He was dreaming at that time of starting a daycare facility for children, for which he assumed the ability to make furniture might be useful. After that, he did voluntary service in one of psychoanalyst Alfred Adler's clinics for children. 
In 1922, he did his matura by way of a second chance education and finally joined the university as an ordinary student. He completed his examination as a primary school teacher in 1924 and started working at an after-school care club for socially endangered children. In 1925, he went to the newly founded Pädagogisches Institut and continued studying philosophy and psychology. Around that time he started courting Josefine Anna Henninger, who later became his wife. Popper and his wife had chosen not to have children because of the circumstances of war in the early years of their marriage. Popper commented that this "was perhaps a cowardly but in a way a right decision". In 1928, Popper earned a doctorate in psychology, under the supervision of Karl Bühler—with Moritz Schlick being the second chair of the thesis committee. His dissertation was titled Zur Methodenfrage der Denkpsychologie (On Questions of Method in the Psychology of Thinking). In 1929, he obtained an authorisation to teach mathematics and physics in secondary school and began doing so. He married his colleague Josefine Anna Henninger (1906–1985) in 1930. Fearing the rise of Nazism and the threat of the Anschluss, he started to use the evenings and the nights to write his first book Die beiden Grundprobleme der Erkenntnistheorie (The Two Fundamental Problems of the Theory of Knowledge). He needed to publish a book to get an academic position in a country that was safe for people of Jewish descent. In the end, he did not publish the two-volume work; but instead, a condensed version with some new material, as Logik der Forschung (The Logic of Scientific Discovery) in 1934. Here, he criticised psychologism, naturalism, inductivism, and logical positivism, and put forth his theory of potential falsifiability as the criterion demarcating science from non-science. In 1935 and 1936, he took unpaid leave to go to the United Kingdom for a study visit. 
=== Academic life === In 1937, Popper finally managed to get a position that allowed him to emigrate to New Zealand, where he became lecturer in philosophy at Canterbury University College of the University of New Zealand in Christchurch. It was here that he wrote his influential work The Open Society and Its Enemies. In Dunedin he met the Professor of Physiology John Carew Eccles and formed a lifelong friendship with him. In 1946, after the Second World War, he moved to the United Kingdom to become a reader in logic and scientific method at the London School of Economics (LSE), a constituent School of the University of London, where, three years later, in 1949, he was appointed professor of logic and scientific method. Popper was president of the Aristotelian Society from 1958 to 1959. He resided in Penn, Buckinghamshire. Popper retired from academic life in 1969, though he remained intellectually active for the rest of his life. In 1985, he returned to Austria so that his wife could have her relatives around her during the last months of her life; she died in November that year. After the Ludwig Boltzmann Gesellschaft failed to establish him as the director of a newly founded branch researching the philosophy of science, he went back again to the United Kingdom in 1986, settling in Kenley, Surrey. === Death === Popper died of "complications of cancer, pneumonia and kidney failure" in Kenley at the age of 92 on 17 September 1994. He had been working continuously on his philosophy until two weeks before his death, when he suddenly fell terminally ill; his last letter dates from that time as well. After cremation, his ashes were taken to Vienna and buried at Lainzer cemetery adjacent to the ORF Centre, where his wife Josefine Anna Popper (called "Hennie") had already been buried. Popper's estate is managed by his secretary and personal assistant Melitta Mew and her husband Raymond. 
Popper's manuscripts went to the Hoover Institution at Stanford University, partly during his lifetime and partly as supplementary material after his death. The University of Klagenfurt acquired Popper's library in 1995. The Karl Popper Archives was established within the Klagenfurt University Library, holding Popper's library of approximately 6,000 books, including his precious bibliophilia, as well as hard copies of the original Hoover material and microfilms of the incremental material. The library as well as various other partial collections are open for research purposes. The remaining parts of the estate were mostly transferred to The Karl Popper Charitable Trust. In October 2008, the University of Klagenfurt acquired the copyrights from the estate. == Honours and awards == Popper won many awards and honours in his field, including the Lippincott Award of the American Political Science Association, the Sonning Prize, the Otto Hahn Peace Medal of the United Nations Association of Germany in Berlin and fellowships in the Royal Society, British Academy, London School of Economics, King's College London, Darwin College, Cambridge, Austrian Academy of Sciences and Charles University, Prague. Austria awarded him the Grand Decoration of Honour in Gold for Services to the Republic of Austria in 1986, and the Federal Republic of Germany its Grand Cross with Star and Sash of the Order of Merit, and the peace class of the Order Pour le Mérite. He was knighted by Queen Elizabeth II in 1965, and was elected a Fellow of the Royal Society in 1976. He was invested with the insignia of a Member of the Order of the Companions of Honour in 1982. Other awards and recognition for Popper included the City of Vienna Prize for the Humanities (1965), Karl Renner Prize (1978), Austrian Decoration for Science and Art (1980), Dr. 
Leopold Lucas Prize of the University of Tübingen (1980), Ring of Honour of the City of Vienna (1983) and the Premio Internazionale of the Italian Federico Nietzsche Society (1988). In 1989, he became the first recipient of the International Catalonia Prize for "his work to develop cultural, scientific and human values all around the world". In 1992, he was awarded the Kyoto Prize in Arts and Philosophy for "symbolising the open spirit of the 20th century" and for his "enormous influence on the formation of the modern intellectual climate". == Philosophy == === Background to Popper's ideas === Popper's rejection of Marxism during his teenage years left a profound mark on his thought. He had at one point joined a socialist association, and for a few months in 1919 considered himself a communist. Although it is known that Popper worked as an office boy at the communist headquarters, whether or not he ever became a member of the Communist Party is unclear. During this time he became familiar with the Marxist view of economics, class conflict, and history. Although he quickly became disillusioned with the views expounded by Marxists, his flirtation with the ideology led him to distance himself from those who believed that spilling blood for the sake of a revolution was necessary. He then took the view that when it came to sacrificing human lives, one was to think and act with extreme prudence. The failure of democratic parties to prevent fascism from taking over Austrian politics in the 1920s and 1930s traumatised Popper. He suffered from the direct consequences of this failure since events after the Anschluss (the annexation of Austria by the German Reich in 1938) forced him into permanent exile. 
His most important works in the field of social science—The Poverty of Historicism (1944) and The Open Society and Its Enemies (1945)—were inspired by his reflection on the events of his time and represented, in a sense, a reaction to the prevalent totalitarian ideologies that then dominated Central European politics. His books defended democratic liberalism as a social and political philosophy. They also represented extensive critiques of the philosophical presuppositions underpinning all forms of totalitarianism. Popper believed that there was a contrast between the theories of Sigmund Freud and Alfred Adler, which he considered non-scientific, and Albert Einstein's theory of relativity which set off the revolution in physics in the early 20th century. Popper thought that Einstein's theory, as a theory properly grounded in scientific thought and method, was highly "risky", in the sense that it was possible to deduce consequences from it which differed considerably from those of the then-dominant Newtonian physics; one such prediction, that gravity could deflect light, was verified by Eddington's experiments in 1919. In contrast he thought that nothing could, even in principle, falsify psychoanalytic theories. He thus came to the conclusion that they had more in common with primitive myths than with genuine science. This led Popper to conclude that what was regarded as the remarkable strengths of psychoanalytical theories were actually their weaknesses. Psychoanalytical theories were crafted in a way that made them able to refute any criticism and to give an explanation for every possible form of human behaviour. The nature of such theories made it impossible for any criticism or experiment—even in principle—to show them to be false. 
When Popper later tackled the problem of demarcation in the philosophy of science, this conclusion led him to posit that the strength of a scientific theory lies in its both being susceptible to falsification, and not actually being falsified by criticism made of it. He considered that if a theory cannot, in principle, be falsified by criticism, it is not a scientific theory. === Philosophy of science === ==== Falsifiability and the problem of demarcation ==== Popper coined the term "critical rationalism" to describe his philosophy. Popper rejected the empiricist view (following from Kant) that basic statements are infallible; rather, according to Popper, they are descriptions in relation to a theoretical framework. Concerning the method of science, the term "critical rationalism" indicates his rejection of classical empiricism, and the classical observationalist-inductivist account of science that had grown out of it. Popper argued strongly against the latter, holding that scientific theories are abstract in nature and can be tested only indirectly, by reference to their implications. He also held that scientific theory, and human knowledge generally, is irreducibly conjectural or hypothetical, and is generated by the creative imagination to solve problems that have arisen in specific historico-cultural settings. Logically, no number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample is logically decisive; it shows the theory, from which the implication is derived, to be false. Popper's account of the logical asymmetry between verification and falsifiability lies at the heart of his philosophy of science. It also inspired him to take falsifiability as his criterion of demarcation between metaphysics and science: a theory should be considered scientific if, and only if, it makes predictions that can be falsified. 
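The logical asymmetry between verification and falsification described above comes down to the difference between modus tollens, which is a valid inference, and affirming the consequent, which is not. A schematic sketch in standard notation (the notation is mine, not Popper's):

```latex
% Let T be a theory and O an observable prediction entailed by it.
% Falsification uses modus tollens, which is deductively valid:
\[
  \frac{T \to O \qquad \neg O}{\neg T}
  \quad \text{(valid: a counterexample refutes } T\text{)}
\]
% Verification would require affirming the consequent, which is invalid:
\[
  \frac{T \to O \qquad O}{T}
  \quad \text{(invalid: a confirming observation does not establish } T\text{)}
\]
```

This is why, on Popper's account, a single contradictory observation is logically decisive against a theory, while no number of confirming observations can prove it.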
This led him to attack the claims of both psychoanalysis and contemporary Marxism to scientific status, on the basis that it is not possible to falsify the predictions that they make. To say that a given statement (e.g., the statement of a law of some scientific theory)—call it "T"—is "falsifiable" does not mean that "T" is false. It means only that the background knowledge about existing technologies, which exists before and independently of the theory, allows the imagination or conceptualization of observations that are in contradiction with the theory. It is only required that these contradictory observations can potentially be observed with existing technologies—the observations must be inter-subjective. This is the material requirement of falsifiability. Alan Chalmers gives "The brick fell upward when released" as an example of an imaginary observation that shows that Newton's law of gravitation is falsifiable. In All Life is Problem Solving, Popper sought to explain the apparent progress of scientific knowledge—that is, how it is that our understanding of the universe seems to improve over time. This problem arises from his position that the truth content of our theories, even the best of them, cannot be verified by scientific testing, but can only be falsified. With only falsifications being possible logically, how can we explain the growth of knowledge? In Popper's view, the advance of scientific knowledge is an evolutionary process characterised by his formula: PS₁ → TT₁ → EE₁ → PS₂. In response to a given problem situation (PS₁), a number of competing conjectures, or tentative theories (TT), are systematically subjected to the most rigorous attempts at falsification possible. 
This process, error elimination ( E E {\displaystyle \mathrm {EE} } ), performs a function for science similar to the one natural selection performs for biological evolution. Theories that better survive the process of refutation are not more true, but rather, more "fit"—in other words, more applicable to the problem situation at hand ( P S 1 {\displaystyle \mathrm {PS} _{1}} ). Consequently, just as a species' biological fitness does not ensure continued survival, neither does rigorous testing protect a scientific theory from refutation in the future. Yet, as it appears that the engine of biological evolution has, over many generations, produced adaptive traits equipped to deal with more and more complex problems of survival, likewise, the evolution of theories through the scientific method may, in Popper's view, reflect a certain type of progress: toward more and more interesting problems ( P S 2 {\displaystyle \mathrm {PS} _{2}} ). For Popper, it is in the interplay between the tentative theories (conjectures) and error elimination (refutation) that scientific knowledge advances toward greater and greater problems; in a process very much akin to the interplay between genetic variation and natural selection. Popper also wrote extensively against the famous Copenhagen interpretation of quantum mechanics. He strongly disagreed with Niels Bohr's instrumentalism and supported Albert Einstein's scientific realist approach to scientific theories about the universe. He found that Bohr's interpretation introduced subjectivity into physics, claiming later in his life that: Bohr was "a marvelous physicist, one of the greatest of all time, but he was a miserable philosopher, and one couldn't talk to him. He was talking all the time, allowing practically only one or two words to you and then at once cutting in." Popper's falsifiability resembles Charles Peirce's nineteenth-century fallibilism.
In Of Clouds and Clocks (1966), Popper remarked that he wished he had known of Peirce's work earlier. ==== Falsification and the problem of induction ==== Among his contributions to philosophy is his claim to have solved the philosophical problem of induction. He states that while there is no way to prove that the sun will rise, it is possible to formulate the theory that every day the sun will rise; if it does not rise on some particular day, the theory will be falsified and will have to be replaced by a different one. Until that day, there is no need to reject the assumption that the theory is true. Nor is it rational according to Popper to make instead the more complex assumption that the sun will rise until a given day, but will stop doing so the day after, or similar statements with additional conditions. Such a theory would be true with higher probability because it cannot be attacked so easily: to falsify the first one, it is sufficient to find that the sun has stopped rising; to falsify the second one, one additionally needs the assumption that the given day has not yet been reached. Popper held that one should rationally prefer the least likely, most easily falsifiable, and simplest theory (attributes which he identified as all the same thing) among those that explain the known facts. His opposition to positivism, which held that it is the theory most likely to be true that one should prefer, here becomes very apparent. It is impossible, Popper argues, to ensure that a theory is true; it is more important that its falsity can be detected as easily as possible. Popper agreed with David Hume that there is often a psychological belief that the sun will rise tomorrow and that there is no logical justification for the supposition that it will, simply because it always has in the past. Popper writes, I approached the problem of induction through Hume. Hume, I felt, was perfectly right in pointing out that induction cannot be logically justified.
=== Rationality === Popper held that rationality is not restricted to the realm of empirical or scientific theories, but that it is merely a special case of the general method of criticism, the method of finding and eliminating contradictions in knowledge without ad hoc measures. According to this view, rational discussion about metaphysical ideas, about moral values and even about purposes is possible. Popper's student W.W. Bartley III tried to radicalise this idea and made the controversial claim that not only can criticism go beyond empirical knowledge but that everything can be rationally criticised. To Popper, who was an anti-justificationist, traditional philosophy is misled by the false principle of sufficient reason. He thinks that no assumption can ever be, or ever needs to be, justified, so a lack of justification is not a justification for doubt. Instead, theories should be tested and scrutinised. It is not the goal to bless theories with claims of certainty or justification, but to eliminate errors in them. He writes, [T]here are no such things as good positive reasons; nor do we need such things [...] But [philosophers] obviously cannot quite bring [themselves] to believe that this is my opinion, let alone that it is right. (The Philosophy of Karl Popper, p. 1043) === Philosophy of arithmetic === Popper's principle of falsifiability runs into prima facie difficulties when the epistemological status of mathematics is considered. It is difficult to conceive how simple statements of arithmetic, such as "2 + 2 = 4", could ever be shown to be false. If they are not open to falsification they cannot be scientific. If they are not scientific, it needs to be explained how they can be informative about real world objects and events. Popper's solution was an original contribution in the philosophy of mathematics. His idea was that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses.
In its pure mathematics sense, "2 + 2 = 4" is logically true and cannot be refuted. By contrast, in its applied mathematics sense, as a description of the physical behaviour of apples, it can be falsified. This can be done by placing two apples in a container, then proceeding to place another two apples in the same container. If the container then holds five, three, or any number of apples other than four, the theory that "2 apples + 2 apples = 4 apples" is shown to be false. Conversely, if there are four apples in the container, the theory of numbers is shown to be applicable to reality. === Political philosophy === In The Open Society and Its Enemies and The Poverty of Historicism, Popper developed a critique of historicism and a defence of the "Open Society". Popper considered historicism to be the theory that history develops inexorably and necessarily according to knowable general laws towards a determinate end. He argued that this view is the principal theoretical presupposition underpinning most forms of authoritarianism and totalitarianism. He argued that historicism is founded upon mistaken assumptions regarding the nature of scientific law and prediction. Since the growth of human knowledge is a causal factor in the evolution of human history, and since "no society can predict, scientifically, its own future states of knowledge", it follows, he argued, that there can be no predictive science of human history. For Popper, metaphysical and historical indeterminism go hand in hand. In his early years Popper was impressed by Marxism, whether of Communists or socialists. An event that happened in 1919 had a profound effect on him: During a riot, caused by the Communists, the police shot several unarmed people, including some of Popper's friends, when they tried to free party comrades from prison.
The riot had, in fact, been part of a plan by which leaders of the Communist party with connections to Béla Kun tried to take power by a coup; Popper did not know about this at that time. However, he knew that the riot's instigators were swayed by the Marxist doctrine that a prolonged class struggle would cost vastly more lives than a revolution brought about as quickly as possible, and so they had no scruples about putting the rioters' lives at risk to achieve their selfish goal of becoming the future leaders of the working class. This was the start of his later criticism of historicism. Popper began to reject Marxist historicism, which he associated with questionable means, and later socialism, which he associated with placing equality before freedom (to the possible disadvantage of equality). Popper said that he was a socialist for "several years", and maintained an interest in egalitarianism, but abandoned it as a whole because socialism was a "beautiful dream", but, just like egalitarianism, it was incompatible with individual liberty. Popper initially saw totalitarianism as exclusively right-wing in nature, although as early as 1945 in The Open Society he was describing Communist parties as offering only weak opposition to fascism because of the historicism they shared with it.: 730  Over time, primarily in defence of liberal democracy, Popper began to see Soviet-type communism as a form of totalitarianism, and viewed the main issue of the Cold War as not capitalism versus socialism, but democracy versus totalitarianism.: 732  In 1957, Popper would dedicate The Poverty of Historicism to the "memory of the countless men, women and children of all creeds or nations or races who fell victims to the fascist and communist belief in Inexorable Laws of Historical Destiny." In 1947, Popper co-founded the Mont Pelerin Society, with Friedrich Hayek, Milton Friedman, Ludwig von Mises and others, although he did not fully agree with the think tank's charter and ideology.
Specifically, he unsuccessfully recommended that socialists should be invited to participate, and that emphasis should be put on a hierarchy of humanitarian values rather than advocacy of a free market as envisioned by classical liberalism. ==== The paradox of tolerance ==== Although Popper was an advocate of toleration, he also warned against unlimited tolerance. In The Open Society and Its Enemies, he argued: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal. ==== The "conspiracy theory of society" ==== Popper criticized what he termed the "conspiracy theory of society", the view that powerful people or groups, godlike in their efficacy, are responsible for purposely bringing about all the ills of society. 
This view cannot be right, Popper argued, because "nothing ever comes off exactly as intended." According to philosopher David Coady, "Popper has often been cited by critics of conspiracy theories, and his views on the topic continue to constitute an orthodoxy in some circles." However, philosopher Charles Pigden has pointed out that Popper's argument only applies to a very extreme kind of conspiracy theory, not to conspiracy theories generally. === Metaphysics === ==== Truth ==== As early as 1934, Popper wrote of the search for truth as "one of the strongest motives for scientific discovery." Still, he describes in Objective Knowledge (1972) early concerns about the much-criticised notion of truth as correspondence. Then came the semantic theory of truth formulated by the logician Alfred Tarski and published in 1933. Popper wrote of learning in 1935 of the consequences of Tarski's theory, to his intense joy. The theory met critical objections to truth as correspondence and thereby rehabilitated it. The theory also seemed, in Popper's eyes, to support metaphysical realism and the regulative idea of a search for truth. According to this theory, the conditions for the truth of a sentence as well as the sentences themselves are part of a metalanguage. So, for example, the sentence "Snow is white" is true if and only if snow is white. Although many philosophers have interpreted, and continue to interpret, Tarski's theory as a deflationary theory, Popper refers to it as a theory in which "is true" is replaced with "corresponds to the facts". He bases this interpretation on the fact that examples such as the one described above refer to two things: assertions and the facts to which they refer. He identifies Tarski's formulation of the truth conditions of sentences as the introduction of a "metalinguistic predicate" and distinguishes the following cases: "John called" is true. "It is true that John called." 
The first case belongs to the metalanguage whereas the second is more likely to belong to the object language. Hence, "it is true that" possesses the logical status of a redundancy. "Is true", on the other hand, is a predicate necessary for making general observations such as "John was telling the truth about Phillip." Upon this basis, along with that of the logical content of assertions (where logical content is inversely proportional to probability), Popper went on to develop his important notion of verisimilitude or "truthlikeness". The intuitive idea behind verisimilitude is that the assertions or hypotheses of scientific theories can be objectively measured with respect to the amount of truth and falsity that they imply. And, in this way, one theory can be evaluated as more or less true than another on a quantitative basis which, Popper emphasises forcefully, has nothing to do with "subjective probabilities" or other merely "epistemic" considerations. The simplest mathematical formulation that Popper gives of this concept can be found in the tenth chapter of Conjectures and Refutations. Here he defines it as: V s ( a ) = C T v ( a ) − C T f ( a ) {\displaystyle {\mathit {Vs}}(a)={\mathit {CT}}_{v}(a)-{\mathit {CT}}_{f}(a)\,} where V s ( a ) {\displaystyle {\mathit {Vs}}(a)} is the verisimilitude of a, C T v ( a ) {\displaystyle {\mathit {CT}}_{v}(a)} is a measure of the content of the truth of a, and C T f ( a ) {\displaystyle {\mathit {CT}}_{f}(a)} is a measure of the content of the falsity of a. Popper's original attempt to define not just verisimilitude, but an actual measure of it, turned out to be inadequate. However, it inspired a wealth of new attempts. 
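To illustrate the definition above with invented numbers (Popper's content measures were never assigned direct numerical values in this way; the figures below are purely hypothetical):

```latex
% Hypothetical illustration of Vs(a) = CT_v(a) - CT_f(a).
% Suppose theory a has truth-content measure 0.7 and falsity-content measure 0.2:
\mathit{Vs}(a) = \mathit{CT}_v(a) - \mathit{CT}_f(a) = 0.7 - 0.2 = 0.5
% A rival theory b with CT_v(b) = 0.6 and CT_f(b) = 0.4 gives Vs(b) = 0.2,
% so on this measure a would count as closer to the truth than b.
```

The point of the construction is only the comparison: a theory whose true consequences outweigh its false consequences by a larger margin counts as having greater verisimilitude.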
==== Popper's three worlds ==== Knowledge, for Popper, was objective, both in the sense that it is objectively true (or truthlike), and also in the sense that knowledge has an ontological status (i.e., knowledge as object) independent of the knowing subject (Objective Knowledge: An Evolutionary Approach, 1972). He proposed three worlds: World One, being the physical world, or physical states; World Two, being the world of mind, or individuals' private mental states, ideas and perceptions; and World Three, being the public body of human knowledge expressed in its manifold forms (e.g., "scientific theories, ethical principles, characters in novels, philosophy, art, poetry, in short our entire cultural heritage"), or the products of World Two made manifest in the materials of World One (e.g., books, papers, paintings, symphonies, cathedrals, particle accelerators). World Three, Popper argued, was the product of individual human beings in exactly the same sense that an animal path in the jungle is the creation of many individual animals but not planned or intended by any of them. World Three thus has an existence and an evolution independent of any individually known subjects. The influence of World Three on the individual human mind (World Two) is in Popper's view at least as strong as the influence of World One. In other words, the knowledge held by a given individual mind owes at least as much to the total, accumulated wealth of human knowledge made manifest as to the world of direct experience. As such, the growth of human knowledge could be said to be a function of the independent evolution of World Three. Many contemporary philosophers, such as Daniel Dennett, have not embraced Popper's Three World conjecture, mostly due to what they see as its resemblance to mind–body dualism. ==== Origin and evolution of life ==== The creation–evolution controversy raised the issue of whether creationistic ideas may be legitimately called science. 
In the debate, both sides and even courts in their decisions have invoked Popper's criterion of falsifiability (see Daubert standard). In this context, passages written by Popper are frequently quoted in which he speaks about such issues himself. For example, he famously stated "Darwinism is not a testable scientific theory, but a metaphysical research program—a possible framework for testable scientific theories." He continued: And yet, the theory is invaluable. I do not see how, without it, our knowledge could have grown as it has done since Darwin. In trying to explain experiments with bacteria which become adapted to, say, penicillin, it is quite clear that we are greatly helped by the theory of natural selection. Although it is metaphysical, it sheds much light upon very concrete and very practical researches. It allows us to study adaptation to a new environment (such as a penicillin-infested environment) in a rational way: it suggests the existence of a mechanism of adaptation, and it allows us even to study in detail the mechanism at work. He noted that theism, presented as explaining adaptation, "was worse than an open admission of failure, for it created the impression that an ultimate explanation had been reached". Popper later said: When speaking here of Darwinism...This is an immensely impressive and powerful theory. The claim that it completely explains evolution is of course a bold claim, and very far from being established. All scientific theories are conjectures, even those that have successfully passed many severe and varied tests. The Mendelian underpinning of modern Darwinism has been well tested, and so has the theory of evolution.... 
He explained that the difficulty of testing had led some people to describe natural selection as a tautology, and that he too had in the past described the theory as "almost tautological", and had tried to explain how the theory could be untestable (as is a tautology) and yet of great scientific interest: My solution was that the doctrine of natural selection is a most successful metaphysical research programme. It raises detailed problems in many fields, and it tells us what we would expect of an acceptable solution of these problems. I still believe that natural selection works in this way as a research programme. Nevertheless, I have changed my mind about the testability and logical status of the theory of natural selection; and I am glad to have an opportunity to make a recantation. Popper summarised his new view as follows: The theory of natural selection may be so formulated that it is far from tautological. In this case it is not only testable, but it turns out to be not strictly universally true. There seem to be exceptions, as with so many biological theories; and considering the random character of the variations on which natural selection operates, the occurrence of exceptions is not surprising. Thus not all phenomena of evolution are explained by natural selection alone. Yet in every particular case it is a challenging research program to show how far natural selection can possibly be held responsible for the evolution of a particular organ or behavioural program. These frequently quoted passages are only a small part of what Popper wrote on evolution, however, and may give the wrong impression that he mainly discussed questions of its falsifiability. Popper never intended this criterion to legislate the justified use of words like "science".
In fact, Popper stressed that "the last thing I wish to do, however, is to advocate another dogma" and that "what is to be called a 'science' and who is to be called a 'scientist' must always remain a matter of convention or decision." He quotes Menger's dictum that "Definitions are dogmas; only the conclusions drawn from them can afford us any new insight" and notes that different definitions of science can be rationally debated and compared: I do not try to justify [the aims of science which I have in mind], however, by representing them as the true or the essential aims of science. This would only distort the issue, and it would mean a relapse into positivist dogmatism. There is only one way, as far as I can see, of arguing rationally in support of my proposals. This is to analyse their logical consequences: to point out their fertility—their power to elucidate the problems of the theory of knowledge. Popper had his own sophisticated views on evolution that go much beyond what the frequently-quoted passages say. In effect, Popper agreed with some points of both creationists and naturalists, but disagreed with both on crucial aspects. Popper understood the universe as a creative entity that invents new things, including life, but without the necessity of something like a god, especially not one who is pulling strings from behind the curtain. He said that evolution of the genotype must, as the creationists say, work in a goal-directed way but disagreed with their view that it must necessarily be the hand of god that imposes these goals onto the stage of life. Instead, he formulated the spearhead model of evolution, a version of genetic pluralism. According to this, living organisms have goals, and act according to these goals, each guided by a central control. In its most sophisticated form, this is the brain of humans, but controls also exist in much less sophisticated ways for species of lower complexity, such as the amoeba. 
This control organ plays a special role in evolution—it is the "spearhead of evolution". The goals bring the purpose into the world. Mutations in the genes that determine the structure of the control may then cause drastic changes in behaviour, preferences and goals, without having an impact on the organism's phenotype. Popper postulates that such purely behavioural changes are less likely to be lethal for the organism compared to drastic changes of the phenotype. Popper contrasts his views with the notion of the "hopeful monster" that has large phenotype mutations and calls it the "hopeful behavioural monster". After behaviour has changed radically, small but quick changes of the phenotype follow to make the organism fitter to its changed goals. This way it looks as if the phenotype were changing guided by some invisible hand, while it is merely natural selection working in combination with the new behaviour. For example, according to this hypothesis, the eating habits of the giraffe must have changed before its elongated neck evolved. Popper contrasted this view as "evolution from within" or "active Darwinism" (the organism actively trying to discover new ways of life and being on a quest for conquering new ecological niches), with the naturalistic "evolution from without" (which has the picture of a hostile environment only trying to kill the mostly passive organism, or perhaps segregate some of its groups). Popper was a key figure encouraging patent lawyer Günter Wächtershäuser to publish his iron–sulfur world hypothesis on abiogenesis and his criticism of "soup" theory. 
On the creation-evolution controversy, Popper initially wrote that he considered it a somewhat sensational clash between a brilliant scientific hypothesis concerning the history of the various species of animals and plants on earth, and an older metaphysical theory which, incidentally, happened to be part of an established religious belief with a footnote to the effect that he agree[s] with Professor C.E. Raven when...he calls this conflict 'a storm in a Victorian tea-cup'... In his later work, however, when he had developed his own "spearhead model" and "active Darwinism" theories, Popper revised this view and found some validity in the controversy: I have to confess that this cup of tea has become, after all, my cup of tea; and with it I have to eat humble pie. ==== Free will ==== Popper and John Eccles speculated on the problem of free will for many years, generally agreeing on an interactionist dualist theory of mind. However, although Popper was a body-mind dualist, he did not think that the mind is a substance separate from the body: he thought that mental or psychological properties or aspects of people are distinct from physical ones. When he gave the second Arthur Holly Compton Memorial Lecture in 1965, Popper revisited the idea of quantum indeterminacy as a source of human freedom. Eccles had suggested that "critically poised neurons" might be influenced by the mind to assist in a decision. Popper criticised Compton's idea of amplified quantum events affecting the decision. He wrote: The idea that the only alternative to determinism is just sheer chance was taken over by Schlick, together with many of his views on the subject, from Hume, who asserted that "the removal" of what he called "physical necessity" must always result in "the same thing with chance. As objects must either be conjoin'd or not,... 'tis impossible to admit of any medium betwixt chance and an absolute necessity". 
I shall later argue against this important doctrine according to which the alternative to determinism is sheer chance. Yet I must admit that the doctrine seems to hold good for the quantum-theoretical models which have been designed to explain, or at least to illustrate, the possibility of human freedom. This seems to be the reason why these models are so very unsatisfactory. Hume's and Schlick's ontological thesis that there cannot exist anything intermediate between chance and determinism seems to me not only highly dogmatic (not to say doctrinaire) but clearly absurd; and it is understandable only on the assumption that they believed in a complete determinism in which chance has no status except as a symptom of our ignorance. Popper called not for something between chance and necessity but for a combination of randomness and control to explain freedom, though not yet explicitly in two stages with random chance before the controlled decision, saying, "freedom is not just chance but, rather, the result of a subtle interplay between something almost random or haphazard, and something like a restrictive or selective control." Then in his 1977 book with John Eccles, The Self and its Brain, Popper finally formulates the two-stage model in a temporal sequence. And he compares free will to Darwinian evolution and natural selection: New ideas have a striking similarity to genetic mutations. Now, let us look for a moment at genetic mutations. Mutations are, it seems, brought about by quantum theoretical indeterminacy (including radiation effects). Accordingly, they are also probabilistic and not in themselves originally selected or adequate, but on them there subsequently operates natural selection which eliminates inappropriate mutations. Now we could conceive of a similar process with respect to new ideas and to free-will decisions, and similar things. 
That is to say, a range of possibilities is brought about by a probabilistic and quantum mechanically characterised set of proposals, as it were—of possibilities brought forward by the brain. On these there then operates a kind of selective procedure which eliminates those proposals and those possibilities which are not acceptable to the mind. === Religion and God === Popper was not a religious man in the formal sense of the word. He neither maintained any link with his Jewish ancestry nor was he an observant Lutheran. However, he did consider that every person, including himself, was religious in the sense of believing in something more important and beyond us through which we can transcend ourselves. Popper called this something a Third World. In an interview that Popper gave in 1969 with the condition that it should be kept secret until after his death, he summarised his position on God as follows: "I don't know whether God exists or not (...) Some forms of atheism are arrogant and ignorant and should be rejected, but agnosticism—to admit that we don't know and to search—is all right. (...) When I look at what I call the gift of life, I feel a gratitude which is in tune with some religious ideas of God. However, the moment I even speak of it, I am embarrassed that I may do something wrong to God in talking about God." Aged fifteen, after reading Spinoza (at the suggestion of his father), Popper recounts that "it gave me a lifetime's dislike of theorizing about God". In 1936, applying to the Academic Assistance Council to leave Austria, he described himself as "Protestant, namely evangelical but of Jewish origin." Responding to the question of whether he wanted religious communities approached on his behalf, opposite the Jewish Orthodox section he wrote "NO", underlining it twice.
Popper objected to organised religion, saying "it tends to use the name of God in vain", noting the danger of fanaticism because of religious conflicts: "The whole thing goes back to myths which, though they may have a kernel of truth, are untrue. Why then should the Jewish myth be true and the Indian and Egyptian myths not be true?" Ethical issues always constituted an important part of the background to Popper's philosophy. In later life he discussed ethics rarely, and religious questions hardly at all, but he sympathized with the religious stance of others, and was not prepared to endorse various "humanist and secular offensives". For Popper religion was definitely not science, but "because something isn’t science, however, does not mean it is meaningless". In a letter unrelated to the interview, he stressed his tolerant attitude: "Although I am not for religion, I do think that we should show respect for anybody who believes honestly." == Influence == Popper helped to establish the philosophy of science as an autonomous discipline within philosophy, both through his own prolific and influential works and through his influence on his contemporaries and students. In 1946, Popper founded the Department of Philosophy, Logic and Scientific Method at the London School of Economics (LSE) and there lectured and influenced both Imre Lakatos and Paul Feyerabend, two of the foremost philosophers of science in the next generation. (Lakatos significantly modified Popper's position,: 1  and Feyerabend repudiated it entirely, but the work of both was deeply influenced by Popper and engaged with many of the problems that Popper set.) Although there is some dispute as to the matter of influence, Popper had a longstanding and close friendship with economist Friedrich Hayek, who was also brought to LSE from Vienna. Each found support and similarities in the other's work, citing each other often, though not without qualification. 
In a letter to Hayek in 1944, Popper stated, "I think I have learnt more from you than from any other living thinker, except perhaps Alfred Tarski." Popper dedicated his Conjectures and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy, Politics, and Economics, to Popper, and in 1982 said, "ever since his Logik der Forschung first came out in 1934, I have been a complete adherent to his general theory of methodology." Popper also had long and mutually influential friendships with art historian Ernst Gombrich, biologist Peter Medawar, and neuroscientist John Carew Eccles. The German jurist Reinhold Zippelius uses Popper's method of "trial and error" in his legal philosophy. Peter Medawar called him "incomparably the greatest philosopher of science that has ever been". Popper's influence, both through his work in philosophy of science and through his political philosophy, has also extended beyond the academy. One of Popper's students at LSE was George Soros, who later became a billionaire investor and among whose philanthropic foundations is the Open Society Institute, a think-tank named in honour of Popper's The Open Society and Its Enemies. Soros revised his own philosophy, differing from some of Popper's epistemological assumptions, in a lecture entitled Open Society given at Central European University on 28 October 2009: Popper was mainly concerned with the problems of understanding of reality [...] He argued that and I quote "only democracy provides an institutional framework that permits reform without violence, and so the use of reason in politics matters." But his approach was based on a hidden assumption, namely, that the main purpose of thinking is to gain a better understanding of reality. And that was not necessarily the case. The manipulative function could take precedence over the cognitive function [...] How could Popper take it for granted that free political discourse is aimed at understanding reality? 
And even more intriguingly, how could I, who gave the manipulative function pride of place in the concept of reflexivity, follow him so blindly? [...] Let me spell out my conclusion more clearly, an open society is a desirable form of social organization, both as a means to an end, and an end in itself [...] provided it gives precedence to the cognitive over the manipulative function and people are willing to confront harsh realities. [...] The value of individual freedom is likely to assume increasing importance in the immediate future. == Criticism == Most criticisms of Popper's philosophy are of the falsification, or error elimination, element in his account of problem solving. Popper presents falsifiability as both an ideal and as an important principle in a practical method of effective human problem solving; as such, the current conclusions of science are stronger than pseudo-sciences or non-sciences, insofar as they have survived this particularly vigorous selection method. He does not argue that any such conclusions are therefore true, or that this describes the actual methods of any particular scientist. Rather, it is recommended as an essential principle of methodology that, if enacted by a system or community, will lead to slow but steady progress of a sort (relative to how well the system or community enacts the method). It has been suggested that Popper's ideas are often mistaken for a hard logical account of truth because of the historical co-incidence of their appearing at the same time as logical positivism, the followers of which mistook his aims for their own. The Quine–Duhem thesis argues that it is impossible to test a single hypothesis on its own, since each one comes as part of an environment of theories. Thus we can only say that the whole package of relevant theories has been collectively falsified, but cannot conclusively say which element of the package must be replaced. 
An example of this is given by the discovery of the planet Neptune: when the motion of Uranus was found not to match the predictions of Newton's laws, the theory "There are seven planets in the solar system" was rejected, and not Newton's laws themselves. Popper discussed this critique of naive falsificationism in Chapters 3 and 4 of The Logic of Scientific Discovery. The philosopher Thomas Kuhn writes in The Structure of Scientific Revolutions (1962) that he places an emphasis on anomalous experiences similar to that which Popper places on falsification. However, he adds that anomalous experiences cannot be identified with falsification, and questions whether theories could be falsified in the manner suggested by Popper. Kuhn argues in The Essential Tension (1977) that while Popper was correct that psychoanalysis cannot be considered a science, there are better reasons for drawing that conclusion than those Popper provided. Popper's student Imre Lakatos attempted to reconcile Kuhn's work with falsificationism by arguing that science progresses by the falsification of research programs rather than the more specific universal statements of naive falsificationism. Popper claimed to have recognised already in the 1934 version of his Logic of Discovery a fact later stressed by Kuhn, "that scientists necessarily develop their ideas within a definite theoretical framework", and to that extent to have anticipated Kuhn's central point about "normal science". However, Popper criticised what he saw as Kuhn's relativism, this criticism being at the heart of the Kuhn-Popper debate. Also, in his collection Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, 1963), Popper writes, Science must begin with myths, and with the criticism of myths; neither with the collection of observations, nor with the invention of experiments, but with the critical discussion of myths, and of magical techniques and practices. 
The scientific tradition is distinguished from the pre-scientific tradition in having two layers. Like the latter, it passes on its theories; but it also passes on a critical attitude towards them. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them. Another objection is that it is not always possible to demonstrate falsehood definitively, especially if one is using statistical criteria to evaluate a null hypothesis. More generally it is not always clear, if evidence contradicts a hypothesis, that this is a sign of flaws in the hypothesis rather than of flaws in the evidence. However, this is a misunderstanding of what Popper's philosophy of science sets out to do. Rather than offering a set of instructions that merely need to be followed diligently to achieve science, Popper makes it clear in The Logic of Scientific Discovery that his belief is that the resolution of conflicts between hypotheses and observations can only be a matter of the collective judgment of scientists, in each individual case. In Science Versus Crime, Houck writes that Popper's falsificationism can be questioned logically: it is not clear how Popper would deal with a statement like "for every metal, there is a temperature at which it will melt". The hypothesis cannot be falsified by any possible observation, for there will always be a higher temperature than tested at which the metal may in fact melt, yet it seems to be a valid scientific hypothesis. These examples were pointed out by Carl Gustav Hempel. Hempel came to acknowledge that logical positivism's verificationism was untenable, but argued that falsificationism was equally untenable on logical grounds alone. 
The simplest response to this is that, because Popper describes how theories attain, maintain and lose scientific status, individual consequences of currently accepted scientific theories are scientific in the sense of being part of tentative scientific knowledge, and both of Hempel's examples fall under this category. For instance, atomic theory implies that all metals melt at some temperature. An early adversary of Popper's critical rationalism, Karl-Otto Apel attempted a comprehensive refutation of Popper's philosophy. In Transformation der Philosophie (1973), Apel charged Popper with being guilty of, amongst other things, a pragmatic contradiction. The philosopher Adolf Grünbaum argues in The Foundations of Psychoanalysis (1984) that Popper's view that psychoanalytic theories, even in principle, cannot be falsified is incorrect. The philosopher Roger Scruton argues in Sexual Desire (1986) that Popper was mistaken to claim that Freudian theory implies no testable observation and therefore does not have genuine predictive power. Scruton maintains that Freudian theory has both "theoretical terms" and "empirical content". He points to the example of Freud's theory of repression, which in his view has "strong empirical content" and implies testable consequences. Nevertheless, Scruton also concluded that Freudian theory is not genuinely scientific. The philosopher Charles Taylor accuses Popper of exploiting his worldwide fame as an epistemologist to diminish the importance of philosophers of the 20th-century continental tradition. According to Taylor, Popper's criticisms are completely baseless, but they are received with an attention and respect that Popper's "intrinsic worth hardly merits". The philosopher John Gray argues that Popper's account of scientific method would have prevented the theories of Charles Darwin and Albert Einstein from being accepted. 
However, Gray's criticism with regard to Einstein is at odds with the fact that Popper frequently used Einstein's theory of general relativity as a case study of how the principle of falsifiability works in practice. The philosopher and psychologist Michel ter Hark writes in Popper, Otto Selz and the Rise of Evolutionary Epistemology (2004) that Popper took some of his ideas from his tutor, the German psychologist Otto Selz. Selz never published his ideas, partly because of the rise of Nazism, which forced him to quit his work in 1933 and prohibited any reference to his ideas. Popper's scholarship as a historian of ideas is criticised in some academic quarters for his treatment of Plato and Hegel. == Published works == A complete list of Popper's writings is available as part 1.1 of the International personal bibliography of Karl R. Popper on the website of the Karl Popper Archives at the University of Klagenfurt (see also External links). == Filmography == Interview Karl Popper, Open Universiteit, 1988. == External links == Portraits of Karl Popper at the National Portrait Gallery, London Works by or about Karl Popper at the Internet Archive The International personal bibliography of Karl R. Popper, maintained and published by the Karl Popper Archives at the University of Klagenfurt Karl Popper on Stanford Encyclopedia of Philosophy Popper, K. R. "Natural Selection and the Emergence of Mind", 1977. The Karl Popper Web Archived 3 December 2007 at the Wayback Machine Sir Karl R. Popper in Prague, May 1994 [Archived by Wayback Machine] Synopsis and background of The poverty of historicism "A Skeptical Look at Karl Popper" by Martin Gardner (archived 10 February 2017 by Wayback Machine) "A Sceptical Look at 'A Skeptical Look at Karl Popper'" by J C Lester. Singer, Peter (2 May 1974), "Discovering Karl Popper", The New York Review of Books, vol. 21, no.
7, archived from the original on 12 January 2016, retrieved 21 January 2016 The Liberalism of Karl Popper Archived 20 October 2017 at the Wayback Machine by John N. Gray Karl Popper on Information Philosopher History of Twentieth-Century Philosophy of Science, BOOK V: Karl Popper Archived 3 March 2020 at the Wayback Machine Site offers free downloads by chapter available for public use. Karl Popper at Liberal-international.org A science and technology hypotheses database following Karl Popper's refutability principle Popper, BBC Radio 4 discussion with John Worrall, Anthony O'Hear & Nancy Cartwright (In Our Time, 8 February 2007)
Wikipedia/Conjectures_and_Refutations
The Theory of Communicative Action (German: Theorie des kommunikativen Handelns) is a two-volume 1981 book by the philosopher Jürgen Habermas, in which the author continues his project of finding a way to ground "the social sciences in a theory of language", which had been set out in On the Logic of the Social Sciences (1967). The two volumes are Reason and the Rationalization of Society (Handlungsrationalität und gesellschaftliche Rationalisierung), in which Habermas establishes a concept of communicative rationality, and Lifeworld and System: A Critique of Functionalist Reason (Zur Kritik der funktionalistischen Vernunft), in which Habermas creates the two level concept of society and lays out the critical theory for modernity. After writing The Theory of Communicative Action, Habermas expanded upon the theory of communicative action by using it as the basis of his theory of morality, democracy, and law. The work has inspired many responses by social theorists and philosophers, and in 1998 was listed by the International Sociological Association as the eighth most important sociological book of the 20th century. == Theory == The theory of communicative action is a critical project which reconstructs a concept of reason which is not grounded in instrumental or objectivistic terms, but rather in an emancipatory communicative act. This reconstruction proposes "human action and understanding can be fruitfully analysed as having a linguistic structure", and each utterance relies upon the anticipation of freedom from unnecessary domination. These linguistic structures of communication can be used to establish a normative understanding of society. This conception of society is used "to make possible a conceptualization of the social-life context that is tailored to the paradoxes of modernity." 
This project started after the critical reception of Habermas's book Knowledge and Human Interests (1968), after which Habermas chose to move away from contextual and historical analysis of social knowledge toward what would become the theory of communicative action. The theory of communicative action understands language as the foundational component of society and is an attempt to update Marxism by "drawing on Systems theory (Luhmann), developmental psychology (Piaget, Kohlberg), and social theory (Weber, Durkheim, Parsons, Mead, etc.)". Based on lectures initially developed in On the Pragmatics of Social Interaction, Habermas was able to expand his theory into a broad understanding of society. Thomas A. McCarthy states that The Theory of Communicative Action has three interrelated concerns: (1) to develop a concept of rationality that is no longer tied to, and limited by, the subjectivistic and individualistic premises of modern philosophy and social theory; (2) to construct a two-level concept of society that integrates the lifeworld and systems paradigms; and, finally, (3) to sketch out, against this background, a critical theory of modernity which analyzes and accounts for its pathologies in a way that suggests a redirection rather than an abandonment of the project of enlightenment. == Volume 1 == The Theory of Communicative Action, Volume 1 sets out "to develop a concept of rationality that is no longer tied to, and limited by, the subjectivistic and individualistic premises of modern philosophy and social theory." With this failure of the search for ultimate foundations by "first philosophy" or "the philosophy of consciousness", an empirically tested theory of rationality must be a pragmatic theory based on science and social science. This implies that any universalist claims can only be validated by testing against counterexamples in historical (and geographical) contexts – not by using transcendental ontological assumptions.
This leads him to look for the basis of a new theory of communicative action in the tradition of sociology. He starts by rereading Max Weber's description of rationality and arguing it has a limited view of human action. Habermas argues that Weber's basic theoretical assumptions with regard to social action prejudiced his analysis in the direction of purposive rationality, which purportedly arises from the conditions of commodity production. Taking the definition of action as human behaviour with intention, or with subjective meaning attached, then Weber's theory of action is based on a solitary acting subject and does not encompass the coordinating actions that are inherent to a social body. According to Weber, rationalisation (to use this word in the sense it has in sociological theory) creates three spheres of value: the differentiated zones of science, art and law. For him, this fundamental disunity of reason constitutes the danger of modernity. This danger arises not simply from the creation of separate institutional entities but through the specialisation of cognitive, normative, and aesthetic knowledge that in turn permeates and fragments everyday consciousness. This disunity of reason implies that culture moves from a traditional base in a consensual collective endeavour to forms which are rationalised by commodification and led by individuals with interests which are separated from the purposes of the population as a whole. This 'purposive rational action' is steered by the "media" of the state, which substitute for oral language as the medium of the coordination of social action. An antagonism arises between these two principles of societal integration—language, which is oriented to understanding and collective well being, and "media", which are systems of success-oriented action. 
Following Weber, Habermas sees specialisation as the key historical development, which leads to the alienating effects of modernity, which 'permeate and fragment everyday consciousness'. Habermas points out that the "sociopsychological costs" of this limited version of rationality are ultimately borne by individuals, which is what György Lukács had in mind when he developed Marx's concept of reification in his History and Class Consciousness (1923). They surface as widespread neurotic illnesses, addictions, psychosomatic disorders, and behavioural and emotional difficulties; or they find more conscious expression in criminal actions, protest groups and religious cults. Lukács thought that reification, although it runs deep, is constrained by the potential of rational argument to be self-reflexive and transcend its occupational use by oppressive agencies. Habermas agrees with this optimistic analysis, in contrast to Adorno and Horkheimer, and thinks that freedom and ideals of reconciliation are ingrained in the mechanisms of the linguistically mediated sociation of humanity. == Volume 2 == Habermas finds in the work of George Herbert Mead and Émile Durkheim concepts which can be used to free Weber's theory of rationalisation from the aporias of the philosophy of consciousness. Mead's most productive concept is his theoretical base of communication and Durkheim's is his idea of social integration. Mead also stressed the social character of perception: our first encounters are social. From these bases, Habermas develops his concept of communicative action: communicative action serves to transmit and renew cultural knowledge, in a process of achieving mutual understandings. It then coordinates action towards social integration and solidarity. Finally, communicative action is the process through which people form their identities. 
Following Weber again, an increasing complexity arises from the structural and institutional differentiation of the lifeworld, which follows the closed logic of the systemic rationalisation of our communications. There is a transfer of action co-ordination from 'language' over to 'steering media', such as money and power, which bypass consensus-oriented communication with a 'symbolic generalisation of rewards and punishments'. After this process the lifeworld "is no longer needed for the coordination of action". This results in humans ('lifeworld actors') losing a sense of responsibility with a chain of negative social consequences. Lifeworld communications lose their purpose becoming irrelevant for the coordination of central life processes. This has the effect of ripping the heart out of social discourse, allowing complex differentiation to occur but at the cost of social pathologies. "In the end, systemic mechanisms suppress forms of social integration even in those areas where a consensus dependent co-ordination of action cannot be replaced, that is, where the symbolic reproduction of the lifeworld is at stake. In these areas, the mediatization of the lifeworld assumes the form of colonisation". Habermas argues that Horkheimer and Adorno, like Weber before them, confused system rationality with action rationality. This prevented them from dissecting the effects of the intrusion of steering media into a differentiated lifeworld, and the rationalisation of action orientations that follows. They could then only identify spontaneous communicative actions within areas of apparently 'non-rational' action, art and love on the one hand or the charisma of the leader on the other, as having any value. According to Habermas, lifeworlds become colonised by steering media when four things happen: Traditional forms of life are dismantled. Social roles are sufficiently differentiated. There are adequate rewards of leisure and money for the alienated labour. 
Hopes and dreams become individuated by state canalization of welfare and culture. These processes are institutionalised by developing global systems of jurisprudence. He here indicates the limits of an entirely juridified concept of legitimation and practically calls for more anarchistic 'will formation' by autonomous networks and groups. "Counterinstitutions are intended to dedifferentiate some parts of the formally organised domains of action, remove them from the clutches of the steering media, and return these 'liberated areas' to the action co-ordinating medium of reaching understanding". After dispensing with Weber's overly negative use of rationalisation, it is possible to look at the Enlightenment ideal of reason in a fresh light. Rationality is redefined as thinking that is ready to submit to criticism and systematic examination as an ongoing process. A broader definition is that rationality is a disposition expressed in behaviour for which good reasons can be given. Habermas is now ready to make a preliminary definition of the process of communicative rationality: this is communication that is "oriented to achieving, sustaining and reviewing consensus – and indeed a consensus that rests on the intersubjective recognition of criticisable validity claims". With this key definition he shifts the emphasis in our concept of rationality from the individual to the social. This shift is fundamental to The Theory of Communicative Action. It is based on an assumption that language is implicitly social and inherently rational. Argument of some kind is central to the process of achieving a rational result. Contested validity claims are thematised and attempts are then made to vindicate or criticise them in a systematic and rigorous way. This may seem to favour verbal language, but allowance is also given for 'practical discourses' in which claims to normative rightness are made thematic and pragmatically tested. 
Non-verbal forms of cultural expression could often fall into this category. Habermas proposes three integrated conditions from which argumentative speech can produce valid results: "The structure of the ideal speech situation (which means that the discourse is) immunised against repression and inequality in a special way ... The structures of a ritualised competition for the better arguments… The structures that determine the construction of individual arguments and their interrelations". Granting such principles of rational argumentation, communicative rationality is: The processes by which different validity claims are brought to a satisfactory resolution. The relations to the world that people take to forward validity claims for the expressions they deem important. Habermas then discusses three further types of discourse that can be used to achieve valid results in addition to verbal argument: these are the aesthetic, the therapeutic and the explicative. Because these are not followed through in The Theory of Communicative Action, the impression is given that these are secondary forms of discourse. === Aesthetic discourse === Aesthetic discourses work by mediators' arguments, bringing us to consider a work or performance which itself demonstrates a value. "A work validated through aesthetic experience can then in turn take the place of an argument and promote the acceptance of precisely those standards according to which it counts as an authentic work." Habermas considers the mediation of the critic, the curator or the promoter as essential to bring people to the revelatory aesthetic experience. This mediation is often locked into economic interests either directly or through state agency. When Habermas considers the question of context, he refers to culture. "Every process of understanding takes place against the background of a culturally ingrained preunderstanding...
The interpretative task consists in incorporating the other's interpretation of the situation into one's own... this does not mean that interpretation must lead in every case to a stable and unambiguously differentiated assignment." Speech acts are embedded in contexts that are also changed by them. The relationship is dynamic and occurs in both directions. To see context as a fixed background or preunderstanding is to push it out of the sphere of communicative action. === Therapeutic discourse === Therapeutic discourse is that which serves to clarify systematic self-deception. Such self-deceptions typically arise from developmental experiences, which have left certain rigidities of behaviour or biases of value judgement. These rigidities do not allow flexible responses to present-time exigencies. Habermas sees this in terms of psychoanalysis. A related aspect of this discourse is the adoption of a reflective attitude, which is a basic condition of rational communication. But the claim to be free from illusions implies a dimension of self-analysis if it is to engage with change. The most intractable illusions are surely embedded within our subconscious. === Explicative discourse === Explicative discourse focuses on the very means of reaching understanding – the means of (linguistic) expression. Rationality must include a willingness to question the grammar of any system of communication used to forward validity claims. The question of whether visual language can put forward an argument is not broached by Habermas. Although language is broadly defined as any communicative action upon which one can be reflective, it is verbal discourse that is prioritised in Habermas's arguments. Verbal language certainly has the prominent place in his model of human action. Oral contexts of communication have been relatively little studied, and the distinction between oral and literary forms is not made in The Theory of Communicative Action.
As the system colonises the lifeworld, most enterprises are not driven by the motives of their members. "The bureaucratic disempowering and desiccation of spontaneous processes of opinion and will formation expands the scope for engineering mass loyalty and makes it easier to uncouple political decision making from concrete, identity forming contexts of life." The system does this by rewarding or coercing that which legitimates it from the cultural spheres. Such conditions of public patronage invisibly negate the freedom that is supposedly available in the cultural field. == Reception == The Theory of Communicative Action was the subject of a collection of critical essays published in 1986. The philosopher Tom Rockmore, writing in 1989, commented that it was unclear whether The Theory of Communicative Action or Habermas's earlier work Knowledge and Human Interests (1968) was the most important of Habermas's works. The Theory of Communicative Action has inspired many responses by social theorists and philosophers, and in 1998 was listed by the International Sociological Association as the eighth most important sociological book of the 20th century, behind Norbert Elias' The Civilizing Process (1939) but ahead of Talcott Parsons' The Structure of Social Action (1937). == See also == Communicative rationality Foucault–Habermas debate Rationality and power Wilhelm von Humboldt
Wikipedia/The_Theory_of_Communicative_Action
Feminist theory is the extension of feminism into theoretical, fictional, or philosophical discourse. It aims to understand the nature of gender inequality. It examines women's and men's social roles, experiences, interests, chores, and feminist politics in a variety of fields, such as anthropology and sociology, communication, media studies, psychoanalysis, political theory, home economics, literature, education, and philosophy. Feminist theory often focuses on analyzing gender inequality. Themes often explored in feminist theory include discrimination, objectification (especially sexual objectification), oppression, patriarchy, stereotyping, art history and contemporary art, and aesthetics. == History == "The Changing Woman" is a Navajo myth that gave credit to a woman who, in the end, populated the world. By the 1790s, the leading feminist voice in both the U.K. and U.S. was Mary Wollstonecraft, whose A Vindication of the Rights of Woman (1792) was influenced by the lesser-known American Judith Sargent Murray. Both women asserted that the best route to improving women's condition is education. Their ideas influenced American Charles Brockden Brown, who wrote Dialogues of Alcuin in 1797. The Anglophone world saw no feminist theory of note until "Men and Women: Brief Hypothesis Concerning the Difference in their Genius" (1824) by American John Neal, who repeated Wollstonecraft's and Murray's theories, but added the assertion that women are unlike, but not inferior to men. This and other essays by Neal in the 1820s filled an intellectual gap between female scholars in the 1790s and those surrounding the 1848 Seneca Falls Convention. As a male writer insulated from many common forms of attack against female feminist thinkers, Neal's advocacy was crucial in bringing the field back into the mainstream in the U.K. and the U.S. 
By the time of the convention, writing by Neal, Sarah Grimké, and Margaret Fuller had solidified ideas from sporadic publications over the previous sixty years into a movement that reached a wider audience. In 1851, Sojourner Truth addressed women's rights issues through her publication, "Ain't I a Woman". Sojourner Truth addressed the issue of women having limited rights due to men's flawed perception of women. Truth argued that if a woman of color can perform tasks that were supposedly limited to men, then any woman of any color could perform those same tasks. After her arrest for illegally voting, Susan B. Anthony gave a speech within court in which she addressed the issues of language within the constitution documented in her publication, "Speech after Arrest for Illegal voting" in 1872. Anthony questioned the authoritative principles of the constitution and its male-gendered language. She raised the question of why women are accountable to be punished under law but they cannot use the law for their own protection (women could not vote, own property, nor maintain custody of themselves in marriage). She also critiqued the constitution for its male-gendered language and questioned why women should have to abide by laws that do not specify women. Nancy Cott makes a distinction between modern feminism and its antecedents, particularly the struggle for suffrage. In the United States she places the turning point in the decades before and after women obtained the vote in 1920 (1910–1930). She argues that the prior woman movement was primarily about woman as a universal entity, whereas over this 20-year period it transformed itself into one primarily concerned with social differentiation, attentive to individuality and diversity. New issues dealt more with woman's condition as a social construct, gender identity, and relationships within and between genders. 
Politically, this represented a shift from an ideological alignment comfortable with the right, to one more radically associated with the left. Susan Kingsley Kent says that Freudian patriarchy was responsible for the diminished profile of feminism in the inter-war years; others, such as Juliet Mitchell, consider this to be overly simplistic, since Freudian theory is not wholly incompatible with feminism. Some feminist scholarship shifted away from the need to establish the origins of family, and towards analyzing the process of patriarchy. In the immediate postwar period, Simone de Beauvoir stood in opposition to an image of "the woman in the home". De Beauvoir provided an existentialist dimension to feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949. As the title implies, the starting point is the implicit inferiority of women, and the first question de Beauvoir asks is "what is a woman"? A woman, she realizes, is always perceived as the "other": "she is defined and differentiated with reference to man and not he with reference to her". In this book and her essay, "Woman: Myth & Reality", de Beauvoir anticipates Betty Friedan in seeking to demythologize the male concept of woman. "A myth invented by men to confine women to their oppressed state. For women, it is not a question of asserting themselves as women, but of becoming full-scale human beings." "One is not born, but rather becomes, a woman", or as Toril Moi puts it, "a woman defines herself through the way she lives her embodied situation in the world, or in other words, through the way in which she makes something of what the world makes of her". Therefore, the woman must regain her status as subject, to escape her defined role as "other", as a Cartesian point of departure. In her examination of myth, she appears as one who does not accept any special privileges for women.
Ironically, feminist philosophers have had to extract de Beauvoir herself from the shadow of Jean-Paul Sartre to fully appreciate her. While more philosopher and novelist than activist, she did sign one of the Mouvement de Libération des Femmes manifestos. The resurgence of feminist activism in the late 1960s was accompanied by an emerging literature of concerns for the earth, spirituality, and environmentalism. This, in turn, created an atmosphere conducive to reigniting the study of and debate on matricentricity as a rejection of determinism, by writers such as Adrienne Rich and Marilyn French, while for socialist feminists like Evelyn Reed, patriarchy held the properties of capitalism. Feminist psychologists, such as Jean Baker Miller, sought to bring a feminist analysis to previous psychological theories, proving that "there was nothing wrong with women, but rather with the way modern culture viewed them". Elaine Showalter describes the development of feminist theory as having a number of phases. The first she calls "feminist critique", in which the feminist reader examines the ideologies behind literary phenomena. The second she calls "gynocritics", in which the "woman is producer of textual meaning", including "the psychodynamics of female creativity; linguistics and the problem of a female language; the trajectory of the individual or collective female literary career and literary history". The last phase she calls "gender theory", in which the "ideological inscription and the literary effects of the sex/gender system" are explored. This model has been criticized by Toril Moi, who sees it as an essentialist and deterministic model for female subjectivity; she also criticizes it for not taking account of the situation of women outside the West. From the 1970s onwards, psychoanalytical ideas arising in the field of French feminism gained a decisive influence on feminist theory.
Feminist psychoanalysis deconstructed the phallic hypotheses regarding the unconscious. Julia Kristeva, Bracha Ettinger and Luce Irigaray developed specific notions concerning unconscious sexual difference, the feminine, and motherhood, with wide implications for film and literature analysis. In the 1990s and the first decades of the 21st century, intersectionality played a major role in feminist theory, leading to the development of transfeminism and queer feminism and the consolidation of Black, anti-racist and postcolonial feminisms, among others. The rise of the fourth wave in the 2010s led to new discussions on sexual violence, consent and body positivity, as well as a deepening of intersectional perspectives. Simultaneously, feminist philosophy and anthropology saw a rise in new materialist, affect-oriented, posthumanist and ecofeminist perspectives. == Disciplines == There are a number of distinct feminist disciplines, in which experts in other areas apply feminist techniques and principles to their own fields. These disciplines also frame debates that shape feminist theory, and their arguments can be applied interchangeably by feminist theorists. === Bodies === In Western thought, the body has been historically associated solely with women, whereas men have been associated with the mind. Susan Bordo, a modern feminist philosopher, elaborates in her writings the dualistic nature of the mind/body connection by examining the early philosophies of Aristotle, Hegel, and Descartes, revealing how distinguishing binaries such as spirit/matter and male activity/female passivity have worked to solidify gender characteristics and categorization. Bordo goes on to point out that while men have historically been associated with the intellect and the mind or spirit, women have long been associated with the body, the subordinated, negatively imbued term in the mind/body dichotomy.
The notion of the body (but not the mind) being associated with women has served as a justification for deeming women property, objects, and exchangeable commodities (among men). For example, women's bodies have been objectified throughout history through the changing ideologies of fashion, diet, exercise programs, cosmetic surgery, childbearing, etc. This contrasts with men's role as moral agents, responsible for working or fighting in bloody wars. The race and class of a woman can determine whether her body will be treated as decoration and protected, which is associated with middle- or upper-class women's bodies, or recognized for its use in labor and exploitation, which is generally associated with the bodies of working-class women or women of color. Second-wave feminist activism has argued for reproductive rights and choice. The women's health movement and lesbian feminism are also associated with this bodies debate. === The standard and contemporary sex and gender system === The standard sex-determination and gender model rests on the determined sex and gender of every individual, which serve as norms for societal life. The model holds that the sex of a person exists within a male/female dichotomy, giving importance to genitals and how they are formed via chromosomes and DNA-binding proteins (such as the sex-determining region Y genes), which are responsible for sending sex-determined initialization and completion signals to and from the biological sex-determination system in fetuses. Occasionally, variations occur during the sex-determining process, resulting in intersex conditions. The standard model defines gender as a social understanding/ideology that defines what behaviors, actions, and appearances are normal for males and females.
Studies of biological sex-determining systems have also begun working towards connecting gender conduct such as behaviors, actions, and desires with sex determination. ==== Socially-biasing children sex and gender system ==== The socially biasing children's sex and gender model broadens the horizons of the sex and gender ideologies. It revises the ideology of sex to be a social construct that is not limited to either male or female, a view explained by the Intersex Society of North America: "nature doesn't decide where the category of 'male' ends and the category of 'intersex' begins, or where the category of 'intersex' ends and the category of 'female' begins. Humans decide. Humans (today, typically doctors) decide how small a penis has to be, or how unusual a combination of parts has to be, before it counts as intersex". Therefore, sex is not a biological/natural construct but a social one, since society and doctors decide what it means to be male, female, or intersex in terms of sex chromosomes and genitals, in addition to their personal judgment of who passes, or how one passes, as a specific sex. The ideology of gender remains a social construct but is not as strict and fixed; instead, gender is easily malleable and forever changing. One example of how the standard definition of gender alters with time is depicted in Sally Shuttleworth's Female Circulation, in which the "abasement of the woman, reducing her from an active participant in the labor market to the passive bodily existence to be controlled by male expertise is indicative of the ways in which the ideological deployment of gender roles operated to facilitate and sustain the changing structure of familial and market relations in Victorian England". In other words, this quote shows how what it meant to grow up into the roles of a female (gender/roles) changed from being a homemaker to being a working woman and then back to being passive and inferior to males.
In conclusion, the contemporary sex and gender model is accurate because both sex and gender are rightly seen as social constructs inclusive of the wide spectrum of sexes and genders, in which nature and nurture are interconnected. === Epistemologies === Questions about how knowledge is produced, generated, and distributed have been central to Western conceptions of feminist theory and discussions on feminist epistemology. One debate proposes such questions as "Are there 'women's ways of knowing' and 'women's knowledge'?" and "How does the knowledge women produce about themselves differ from that produced by patriarchy?" Feminist theorists have also proposed "feminist standpoint knowledge", which attempts to replace the "view from nowhere" with a model of knowing that proceeds from women's lives. A feminist approach to epistemology seeks to establish knowledge production from a woman's perspective. It theorizes that from personal experience comes knowledge which helps each individual look at things from a different perspective. Central to feminism is that women are systematically subordinated, and bad faith exists when women surrender their agency to this subordination (for example, acceptance of religious beliefs that a man is the dominant party in a marriage by the will of God). Simone de Beauvoir labels such women "mutilated" and "immanent". === Intersectionality === Intersectionality is the examination of the various ways in which people are oppressed, based on the relational web of dominating factors of race, sex, class, nation and sexual orientation. Intersectionality "describes the simultaneous, multiple, overlapping, and contradictory systems of power that shape our lives and political options". While this theory can be applied to all people, and more particularly all women, it is specifically mentioned and studied within the realm of black feminism.
Patricia Hill Collins argues that black women in particular have a unique perspective on the oppression of the world: unlike white women, they face racial and gender oppression simultaneously, among other factors. This debate raises the issue of understanding the oppressive lives of women that are shaped not by gender alone but by other elements such as racism, classism, ageism, heterosexism, ableism, etc. === Language === In this debate, women writers have addressed the issues of masculinized writing through male-gendered language that may not serve to accommodate the literary understanding of women's lives. Such masculinized language that feminist theorists address is the use of, for example, "God the Father", which is looked upon as a way of designating the sacred as solely male (in other words, biblical language glorifies men through masculine pronouns like "he" and "him" and by addressing God as a "He"). Feminist theorists attempt to reclaim and redefine women through a deeper thinking of language. For example, feminist theorists have used the term "womyn" instead of "women". Some feminist theorists have suggested using neutral terminology when naming jobs (for example, police officer versus policeman or mail carrier versus mailman). Some feminist theorists have reclaimed and redefined such words as "dyke" and "bitch". === Psychology === Feminist psychology is a form of psychology centered on societal structures and gender. Feminist psychology critiques the fact that historically, psychological research has been done from a male perspective with the view that males are the norm. Feminist psychology is oriented on the values and principles of feminism. It incorporates gender and the ways women are affected by issues resulting from it. Ethel Dench Puffer Howes was one of the first women to enter the field of psychology. She was the executive secretary of the National College Equal Suffrage League in 1914.
One major psychological theory, relational-cultural theory, is based on the work of Jean Baker Miller, whose book Toward a New Psychology of Women proposes that "growth-fostering relationships are a central human necessity and that disconnections are the source of psychological problems". Inspired by Betty Friedan's The Feminine Mystique and other feminist classics from the 1960s, relational-cultural theory proposes that "isolation is one of the most damaging human experiences and is best treated by reconnecting with other people", and that a therapist should "foster an atmosphere of empathy and acceptance for the patient, even at the cost of the therapist's neutrality". The theory is based on clinical observations and sought to prove that "there was nothing wrong with women, but rather with the way modern culture viewed them". ==== Psychoanalysis ==== Psychoanalytic feminism and feminist psychoanalysis are based on Freud and his psychoanalytic theories, but they also supply an important critique of them. They maintain that gender is not biological but is based on the psycho-sexual development of the individual, and also that sexual difference and gender are different notions. Psychoanalytical feminists believe that gender inequality comes from early childhood experiences, which lead men to believe themselves masculine and women to believe themselves feminine. It is further maintained that gender leads to a social system dominated by males, which in turn influences individual psycho-sexual development. As a solution, some have suggested coeducation to avoid the gender-specific structuring of society.
During the last 30 years of the 20th century, contemporary French psychoanalytical theories concerning the feminine, which refer to sexual difference rather than to gender, developed by psychoanalysts like Julia Kristeva, Maud Mannoni, Luce Irigaray, and Bracha Ettinger, who invented the concepts of matrixial space and matrixial feminist ethics, have largely influenced not only feminist theory but also the understanding of the subject in philosophy, art, aesthetics and ethics, and the general field of psychoanalysis itself. These French psychoanalysts are mainly post-Lacanian. Other feminist psychoanalysts and feminist theorists whose contributions have enriched the field through an engagement with psychoanalysis are Jessica Benjamin, Jacqueline Rose, Ranjana Khanna, and Shoshana Felman. === Literary theory === Feminist literary criticism is literary criticism informed by feminist theories or politics. Its history has been varied, from classic works by female authors such as George Eliot, Virginia Woolf, and Margaret Fuller to recent theoretical work in women's studies and gender studies by "third-wave" authors. In the most general terms, feminist literary criticism before the 1970s was concerned with the politics of women's authorship and the representation of women's condition within literature. Since the arrival of more complex conceptions of gender and subjectivity, feminist literary criticism has taken a variety of new routes. It has considered gender in the terms of Freudian and Lacanian psychoanalysis, as part of the deconstruction of existing power relations. === Film theory === Many feminist film critics, such as Laura Mulvey, have pointed to the "male gaze" that predominates in classical Hollywood filmmaking. Through the use of various film techniques, such as shot reverse shot, viewers are led to align themselves with the point of view of a male protagonist. Notably, women function as objects of this gaze far more often than as proxies for the spectator.
Feminist film theory of the last twenty years is heavily influenced by the general transformation in the field of aesthetics, including the new options for articulating the gaze offered by psychoanalytical French feminism, like Bracha Ettinger's feminine, maternal and matrixial gaze. === Art history === Linda Nochlin and Griselda Pollock are prominent art historians who have written on contemporary and modern artists and articulated art history from a feminist perspective since the 1970s. Pollock works with French psychoanalysis, and in particular with Kristeva's and Ettinger's theories, to offer new insights into art history and contemporary art, with special regard to questions of trauma and trans-generational memory in the works of women artists. Other prominent feminist art historians include Norma Broude and Mary Garrard, Amelia Jones, Mieke Bal, Carol Duncan, Lynda Nead, Lisa Tickner, Tamar Garb, Hilary Robinson, and Katy Deepwell. === History === Feminist history refers to the re-reading and re-interpretation of history from a feminist perspective. It is not the same as the history of feminism, which outlines the origins and evolution of the feminist movement. It also differs from women's history, which focuses on the role of women in historical events. The goal of feminist history is to explore and illuminate the female viewpoint of history through the rediscovery of female writers, artists, philosophers, etc., in order to recover and demonstrate the significance of women's voices and choices in the past. === Geography === Feminist geography is often considered part of a broader postmodern approach to the subject which is not primarily concerned with the development of conceptual theory in itself but rather focuses on the real experiences of individuals and groups in their own localities, upon the geographies that they live in within their own communities.
In addition to its analysis of the real world, it also critiques existing geographical and social studies, arguing that academic traditions are delineated by patriarchy, and that contemporary studies which do not confront the nature of previous work reinforce the male bias of academic study. === Philosophy === Feminist philosophy refers to philosophy approached from a feminist perspective. Feminist philosophy involves attempts to use the methods of philosophy to further the cause of the feminist movements; it also tries to criticize and/or reevaluate the ideas of traditional philosophy from within a feminist view. This critique stems from the dichotomy Western philosophy has conjectured between the mind and body phenomena. There is no specific school for feminist philosophy as there has been for other theories: feminist philosophers can be found in both the analytic and continental traditions, hold many different viewpoints on philosophical issues within those traditions, and can belong to many different varieties of feminism. The writings of Judith Butler, Rosi Braidotti, Donna Haraway, Bracha Ettinger and Avital Ronell are the most significant psychoanalytically informed influences on contemporary feminist philosophy. === Sexology === Feminist sexology is an offshoot of traditional studies of sexology that focuses on the intersectionality of sex and gender in relation to the sexual lives of women. Feminist sexology shares many principles with the wider field of sexology; in particular, it does not try to prescribe a certain path or "normality" for women's sexuality, but only to observe and note the different and varied ways in which women express their sexuality. Looking at sexuality from a feminist point of view creates connections between the different aspects of a person's sexual life.
From feminist perspectives, sexology, the study of human sexuality and sexual relationships, relates to the intersectionality of gender, race and sexuality. Men have dominant power and control over women in relationships, and women are expected to hide their true feelings about sexual behaviors. Women of color face even more sexual violence in society. Some countries in Africa and Asia even practice female genital cutting, controlling women's sexual desire and limiting their sexual behavior. Moreover, Bunch, the women's and human rights activist, states that society used to see lesbianism as a threat to male supremacy and to the political relationships between men and women. Therefore, in the past, people viewed being a lesbian as a sin and made it punishable by death. Even today, many people still discriminate against homosexuals. Many lesbians hide their sexuality and face even more sexual oppression. === Monosexual paradigm === The monosexual paradigm is a term coined by Blasingame, a self-identified African American bisexual woman. Blasingame used this term to address the lesbian and gay communities who turned a blind eye to the dichotomy that oppressed bisexuals from both heterosexual and homosexual communities. This oppression negatively affects the gay and lesbian communities more than the heterosexual community due to its contradictory exclusion of bisexuals. Blasingame argued that in reality dichotomies are inaccurate representations of individuals because nothing is truly black or white, straight or gay. Her main argument is that biphobia stems from two roots: internalized heterosexism and racism. Internalized heterosexism is described in the monosexual paradigm in which the binary states that one is either straight or gay and nothing in between. Gays and lesbians accept this internalized heterosexism by conforming to the monosexual paradigm, favoring single-sex attraction and opposing attraction to both sexes.
Blasingame described this favoritism as an act of horizontal hostility, where oppressed groups fight amongst themselves. Racism is described in the monosexual paradigm as a dichotomy where individuals are either black or white, again with nothing in between. The issue of racism comes to the fore in regard to bisexuals' coming-out process, where the risks of coming out vary based on anticipated community reaction, and in regard to the norms among bisexual leadership, where class status and race factor predominantly over sexual orientation. === Politics === Feminist political theory is a recently emerging field in political science focusing on gender and feminist themes within the state, institutions and policies. It questions "modern political theory, dominated by universalistic liberalist thought, which claims indifference to gender or other identity differences and has therefore taken its time to open up to such concerns". Feminist perspectives entered international relations in the late 1980s, at about the same time as the end of the Cold War. This timing was no coincidence: for the previous forty years the conflict between the US and the USSR had dominated the agenda of international politics. After the Cold War, there was continuing relative peace between the main powers, and many new issues appeared on the international relations agenda. More attention was also paid to social movements, and feminist approaches were increasingly used to analyze world politics. Feminists started to emphasize that while women have always been players in the international system, their participation has frequently been associated with non-governmental settings such as social movements; however, they could also participate in inter-state decision-making processes as men did.
Until more recently, the role of women in international politics has been confined to being the wives of diplomats, nannies who go abroad to find work and support their families, or sex workers trafficked across international boundaries. Women's contributions have not been seen in areas where hard power plays a significant role, such as the military. Nowadays, women are gaining momentum in the sphere of international relations in areas of government, diplomacy, academia, etc. Despite barriers to more senior roles, women currently hold 11.1 percent of the seats in the U.S. Senate Foreign Relations Committee, and 10.8 percent in the House. In the U.S. Department of State, women make up 29 percent of the chiefs of mission, and 29 percent of senior foreign positions at USAID. In contrast, women are profoundly impacted by the decisions statespersons make. === Economics === Feminist economics broadly refers to a developing branch of economics that applies feminist insights and critiques to economics. In recent decades, feminists such as Katrine Marçal, author of Who Cooked Adam Smith's Dinner?, have also taken up a critique of economics. Research in feminist economics is often interdisciplinary, critical, or heterodox. It encompasses debates about the relationship between feminism and economics on many levels: from applying mainstream economic methods to under-researched "women's" areas, to questioning how mainstream economics values the reproductive sector, to deeply philosophical critiques of economic epistemology and methodology. One prominent issue that feminist economists investigate is how the gross domestic product (GDP) does not adequately measure unpaid labor predominantly performed by women, such as housework, childcare, and eldercare. Feminist economists have also challenged and exposed the rhetorical approach of mainstream economics. They have made critiques of many basic assumptions of mainstream economics, including the Homo economicus model.
In the Houseworker's Handbook, Betsy Warrior presents a cogent argument that the reproductive and domestic labor of women form the foundation of economic survival, although this labor is unremunerated and not included in the GDP. According to Warrior: "Economics, as it's presented today, lacks any basis in reality as it leaves out the very foundation of economic life. That foundation is built on women's labor; first her reproductive labor which produces every new laborer (and the first commodity, which is mother's milk and which nurtures every new 'consumer/laborer'); secondly, women's labor composed of cleaning, cooking, negotiating social stability and nurturing, which prepares for market and maintains each laborer. This constitutes women's continuing industry enabling laborers to occupy every position in the work force. Without this fundamental labor and commodity there would be no economic activity." Warrior also notes that the unacknowledged income of men from illegal activities like arms, drugs and human trafficking, political graft, religious emoluments and various other undisclosed activities provides a rich revenue stream to men, which further invalidates GDP figures. Even in underground economies where women predominate numerically, like trafficking in humans, prostitution and domestic servitude, only a tiny fraction of the pimp's revenue filters down to the women and children he deploys. Usually the amount spent on them is merely for the maintenance of their lives and, in the case of those prostituted, some money may be spent on clothing and such accouterments as will make them more salable to the pimp's clients. For instance, focusing on just the U.S., according to a government-sponsored report by the Urban Institute in 2014, "A street prostitute in Dallas may make as little as $5 per sex act. But pimps can take in $33,000 a week in Atlanta, where the sex business brings in an estimated $290 million per year."
Proponents of this theory have been instrumental in creating alternative models, such as the capability approach, and in incorporating gender into the analysis of economic data to affect policy. Marilyn Power suggests that feminist economic methodology can be broken down into five categories. === Legal theory === Feminist legal theory is based on the feminist view that law's treatment of women in relation to men has not been equal or fair. The goals of feminist legal theory, as defined by leading theorist Clare Dalton, consist of understanding and exploring the female experience, figuring out whether law and institutions oppose females, and figuring out what changes can be committed to. This is to be accomplished through studying the connections between the law and gender as well as applying feminist analysis to concrete areas of law. Feminist legal theory stems from the inadequacy of the current structure to account for the discrimination women face, especially discrimination based on multiple, intersecting identities. Kimberlé Crenshaw's work is central to feminist legal theory, particularly her article Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory, and Antiracist Politics. DeGraffenreid v. General Motors is an example of such a case. In this instance, the court ruled that the plaintiffs, five Black women including Emma DeGraffenreid, who were employees of General Motors, were not eligible to file a complaint on the grounds that they, as black women, were not "a special class to be protected from discrimination". The ruling in DeGraffenreid against the plaintiffs revealed the court's inability to understand intersectionality's role in discrimination. Moore v. Hughes Helicopters, Inc. is another ruling which serves to reify the persistent discrediting of intersectionality as a factor in discrimination.
In Moore, the plaintiff brought forth statistical evidence revealing a disparity in promotions to upper-level and supervisory jobs between men and women and, to a lesser extent, between Black and white men. Ultimately, the court denied the plaintiff the ability to represent all Blacks and all females. The decision shrank the pool of statistical information the plaintiff could draw from and limited the evidence to that of Black women only, a ruling in direct contradiction to DeGraffenreid. Further, because the plaintiff originally claimed discrimination as a Black female rather than, more generally, as a female, the court stated it had concerns about whether the plaintiff could "adequately represent white female employees". Payne v. Travenol serves as yet another example of the courts' inconsistency when dealing with issues revolving around intersections of race and sex. The plaintiffs in Payne, two Black females, filed suit against Travenol on behalf of both Black men and women on the grounds that the pharmaceutical plant practiced racial discrimination. The court ruled the plaintiffs could not adequately represent Black males; however, it did allow the admittance of statistical evidence inclusive of all Black employees. Despite the more favorable outcome after extensive racial discrimination was found, the court decided the benefits of the ruling – back pay and constructive seniority – would not be extended to Black males employed by the company. Moore contends Black women cannot adequately represent white women on issues of sex discrimination, Payne suggests Black women cannot adequately represent Black men on issues of race discrimination, and DeGraffenreid argues Black women are not a special class to be protected. The rulings, when connected, display a deep-rooted problem in regard to addressing discrimination within the legal system.
These cases, although outdated, are used by feminists as evidence of their ideas and principles. === Communication theory === Feminist communication theory has evolved over time and branches out in many directions. Early theories focused on the way that gender influenced communication, and many argued that language was "man made". This view of communication promoted a "deficiency model", asserting that characteristics of speech associated with women were negative and that men "set the standard for competent interpersonal communication", which influences the type of language used by men and women. These early theories also suggested that ethnicity and cultural and economic backgrounds needed to be addressed; they looked at how gender intersects with other identity constructs, such as class, race, and sexuality. Feminist theorists, especially those considered liberal feminists, began looking at issues of equality in education and employment. Other theorists addressed political oratory and public discourse. The recovery project brought to light many women orators who had been "erased or ignored as significant contributors". Feminist communication theorists also addressed how women were represented in the media and how the media "communicated ideology about women, gender, and feminism". Feminist communication theory also encompasses access to the public sphere, whose voices are heard in that sphere, and the ways in which the field of communication studies has limited what is regarded as essential to public discourse. The recognition of a full history of women orators overlooked and disregarded by the field has effectively become an undertaking of recovery, as it establishes and honors the existence of women in history and lauds the communication by these historically significant contributors.
This recovery effort, begun by Andrea Lunsford, Professor of English and Director of the Program in Writing and Rhetoric at Stanford University, and followed by other feminist communication theorists, also names women such as Aspasia, Diotima, and Christine de Pisan, who were likely influential in rhetorical and communication traditions in classical and medieval times, but who have been negated as serious contributors to the traditions. Feminist communication theorists are also concerned with a recovery effort in attempting to explain the methods used by those with power to prohibit women like Maria W. Stewart, Sarah Moore Grimké, and Angelina Grimké, and more recently, Ella Baker and Anita Hill, from achieving a voice in political discourse and consequently being driven from the public sphere. Theorists in this vein are also interested in the unique and significant techniques of communication employed by these women and others like them to surmount some of the oppression they experienced. Feminist theorists also evaluate communication expectations for students and women in the workplace, in particular how the performance of feminine versus masculine styles of communicating is constructed. Judith Butler, who coined the term "gender performativity", further suggests that "theories of communication must explain the ways individuals negotiate, resist, and transcend their identities in a highly gendered society". This focus also includes the ways women are constrained or "disciplined" in the discipline of communication itself, in terms of biases in research styles and the "silencing" of feminist scholarship and theory. Who is responsible for deciding what is considered important public discourse is also put into question by feminist theorists in communication scholarship. 
This lens of feminist communication theory is labeled revalorist theory, which honors the historical perspective of women in communication in an attempt to recover voices that have been historically neglected. There have been many attempts to explain the lack of representative voices in the public sphere for women, including the notion that "the public sphere is built on essentialist principles that prevent women from being seen as legitimate communicators in that sphere", and theories of "subalternity", which "under extreme conditions of oppression...prevent those in positions of power from even hearing their communicative attempts". === Public relations === Feminist theory can be applied to the field of public relations. The feminist scholar Linda Hon examined the major obstacles that women in the field experienced. Some common barriers included male dominance and gender stereotypes. Hon shifted the feminist theory of PR from "women's assimilation into patriarchal systems" to "genuine commitment to social restructuring". In studies similar to Hon's, Elizabeth Lance Toth examined feminist values in public relations. Toth concluded that there is a clear link between gender and feminist values. These values include honesty, sensitivity, perceptiveness, fairness, and commitment. === Design === Technical writers have concluded that visual language can convey facts and ideas more clearly than almost any other means of communication. According to feminist theory, "gender may be a factor in how human beings represent reality." Men and women will construct different types of structures about the self, and, consequently, their thought processes may diverge in content and form. This division depends on the self-concept, which is an "important regulator of thoughts, feelings and actions" that "governs one's perception of reality". Accordingly, the self-concept has a significant effect on how men and women represent reality in different ways. 
Recently, "technical communicators' terms such as 'visual rhetoric,' 'visual language,' and 'document design' indicate a new awareness of the importance of visual design". Deborah S. Bosley explores this new concept of the "feminist theory of design" by conducting a study in which undergraduate males and females were asked to illustrate, on paper, a visual given to them in a text. Based on this study, she creates a "feminist theory of design" and connects it to technical communicators. In the results of the study, males used more angular illustrations, such as squares, rectangles, and arrows, which are interpreted as indicating a "direction", moving away from or toward something, and thus suggest more aggressive, masculine positions than rounded shapes. Females, on the other hand, used more curved visuals, such as circles, rounded containers, and bending pipes. Bosley takes into account that feminist theory offers insight into the relationship between females and circles or rounded objects. According to Bosley, studies of women and leadership indicate a preference for nonhierarchical work patterns (preferring a communication "web" rather than a communication "ladder"). Bosley explains that circles and other rounded shapes, which women chose to draw, are nonhierarchical and often used to represent inclusive, communal relationships, confirming her results that women's visual designs do have an effect on their means of communication. Based on these conclusions, this "feminist theory of design" suggests that gender does play a role in how humans represent reality. === Black feminist criminology === Black feminist criminology theory is a concept created by Hillary Potter in 2006 to act as a bridge that integrates feminist theory with criminology. It is based on the integration of Black feminist theory and critical race feminist theory. 
As Potter articulates this theory, Black feminist criminology describes the experiences of Black women as victims of crimes. Other scholars, such as Patrina Duhaney and Geniece Crawford Mondé, have explored Black feminist criminology in relation to currently and formerly incarcerated Black women. For years, Black women were historically overlooked and disregarded in the study of crime and criminology; however, with a new focus on Black feminism that sparked in the 1980s, Black feminists began to contextualize their unique experiences and examine why the treatment of Black women in the criminal justice system lacked female-specific approaches. Potter explains that because Black women usually have "limited access to adequate education and employment as consequences of racism, sexism, and classism", they are often disadvantaged. This disadvantage materializes into "poor responses by social service professionals and crime-processing agents to Black women's interpersonal victimization". Most crime studies focused on White males/females and Black males. Any results or conclusions drawn about Black males were usually assumed to apply equally to Black females. This was problematic, since Black males and Black females differ in what they experience. For instance, economic deprivation, status equality between the sexes, distinctive socialization patterns, racism, and sexism should all be taken into account between Black males and Black females. The two will experience all of these factors differently; therefore, it was crucial to resolve this dilemma. Black feminist criminology is proposed as the solution to this problem. It takes four factors into account: (1) the social structural oppression of Black women (such as through the lens of Crenshaw's intersectionality); (2) the nuances of Black communities and cultures; (3) Black intimate and familial relations; and (4) the Black woman as an individual. 
These four factors, Potter argues, help Black feminist criminology describe the differences between Black women's and Black men's experiences within the criminal justice system. Still, Potter urges caution, noting that, just because this theory aims to help understand and explain Black women's experiences with the criminal justice system, one cannot generalize so much that nuances in experiences are ignored. Potter writes that Black women's "individual circumstances must always be considered in conjunction with the shared experiences of these women." === Feminist science and technology studies === Feminist science and technology studies (STS) refers to the transdisciplinary field of research on the ways gender and other markers of identity intersect with technology, science, and culture. The practice emerged from feminist critique of the masculine-coded uses of technology in the natural, medical, and technical sciences, and of technology's entanglement in gender and identity. A large part of feminist technoscience theory holds that science and technology are linked and should be held accountable for the social and cultural developments resulting from both fields. Some key issues feminist technoscience studies address include: the use of feminist analysis as applied to scientific ideas and practices; the intersections between race, class, gender, science, and technology; the implications of situated knowledges; and the politics of gender in understanding agency, the body, rationality, and the boundaries between nature and culture. === Ecological feminism or ecofeminism === In the 1970s, the impacts of post-World War II technological development led many women to organise against issues ranging from the toxic pollution of neighbourhoods to nuclear weapons testing on indigenous lands. This grassroots activism, emerging across every continent, was both intersectional and cross-cultural in its struggle to protect the conditions for the reproduction of life on Earth. 
Known as ecofeminism, the political relevance of this movement continues to expand. Classic statements in its literature include Carolyn Merchant, United States, The Death of Nature; Maria Mies, Germany, Patriarchy and Accumulation on a World Scale; Vandana Shiva, India, Staying Alive: Women, Ecology and Development; Ariel Salleh, Australia, Ecofeminism as Politics: nature, Marx, and the postmodern. Ecofeminism involves a profound critique of Eurocentric epistemology, science, economics, and culture. It is increasingly prominent as a feminist response to the contemporary breakdown of the planetary ecosystem. == See also == == References == == Further reading == "Lexicon of Debates". Feminist Theory: A Reader. 2nd Ed. Edited by Kolmar, Wendy and Bartowski, Frances. New York: McGraw-Hill, 2005. 42–60. == External links == Evolutionary Feminism Feminist theory website (Center for Digital Discourse and Culture, Virginia Tech) Feminist Theories and Anthropology by Heidi Armbruster The Radical Women Manifesto: Socialist Feminist Theory, Program and Organizational Structure (Seattle: Red Letter Press, 2001) Pembroke Center for Teaching and Research on Women, Brown University Feminist Theory Archive, Brown University The Feminist eZine – An Archive of Historical Feminist Articles Women, Poverty, and Economics – Facts and Figures (archived 3 November 2013)
Wikipedia/Feminist_theory
"Cultural Marxism" refers to a far-right antisemitic conspiracy theory that misrepresents Western Marxism (especially the Frankfurt School) as being responsible for modern progressive movements, identity politics, and political correctness. The conspiracy theory posits that there is an ongoing and intentional academic and intellectual effort to subvert Western society via a planned culture war that undermines the supposed Christian values of traditionalist conservatism and seeks to replace them with culturally progressive values. A revival of the Nazi propaganda term "Cultural Bolshevism", the contemporary version of the conspiracy theory originated in the United States during the 1990s. Originally found only on the far-right political fringe, the term began to enter mainstream discourse in the 2010s and is now found globally. The conspiracy theory of a Marxist culture war is promoted by right-wing politicians, fundamentalist religious leaders, political commentators in mainstream print and television media, and white supremacist terrorists, and has been described as "a foundational element of the alt-right worldview". Scholarly analysis of the conspiracy theory has concluded that it has no basis in fact. == Origins == European reactionaries, following their defeat in the culture wars of the 1960s against liberals and Marxists, split from the mainstream conservatism of the "Old Right", forming a loose intellectual grouping (the "New Right") that criticised the contemporaneous society and attempted to transform cultural norms and values. In the 21st century, the European New Right influenced the US alt-right to focus on nonviolent ways to delegitimize the liberal status quo. This included criticising the perceived decline of Western culture and the influence of pop culture, which they claimed was the result of a collusion between capitalism and what they called "Cultural Marxism". 
=== Michael Minnicino and the LaRouche Movement === Michael Minnicino's 1992 essay New Dark Age: The Frankfurt School and 'Political Correctness' has been described as a starting point for the contemporary conspiracy theory in the United States. Minnicino's interest in the subject derived from his involvement in the LaRouche movement. Lyndon LaRouche had begun developing conspiracy theories regarding the Frankfurt School in 1974, when he alleged that Herbert Marcuse and Angela Davis were acting as part of COINTELPRO. Other features of the conspiracy theory had developed across the 1970s and 80s in the movement's magazine, EIR, according to the researcher Andrew Woods. Minnicino's essay argued that late twentieth-century America had become a "New Dark Age" as a result of the abandonment of Judeo-Christian and Renaissance ideals, which he claimed had been replaced in modern art with a "tyranny of ugliness". He attributed this to an alleged plot to instill cultural pessimism in America, carried out in three stages by Georg Lukács, the Frankfurt School, and elite media figures and political campaigners. Minnicino asserted there were two aspects of the Frankfurt School plan to destroy Western culture. Firstly, a cultural critique, by Theodor Adorno and Walter Benjamin, to use art and culture to promote alienation and replace Christianity with socialism. This included the development of opinion polling and advertising techniques to brainwash the populace and control political campaigning. Secondly, the plan supposedly included attacks on the traditional family structure by Herbert Marcuse and Erich Fromm to promote women's rights, sexual liberation, and polymorphous perversity to subvert patriarchal authority. Minnicino claimed the Frankfurt School was responsible for elements of the counterculture of the 1960s and a "psychedelic revolution", distributing hallucinogenic drugs to encourage sexual perversion and promiscuity. 
After the 2011 terrorist attacks in Norway by Anders Breivik, a follower of the conspiracy theory, Minnicino repudiated his own essay. Minnicino wrote, "I still like to think that some of my research was validly conducted and useful. However, I see very clearly that the whole enterprise—and especially the conclusions—was hopelessly deformed by self-censorship and the desire to in some way support Mr. LaRouche's crack-brained world-view." === Paul Weyrich and William Lind === Paul Weyrich and William Lind were prominent figures of cultural conservatism in the United States; Weyrich had co-founded right-wing groups including the Free Congress Foundation, which he led. Weyrich equated political correctness with Cultural Marxism in a speech to the Conservative Leadership Conference of the Civitas Institute in 1998. He argued that "we have lost the culture war" and that "a legitimate strategy for us to follow is to look at ways to separate ourselves from the institutions that have been captured by the ideology of Political Correctness, or by other enemies of our traditional culture." For the Free Congress Foundation, Weyrich commissioned Lind, a paleoconservative activist, to write a history of Cultural Marxism, defined as "a brand of Western Marxism ... commonly known as 'multiculturalism' or, less formally, Political Correctness." In the 2000 speech The Origins of Political Correctness, Lind wrote, "If we look at it analytically, if we look at it historically, we quickly find out exactly what it is. Political correctness is cultural Marxism. It is Marxism translated from economic into cultural terms. It is an effort that goes back not to the 1960s and the Hippies and the peace movement, but back to World War I. If we compare the basic tenets of Political Correctness with classical Marxism, the parallels are very obvious." Lind employed the conspiracy theory to argue that leftist and liberal ideologies were alien to the United States. 
He argued that Lukács and Antonio Gramsci had aimed to subvert Western culture because it was an obstacle to the Marxist goal of proletarian revolution. He alleged that the Frankfurt School under Max Horkheimer had hoped to destroy Western civilization and establish totalitarianism (even though some members had fled Nazi totalitarianism), using four main strategies. First, Lind said, Horkheimer's critical theory would undermine the authority of family and government while segregating society into opposing groups of victims and oppressors. Second, he said, concepts of the authoritarian personality and the F-scale measuring susceptibility to fascism, developed by Adorno, would be used to accuse Americans with right-wing views of having fascist principles. Third, he said, polymorphous perversity would undermine family structure by promoting free love and homosexuality. Fourth, he characterized Herbert Marcuse as saying that left victim-groups should be allowed to speak while groups on the right were silenced. Lind said that Marcuse considered a coalition of "Blacks, students, feminist women, and homosexuals" as a feasible vanguard of cultural revolution in the 1960s. Lind also wrote that Cultural Marxism was an example of fourth-generation warfare. Pat Buchanan brought more attention among paleoconservatives to Weyrich and Lind's iteration of the conspiracy theory. Jérôme Jamin refers to Buchanan as the "intellectual momentum" of the conspiracy theory, and to Anders Breivik as the "violent impetus". Both of them relied on Lind, who edited a multi-authored work called "Political Correctness: A Short History of an Ideology" that Jamin calls the core text that "has been unanimously cited as 'the' reference since 2004." Lind and the Free Congress Foundation produced the video Political Correctness: The Frankfurt School in 1999. It was further distributed by the Council of Conservative Citizens, a white supremacist group, which added its own introduction. 
The film includes decontextualized clips of historian Martin Jay, who was not aware of the nature of the production at the time. Jay has since become a recognized expert on the conspiracy theory. Concerning right-wing exploitation of his statements, Jay wrote, "Those beans I allegedly spilled had been on the plate for a very long time," going on to confirm that the Frankfurt school were Marxists concerned with culture, and that Marcuse promulgated the idea of repressive tolerance. However, the conspiracy theory presents an "impoverished cartoon version" of these ideas. Jay wrote that Lind's documentary was effective Cultural Marxism propaganda because it "spawned a number of condensed, textual versions, which were reproduced on a number of radical, right-wing [web] sites." Jay further writes: These, in turn, led to a plethora of new videos, now available on YouTube, which feature an odd cast of pseudo-experts regurgitating exactly the same line. The message is numbingly simplistic: All the 'ills' of modern American culture, from feminism, affirmative action, sexual liberation, racial equality, multiculturalism and gay rights to the decay of traditional education, and even environmentalism, are ultimately attributable to the insidious intellectual influence of the members of the Institute for Social Research who came to America in the 1930s. Lind's documentary also featured Lazlo Pasztor, a former member of the Hungarian Arrow Cross Party, who collaborated with the Nazis and later served five years in prison for crimes against humanity. === Others === David Solway sees a "master plan" in Marxist revolutionaries and Cultural Marxists advocating for or predicting the dissolution of marriage. The charge is that they have a "'master plan' for the overthrow of Western civilization from within, personified by those members of the Frankfurt School [...]". 
=== Popularization === Rachel Busbridge, Benjamin Moffitt and Joshua Thorburn describe the conspiracy theory as being promoted by the far-right, but note that it "has gained ground over the past quarter century"; they conclude that "[t]hrough the lens of the Cultural Marxist conspiracy, however, it is possible to discern a relationship of empowerment between mainstream and fringe, whereby certain talking points and tropes are able to be transmitted, taken up and adapted by 'mainstream' figures, thus giving credence and visibility to ideologies that would have previously been constrained to the margins." Andrew Breitbart, founder of Breitbart News, authored a 2011 book Righteous Indignation: Excuse Me While I Save the World that represents one of the conspiracy theory's moves towards the mainstream. Breitbart's interpretation of the conspiracy is similar in most respects to that of Lind. Breitbart attributes the spread of the ideas of the Frankfurt School from universities to a wider audience to "trickledown intellectualism", and claims that Saul Alinsky introduced cultural Marxism to the masses in his 1971 handbook Rules for Radicals. Woods argues that Breitbart focuses on Alinsky in order to associate cultural Marxism with the modern Democratic Party, and Hillary Clinton. Breitbart claims that George Soros funds the alleged cultural Marxism project. Martin Jay wrote that Breitbart's book displayed "appalling ignorance" of the actual work of the Frankfurt School. Breitbart News has published the idea that Theodor Adorno's atonal music was an attempt at inducing mental illness on a mass scale. Former Breitbart contributors Ben Shapiro and Charlie Kirk, founder of Turning Point USA, have promoted the conspiracy theory, especially the claim that Cultural Marxist activity is happening in universities. 
In the late 2010s, Canadian clinical psychologist Jordan Peterson popularized the term, for example, by blaming "Cultural Marxism" for demanding the use of gender-neutral pronouns as a threat to free speech, thus moving the term into mainstream discourse. Critics state that Peterson misuses postmodernism as a stand-in term for the conspiracy without understanding its antisemitic implications, specifying that "Peterson isn't an ideological anti-Semite; there's every reason to believe that when he re-broadcasts fascist propaganda, he doesn't even hear the dog-whistles he's emitting". Spencer Sunshine and journalist Ari Paul have criticized traditional media such as The New York Times, New York Magazine and The Washington Post for their coverage of the conspiracy theory, arguing that they have either not clarified the nature of the conspiracy theory or "allow[ed] it to live on their pages." An example is an article in The New York Times by David Brooks, who Paul and Sunshine argue "rebrands cultural Marxism as mere political correctness, giving the Nazi-inspired phrase legitimacy for the American right. It is dropped in or quoted in other stories—some of them lighthearted, like the fashion cues of the alt-right—without describing how fringe this notion is. It's akin to letting conspiracy theories about chem trails or vaccines get unearned space in mainstream press." Another is Andrew Sullivan, who went on "to denounce 'cultural Marxists' for inspiring social justice movements on campuses." Paul and Sunshine argue that failure to highlight the nature of the conspiracy theory "has bitter consequences. 'It is legitimizing the use of that framework, and therefore it's coded antisemitism.'" Supporters of the conspiracy theory include paleoconservative political philosopher Paul Gottfried. Gottfried was at one time a student of Herbert Marcuse (with whom he disagreed) and edited the academic journal Telos. 
Under Gottfried's tenure, Telos became far-right in its outlook, writing favorably about Carl Schmitt and Alain de Benoist. Gottfried influenced Richard Spencer and has been called the "godfather" of the alt-right. He defended William Lind against accusations that "Cultural Marxism" has antisemitic undertones. Gottfried identifies as reactionary and questions the value of political equality. Gottfried defines cultural Marxism as "a particular movement for change that combines some elements of Marxist socialism with a call for sexual and cultural revolution". However, he says that the term "cultural Marxism" is not ideal since the connection with Marxism is tenuous. Gottfried writes that the influence of the Frankfurt School lives on in modern left-wing politics mainly in the form of a tendency to conflate the right wing with fascism. == Aspects == The conspiracy theory states that an elite of Marxist theorists and Frankfurt School intellectuals are subverting Western society. None of the Frankfurt School's members were part of any kind of international conspiracy to destroy Western civilization, and Horkheimer strictly prohibited members of the Frankfurt school from engaging in political activism in the United States. According to Marc Tuters, "the analysis of Marxism proffered by this literature would certainly not stand up to scrutiny by any serious historian of the subject." Conspiracy theorists misrepresent the nature of Theodor Adorno's work on the Princeton Radio Project, wherein Adorno sought to understand the ability of mass media to influence the public, which he saw as a danger to be mitigated, rather than a plan to be implemented. Conspiracy theorists position themselves as defending "Western civilization", which serves as a floating signifier often focusing on capitalism and freedom of speech. 
The conspiracy theory is an extreme assessment of political correctness, accusing the latter of being a project to destroy Christianity, nationalism, and the nuclear family. Scholars associated with the Frankfurt School sought to create a better society by warning against patriarchy and capitalist exploitation, goals that could seem threatening to others who have an interest in maintaining the status quo. The conspiracy theory exaggerates the influence of the Frankfurt School; Stuart Jeffries, discussing it, noted their "negligible real-world impact". According to Joan Braune, Cultural Marxism in the sense referred to by the conspiracy theorists never existed, and does not correspond to any historical school of thought. She also states that Frankfurt School scholars are referred to as "Critical Theorists", not "Cultural Marxists". She points out that, contrary to the claims of the conspiracy theorists, postmodernism tends to be wary of or even hostile towards Marxism, including towards the grand narratives typically supported by Critical Theory. == Antisemitism == The author Matthew Rose wrote that arguments by the American neo-Nazi Francis Parker Yockey after World War II were an early example of the conspiracy theory. William Lind on one occasion presented his theories at a Holocaust denial conference. Spencer Sunshine, an associate fellow at the Political Research Associates, stated that "the focus on the Frankfurt School by the right serves to highlight its inherent Jewishness." According to Samuel Moyn, "[t]he wider discourse around cultural Marxism today resembles nothing so much as a version of the Jewish Bolshevism myth updated for a new age." Maxime Dafaure likewise states that Cultural Marxism is a contemporary update of antisemitic conspiracy theories, such as the Nazi concept of "Cultural Bolshevism", and is directly associated with the concept of "Jewish Bolshevism". 
According to philosopher Slavoj Žižek, the term Cultural Marxism "plays the same structural role as that of the 'Jewish plot' in anti-Semitism: it projects (or rather, transposes) the immanent antagonism of our socio-economic life onto an external cause: what the conservative alt-right deplores as the ethical disintegration of our lives (feminism, attacks on patriarchy, political correctness, etc.) must have an external cause—because it cannot, for them, emerge out of the antagonisms and tensions of our own societies." Dominic Green wrote a conservative critique of conservatives' complaints about Cultural Marxism in Spectator USA, stating: "For the Nazis, the Frankfurter [sic] School and its vaguely Jewish exponents fell under the rubric of Kulturbolshewismus, 'Cultural Bolshevism.'" Andrew Woods, in the essay "Cultural Marxism and the Cathedral: Two Alt-Right Perspectives on Critical Theory" (2019), acknowledges comparisons to Cultural Bolshevism, but argues against the idea that the modern conspiracy theory was derived from Nazi propaganda. He writes instead that its antisemitism is "profoundly American". In Commune magazine, Woods detailed a genealogy of the conspiracy theory beginning with the LaRouche movement. Kevin MacDonald has written several antisemitic texts centering on the Frankfurt School. MacDonald criticized Breivik's manifesto for not being more hostile to Jews. === Circulation in the alt-right === Neo-Nazis and white supremacists promoted the conspiracy theory and helped expand its reach. Websites such as the American Renaissance have run articles with titles like "Cultural Marxism in Action: Media Matters Engineers Cancellation of Vdare.com Conference". 
The Daily Stormer regularly runs stories about "Cultural Marxism" with titles such as "Jewish Cultural Marxism is Destroying Abercrombie & Fitch", "Hollywood Strikes Again: Cultural Marxism through the Medium of Big Box-Office Movies" and "The Left-Center-Right Political Spectrum of Immigration = Cultural Marxism". Neo-Nazis associated with Stormfront have strategically used the Frankfurt School as a euphemism to refer to Jewish people more generally, in venues where more forthright antisemitism would be censored or rejected. Timothy Matthews criticized the Frankfurt School from an explicitly Christian right perspective in the Catholic weekly newspaper The Wanderer. According to Matthews, the Frankfurt School, under the influence of Satan, seeks to destroy the traditional Christian family using critical theory and Marcuse's concept of polymorphous perversity, thereby encouraging homosexuality and breaking down the patriarchal family. Andrew Woods wrote that the plot Matthews describes does not resemble the Frankfurt School so much as the alleged aims of communists in The Naked Communist by W. Cleon Skousen. Nonetheless, Matthews' account was circulated credulously by right-wing and alt-right news media, as well as in far-right internet forums such as Stormfront. Following the 2011 Norway attacks, the conspiracy theory was taken up by a number of far-right outlets and forums, including alt-right websites such as the AltRight Corporation, InfoWars and VDARE, which have promoted the theory. The AltRight Corporation's website, altright.com, featured articles with titles such as "Ghostbusters and the Suicide of Cultural Marxism", "#3 — Sweden: The World Capital of Cultural Marxism" and "Beta Leftists, Cultural Marxism and Self-Entitlement". InfoWars ran numerous headlines such as "Is Cultural Marxism America's New Mainline Ideology?" 
VDARE ran similar articles with similar titles such as "Yes, Virginia (Dare) There Is A Cultural Marxism—And It's Taking Over Conservatism Inc." Richard B. Spencer, head of the National Policy Institute, has promoted the conspiracy theory. Spencer's master's thesis was on the topic of Theodor Adorno. A combination of homophobia and anti-globalism within the alt-right has produced the concept of "globohomo", a variant of "Cultural Marxism" alleging that media and business elites seek to impose a homogeneous "uniculture" on the world, and to weaken populations by promoting feminism, sexual freedom, gender fluidity, liberalism, and immigration. "Globohomo" stands in for global neoliberalism, which is believed to be responsible for replacing a diversity of local cultures (especially white, Western culture) with generic consumerism. The concept was promoted by pick-up artist James C. Weidmann through his blog Chateau Heartiste. == Political violence == On July 22, 2011, Anders Breivik murdered 77 people in the 2011 Norway attacks. About 90 minutes before enacting the violence, Breivik e-mailed 1,003 people his manifesto 2083: A European Declaration of Independence and a copy of Political Correctness: A Short History of an Ideology. Cultural Marxism was the primary subject of Breivik's manifesto. Breivik wrote that the "sexually transmitted disease (STD) epidemic in Western Europe is a result of cultural Marxism", that "Cultural Marxism defines Muslims, feminist women, homosexuals, and some additional minority groups, as virtuous, and they view ethnic Christian European men as evil" and that the "European Court of Human Rights (ECHR) in Strasbourg is a cultural-Marxist-controlled political entity." A number of other far-right terrorists have espoused the conspiracy theory. Jack Renshaw, a neo-Nazi child sex offender convicted of plotting the assassination of Labour MP Rosie Cooper, promoted the conspiracy theory in a video for the British National Party. John T. 
Earnest, the perpetrator of the 2019 Poway synagogue shooting, was inspired by white nationalist ideology. In an online manifesto, Earnest stated that he believed "every Jew is responsible for the meticulously planned genocide of the European race" through the promotion of "cultural Marxism and communism." Concerning the real-life political violence caused by the conspiracy theory, law professor Samuel Moyn wrote: "That 'cultural Marxism' is a crude slander, referring to something that does not exist, unfortunately does not mean actual people are not being set up to pay the price, as scapegoats, to appease a rising sense of anger and anxiety. And for that reason, 'cultural Marxism' is not only a sad diversion from framing legitimate grievances but also a dangerous lure in an increasingly unhinged moment." == Analysis == Sociologists Julia Lux and John David Jordan argue that the conspiracy theory can be broken down into its key elements: "misogynist anti-feminism, neo-eugenic science (broadly defined as various forms of genetic determinism), genetic and cultural white supremacy, McCarthyist anti-Leftism fixated on postmodernism, radical anti-intellectualism applied to the social sciences, and the idea that a purge is required to restore normality." They go on to say that all of these items are "supported, proselytised and academically buoyed by intellectuals, politicians, and media figures with extremely credible educational backgrounds." In "Taking On Hate: One NGO's Strategies" (2009), the political scientist Heidi Beirich says the Cultural Marxism theory demonizes the cultural bêtes noires of conservatism such as feminists, LGBT social movements, secular humanists, multiculturalists, sex educators, environmentalists, immigrants and black nationalists. 
Jamin writes on the flexibility of the conspiracy theory to serve the rhetorical purposes of different groups with diverse sets of enemies: Next to the global dimension of the Cultural Marxism conspiracy theory, there is its innovative and original dimension, which lets its authors avoid racist discourses and pretend to be defenders of democracy. As such, Cultural Marxism is innovative in comparison with old styled theories of a similar nature, such as those involving Freemasons, Bavarian Illuminati, Jews or even Wall Street bankers. For Lind, Buchanan and Breivik, the threat does not come from the migrant or the Jew because he is a migrant or a Jew. For Lind, the threat comes from the Communist ideology, which is considered as a danger for freedom and democracy, and which is associated with different authoritarian political regimes (Russia, China, Cambodia, Cuba, etc.). For Buchanan, the threat comes from atheism, relativism and hard capitalism which, when combined, transform people and nations into an uncontrolled mass of alienated consumers. For Breivik, a self-indoctrinated lone-wolf, the danger comes from Islam, a religion seen as a totalitarian ideology which threatens liberal democracies from Western Europe as much as its Judeo-Christian heritage. In Lind, Buchanan and Breivik, overt racism is studiously avoided. Literary scholar Aaron Hanlon says "the objectives of proponents of conspiratorial views about Cultural Marxism were (and are) not to give a current account of Critical Theory, but to advance a conservative version of US liberalism against the scapegoat of global conspiracy theory" and "In short, what Critical Theory provides to those who use 'critical theory' to signal a socialist threat to liberalism is not only a link to Marxist thought, but also a straw man against which to advance neoliberal politics." 
Philosophy professor Matthew Sharpe on The Conversation noted that "The last four decades have seen a relative decline of Marxist thought in academia. Its influence has been superseded by 'post-structuralist' (or 'postmodernist') thinkers like Jacques Derrida, Michel Foucault, Judith Butler and Gilles Deleuze. Post-structuralism is primarily indebted to thinkers of the European 'conservative revolution' led by Nietzsche and Heidegger. Where Marxism is built on hopes for reason, revolution and social progress, post-structuralist thinkers roundly reject such optimistic 'grand narratives'. Post-structuralists are as preoccupied with culture as our conservative news columnists. But their analyses of identity and difference challenge the primacy Marxism affords to economics as much as they oppose liberal or conservative ideas." == By country == === Australia === Shortly after the Norway attacks, mainstream right-wing politicians began espousing the conspiracy theory. In 2013, Cory Bernardi, a member of the ruling Liberal Party, wrote in his book The Conservative Revolution that "cultural Marxism has been one of the most corrosive influences on society over the last century." Five years later, Fraser Anning, a former Australian Senator who initially sat as a member of Pauline Hanson's One Nation and then Katter's Australian Party, declared during his maiden speech in 2018 that "Cultural Marxism is not a throwaway line but a literal truth" and spoke of the need for a "final solution to the immigration problem." === Brazil === In Brazil, the government of Jair Bolsonaro contained a number of administration members who promoted the conspiracy theory, including Eduardo Bolsonaro, the president's son, who "enthusiastically described Steve Bannon as an opponent of Cultural Marxism." Jair Bolsonaro sought to expunge the influence of Paulo Freire from Brazilian universities. This had the opposite effect, driving sales of Freire's book Pedagogy of the Oppressed.
=== Cuba === In 2010, former head of state Fidel Castro called attention to a version of the conspiracy theory by Daniel Estulin, which proposed that the Bilderberg Group sought to influence world events via the spread of rock and roll music. Estulin's work was based on Minnicino's 1992 essay, which emphasized Adorno's involvement in the Radio Research Project. Martin Jay described Estulin's text as "risible" and explained that, although some in the Frankfurt School wrote about the potential for mass media to pacify labor movements, it was something they lamented rather than planned to implement. Castro invited Estulin to Cuba, where they issued a joint statement claiming Osama bin Laden was a CIA asset and that the United States was planning a nuclear war against Russia. In 2019, Jay wrote that Castro's interest in the conspiracy theory had no long-term consequences. === Hungary === Hungarian prime minister Viktor Orbán has invoked a cultural Marxism frame in justifying certain illiberal policies and authoritarian centralization of power. Orbán, who wrote a master's thesis on Antonio Gramsci, references Gramscian cultural hegemony as an impetus to contest left-aligned epistemic institutions, including universities and the media. In alignment with the cultural Marxism frame, Hungarian minister Bence Rétvári said that gender studies should be regarded as ideology rather than science. The Hungarian government withdrew state recognition of gender studies degree programs in 2018. === United Kingdom === During the Brexit debate in 2019, a number of Conservatives and Brexiteers were criticized for using the phrase "cultural Marxism" due to its conspiracy theory connotations. Suella Braverman, a Conservative Member of Parliament (MP), said in a pro-Brexit speech for the Bruges Group, a Eurosceptic think tank, that "[w]e are engaging in many battles right now.
As Conservatives, we are engaged in a battle against cultural Marxism, where banning things is becoming de rigueur, where freedom of speech is becoming a taboo, where our universities — quintessential institutions of liberalism — are being shrouded in censorship and a culture of no-platforming." Her usage of the conspiracy theory was condemned as hate speech by other MPs, the Board of Deputies of British Jews and the anti-racist organization Hope Not Hate. After later meeting with her, the Board of Deputies of British Jews said that she is "not in any way antisemitic." Braverman had been alerted to the term's far-right connections by journalist Dawn Foster, but she defended using it. Braverman denied that the term Cultural Marxism is an antisemitic trope; asked during a question and answer session whether she stood by the term, given its far-right connections, she said: "Yes, I do believe we are in a battle against cultural Marxism, as I said. We have culture evolving from the far left which has allowed the snuffing out of freedom of speech, freedom of thought." Braverman further added that she was "very aware of that ongoing creep of cultural Marxism, which has come from Jeremy Corbyn." Nigel Farage has promoted the cultural Marxist conspiracy theory, for which he has been condemned by Jewish groups, such as the Board of Deputies of British Jews, as well as a number of Members of Parliament, who said he used it as a dog-whistle code for antisemitism. Farage said that the United Kingdom faced "cultural Marxism", a term described in its report by The Guardian as "originating in a conspiracy theory based on a supposed plot against national governments, which is closely linked to the far right and antisemitism."
Farage's spokesman "condemned previous criticism of his language by Jewish groups and others as 'pathetic' and 'a manufactured story.'" In The War Against the BBC (2020), Patrick Barwise and Peter York describe how the Cultural Marxism conspiracy theory has been pushed by some on the right as part of allegations of bias at the BBC. Yasmin Alibhai-Brown cites Dominic Cummings, Tim Montgomerie and the right-wing website Guido Fawkes as examples of those who "relentlessly [complain] about the institution's 'cultural Marxism' or left-wing bias. This now happens on a near-daily basis." In November 2020, a letter signed by 28 Conservative MPs, published in The Telegraph, accused the National Trust of being "coloured by cultural Marxist dogma, colloquially known as the 'woke agenda'". The use of this terminology in the letter was described as antisemitic by the All-Party Parliamentary Group Against Antisemitism, the Jewish Council for Racial Equality, the anti-racist charity Hope Not Hate and the Campaign Against Antisemitism. === United States === Cultural Marxism discourse was found in several strands of U.S. right-wing politics post-2000, including the religious right and the Tea Party movement. Shortly after the election of Donald Trump in 2016, Alex Ross wrote an article in The New Yorker titled "The Frankfurt School Knew Trump Was Coming". It argued that Trump represented the kind of authoritarian identified by Theodor Adorno's F-scale. This idea prompted academic conferences on the same theme at the New School for Social Research and the Leo Baeck Institute. Martin Jay linked election rhetoric describing Trump supporters as "deplorables" to Adorno's authoritarian personality concept, saying it "counterproductively forecloses treating those it categorized as anything but objects of contempt." Jay encouraged empathy and dialogue to resolve political polarization.
In 2017, it was reported that advisor Rich Higgins was fired from the United States National Security Council for publishing the memorandum "POTUS & Political Warfare", which alleged the existence of a left-wing conspiracy to destroy Donald Trump's presidency, claiming that "American public intellectuals of Cultural Marxism, foreign Islamicists, and globalist bankers, the news media, and politicians from the Republican and Democratic parties were attacking Trump, because he represents an existential threat to the cultural Marxist memes that dominate the prevailing cultural narrative in the US." Higgins also asserted that the Frankfurt School "sought to deconstruct everything in order to destroy it, giving rise to society-wide nihilism." The memo was read by Donald Trump Jr., who passed on a copy of it to his father. Matt Shea, a Republican former member of the Washington House of Representatives, has also promoted Higgins' memo. In June 2023, Florida governor and then-candidate for President in the 2024 election Ron DeSantis defined "woke" as a "form of Cultural Marxism". Texas U.S. senator Ted Cruz used both terms in the title of his 2023 book, Unwoke: How to Defeat Cultural Marxism in America. === South Korea === National Human Rights Commission of Korea chairperson Ahn Chang-ho stated that "Many cultural Marxists declared 'Our fundamental enemy is Christianity' and promoted homosexuality as a means of bringing about a communist revolution." During his nomination hearing he said, "I have heard that there are some neo-Marxists who suggest that homosexuality is a key means in a communist revolution," and "[if anti-discrimination law is enacted,] Marxists and fascists operate with impunity in society." == Online harassment == Gamergate was an online harassment campaign beginning in 2014, particularly targeting women, that had the purported aim of promoting ethics in video games journalism.
Participants in Gamergate referred to their opposition as cultural Marxists, and cited free-speech grounds to justify harassing their targets. Noted harassment associated with the online movement included doxing, swatting, and threats of rape and death. The Southern Poverty Law Center described the Gamergate campaign as one in a number of examples of male supremacy, which it said views society as "a matriarchy propped up by 'cultural Marxism' meant to eradicate or subjugate men". == See also == == Notes == == References == == Further reading == Catlin, Jonathon (2020). "The Frankfurt School on Antisemitism, Authoritarianism, and Right-wing Radicalism: The Politics of Unreason: The Frankfurt School and the Origins of Modern Antisemitism, by Lars Rensmann, Albany, NY, SUNY Press, 2017, xv + 600 pp., $25.95 (paperback), ISBN 978-1-43846-594-4". European Journal of Cultural and Political Sociology. 7 (2): 198–214. doi:10.1080/23254823.2020.1742018. S2CID 216306994. De Bruin, Robin (2021). "European union as a road to serfdom: The Alt-Right's inversion of narratives on European integration". Journal of Contemporary European Studies. 30: 52–66. doi:10.1080/14782804.2021.1960489. hdl:11245.1/76db2cfc-4262-4059-bea4-eb310bc689dd. S2CID 238810398. Grumke, Thomas (2004). "'Take this country back!': Die neue Rechte in den USA". Die Neue Rechte — eine Gefahr für die Demokratie? [The New Right—A Danger to Democracy?] (in German). VS Verlag für Sozialwissenschaften. pp. 175–185. ISBN 978-3-322-81016-8. Jamin, Jérôme (2013). "Anders Breivik et le 'marxisme culturel': Etats-Unis/Europe". Amnis (12). doi:10.4000/AMNIS.2004. Richards, Imogen; Jones, Callum (2021). "Quillette, classical liberalism, and the international New Right". Contemporary Far-Right Thinkers and the Future of Liberal Democracy. Routledge. ISBN 978-1-003-10517-6.
Wikipedia/Cultural_Marxism_conspiracy_theory
"Fourth Industrial Revolution", "4IR", or "Industry 4.0" is a neologism describing rapid technological advancement in the 21st century. It follows the Third Industrial Revolution (the "Information Age"). The term was popularised in 2016 by Klaus Schwab, the World Economic Forum founder and former executive chairman, who asserts that these developments represent a significant shift in industrial capitalism. Part of this phase of industrial change is the joining of technologies such as artificial intelligence and gene editing with advanced robotics, blurring the lines between the physical, digital, and biological worlds. Throughout this, fundamental shifts are taking place in how the global production and supply network operates through the ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention. It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society (i.e. a metaverse) that changes the ways humans experience and know the world around them. It posits that humans have created, and are entering, an augmented social reality that goes beyond the natural senses and industrial abilities of humans alone. The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, where creativity and imagination become the primary drivers of economic value. == History == The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government.
Klaus Schwab, former executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published by Foreign Affairs. "Mastering the Fourth Industrial Revolution" was the theme of the 2016 World Economic Forum Annual Meeting in Davos-Klosters, Switzerland. On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. This was also the subject and title of Schwab's 2016 book. Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems), and emphasises advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralised consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles. In the WEF's Great Reset proposal, the Fourth Industrial Revolution is included as strategic intelligence in the plan to rebuild the economy sustainably following the COVID-19 pandemic. === First Industrial Revolution === The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power. The implementation of new technologies took a long time, so the period to which this refers spans roughly 1760 to 1820, or 1840, in Europe and the United States. Its effects were felt most strongly in textile manufacturing, which was the first industry to adopt such changes, as well as in the iron industry, agriculture, and mining; it also had societal effects, fostering an ever-stronger middle class.
=== Second Industrial Revolution === The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from installations of extensive railroad and telegraph networks, which allowed for faster transfer of people and ideas, as well as from the spread of electricity. Increasing electrification allowed factories to develop the modern production line. === Third Industrial Revolution === The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes. A book by Jeremy Rifkin titled The Third Industrial Revolution, published in 2011, focused on the intersection of digital communications technology and renewable energy. It was made into a 2017 documentary by Vice Media. == Characteristics == In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes, which include cyber-physical systems (CPS), the Internet of Things (IoT), cloud computing, cognitive computing, and artificial intelligence. Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks.
The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high-capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; improvements in transferring digital instructions to the physical world, including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; and improvements to and uptake of off-grid/stand-alone renewable energy systems – solar, wind, wave and hydroelectric – and electric batteries (lithium-ion renewable energy storage systems (ESS) and EVs). It also emphasizes decentralized decisions – the ability of cyber-physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals are tasks delegated to a higher level. === Distinctiveness === Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution. This is due to the following characteristics:
Velocity – the exponential speed at which incumbent industries are affected and displaced
Scope and systems impact – the large number of sectors and firms that are affected
Paradigm shift in technology policy – new policies designed for this new way of doing things; an example is Singapore's formal recognition of Industry 4.0 in its innovation policies
Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and policy transition varies across countries; the definition of Industry 4.0 is not harmonised. One of the best-known critics is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution".
However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development". === Components === The application of the Fourth Industrial Revolution operates through:
Mobile devices
Location detection technologies (electronic identification)
Advanced human-machine interfaces
Authentication and fraud detection
Smart sensors
Big analytics and advanced processes
Multilevel customer interaction and customer profiling
Augmented reality/wearables
On-demand availability of computer system resources
Data visualisation
Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy. Value creation in Industry 4.0 relies on electronic identification: smart manufacturing requires a set of technologies to be incorporated into the manufacturing process for it to be classified as on the development path of Industry 4.0, rather than mere digitisation. == Trends == === Smart factories === The Fourth Industrial Revolution fosters "smart factories", which are production environments where facilities and logistics systems are organised with minimal human intervention. The technical foundations on which smart factories are based are cyber-physical systems that communicate with each other using IoT. An important part of this process is the exchange of data between the product and the production line. This enables more efficient supply chain connectivity and better organisation within a production environment. Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world, and make decentralised decisions.
Over the internet of things, cyber-physical systems communicate and cooperate with each other and with humans in synchronic time, both internally and across the organizational services offered and used by participants of the value chain. === Artificial intelligence === Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom". Models like GPT-4o can engage in verbal and textual discussions and analyze images. AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times. AI could also significantly accelerate, or even automate, software development. Some experts believe that AI alone could be as transformative as an industrial revolution. Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can), making large investments in data centers and GPUs to train more capable AI models. ==== Robotics ==== Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed to do narrow tasks, failing when encountering new situations. Modern humanoid robots, however, are typically based on machine learning, and in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train, and more versatile. === Predictive maintenance === Industry 4.0 facilitates predictive maintenance through the use of advanced technologies, including IoT sensors.
Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or gets damaged. For example, a company in Los Angeles could detect that a piece of equipment in Singapore is running at an abnormal speed or temperature. It could then decide whether or not the equipment needs to be repaired. === 3D printing === The Fourth Industrial Revolution is said to have extensive dependency on 3D printing technology. Some advantages of 3D printing for industry are that it can produce many geometric structures and simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can also decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs and help the company towards the adoption of a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing them locally, thereby reducing supplier dependence and shortening the supply lead time. === Smart sensors === Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories. Smart sensors are devices which generate data and allow further functionality, from self-monitoring and self-configuration to condition monitoring of complex processes. With the capability of wireless communication, they reduce installation effort to a great extent and help realise a dense array of sensors. The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."
However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Moreover, battery power places additional limits on these functionalities. One example of the integration of smart sensors in electronic devices is the case of smartwatches, where sensors receive data from the movement of the user, process it, and, as a result, provide the user with information about how many steps they have walked in a day, also converting the data into calories burned. ==== Agriculture and food industries ==== Smart sensors in these two fields are still in the testing stage. These connected sensors collect, interpret and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation). Based on this scientific data, the objective is to enable real-time monitoring via a smartphone, with a range of advice that optimises plot management in terms of results, time and costs. On the farm, these sensors can be used to detect crop stages and recommend inputs and treatments at the right time, as well as to control the level of irrigation. The food industry demands ever more security and transparency, and full documentation is required. This new technology is used as a tracking system, as well as for the collection of human data and product data. === Accelerated transition to the knowledge economy === A knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence. Industry 4.0 aids the transition to a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources.
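The predictive-maintenance scenario described earlier (a remote operator noticing that a machine in another country is running at abnormal speed or temperature) can be sketched as a simple threshold check over streamed sensor readings. This is a minimal illustration only: the sensor names, operating limits, and function names below are hypothetical assumptions, not part of any real Industry 4.0 product.

```python
# Minimal predictive-maintenance sketch: flag IoT sensor readings that fall
# outside expected operating ranges so maintenance can be scheduled early.
# All sensor names and thresholds here are hypothetical examples.

# Expected operating range per sensor: (low, high)
LIMITS = {
    "spindle_rpm": (1000.0, 3000.0),
    "bearing_temp_c": (20.0, 80.0),
}

def check_reading(sensor: str, value: float) -> str:
    """Classify a single reading as 'ok' or 'alert'."""
    low, high = LIMITS[sensor]
    return "ok" if low <= value <= high else "alert"

def scan_telemetry(readings):
    """Return the subset of (sensor, value) pairs that need attention."""
    return [(s, v) for s, v in readings if check_reading(s, v) == "alert"]

if __name__ == "__main__":
    telemetry = [
        ("spindle_rpm", 2200.0),   # within the expected range
        ("bearing_temp_c", 95.5),  # overheating: maintenance needed
    ]
    for sensor, value in scan_telemetry(telemetry):
        print(f"ALERT: {sensor} reading {value} outside expected range")
```

In a real deployment the thresholds would typically be learned from historical sensor data rather than hard-coded, and alerts would feed a maintenance-scheduling system instead of being printed.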
== Challenges == Challenges in the implementation of Industry 4.0 include: === Economic ===
High economic cost
Business model adaptation
Unclear economic benefits/excessive investment
Significant economic changes driven by automation and technological advancements, leading to both job displacement and the creation of new roles, necessitating widespread workforce reskilling and systemic adaptation
=== Social ===
Privacy concerns
Surveillance and distrust
General reluctance to change by stakeholders
Threat of redundancy of the corporate IT department
Loss of many jobs to automatic processes and IT-controlled processes, especially for blue-collar workers
Increased risk of gender inequalities in professions with job roles most susceptible to replacement with AI
=== Political ===
Lack of regulation, standards, and forms of certification
Unclear legal issues and data security
=== Organizational ===
IT security issues, which are greatly aggravated by the inherent need to open up previously closed production shops
Reliability and stability needed for critical machine-to-machine communication (M2M), including very short and stable latency times
Need to maintain the integrity of production processes
Need to avoid any IT snags, as those would cause expensive production outages
Need to protect industrial know-how (contained also in the control files for the industrial automation gear)
Lack of adequate skill-sets to expedite the transition towards Industry 4.0
Low top-management commitment
Insufficient qualification of employees
== Country applications == Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example: === Australia === Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.
=== Germany === The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government that promotes the computerisation of manufacturing; it specifically relates to that project's policy rather than to the wider notion of a Fourth Industrial Revolution or 4IR. The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair. German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term. In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013 at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering. As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded. For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4. How the shift to Industry 4.0 – and especially digitisation – will affect the labour market is being discussed in Germany under the topic of Work 4.0. The German federal government is a leader in the development of I4.0 policy through the Federal Ministry of Education and Research (BMBF) and the Federal Ministry for Economic Affairs and Energy (BMWi). By publishing set objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap in German enterprises' knowledge of, and collaboration with, these policies.
The biggest challenge SMEs in Germany currently face regarding the digital transformation of their manufacturing processes is ensuring that there is a concrete IT and application landscape to support further digital transformation efforts. The German government's Industry 4.0 strategy is characterised by the strong customisation of products under the conditions of highly flexible (mass) production. The required automation technology is improved by the introduction of methods of self-optimisation, self-configuration, self-diagnosis, cognition and intelligent support of workers in their increasingly complex work. The largest project in Industry 4.0 as of July 2013 was the BMBF leading-edge cluster "Intelligent Technical Systems Ostwestfalen-Lippe (it's OWL)". Another major project is the BMBF project RES-COM, as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries". In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic.

=== Estonia ===
In Estonia, the digital transformation dubbed the Fourth Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 began with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged into the digital era, skipping analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations.
According to the goals set in Estonia's Digital Agenda 2030, the next advances in the country's digital transformation will involve switching to event-based and proactive services, in both private and business environments, as well as developing a green, AI-powered, and human-centric digital government.

=== Indonesia ===
Another example is the Indonesian initiative Making Indonesia 4.0, which focuses on improving industrial performance.

=== India ===
India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers on leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0.

=== Japan ===
Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective resolution of societal challenges through a closely interconnected system spanning the digital realm and the physical world. The concept was introduced in the Japanese government's 5th Science and Technology Basic Plan as a blueprint for a forthcoming societal framework.

=== Malaysia ===
Malaysia's national policy on Industry 4.0, launched in 2018, is known as Industry4WRD. Its key initiatives include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries.

=== South Africa ===
South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with backgrounds in academia, industry and government. South Africa has also established an Inter-Ministerial Committee on Industry 4.0.

=== South Korea ===
The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017.
The Republic of Korea's I-Korea strategy (2017) focuses on new growth engines that include AI, drones, and autonomous cars, in line with the government's innovation-driven economic policy.

=== Uganda ===
Uganda adopted its own National 4IR Strategy in October 2020, with emphasis on e-governance, urban management (smart cities), healthcare, education, agriculture, and the digital economy. To support local businesses, the government was contemplating introducing a local start-ups bill in 2020 which would require all accounting officers to exhaust the local market before procuring digital solutions from abroad.

=== United Kingdom ===
In a 2019 policy paper titled "Regulation for the Fourth Industrial Revolution", the UK's Department for Business, Energy & Industrial Strategy outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings.

=== United States ===
In 2019, the Department of Homeland Security published a paper called "The Industrial Internet of Things (IIoT): Opportunities, Risks, Mitigation". The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation; hence, their implementation, growth and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of IIoT but also the associated risks, and suggests some key areas where risk mitigation is possible. To increase coordination between the public and private sectors, law enforcement, academia and other stakeholders, the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC).

== Industry applications ==
The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified.
One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project. The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, and self-organised coordination that reacts to unexpected changes in production. In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0. Industry 4.0 has also inspired Innovation 4.0, a move toward digitisation in academia and research and development. In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer-aided materials science, where robotic formulation, data capture, and modelling are being integrated into development practices.

== Criticism ==
With the steady automation of everyday tasks, some saw a benefit in the exact opposite of automation: self-made products being valued more highly than those produced with automation. This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke. Another problem that is expected to accelerate with the growth of IR4 is the prevalence of mental disorders, a known issue among high-tech operators. The IR4 has also sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability.

== Future ==
=== Industry 5.0 ===
Industry 5.0 has been proposed as a strategy to create a paradigm shift for an industrial landscape in which the primary focus is no longer on increasing efficiency, but rather on promoting the well-being of society and the sustainability of the economy and industrial production.
== See also ==
- AI boom
- Computer-integrated manufacturing
- Cyber manufacturing
- Digital modelling and fabrication
- Industrial control system
- Intelligent maintenance systems
- Lights-out manufacturing
- List of emerging technologies
- Machine to machine
- Nondestructive Evaluation 4.0
- Simulation software
- Technological singularity
- Technological unemployment
- The War on Normal People
- Work 4.0
- World Economic Forum 2016

== References ==
=== Sources ===
This article incorporates text from a free content work. Text taken from UNESCO Science Report: the Race Against Time for Smarter Development, Schneegans, S., T. Straza and J. Lewis (eds), UNESCO.
Wikipedia/Fifth_Industrial_Revolution
Structural functionalism, or simply functionalism, is "a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability". This approach looks at society through a macro-level orientation, which is a broad focus on the social structures that shape society as a whole, and believes that society has evolved like organisms. This approach looks at both social structure and social functions. Functionalism addresses society as a whole in terms of the function of its constituent elements; namely norms, customs, traditions, and institutions. A common analogy called the organic or biological analogy, popularized by Herbert Spencer, presents these parts of society as human body "organs" that work toward the proper functioning of the "body" as a whole. In the most basic terms, it simply emphasizes "the effort to impute, as rigorously as possible, to each feature, custom, or practice, its effect on the functioning of a supposedly stable, cohesive system". For Talcott Parsons, "structural-functionalism" came to describe a particular stage in the methodological development of social science, rather than a specific school of thought. == Theory == In sociology, classical theories are defined by a tendency towards biological analogy and notions of social evolutionism: Functionalist thought, from Comte onwards, has looked particularly towards biology as the science providing the closest and most compatible model for social science. Biology has been taken to provide a guide to conceptualizing the structure and function of social systems and analyzing evolution processes via mechanisms of adaptation ... functionalism strongly emphasises the pre-eminence of the social world over its individual parts (i.e. its constituent actors, human subjects). 
While one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as Rousseau, sociology draws firmer attention to those institutions unique to industrialized capitalist society (or modernity). Auguste Comte believed that society constitutes a separate "level" of reality, distinct from both biological and inorganic matter. Explanations of social phenomena had therefore to be constructed within this level, individuals being merely transient occupants of comparatively stable social roles. In this view, Comte was followed by Émile Durkheim. A central concern for Durkheim was the question of how certain societies maintain internal stability and survive over time. He proposed that such societies tend to be segmented, with equivalent parts held together by shared values, common symbols or (as his nephew Marcel Mauss held), systems of exchanges. Durkheim used the term mechanical solidarity to refer to these types of "social bonds, based on common sentiments and shared moral values, that are strong among members of pre-industrial societies". In modern, complex societies, members perform very different tasks, resulting in a strong interdependence. Based on the metaphor above of an organism in which many parts function together to sustain the whole, Durkheim argued that complex societies are held together by "organic solidarity", i.e. "social bonds, based on specialization and interdependence, that are strong among members of industrial societies". The central concern of structural functionalism may be regarded as a continuation of the Durkheimian task of explaining the apparent stability and internal cohesion needed by societies to endure over time. 
Societies are seen as coherent, bounded and fundamentally relational constructs that function like organisms, with their various parts (or social institutions) working together in an unconscious, quasi-automatic fashion toward achieving an overall social equilibrium. All social and cultural phenomena are therefore seen as functional in the sense of working together, and are effectively deemed to have "lives" of their own. They are primarily analyzed in terms of this function. The individual is significant not in and of themselves, but rather in terms of their status, their position in patterns of social relations, and the behaviours associated with their status. Therefore, the social structure is the network of statuses connected by associated roles. Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski and Radcliffe-Brown. The prefix 'structural' emerged in Radcliffe-Brown's specific usage. Radcliffe-Brown proposed that most stateless, "primitive" societies, lacking strong centralized institutions, are based on an association of corporate-descent groups, i.e. the respective society's recognised kinship groups. Structural functionalism also took on Malinowski's argument that the basic building block of society is the nuclear family, and that the clan is an outgrowth, not vice versa. It is simplistic to equate the perspective directly with political conservatism. The tendency to emphasize "cohesive systems", however, leads functionalist theories to be contrasted with "conflict theories", which instead emphasize social problems and inequalities.

== Prominent theorists ==
=== Auguste Comte ===
Auguste Comte, the "Father of Positivism", pointed out the need to keep society unified as many traditions were diminishing. He was the first person to coin the term sociology.
Comte suggests that sociology is the product of a three-stage development:
- Theological stage: From the beginning of human history until the end of the European Middle Ages, people took a religious view that society expressed God's will. In the theological state, the human mind, seeking the essential nature of beings, the first and final causes (the origin and purpose) of all effects—in short, absolute knowledge—supposes all phenomena to be produced by the immediate action of supernatural beings.
- Metaphysical stage: People began seeing society as a natural system as opposed to the supernatural. This began with the Enlightenment and the ideas of Hobbes, Locke, and Rousseau. Perceptions of society reflected the failings of a selfish human nature rather than the perfection of God.
- Positive or scientific stage: Society came to be described through the application of the scientific approach, drawing on the work of scientists.

=== Herbert Spencer ===
Herbert Spencer (1820–1903) was a British philosopher famous for applying the theory of natural selection to society. He was in many ways the first true sociological functionalist. In fact, while Durkheim is widely considered the most important functionalist among positivist theorists, it is known that much of his analysis was culled from reading Spencer's work, especially his Principles of Sociology (1874–96). In describing society, Spencer alludes to the analogy of a human body. Just as the structural parts of the human body—the skeleton, muscles, and various internal organs—function independently to help the entire organism survive, social structures work together to preserve society. While reading Spencer's massive volumes can be tedious (long passages explicating the organic analogy, with reference to cells, simple organisms, animals, humans and society), there are some important insights that have quietly influenced many contemporary theorists, including Talcott Parsons, in his early work The Structure of Social Action (1937).
Cultural anthropology also consistently uses functionalism. This evolutionary model, unlike most 19th-century evolutionary theories, is cyclical, beginning with the differentiation and increasing complication of an organic or "super-organic" (Spencer's term for a social system) body, followed by a fluctuating state of equilibrium and disequilibrium (or a state of adjustment and adaptation), and, finally, the stage of disintegration or dissolution. Following Thomas Malthus' population principles, Spencer concluded that society is constantly facing selection pressures (internal and external) that force it to adapt its internal structure through differentiation. Every solution, however, causes a new set of selection pressures that threaten society's viability. Spencer was not a determinist: he never claimed that selection pressures will be felt in time for society to change them, that they will necessarily be felt and reacted to, or that the solutions will always work. In fact, he was in many ways a political sociologist, and recognized that the degree of centralized and consolidated authority in a given polity could make or break its ability to adapt. In other words, he saw a general trend towards the centralization of power as leading to stagnation and, ultimately, pressures to decentralize. More specifically, Spencer recognized three functional needs or prerequisites that produce selection pressures: regulatory, operative (production) and distributive. He argued that all societies need to solve problems of control and coordination, production of goods, services and ideas, and, finally, ways of distributing these resources. Initially, in tribal societies, these three needs are inseparable, and the kinship system is the dominant structure that satisfies them.
As many scholars have noted, all institutions are subsumed under kinship organization, but, with increasing population (both in terms of sheer numbers and density), problems emerge with regard to feeding individuals, creating new forms of organization—consider the emergent division of labour—coordinating and controlling various differentiated social units, and developing systems of resource distribution. The solution, as Spencer sees it, is to differentiate structures to fulfill more specialized functions; thus, a chief or "big man" emerges, soon followed by a group of lieutenants, and later kings and administrators. The structural parts of society (e.g. families, work) function interdependently to help society function. Therefore, social structures work together to preserve society.

=== Talcott Parsons ===
Talcott Parsons began writing in the 1930s and contributed to sociology, political science, anthropology, and psychology. Structural functionalism and Parsons have received much criticism. Numerous critics have pointed to Parsons' underemphasis of political and economic struggle and of the fundamentals of social change, as well as to "manipulative" conduct left largely unregulated by values and norms. Structural functionalism, and a large portion of Parsons' work, appear insufficiently precise in defining the relations between institutionalized and non-institutionalized conduct, and the processes by which institutionalization takes place. Parsons was heavily influenced by Durkheim and Max Weber, synthesizing much of their work into his action theory, which he based on the system-theoretical concept and the methodological principle of voluntaristic action. He held that "the social system is made up of the actions of individuals". His starting point, accordingly, is the interaction between two individuals faced with a variety of choices about how they might act, choices that are influenced and constrained by a number of physical and social factors.
Parsons determined that each individual has expectations of the other's action and reaction to their own behavior, and that these expectations would (if successful) be "derived" from the accepted norms and values of the society they inhabit. As Parsons himself emphasized, in a general context there would never exist any perfect "fit" between behaviors and norms, so such a relation is never complete or "perfect". Social norms were always problematic for Parsons, who never claimed (as has often been alleged) that social norms were generally accepted and agreed upon as some kind of universal law. Whether social norms were accepted or not was for Parsons simply a historical question. As behaviors are repeated in more interactions, and these expectations are entrenched or institutionalized, a role is created. Parsons defines a "role" as the normatively-regulated participation "of a person in a concrete process of social interaction with specific, concrete role-partners". Although any individual, theoretically, can fulfill any role, the individual is expected to conform to the norms governing the nature of the role they fulfill. Furthermore, one person can and does fulfill many different roles at the same time. In one sense, an individual can be seen to be a "composition" of the roles he inhabits. Certainly, today, when asked to describe themselves, most people would answer with reference to their societal roles. Parsons later developed the idea of roles into collectivities of roles that complement each other in fulfilling functions for society. Some roles are bound up in institutions and social structures (economic, educational, legal and even gender-based). These are functional in the sense that they assist society in operating and fulfilling its functional needs so that society runs smoothly. Contrary to prevailing myth, Parsons never spoke of a society in which there was no conflict or some kind of "perfect" equilibrium.
A society's cultural value-system was in the typical case never completely integrated, never static, and most of the time, as in the case of American society, in a complex state of transformation relative to its historical point of departure. Reaching a "perfect" equilibrium was never a serious theoretical question in Parsons' analysis of social systems; indeed, the most dynamic societies, such as the US and India, generally had cultural systems with important inner tensions. These tensions were, according to Parsons, a source of their strength rather than the opposite. Parsons never thought of system-institutionalization and the level of strains (tensions, conflict) in the system as opposite forces per se. The key processes for Parsons for system reproduction are socialization and social control. Socialization is important because it is the mechanism for transferring the accepted norms and values of society to the individuals within the system. Parsons never spoke of "perfect socialization"—in any society socialization is only partial and "incomplete" from an integral point of view. Parsons states that "this point ... is independent of the sense in which [the] individual is concretely autonomous or creative rather than 'passive' or 'conforming', for individuality and creativity, are to a considerable extent, phenomena of the institutionalization of expectations"; they are culturally constructed. Socialization is supported by the positive and negative sanctioning of role behaviours that do or do not meet these expectations. A punishment could be informal, like a snigger or gossip, or more formalized, through institutions such as prisons and mental homes. If these two processes were perfect, society would become static and unchanging, but in reality this is unlikely to occur for long.
Parsons recognizes this, stating that he treats "the structure of the system as problematic and subject to change", and that his concept of the tendency towards equilibrium "does not imply the empirical dominance of stability over change". He does, however, believe that these changes occur in a relatively smooth way. Individuals in interaction with changing situations adapt through a process of "role bargaining". Once the roles are established, they create norms that guide further action and are thus institutionalized, creating stability across social interactions. Where the adaptation process cannot adjust, due to sharp shocks or immediate radical change, structural dissolution occurs and either new structures (or therefore a new system) are formed, or society dies. This model of social change has been described as a "moving equilibrium", and emphasizes a desire for social order. === Davis and Moore === Kingsley Davis and Wilbert E. Moore (1945) gave an argument for social stratification based on the idea of "functional necessity" (also known as the Davis-Moore hypothesis). They argue that the most difficult jobs in any society have the highest incomes in order to motivate individuals to fill the roles needed by the division of labour. Thus, inequality serves social stability. This argument has been criticized as fallacious from a number of different angles: the argument is both that the individuals who are the most deserving are the highest rewarded, and that a system of unequal rewards is necessary, otherwise no individuals would perform as needed for the society to function. The problem is that these rewards are supposed to be based upon objective merit, rather than subjective "motivations." The argument also does not clearly establish why some positions are worth more than others, even when they benefit more people in society, e.g., teachers compared to athletes and movie stars. 
Critics have suggested that structural inequality (inherited wealth, family power, etc.) is itself a cause of individual success or failure, not a consequence of it.

=== Robert Merton ===
Robert K. Merton made important refinements to functionalist thought. He fundamentally agreed with Parsons' theory but acknowledged that it could be questioned, believing that it was overgeneralized. Merton tended to emphasize middle-range theory rather than grand theory, meaning that he was able to deal specifically with some of the limitations in Parsons' thinking. Merton believed that any social structure probably has many functions, some more obvious than others. He identified three main limitations: functional unity, universal functionalism and indispensability. He also developed the concept of deviance and made the distinction between manifest and latent functions. Manifest functions referred to the recognized and intended consequences of any social pattern. Latent functions referred to unrecognized and unintended consequences of any social pattern. Merton criticized functional unity, saying that not all parts of a modern complex society work for the functional unity of society. Consequently, there is a social dysfunction, referred to as any social pattern that may disrupt the operation of society. Some institutions and structures may have other functions, and some may even be generally dysfunctional, or be functional for some while being dysfunctional for others. This is because not all structures are functional for society as a whole. Some practices are only functional for a dominant individual or a group. Merton discusses two types of functions. The first is "manifest functions", in which a social pattern produces a recognized and intended consequence. The manifest functions of education include preparing for a career by getting good grades, graduating, and finding a good job.
The second type is "latent functions", where a social pattern results in an unrecognized or unintended consequence. The latent functions of education include meeting new people, extra-curricular activities, and school trips. Another type of social function is "social dysfunction", which is any undesirable consequence that disrupts the operation of society. The social dysfunctions of education include failing to get good grades or a job. Merton states that by recognizing and examining the dysfunctional aspects of society we can explain the development and persistence of alternatives. Thus, as Holmwood states, "Merton explicitly made power and conflict central issues for research within a functionalist paradigm." Merton also noted that there may be functional alternatives to the institutions and structures currently fulfilling the functions of society. This means that the institutions that currently exist are not indispensable to society. Merton states "just as the same item may have multiple functions, so may the same function be diversely fulfilled by alternative items." This notion of functional alternatives is important because it reduces the tendency of functionalism to imply approval of the status quo. Merton's theory of deviance is derived from Durkheim's idea of anomie. It is central in explaining how internal changes can occur in a system. For Merton, anomie means a discontinuity between cultural goals and the accepted methods available for reaching them. Merton believes that there are five situations facing an actor. Conformity occurs when an individual has the means and desire to achieve the cultural goals socialized into them. Innovation occurs when an individual strives to attain the accepted cultural goals but chooses to do so through novel or unaccepted methods. Ritualism occurs when an individual continues to do things as prescribed by society but forfeits the achievement of the goals. Retreatism is the rejection of both the means and the goals of society.
Rebellion is a combination of the rejection of societal goals and means and a substitution of other goals and means. Thus it can be seen that change can occur internally in society through either innovation or rebellion. It is true that society will attempt to control these individuals and negate the changes, but as the innovation or rebellion builds momentum, society will eventually adapt or face dissolution. === Almond and Powell === In the 1970s, political scientists Gabriel Almond and Bingham Powell introduced a structural-functionalist approach to comparing political systems. They argued that, in order to understand a political system, it is necessary to understand not only its institutions (or structures) but also their respective functions. They also insisted that these institutions, to be properly understood, must be placed in a meaningful and dynamic historical context. This idea stood in marked contrast to prevalent approaches in the field of comparative politics—the state-society theory and the dependency theory. These were the descendants of David Easton's system theory in international relations, a mechanistic view that saw all political systems as essentially the same, subject to the same laws of "stimulus and response"—or inputs and outputs—while paying little attention to unique characteristics. The structural-functional approach is based on the view that a political system is made up of several key components, including interest groups, political parties and branches of government. 
In addition to structures, Almond and Powell showed that a political system consists of various functions, chief among them political socialization, recruitment and communication: socialization refers to the way in which societies pass along their values and beliefs to succeeding generations, and in political terms describes the process by which a society inculcates civic virtues, or the habits of effective citizenship; recruitment denotes the process by which a political system generates interest, engagement and participation from citizens; and communication refers to the way that a system promulgates its values and information.

== Unilineal descent ==
In their attempt to explain the social stability of African "primitive" stateless societies where they undertook their fieldwork, Evans-Pritchard (1940) and Meyer Fortes (1945) argued that the Tallensi and the Nuer were primarily organized around unilineal descent groups. Such groups are characterized by common purposes, such as administering property or defending against attacks; they form a permanent social structure that persists well beyond the lifespan of their members. In the case of the Tallensi and the Nuer, these corporate groups were based on kinship, which in turn fitted into the larger structures of unilineal descent; consequently Evans-Pritchard's and Fortes' model is called "descent theory". Moreover, in this African context territorial divisions were aligned with lineages; descent theory therefore treated blood and soil as one and the same. Affinal ties with the parent through whom descent is not reckoned, however, are considered to be merely complementary or secondary (Fortes created the concept of "complementary filiation"), with the reckoning of kinship through descent being considered the primary organizing force of social systems. Because of its strong emphasis on unilineal descent, this new kinship theory came to be called "descent theory". Descent theory soon found its critics.
Many African tribal societies seemed to fit this neat model rather well, although Africanists, such as Paul Richards, also argued that Fortes and Evans-Pritchard had deliberately downplayed internal contradictions and overemphasized the stability of the local lineage systems and their significance for the organization of society. However, in many Asian settings the problems were even more obvious. In Papua New Guinea, the local patrilineal descent groups were fragmented and contained large numbers of non-agnates. Status distinctions did not depend on descent, and genealogies were too short to account for social solidarity through identification with a common ancestor. In particular, the phenomenon of cognatic (or bilateral) kinship posed a serious problem to the proposition that descent groups are the primary element behind the social structures of "primitive" societies. Leach's (1966) critique came in the form of the classical Malinowskian argument, pointing out that "in Evans-Pritchard's studies of the Nuer and also in Fortes's studies of the Tallensi unilineal descent turns out to be largely an ideal concept to which the empirical facts are only adapted by means of fictions". People's self-interest, manoeuvring, manipulation and competition had been ignored. Moreover, descent theory overemphasized the role of descent at the expense of marriage and affinal ties, whose significance was stressed by Lévi-Strauss's structural anthropology. To quote Leach: "The evident importance attached to matrilateral and affinal kinship connections is not so much explained as explained away." == Biological == Biological functionalism is an anthropological paradigm asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways the paradigm derives from the longer-established structural functionalism, yet the two diverge from one another significantly.
While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. Instead, biological functionalism maintains that individual survival and health are the driving motivations of action, and that the importance of social rigidity is negligible. === Everyday application === Although human actions undoubtedly do not always produce positive results for the individual, a biological functionalist would argue that the intention was still self-preservation, albeit unsuccessful. An example of this is the belief in luck as an entity; while a disproportionately strong belief in good luck may lead to undesirable results, such as heavy losses from gambling, biological functionalism maintains that the gambler's newly created ability to condemn luck allows them to be free of individual blame, thus serving a practical and individual purpose. In this sense, biological functionalism maintains that while bad results, which serve no pragmatic concern, often occur in life, an entrenched cognitive psychological motivation was attempting to create a positive result, in spite of its eventual failure. == Decline == Structural functionalism reached the peak of its influence in the 1940s and 1950s, and by the 1960s was in rapid decline. By the 1980s, its place had been taken in Europe by more conflict-oriented approaches, and more recently by structuralism. While some of the critical approaches also gained popularity in the United States, the mainstream of the discipline has instead shifted to a myriad of empirically oriented middle-range theories with no overarching theoretical orientation. To most sociologists, functionalism is now "as dead as a dodo".
As the influence of functionalism in the 1960s began to wane, the linguistic and cultural turns led to a myriad of new movements in the social sciences: "According to Giddens, the orthodox consensus terminated in the late 1960s and 1970s as the middle ground shared by otherwise competing perspectives gave way and was replaced by a baffling variety of competing perspectives. This third generation of social theory includes phenomenologically inspired approaches, critical theory, ethnomethodology, symbolic interactionism, structuralism, post-structuralism, and theories written in the tradition of hermeneutics and ordinary language philosophy." While absent from empirical sociology, functionalist themes remained detectable in sociological theory, most notably in the works of Luhmann and Giddens. There are, however, signs of an incipient revival, as functionalist claims have recently been bolstered by developments in multilevel selection theory and in empirical research on how groups solve social dilemmas. Recent developments in evolutionary theory—especially by biologist David Sloan Wilson and anthropologists Robert Boyd and Peter Richerson—have provided strong support for structural functionalism in the form of multilevel selection theory. In this theory, culture and social structure are seen as a Darwinian (biological or cultural) adaptation at the group level. == Criticisms == In the 1960s, functionalism was criticized for being unable to account for social change, or for structural contradictions and conflict (and thus was often called "consensus theory"). Also, it ignores inequalities including race, gender, class, which cause tension and conflict. The refutation of the second criticism of functionalism, that it is static and has no concept of change, has already been articulated above, concluding that while Parsons' theory allows for change, it is an orderly process of change [Parsons, 1961:38], a moving equilibrium. 
Therefore, referring to Parsons' theory of society as static is inaccurate. It is true that it does place emphasis on equilibrium and the maintenance or quick return to social order, but this is a product of the time in which Parsons was writing (post-World War II and the start of the Cold War). Society was in upheaval and fear abounded. At the time social order was crucial, and this is reflected in Parsons' tendency to promote equilibrium and social order rather than social change. Furthermore, Durkheim favoured a radical form of guild socialism along with functionalist explanations. Also, Marxism, while acknowledging social contradictions, still uses functionalist explanations. Parsons' evolutionary theory describes the differentiation and reintegration of systems and subsystems, and thus at least temporary conflict before reintegration (ibid). "The fact that functional analysis can be seen by some as inherently conservative and by others as inherently radical suggests that it may be inherently neither one nor the other." Stronger criticisms include the epistemological argument that functionalism is tautologous: that is, it attempts to account for the development of social institutions solely through recourse to the effects that are attributed to them, and thereby explains the two circularly. However, Parsons drew directly on many of Durkheim's concepts in creating his theory. Certainly Durkheim was one of the first theorists to explain a phenomenon with reference to the function it served for society. He said, "the determination of function is…necessary for the complete explanation of the phenomena." However, Durkheim made a clear distinction between historical and functional analysis, saying, "When ... the explanation of a social phenomenon is undertaken, we must seek separately the efficient cause which produces it and the function it fulfills." If Durkheim made this distinction, then it is unlikely that Parsons did not.
However, Merton explicitly states that functional analysis does not seek to explain why the action happened in the first instance, but why it continues or is reproduced. By this particular logic, it can be argued that functionalists do not necessarily explain the original cause of a phenomenon with reference to its effect. Yet the logic stated in reverse, that social phenomena are (re)produced because they serve ends, is unoriginal to functionalist thought. Thus functionalism is either undefinable or it can be defined by the teleological arguments which functionalist theorists normatively produced before Merton. Another criticism is the ontological argument that society cannot have "needs" as a human being does, and that even if society does have needs, they need not be met. Anthony Giddens argues that functionalist explanations may all be rewritten as historical accounts of individual human actions and consequences (see Structuration). A further criticism directed at functionalism is that it contains no sense of agency, that individuals are seen as puppets, acting as their role requires. Yet Holmwood states that the most sophisticated forms of functionalism are based on "a highly developed concept of action," and as was explained above, Parsons took as his starting point the individual and their actions. His theory did not, however, articulate how these actors exercise their agency in opposition to the socialization and inculcation of accepted norms. As has been shown above, Merton addressed this limitation through his concept of deviance, and so it can be seen that functionalism allows for agency. It cannot, however, explain why individuals choose to accept or reject the accepted norms, why and in what circumstances they choose to exercise their agency, and this does remain a considerable limitation of the theory.
Further criticisms have been levelled at functionalism by proponents of other social theories, particularly conflict theorists, Marxists, feminists and postmodernists. Conflict theorists criticized functionalism's concept of systems as giving far too much weight to integration and consensus, and neglecting independence and conflict. Lockwood, in line with conflict theory, suggested that Parsons' theory missed the concept of system contradiction. He did not account for those parts of the system that might have tendencies to mal-integration. According to Lockwood, it was these tendencies that come to the surface as opposition and conflict among actors. However, Parsons thought that the issues of conflict and cooperation were very much intertwined and sought to account for both in his model. In this, however, he was limited by his analysis of an "ideal type" of society which was characterized by consensus. Merton, through his critique of functional unity, introduced into functionalism an explicit analysis of tension and conflict. Yet Merton's functionalist explanations of social phenomena continued to rest on the idea that society is primarily co-operative rather than conflicted, which differentiates Merton from conflict theorists. Marxism, which was revived soon after the emergence of conflict theory, criticized professional sociology (functionalism and conflict theory alike) for being partisan to advanced welfare capitalism. Gouldner thought that Parsons' theory specifically was an expression of the dominant interests of welfare capitalism, that it justified institutions with reference to the function they fulfill for society. It may be that Parsons' work implied or articulated that certain institutions were necessary to fulfill the functional prerequisites of society, but whether or not this is the case, Merton explicitly states that institutions are not indispensable and that there are functional alternatives.
That he does not identify any alternatives to the current institutions does reflect a conservative bias, which, as stated before, is a product of the specific time in which he was writing. As functionalism's prominence was ending, feminism was on the rise, and it mounted a radical criticism of functionalism, arguing that functionalism neglected the suppression of women within the family structure. Holmwood shows, however, that Parsons did in fact describe the situations where tensions and conflict existed or were about to take place, even if he did not articulate those conflicts. Some feminists agree, suggesting that Parsons provided accurate descriptions of these situations. On the other hand, Parsons recognized that he had oversimplified his functional analysis of women in relation to work and the family, and focused on the positive functions of the family for society and not on its dysfunctions for women. Merton, too, although addressing situations where function and dysfunction occurred simultaneously, lacked a "feminist sensibility". Postmodernism, as a theory, is critical of claims of objectivity. Therefore, the idea of grand theory and grand narrative that can explain society in all its forms is treated with skepticism. This critique focuses on exposing the danger that grand theory can pose when not seen as a limited perspective, as one way of understanding society. Jeffrey Alexander (1985) sees functionalism as a broad school rather than a specific method or system such as that of Parsons, one capable of taking equilibrium (stability) as a reference point rather than an assumption and of treating structural differentiation as a major form of social change. The name 'functionalism' implies a difference of method or interpretation that does not exist. This removes the determinism criticized above.
Cohen argues that rather than needs a society has dispositional facts: features of the social environment that support the existence of particular social institutions but do not cause them. == Influential theorists == Kingsley Davis Michael Denton Émile Durkheim David Keen Niklas Luhmann Bronisław Malinowski Robert K. Merton Wilbert E. Moore George Murdock Talcott Parsons Alfred Reginald Radcliffe-Brown Herbert Spencer Fei Xiaotong == See also == Causation (sociology) Functional structuralism Historicism Neofunctionalism (sociology) New institutional economics Pure sociology Sociotechnical system Systems theory Vacancy chain Dennis Wrong (critic of structural functionalism) == Notes == == References == Barnard, A. 2000. History and Theory in Anthropology. Cambridge: CUP. Barnard, A., and Good, A. 1984. Research Practices in the Study of Kinship. London: Academic Press. Barnes, J. 1971. Three Styles in the Study of Kinship. London: Butler & Tanner. Elster, J., (1990), “Merton's Functionalism and the Unintended Consequences of Action”, in Clark, J., Modgil, C. & Modgil, S., (eds) Robert Merton: Consensus and Controversy, Falmer Press, London, pp. 129–35 Gingrich, P., (1999) “Functionalism and Parsons” in Sociology 250 Subject Notes, University of Regina, accessed, 24/5/06, uregina.ca Holy, L. 1996. Anthropological Perspectives on Kinship. London: Pluto Press. Homans, George Casper (1962). Sentiments and Activities. New York: The Free Press of Glencoe. Hoult, Thomas Ford (1969). Dictionary of Modern Sociology. Kuper, A. 1996. Anthropology and Anthropologists. London: Routledge. Layton, R. 1997. An Introduction to Theory in Anthropology. Cambridge: CUP. Leach, E. 1954. Political Systems of Highland Burma. London: Bell. Leach, E. 1966. Rethinking Anthropology. Northampton: Dickens. Lenski, Gerhard (1966). "Power and Privilege: A Theory of Social Stratification." New York: McGraw-Hill. Lenski, Gerhard (2005). "Evolutionary-Ecological Theory." Boulder, CO: Paradigm. 
Levi-Strauss, C. 1969. The Elementary Structures of Kinship. London: Eyre and Spottiswoode. Maryanski, Alexandra (1998). "Evolutionary Sociology." Advances in Human Ecology. 7:1–56. Maryanski, Alexandra and Jonathan Turner (1992). "The Social Cage: Human Nature and the Evolution of Society." Stanford: Stanford University Press. Marshall, Gordon (1994). The Concise Oxford Dictionary of Sociology. ISBN 0-19-285237-X Parsons, T., (1961) Theories of Society: foundations of modern sociological theory, Free Press, New York Perey, Arnold (2005) "Malinowski, His Diary, and Men Today (with a note on the nature of Malinowskian functionalism)" Ritzer, George and Douglas J. Goodman (2004). Sociological Theory, 6th ed. New York: McGraw-Hill. Sanderson, Stephen K. (1999). "Social Transformations: A General Theory of Historical Development." Lanham, MD: Rowman & Littlefield. Turner, Jonathan (1995). "Macrodynamics: Toward a Theory on the Organization of Human Populations." New Brunswick: Rutgers University Press. Turner, Jonathan and Jan Stets (2005). "The Sociology of Emotions." Cambridge: Cambridge University Press.
Wikipedia/Structural_functionalism
Industrial sociology, until recently a crucial research area within the field of sociology of work, examines "the direction and implications of trends in technological change, globalization, labour markets, work organization, managerial practices and employment relations" to "the extent to which these trends are intimately related to changing patterns of inequality in modern societies and to the changing experiences of individuals and families", and " the ways in which workers challenge, resist and make their own contributions to the patterning of work and shaping of work institutions". == Labour process theory == One branch of industrial sociology is labour process theory (LPT). In 1974, Harry Braverman wrote Labor and Monopoly Capital, which provided a critical analysis of scientific management. This book analysed capitalist productive relations from a Marxist perspective. Following Marx, Braverman argued that work within capitalist organizations was exploitative and alienating, and therefore workers had to be coerced into servitude. For Braverman the pursuit of capitalist interests over time ultimately leads to deskilling and routinization of the worker. The Taylorist work design is the ultimate embodiment of this tendency. Braverman demonstrated several mechanisms of control in both the factory blue-collar and clerical white-collar labour force. His key contribution is his "deskilling" thesis. Braverman argued that capitalist owners and managers were incessantly driven to deskill the labour force to lower production costs and ensure higher productivity. Deskilled labour is cheap and above all easy to control due to the workers' lack of direct engagement in the production process. In turn work becomes intellectually or emotionally unfulfilling; the lack of capitalist reliance on human skill reduces the need of employers to reward workers in anything but a minimal economic way. 
Braverman's contribution to the sociology of work and industry (i.e., industrial sociology) has been important, and his theories of the labour process continue to inform teaching and research. Braverman's thesis has, however, been contested, notably by Andrew Friedman in his work Industry and Labour (1977). In it, Friedman suggests that whilst the direct control of labour is beneficial for the capitalist under certain circumstances, a degree of "responsible autonomy" can be granted to unionized or "core" workers, in order to harness their skill under controlled conditions. Also, Richard Edwards showed in 1979 that although hierarchy in organizations has remained constant, additional forms of control (such as technical control via email monitoring and call monitoring, and bureaucratic control via procedures for leave, sickness, etc.) have been added to advance the interests of the capitalist class over those of the workers. Duncan Gallie has shown how important it is to approach the question of skill from a social class perspective. In his study, the majority of non-manual, intermediate and skilled manual workers believed that their work had come to demand a higher level of skill, but the majority of manual workers felt that the responsibility and skill needed in their work had either remained constant or declined. This implies that Braverman's claims cannot be applied to all social classes. The notion that the particular type of technology to which workers are exposed shapes their experience was argued most forcefully in a classic study by Robert Blauner. He argued that some types of work are more alienating than others because of the different technologies workers use. Alienation, to Blauner, has four dimensions: powerlessness, meaninglessness, isolation, and self-estrangement.
Individuals are powerless when they cannot control their own actions or conditions of work; work is meaningless when it gives employees little or no sense of value, interest or worth; work is isolating when workers cannot identify with their workplace; and work is self-estranging when, at the subjective level, the worker has no sense of involvement in the job. Blauner's claims, however, fail to recognize that the same technology can be experienced in a variety of ways. Studies have shown that cultural differences with regard to management–union relations, levels of hierarchical control, and reward and performance appraisal policies mean that the experience of the same kind of work can vary considerably between countries and firms. The individualization of work and the need for workers to have more flexible skills in order to respond to technological changes mean that Blauner's characterization of work experience is no longer valid. Additionally, workers today may work in teams to alleviate their sense of alienation, since they are involved in the entire process rather than just a small part of it. In conclusion, automated technologies and computerized work systems have typically enhanced workers' job satisfaction and skill deployment in the better-paid, secure public and private sector jobs. But in more non-skilled manual work, they have merely perpetuated job dissatisfaction, especially for the many women involved in this type of work. == See also == Bibliography of sociology Economic sociology Industrial and organizational psychology == References == === Footnotes === === Bibliography === == Further reading ==
Wikipedia/Industrial_sociology
The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (German: Strukturwandel der Öffentlichkeit. Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft) is a 1962 book by the philosopher Jürgen Habermas. It was translated into English in 1989 by Thomas Burger and Frederick Lawrence. An important contribution to modern understanding of democracy, it is notable for "transforming media studies into a hard-headed discipline." In 2022 Habermas published a brief sequel, A New Structural Transformation of the Public Sphere and Deliberative Politics. == The public sphere == According to Habermas, the notion of the "public sphere" began evolving during the Renaissance in Western Europe. Brought on partially by merchants' need for accurate information about distant markets as well as by the growth of democracy and individual liberty and popular sovereignty, the public sphere was a place between private individuals and government authorities in which people could meet and have critical debates about public matters. Such discussions served as a counterweight to political authority and happened physically in face-to-face meetings in coffee houses and cafes and public squares as well as in the media in letters, books, drama, and art. Habermas saw a vibrant public sphere as a positive force keeping authorities within bounds lest their rulings be ridiculed. According to the journalist David Randall, "In Habermasian theory, the bourgeois public sphere was preceded by a literary public sphere whose favored genres revealed the interiority of the self and emphasized an audience-oriented subjectivity." == Habermas' thesis == The Structural Transformation of the Public Sphere was Habermas's first major work. 
It also satisfied the rigorous requirements for a professorship in Germany: in this system, independent scholarly research, usually resulting in a published book, must be submitted and defended before an academic committee; the submitted work is known as a Habilitationsschrift, and the process as habilitation. The work was overseen by the political scientist Wolfgang Abendroth, to whom Habermas dedicated it. Habermas describes the development of a bourgeois public sphere in the eighteenth and early nineteenth centuries as well as its subsequent decline. The first transition occurred in England, France, the United States, and Germany over the course of 150 years or so from the late seventeenth century. England led the way in the early eighteenth century, with Germany following in the late eighteenth century. Habermas tries to explain the growth and decline of the public sphere by relating political, social, cultural and philosophical developments to each other in a multi-disciplinary approach. Initially, there were monarchical and feudal societies which made no distinction between state and society or between public and private, and which had organized themselves politically around symbolic representation and status. These feudal societies were transformed into a bourgeois liberal constitutional order which distinguished between the public and private realms; further, within the private realm, there was a bourgeois public sphere for rational-critical political debate which formed a new phenomenon called public opinion. Spearheading this shift was the growth of a literary public sphere in which the bourgeoisie learned to critically reflect upon itself and its role in society. This first major shift occurred alongside the rise of early non-industrial capitalism and the philosophical articulation of political liberalism by such thinkers as Hobbes, Locke, Montesquieu (see The Spirit of the Laws), Rousseau, and then Kant.
The bourgeois public sphere flourished within the early laissez-faire, free-market, largely pre-industrial capitalist order of liberalism from the late eighteenth century to the mid-nineteenth century. The second part of Habermas' account traces the transition from the liberal bourgeois public sphere to the modern mass society of the social welfare state. Starting in the 1830s, extending from the late nineteenth century to the early twentieth century, a new constellation of social, cultural, political, and philosophical developments took shape. Hegel's critique of Kant's liberal philosophy anticipated the shift, according to Habermas, and this shift came to a philosophical head in Marx's astute diagnosis of the contradictions inherent in the liberal constitutional social order. Habermas saw the modified liberalism of Mill and Tocqueville with their ambivalence toward the public sphere as emblematic manifestations of these contradictions. Paralleling this philosophical progression against classical liberalism were major socio-economic transformations based on industrialization, and the result was the rise of mass societies characterized by consumer capitalism in the twentieth century. Clear demarcations between public and private and between state and society became blurred. The bourgeois public sphere was transformed by the increasing re-integration and entwining of state and society that resulted in the modern social welfare state. This shift, according to Habermas, can be seen as part of a larger dialectic in which political changes were made in an attempt to save the liberal constitutional order, but had the ultimate effect of destroying the bourgeois public sphere. He highlights the pernicious effects of commercialization and consumerization on the public sphere through the rise of mass media, public relations, and consumer culture. 
He delineates how these developments thwarted rational-critical political debate, including political parties functioning in a way that bypassed the public sphere, undermining parliamentary politics. Habermas drew on the cultural critiques of critical theory from the Frankfurt School, which included important thinkers such as Theodor Adorno, who was one of his teachers at the Institute for Social Research from 1956 to 1959. Habermas began his habilitation during this period, but due to intellectual tensions with the Institute's director, philosopher and sociologist Max Horkheimer, he moved to the University of Marburg, where he completed the work under Wolfgang Abendroth. == Reception == The book was reprinted many times in German and other languages, and has been enormously influential, especially since its translation into English, for scholars of political science, media studies, and rhetoric. It is also an important work for historians of philosophy and scholars of intellectual history. After publication, Habermas was identified as an important philosopher of the twentieth century. Since publication, the Structural Transformation of the Public Sphere has been critiqued for Habermas’s formulation of the concept of a public sphere which he claimed "stood or fell with the principle of universal access ... A public sphere from which specific groups would be eo ipso excluded was less than merely incomplete; it was not a public sphere at all." (Habermas 1962:85) However, the bourgeois public sphere required as preconditions of entry an excellent education and property ownership – which correlated to membership of the upper classes. Critics have argued that the bourgeois public sphere cannot be considered an ideal form of politics, since the public sphere was limited to upper-class strata of society and did not represent most of the citizens in these emerging nation-states. 
Some critics claim the public sphere, as such, never existed, or existed only in the sense of excluding many important groups, such as the poor, women, slaves, migrants, and criminals. They maintain that the public sphere remains an idealized conception, little changed since Kant, since the ideal is still to a great extent what Habermas might call an unfinished project of modernity. (Cubitt 2005:93) Similar critiques regarding the exclusivity of the bourgeois public sphere have been made by feminist and post-colonial authors. == Notes == Habermas, Jürgen (1962 trans 1989) The Structural Transformation of the Public Sphere: An Inquiry into a category of Bourgeois Society, Polity, Cambridge. ISBN 0-7456-0274-6 Cubitt, Sean (2005) Ecomedia, Rodopi, Amsterdam. == References == == Further reading == Calhoun, Craig, ed. (1993). Habermas and the Public Sphere. MIT Press. ISBN 0-262-53114-3. Downie, J.A. “The Myth of the Bourgeois Public Sphere.” The Restoration and Eighteenth Century. Ed. Cynthia Wall. Malden, MA: Blackwell Publishing, 2005. 58-79. Print. == External links == Selected excerpts from The Structural Transformation of the Public Sphere (archived 12 November 2014) Public Sphere Guide A Research Guide, Teaching Guide and Resource for the Renewal of the Public Sphere Transformations of the Public Sphere Essay Forum
Wikipedia/The_Structural_Transformation_of_the_Public_Sphere
"Fourth Industrial Revolution", "4IR", or "Industry 4.0", is a neologism describing rapid technological advancement in the 21st century. It follows the Third Industrial Revolution (the "Information Age"). The term was popularised in 2016 by Klaus Schwab, the World Economic Forum founder and former executive chairman, who asserts that these developments represent a significant shift in industrial capitalism. A part of this phase of industrial change is the joining of technologies like artificial intelligence, gene editing, to advanced robotics that blur the lines between the physical, digital, and biological worlds. Throughout this, fundamental shifts are taking place in how the global production and supply network operates through ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention. It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society (i.e. a metaverse) that changes the ways humans experience and know the world around them. It posits that we have created and are entering an augmented social reality compared to just the natural senses and industrial ability of humans alone. The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, where creativity and imagination become the primary drivers of economic value. == History == The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government. 
Klaus Schwab, former executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published in Foreign Affairs. "Mastering the Fourth Industrial Revolution" was the theme of the 2016 World Economic Forum Annual Meeting in Davos-Klosters, Switzerland. On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. This was also the subject and title of Schwab's 2016 book. Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems), and emphasises advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralised consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles. In The Great Reset proposal by the WEF, the Fourth Industrial Revolution is included as strategic intelligence in the solution to rebuild the economy sustainably following the COVID-19 pandemic. === First Industrial Revolution === The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power. The implementation of new technologies took a long time, so the period to which this refers was between 1760 and 1820, or 1840, in Europe and the United States. Its effects had consequences on textile manufacturing, which was the first to adopt such changes, as well as on the iron industry, agriculture, and mining, although it also had societal effects, with an ever stronger middle class.
=== Second Industrial Revolution === The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from installations of extensive railroad and telegraph networks, which allowed for faster transfer of people and ideas, as well as electricity. Increasing electrification allowed for factories to develop the modern production line. === Third Industrial Revolution === The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes. A book by Jeremy Rifkin titled The Third Industrial Revolution, published in 2011, focused on the intersection of digital communications technology and renewable energy. It was made into a 2017 documentary by Vice Media. == Characteristics == In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes which include cyber-physical systems (CPS), Internet of Things (IoT), cloud computing, cognitive computing, and artificial intelligence. Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks. 
The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high-capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; improvements in transferring digital instructions to the physical world, including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; and improvements to and uptake of off-grid / stand-alone renewable energy systems: solar, wind, wave, and hydroelectric power, together with electric batteries (lithium-ion energy storage systems (ESS) and electric vehicles). It also emphasizes decentralized decisions – the ability of cyber-physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals are tasks delegated to a higher level. === Distinctiveness === Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution. This is due to the following characteristics: Velocity – the exponential speed at which incumbent industries are affected and displaced Scope and systems impact – the large number of sectors and firms that are affected Paradigm shift in technology policy – new policies designed for this new way of doing things are present. An example is Singapore's formal recognition of Industry 4.0 in its innovation policies. Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and policy transition varies across countries; the definition of Industry 4.0 is not harmonised. One of the best-known figures is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution".
However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development". === Components === The application of the Fourth Industrial Revolution operates through: Mobile devices Location detection technologies (electronic identification) Advanced human-machine interfaces Authentication and fraud detection Smart sensors Big analytics and advanced processes Multilevel customer interaction and customer profiling Augmented reality/wearables On-demand availability of computer system resources Data visualisation Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy. Value creation in Industry 4.0 also relies on electronic identification: smart manufacturing requires specific technologies to be incorporated into the manufacturing process for it to be classified as on the development path of Industry 4.0, rather than mere digitisation. == Trends == === Smart factories === The Fourth Industrial Revolution fosters "smart factories", which are production environments where facilities and logistics systems are organised with minimal human intervention. The technical foundations on which smart factories are based are cyber-physical systems that communicate with each other using the IoT. An important part of this process is the exchange of data between the product and the production line. This enables more efficient supply chain connectivity and better organisation within a production environment. Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world, and make decentralised decisions.
Over the internet of things, cyber-physical systems communicate and cooperate with each other and with humans in real time, both internally and across the organizational services offered and used by participants of the value chain. === Artificial intelligence === Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom". Models like GPT-4o can engage in verbal and textual discussions and analyze images. AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times. AI could also significantly accelerate, or even automate, software development. Some experts believe that AI alone could be as transformative as an industrial revolution. Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can), making large investments in data centers and GPUs to train more capable AI models. ==== Robotics ==== Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed to do narrow tasks, failing when encountering new situations. Modern humanoid robots, however, are typically based on machine learning, and in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train, and more versatile. === Predictive maintenance === Industry 4.0 facilitates predictive maintenance, due to the use of advanced technologies, including IoT sensors.
Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or gets damaged. For example, a company in Los Angeles could learn that a piece of equipment in Singapore is running at an abnormal speed or temperature, and then decide whether or not it needs to be repaired. === 3D printing === The Fourth Industrial Revolution is said to have extensive dependency on 3D printing technology. Some advantages of 3D printing for industry are that it can produce many geometric structures and simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs, and help a company adopt a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing them locally, thereby reducing supplier dependence and shortening the supply lead time. === Smart sensors === Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories. Smart sensors are devices which generate data and allow further functionality, from self-monitoring and self-configuration to condition monitoring of complex processes. With the capability of wireless communication, they reduce installation effort to a great extent and help realise a dense array of sensors. The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."
However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Moreover, battery power places additional limits on these functionalities. One example of the integration of smart sensors in electronic devices is the smart watch, whose sensors capture the user's movement data, process it, and provide the user with information such as how many steps they have walked in a day, also converting the data into calories burned. ==== Agriculture and food industries ==== Smart sensors in these two fields are still in the testing stage. These connected sensors collect, interpret and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation). Based on this scientific data, the objective is to enable real-time monitoring via a smartphone with a range of advice that optimises plot management in terms of results, time and costs. On the farm, these sensors can be used to detect crop stages and recommend inputs and treatments at the right time, as well as to control the level of irrigation. The food industry requires more and more security and transparency, and full documentation is required. This new technology is used as a tracking system, as well as for the collection of human data and product data. === Accelerated transition to the knowledge economy === The knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence. Industry 4.0 aids the transition into a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources.
== Challenges == Challenges in the implementation of Industry 4.0 include: === Economic === High economic cost Business model adaptation Unclear economic benefits/excessive investment Driving significant economic changes through automation and technological advancements, leading to both job displacement and the creation of new roles, necessitating widespread workforce reskilling and systemic adaptation === Social === Privacy concerns Surveillance and distrust General reluctance to change by stakeholders Threat of redundancy of the corporate IT department Loss of many jobs to automatic processes and IT-controlled processes, especially for blue-collar workers Increased risk of gender inequalities in professions with job roles most susceptible to replacement with AI === Political === Lack of regulation, standards, and forms of certification Unclear legal issues and data security === Organizational === IT security issues, which are greatly aggravated by the inherent need to open up previously closed production shops Reliability and stability needed for critical machine-to-machine communication (M2M), including very short and stable latency times Need to maintain the integrity of production processes Need to avoid any IT snags, as those would cause expensive production outages Need to protect industrial know-how (contained also in the control files for the industrial automation gear) Lack of adequate skill-sets to expedite the transition towards Industry 4.0 Low top management commitment Insufficient qualification of employees == Country applications == Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example: === Australia === Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.
=== Germany === The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government, and specifically relates to that policy project, which promotes the computerisation of manufacturing, rather than to the wider notion of a Fourth Industrial Revolution or 4IR. The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair. German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term. In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013 at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering. As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded. For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4. How the shift to Industry 4.0 – and especially digitisation – will affect the labour market is being discussed in Germany under the topic of Work 4.0. The German federal government is a leader in the development of I4.0 policy through its ministries, the Federal Ministry of Education and Research (BMBF) and the Federal Ministry for Economic Affairs and Energy (BMWi). Through the publishing of set objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap in German enterprises' collaboration with and knowledge of these policies.
The biggest challenge SMEs in Germany currently face regarding the digital transformation of their manufacturing processes is ensuring that there is a concrete IT and application landscape to support further digital transformation efforts. The characteristics of the German government's Industry 4.0 strategy involve the strong customisation of products under the conditions of highly flexible (mass-) production. The required automation technology is improved by the introduction of methods of self-optimization, self-configuration, self-diagnosis, cognition, and intelligent support of workers in their increasingly complex work. The largest project in Industry 4.0 as of July 2013 was the BMBF leading-edge cluster "Intelligent Technical Systems OstWestfalenLippe (it's OWL)". Another major project is the BMBF project RES-COM, as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries". In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic. === Estonia === In Estonia, the digital transformation dubbed the 4th Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 started with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged into the digital era while skipping analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations.
According to the goals set in Estonia's Digital Agenda 2030, the next advances in the country's digital transformation will involve switching to event-based and proactive services, both in private and business environments, as well as developing a green, AI-powered, and human-centric digital government. === Indonesia === Another example is the Indonesian initiative Making Indonesia 4.0, which focuses on improving industrial performance. === India === India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers on leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0. === Japan === Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective addressing of societal challenges through a closely interconnected system of both the digital realm and the physical world. This concept was introduced in 2019 in the 5th Science and Technology Basic Plan of the Japanese Government as a blueprint for a forthcoming societal framework. === Malaysia === Malaysia's national policy on Industry 4.0 is known as Industry4WRD. Launched in 2018, key initiatives in this policy include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries. === South Africa === South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with backgrounds in academia, industry and government. South Africa has also established an Inter-Ministerial Committee on Industry 4.0. === South Korea === The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017.
The Republic of Korea's I-Korea strategy (2017) focuses on new growth engines that include AI, drones, and autonomous cars, in line with the government's innovation-driven economic policy. === Uganda === Uganda adopted its own National 4IR Strategy in October 2020, with an emphasis on e-governance, urban management (smart cities), healthcare, education, agriculture, and the digital economy; to support local businesses, the government was contemplating introducing a local start-ups bill in 2020 which would require all accounting officers to exhaust the local market prior to procuring digital solutions from abroad. === United Kingdom === In a 2019 policy paper titled "Regulation for the Fourth Industrial Revolution", the UK's Department for Business, Energy & Industrial Strategy outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings. === United States === In 2019, the Department of Homeland Security published a paper called "The Industrial Internet of Things (IIoT): Opportunities, Risks, Mitigation". The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation; hence, their implementation, growth, and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of the IIoT but also the associated risks, and suggests some key areas where risk mitigation is possible. To increase coordination between the public and private sectors, law enforcement, academia, and other stakeholders, the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC). == Industry applications == The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified.
One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project. The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, or self-organised coordination that reacts to unexpected changes in production. In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0. Industry 4.0 inspired Innovation 4.0, a move toward digitisation for academia and research and development. In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer-aided materials science, where robotic formulation, data capture, and modelling are being integrated into development practices. == Criticism == With the consistent development of the automation of everyday tasks, some saw a benefit in the exact opposite of automation, where self-made products are valued more than those produced with automation. This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke. Another problem that is expected to accelerate with the growth of IR4 is the prevalence of mental disorders, a known issue among high-tech operators. Also, IR4 has sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability. == Future == === Industry 5.0 === Industry 5.0 has been proposed as a strategy to create a paradigm shift for an industrial landscape in which the primary focus should no longer be on increasing efficiency, but rather on promoting the well-being of society and the sustainability of the economy and industrial production.
== See also == AI boom Computer-integrated manufacturing Cyber manufacturing Digital modelling and fabrication Industrial control system Intelligent maintenance systems Lights-out manufacturing List of emerging technologies Machine to machine Nondestructive Evaluation 4.0 Simulation software Technological singularity Technological unemployment The War on Normal People Work 4.0 World Economic Forum 2016 == References == === Sources === This article incorporates text from a free content work. Text taken from UNESCO Science Report: the Race Against Time for Smarter Development, Schneegans, S., T. Straza and J. Lewis (eds), UNESCO.
Wikipedia/Fourth_Industrial_Revolution
Theory of generations (or sociology of generations) is a theory posed by Karl Mannheim in his 1928 essay "Das Problem der Generationen", translated into English in 1952 as "The Problem of Generations". This essay has been described as "the most systematic and fully developed" and even "the seminal theoretical treatment of generations as a sociological phenomenon". According to Mannheim, people are significantly influenced by the socio-historical environment (in particular, notable events that involve them actively) of their youth, giving rise, on the basis of shared experience, to social cohorts that in their turn influence the events that shape future generations. Because of the historical context in which Mannheim wrote, some critics contend that the theory of generations centers on Western ideas and lacks a broader cultural understanding. Others argue that the theory of generations should be global in scope, due to the increasingly globalized nature of contemporary society. == Theory == Mannheim defined a generation (note that some have suggested that the term cohort is more correct), as distinct from kinship (family, blood-related) generations, as a group of individuals of similar ages whose members have experienced a noteworthy historical event within a set period of time. According to Mannheim, the social consciousness and perspective of youth reaching maturity in a particular time and place (what he termed "generational location") are significantly influenced by the major historical events of that era (thus becoming a "generation in actuality"). A key point, however, is that this major historical event has to occur and has to involve the individuals in their young age (thus shaping their lives, as later experiences will tend to receive meaning from those early experiences); mere chronological contemporaneity is not enough to produce a common generational consciousness.
Mannheim in fact stressed that not every generation will develop an original and distinctive consciousness. Whether a generation succeeds in developing a distinctive consciousness is significantly dependent on the pace of social change ("tempo of change"). Mannheim also notes that social change can occur gradually, without the need for major historical events, but those events are more likely to occur in times of accelerated social and cultural change. Mannheim also noted that the members of a generation are internally stratified (by location, culture, class, etc.); they may thus view events from different angles and so are not totally homogeneous. Even within the "generation in actuality", there may be differing forms of response to the particular historical situation, thus stratifying into a number of "generational units" (or "social generations"). == Application == Mannheim's theory of generations has been applied to explain how important historical, cultural, and political events of the late 1950s and the early 1960s educated youth about the inequalities in American society, such as through their involvement, along with other generations, in the Civil Rights Movement, and have given rise to a belief that those inequalities need to be changed by individual and collective action. This has pushed an influential minority of young people in the United States toward social movement activity. On the other hand, the generation which came of age in the later part of the 1960s and 1970s was much less engaged in social movement activity, because – according to the theory of generations – the events of that era were more conducive to a political orientation stressing individual fulfillment instead of participation in social movements questioning the status quo. Other notable applications of Mannheim's theory that illustrate the dynamics of generational change include: The effects of the Great Depression in the U.S.
on young people's orientations toward work and politics How the Nazi regime in Germany affected young Germans' political attitudes Collective memories of important historical events that happen during late adolescence or early adulthood Changing patterns of civic engagement in the U.S. The effects of coming of age during the second-wave feminist movement in the U.S. on feminist identity Explaining the rise of same-sex marriage in the United States The effects of the Chinese Cultural Revolution on youth political activism Social generation studies have mainly focused on the youth experience from the perspective of the Western society. "Social generations theory lacks ample consideration of youth outside of the West. Increased empirical attention to non-Western cases corrects the tendency of youth studies to 'other' non-Western youth and provides a more in-depth understanding of the dynamics of reflexive life management." The constraints and opportunities affecting a youth's experiences within particular sociopolitical contexts require research to be done in a wide array of spaces to better reflect the theory and its implications on youth's experiences. Recent works discuss the difficulty of managing generational structures as global processes, proceeding to design glocal structures. == See also == Generation Strauss–Howe generational theory Sociology of aging Sociology of knowledge == References ==
Wikipedia/Theory_of_generations
Truth and Method (German: Wahrheit und Methode) is a 1960 book by the philosopher Hans-Georg Gadamer, in which the author deploys the concept of "philosophical hermeneutics" as it is worked out in Martin Heidegger's Being and Time (1927). The book is considered Gadamer's major work. == Summary == Gadamer draws heavily on the ideas of Romantic hermeneuticists such as Friedrich Schleiermacher and the work of later hermeneuticists such as Wilhelm Dilthey. He rejects as unachievable the goal of objectivity, and instead suggests that meaning is created through intersubjective communication. Gadamer's philosophical project, as explained in Truth and Method, was to elaborate on the concept of "philosophical hermeneutics", which Heidegger in his Being and Time initiated but never dealt with at length. Gadamer's goal was to uncover the nature of human understanding. In the book Gadamer argued that "truth" and "method" were at odds with one another. He was critical of two approaches to the human sciences (Geisteswissenschaften). On the one hand, he was critical of modern approaches to humanities that modelled themselves on the natural sciences (and thus on rigorous scientific methods). On the other hand, he took issue with the traditional German approach to the humanities, represented for instance by Dilthey and Schleiermacher, which believed that correctly interpreting a text meant recovering the original intention of the author who wrote it. In contrast to both of these positions, Gadamer argued that people have a 'historically effected consciousness' (wirkungsgeschichtliches Bewußtsein) and that they are embedded in the particular history and culture that shaped them. Thus interpreting a text involves a fusion of horizons (Horizontverschmelzung) where the scholar finds the ways that the text's history articulates with their own background. Truth and Method is not meant to be a programmatic statement about a new 'hermeneutic' method of interpreting texts. 
Gadamer intended Truth and Method to be a description of what we always do when we interpret things (even if we do not know it): "My real concern was and is philosophic: not what we do or what we ought to do, but what happens to us over and above our wanting and doing". Importantly, as Gadamer puts it, in relation to his chapter "The hermeneutic circle and the problem of prejudice", "[t]he overcoming of all prejudices, this is the global demand of the Enlightenment, will itself prove to be a prejudice." == Publication history == Truth and Method was published twice in English, and the revised edition is now considered authoritative. The German-language edition of Gadamer's Collected Works includes a volume in which Gadamer elaborates his argument and discusses the critical response to the book. Finally, Gadamer's essay on poet Paul Celan (entitled "Who Am I and Who Are You?") has been considered by many—including Heidegger and Gadamer himself—as a "second volume" or continuation of the argument in Truth and Method. == Reception == Truth and Method is regarded as Gadamer's magnum opus, and has influenced many philosophers and sociologists, notably Jürgen Habermas. In reaction to Gadamer, the critic E. D. Hirsch reasserted a traditionalist approach to interpretation (following Dilthey and Schleiermacher), seeing the task of interpretation as consisting of reconstructing the intentions of the original author of a text. The philosopher Adolf Grünbaum criticized Truth and Method, maintaining that Gadamer misunderstood the methods of science, and made an incorrect contrast between the natural and the human sciences. The critic George Steiner writes that Gadamer's influential model of textual understanding is "developed explicitly out of Heidegger's concept and practice of language." == References ==
Wikipedia/Truth_and_Method
Dependency theory is the idea that resources flow from a "periphery" of poor and exploited states to a "core" of wealthy states, enriching the latter at the expense of the former. A central contention of dependency theory is that poor states are impoverished and rich ones enriched by the way poor states are integrated into the "world system". The theory was formally developed in the late 1960s, in the decades following World War II, as scholars searched for the root causes of the lack of development in Latin America. The theory arose as a reaction to modernization theory, an earlier theory of development which held that all societies progress through similar stages of development, that today's underdeveloped areas are thus in a similar situation to that of today's developed areas at some time in the past, and that, therefore, the task of helping the underdeveloped areas out of poverty is to accelerate them along this supposed common path of development, by various means such as investment, technology transfers, and closer integration into the world market. Dependency theory rejected this view, arguing that underdeveloped countries are not merely primitive versions of developed countries, but have unique features and structures of their own; and, importantly, are in the situation of being the weaker members in a world market economy. Some writers have argued for its continuing relevance as a conceptual orientation to the global division of wealth. Dependency theorists can typically be divided into two categories: liberal reformists and neo-Marxists. Liberal reformists typically advocate for targeted policy interventions, while the neo-Marxists propose a planned economy. == Basics == The premises of dependency theory are that:
Poor nations provide natural resources, cheap labour, a destination for obsolete technology, and markets for developed nations, without which the latter could not have the standard of living they enjoy.
Wealthy nations actively perpetuate a state of dependence by various means. This influence may be multifaceted, involving economics, media control, politics, banking and finance, education, culture, and sport. == History == Dependency theory originates with two papers published in 1949, one by Hans Singer and one by Raúl Prebisch, in which the authors observe that the terms of trade for underdeveloped countries relative to the developed countries had deteriorated over time: the underdeveloped countries were able to purchase fewer and fewer manufactured goods from the developed countries in exchange for a given quantity of their raw materials exports. This idea is known as the Prebisch–Singer thesis. Prebisch, an Argentine economist at the United Nations Economic Commission for Latin America (ECLA), went on to conclude that the underdeveloped nations must employ some degree of protectionism in trade if they were to enter a self-sustaining development path. He argued that import-substitution industrialisation (ISI), not a trade-and-export orientation, was the best strategy for underdeveloped countries. The theory was developed from a Marxian perspective by Paul A. Baran in 1957 with the publication of his The Political Economy of Growth. Dependency theory shares many points with earlier, Marxist, theories of imperialism by Rosa Luxemburg and Vladimir Lenin, and has attracted continued interest from Marxists. Some authors identify two main streams in dependency theory: the Latin American Structuralist, typified by the work of Prebisch, Celso Furtado, and Aníbal Pinto at the United Nations Economic Commission for Latin America (ECLAC, or, in Spanish, CEPAL); and the American Marxist, developed by Paul A. Baran, Paul Sweezy, and Andre Gunder Frank.
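The terms-of-trade mechanism behind the Prebisch–Singer thesis can be made concrete with a small numerical sketch. The Python below is illustrative only: the price indices are invented, and the function is the plain textbook definition (export price index over import price index), not drawn from Prebisch's or Singer's own data.

```python
# Net barter terms of trade: the ratio of an export price index to an
# import price index, normalized so the base period equals 100. A falling
# series means a fixed basket of exports buys fewer and fewer imports --
# the deterioration Prebisch and Singer observed for raw-material exporters.

def terms_of_trade(export_price_index, import_price_index):
    return 100 * export_price_index / import_price_index

# Hypothetical indices for a raw-materials exporter (base year = 100):
# export prices drift down while manufactured-import prices drift up.
export_prices = [100, 95, 90, 88]
import_prices = [100, 104, 108, 110]

tot = [round(terms_of_trade(x, m), 1)
       for x, m in zip(export_prices, import_prices)]
print(tot)  # [100.0, 91.3, 83.3, 80.0] -- a steadily deteriorating series
```

On these made-up figures the index falls from 100 to 80: by the end of the period the country must export a quarter more raw material to buy the same bundle of manufactures it could afford in the base year.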
Using the Latin American dependency model, the Guyanese Marxist historian Walter Rodney, in his book How Europe Underdeveloped Africa, described in 1972 an Africa that had been consciously exploited by European imperialists, leading directly to the modern underdevelopment of most of the continent. The theory was popular in the 1960s and 1970s as a criticism of modernization theory, which was falling increasingly out of favor because of continued widespread poverty in much of the world. At that time the assumptions of liberal theories of development were under attack. It was used to explain the causes of overurbanization, the phenomenon of urbanization rates outpacing industrial growth in several developing countries. The Latin American Structuralist and the American Marxist schools had significant differences but, according to economist Matias Vernengo, they agreed on some basic points: "[B]oth groups would agree that at the core of the dependency relation between center and periphery lays [lies] the inability of the periphery to develop an autonomous and dynamic process of technological innovation. Technology – the Promethean force unleashed by the Industrial Revolution – is at the center of stage. The Center countries controlled the technology and the systems for generating technology. Foreign capital could not solve the problem, since it only led to limited transmission of technology, but not the process of innovation itself." Baran and others frequently spoke of the international division of labour – skilled workers in the center; unskilled in the periphery – when discussing key features of dependency. Baran placed surplus extraction and capital accumulation at the center of his analysis. Development depends on a population's producing more than it needs for bare subsistence (a surplus).
Further, some of that surplus must be used for capital accumulation – the purchase of new means of production – if development is to occur; spending the surplus on things like luxury consumption does not produce development. Baran noted two predominant kinds of economic activity in poor countries. In the older of the two, plantation agriculture, which originated in colonial times, most of the surplus goes to the landowners, who use it to emulate the consumption patterns of wealthy people in the developed world; much of it thus goes to purchase foreign-produced luxury items –automobiles, clothes, etc. – and little is accumulated for investing in development. The more recent kind of economic activity in the periphery is industry—but of a particular kind. It is usually carried out by foreigners, although often in conjunction with local interests. It is often under special tariff protection or other government concessions. The surplus from this production mostly goes to two places: part of it is sent back to the foreign shareholders as profit; the other part is spent on conspicuous consumption in a similar fashion to that of the plantation aristocracy. Again, little is used for development. Baran thought that political revolution was necessary to break this pattern. In the 1960s, members of the Latin American Structuralist school argued that there is more latitude in the system than the Marxists believed. They argued that it allows for partial development or "dependent development"–development, but still under the control of outside decision makers. They cited the partly successful attempts at industrialisation in Latin America around that time (Argentina, Brazil, Mexico) as evidence for this hypothesis. They were led to the position that dependency is not a relation between commodity exporters and industrialised countries, but between countries with different degrees of industrialisation. 
In their approach, there is a distinction made between the economic and political spheres: economically, one may be developed or underdeveloped; but even if (somewhat) economically developed, one may be politically autonomous or dependent. More recently, Guillermo O'Donnell has argued that constraints placed on development by neoliberalism were lifted by the military coups in Latin America that came to promote development in authoritarian guise (O'Donnell, 1982). These positions, particularly with regard to Latin America, were notably challenged in the work and teaching of Ruy Mauro Marini, who won wider recognition for a specifically Marxist dependency theory: after a close reading of Marx, he argued that super-exploitation and unequal exchange characteristically arose out of the specific forms of capital reproduction under dependency, and out of the class relations particular to that dependency in the periphery. The importance of multinational corporations and state promotion of technology were emphasised by the Latin American Structuralists. Fajnzylber has made a distinction between systemic or authentic competitiveness, which is the ability to compete based on higher productivity, and spurious competitiveness, which is based on low wages. The third-world debt crisis of the 1980s and continued stagnation in Africa and Latin America in the 1990s caused some doubt as to the feasibility or desirability of "dependent development". The sine qua non of the dependency relationship is not the difference in technological sophistication, as traditional dependency theorists believe, but rather the difference in financial strength between core and peripheral countries – particularly the inability of peripheral countries to borrow in their own currency. He believes that the hegemonic position of the United States is very strong because of the importance of its financial markets and because it controls the international reserve currency – the US dollar.
He believes that the end of the Bretton Woods international financial agreements in the early 1970s considerably strengthened the United States' position because it removed some constraints on its financial actions. "Standard" dependency theory differs from Marxism in arguing against internationalism and any hope of progress in less developed nations towards industrialization and a liberating revolution. Theotonio dos Santos described a "new dependency", which focused on both the internal and external relations of less-developed countries of the periphery, derived from a Marxian analysis. Former Brazilian President Fernando Henrique Cardoso (in office 1995–2002) wrote extensively on dependency theory while in political exile during the 1960s, arguing that it was an approach to studying the economic disparities between the centre and periphery. Cardoso summarized his version of dependency theory as follows: there is a financial and technological penetration by the developed capitalist centers of the countries of the periphery and semi-periphery; this produces an unbalanced economic structure both within the peripheral societies and between them and the centers; this leads to limitations on self-sustained growth in the periphery; this favors the appearance of specific patterns of class relations; these require modifications in the role of the state to guarantee both the functioning of the economy and the political articulation of a society, which contains, within itself, foci of inarticulateness and structural imbalance. The analysis of development patterns in the 1990s and beyond is complicated by the fact that capitalism develops not smoothly, but with very strong and self-repeating ups and downs, called cycles. Relevant results are given in studies by Joshua Goldstein, Volker Bornschier, and Luigi Scandella. With the economic growth of India and some East Asian economies, dependency theory has lost some of its former influence.
It still influences some NGO campaigns, such as Make Poverty History and the fair trade movement. == Other theorists and related theories == Two other early writers relevant to dependency theory were François Perroux and Kurt Rothschild. Other leading dependency theorists include Herb Addo, Walden Bello, Ruy Mauro Marini, Enzo Faletto, Armando Cordova, Ernest Feder, Pablo González Casanova, Keith Griffin, Kunibert Raffer, Paul Israel Singer, Walter Rodney and Osvaldo Sunkel. Many of these authors focused their attention on Latin America; dependency theory in the Arab world was primarily refined by the Egyptian economist Samir Amin. Tausch, based on works of Amin from 1973 to 1997, lists the following main characteristics of periphery capitalism:
Regression in both agriculture and small-scale industry characterizes the period after the onslaught of foreign domination and colonialism
Unequal international specialization of the periphery leads to the concentration of activities in export-oriented agriculture and/or mining
Some industrialization of the periphery is possible under the condition of low wages, which, together with rising productivity, determine that unequal exchange sets in (double factorial terms of trade < 1.0; see Raffer, 1987)
These structures determine in the long run a rapidly growing tertiary sector with hidden unemployment and the rising importance of rent in the overall social and economic system
Chronic current account balance deficits, re-exported profits of foreign investments, and deficient business cycles at the periphery that provide important markets for the centers during world economic upswings
Structural imbalances in the political and social relationships, inter alia a strong 'compradore' element and the rising importance of state capitalism and an indebted state class
The American sociologist Immanuel Wallerstein refined the Marxist aspect of the theory and expanded on it, to form world-systems theory.
World-systems theory (WST) aligns closely with the idea that the "rich get richer and the poor get poorer": Wallerstein states that poor, peripheral nations continue to grow poorer as the developed core nations use their resources to become richer. Wallerstein developed world-systems theory by drawing on dependency theory along with the ideas of Marx and the Annales School. This theory postulates a third category of countries, the semi-periphery, intermediate between the core and periphery. Wallerstein believed in a tri-modal rather than a bi-modal system because he viewed the world-system as more complicated than a simplistic classification of nations as either core or periphery. To Wallerstein, many nations do not fit into one of these two categories, so he proposed the idea of a semi-periphery as an in-between state within his model. In this model, the semi-periphery is industrialized, but with less sophisticated technology than the core; and it does not control finances. The rise of one group of semi-peripheries tends to be at the cost of another group, but the unequal structure of the world economy based on unequal exchange tends to remain stable. Tausch traces the beginnings of world-systems theory to the writings of the Austro-Hungarian socialist Karl Polanyi after the First World War, but its present form is usually associated with the work of Wallerstein. Dependency theorists hold that, short-term spurts of growth notwithstanding, long-term growth in the periphery will be imbalanced and unequal, and will tend towards high negative current account balances. Cyclical fluctuations also have a profound effect on cross-national comparisons of economic growth and societal development in the medium and long run. What seemed like spectacular long-run growth may in the end turn out to be just a short-run cyclical spurt after a long recession. Cycle time plays an important role.
Giovanni Arrighi believed that the logic of accumulation on a world scale shifts over time, and that the 1980s and beyond once more showed a deregulated phase of world capitalism with a logic characterized – in contrast to earlier regulatory cycles – by the dominance of financial capital. == Criticism == Economic policies based on dependency theory have been criticized by free-market economists such as Peter Bauer and Martin Wolf and others:
Lack of competition: by subsidizing in-country industries and preventing outside imports, these companies may have less incentive to improve their products, to try to become more efficient in their processes, to please customers, or to research new innovations.
Sustainability: industries reliant on government support may not be sustainable for very long, particularly in poorer countries and countries which largely depend on foreign aid from more developed countries.
Domestic opportunity costs: subsidies on domestic industries come out of state coffers and therefore represent money not spent in other ways, like development of domestic infrastructure, seed capital or need-based social welfare programs. At the same time, the higher prices caused by tariffs and restrictions on imports require the people either to forgo these goods altogether or buy them at higher prices, forgoing other goods.
Market economists cite a number of examples in their arguments against dependency theory. The improvement of India's economy after it moved from state-controlled business to open trade is one of the most often cited. India's example seems to contradict dependency theorists' claims concerning comparative advantage and mobility, since much of its economic growth originated from movements such as outsourcing – one of the most mobile forms of capital transfer.
In Africa, states that have emphasized import-substitution development, such as Zimbabwe, have typically been among the worst performers, while the continent's most successful non-oil based economies, such as Egypt, South Africa, and Tunisia, have pursued trade-based development. According to economic historian Robert C. Allen, dependency theory's claims are "debatable" due to the fact that the protectionism implemented in Latin America as a solution ended up failing. The countries incurred too much debt and Latin America went into a recession. One of the problems was that the Latin American countries simply had national markets too small to efficiently produce complex industrialized goods, such as automobiles. == Examples of dependency theory == Many nations illustrate both the positive and negative effects described by dependency theory. The idea of one nation's dependency on another is not a new concept, even though dependency theory itself is relatively new. Dependency is perpetuated through capitalism and finance: the dependent nations come to owe the developed nations so much money and capital that escaping the debt is impossible, continuing the dependency for the foreseeable future. An example is the period from roughly 1650 to 1900, when European nations such as Britain and France took over or colonized other nations, using their superior military technology and naval strength. This established an economic system in which the Americas, Africa, and Asia exported natural materials from their lands to Europe. After shipping the materials to Europe, Britain and the other European countries made products with these materials and then sent them back to colonized parts of the Americas, Africa, and Asia. The result was a transfer of wealth to Europe, which controlled both the raw materials and the products made from them.
Some scholars and politicians claim that with the decline of colonialism, dependency has been erased. Other scholars counter this approach, stating that our society still has national powerhouses such as the United States, European nations such as Germany and Britain, China, and rising India, on which hundreds of other nations rely for military aid, economic investments, and so on. == Aid dependency == Aid dependency is an economic problem described as the reliance of less developed countries (LDCs) on more developed countries (MDCs) for financial aid and other resources. More specifically, aid dependency refers to the proportion of government spending that is given by foreign donors. An aid dependency ratio of about 15%-20% or higher is correlated with negative outcomes for a nation. What causes dependency is the inhibition of development and of economic and political reform that results from trying to use aid as a long-term solution for poverty-ridden countries. Aid dependency arose from the long-term provision of aid to countries in need, to which receiving countries became accustomed, developing a dependency syndrome. Aid dependency is most common today in Africa. The top donors as of 2013 were the United States, the United Kingdom, and Germany, while the top receivers were Afghanistan, Vietnam, and Ethiopia. === History of aid dependence === International development aid became widespread after World War II, as first-world countries sought to create a more open economy, and amid Cold War competition. In 1970, the United Nations agreed on 0.7% of Gross National Income per country as the target for how much should be dedicated to international aid. In his book “Ending Aid Dependence”, Yash Tandon describes how organizations like the International Monetary Fund (IMF) and the World Bank (WB) have driven many African countries into dependency.
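Taking the definition above at face value, the aid dependency ratio is simple arithmetic: donor-provided funds divided by total government spending. The Python sketch below uses hypothetical budget figures; only the 15-20% risk range comes from the text, and treating its lower bound as a hard threshold is a simplifying assumption.

```python
# Aid dependency ratio: share of government spending financed by foreign
# donors. Ratios of roughly 15-20% or more are associated with negative
# outcomes; here we flag anything at or above the range's lower bound.

RISK_THRESHOLD = 0.15

def aid_dependency_ratio(donor_funds, total_spending):
    return donor_funds / total_spending

def is_aid_dependent(donor_funds, total_spending, threshold=RISK_THRESHOLD):
    return aid_dependency_ratio(donor_funds, total_spending) >= threshold

# Hypothetical budget: 4.7bn of a 10bn budget financed by donors.
ratio = aid_dependency_ratio(4.7, 10.0)
print(f"{ratio:.0%}")               # 47%
print(is_aid_dependent(4.7, 10.0))  # True
print(is_aid_dependent(1.0, 10.0))  # False -- 10% is below the range
```

A real measurement would of course have to decide what counts as "government spending" (on- vs. off-budget donor projects, loans vs. grants), which is exactly the ambiguity the political-dependency discussion below turns on.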
During the economic crises of the 1980s and 1990s, many Sub-Saharan African countries saw an influx of aid money, which in turn resulted in dependency over the next few decades. These countries became so dependent that the President of Tanzania, Benjamin W. Mkapa, stated that “Development aid has taken deep root to the psyche of the people, especially in the poorer countries of the South. It is similar to drug addiction.” === Motives for giving aid === While aid is widely believed to be motivated only by a desire to assist poor countries, and this is true in some cases, there is substantial evidence that the strategic, political, and welfare interests of donors are driving forces behind aid. Maizels and Nissanke (1984) and McKinlay and Little (1977) conducted studies to analyze donors’ motives. They found that US aid flows are influenced by military as well as strategic factors, and that British and French aid is given to countries that were former colonies, and also to countries in which they have significant investment interests and strong trade relations. === Stunted economic growth === A main concern about foreign aid is that citizens in the benefiting country lose motivation to work after receiving aid. In addition, some citizens will deliberately work less, resulting in a lower income, which in turn qualifies them for aid provision. Aid-dependent countries are associated with a poorly motivated workforce, a result of being accustomed to constant aid; such a country is therefore less likely to make economic progress, and living standards are less likely to improve. A country with long-term aid dependency remains unable to be self-sufficient and is less likely to achieve meaningful GDP growth, which would allow it to rely less on aid from richer countries.
Food aid has been criticized heavily, along with other aid imports, for its damage to the domestic economy. A higher dependency on aid imports results in a decline in domestic demand for those products. In the long run, the agricultural industry in LDCs grows weaker due to long-term declines in demand resulting from food aid. When aid is later decreased, many LDCs' agricultural markets remain under-developed, and it is therefore cheaper to import agricultural products. This occurred in Haiti, where 80% of grain stocks come from the United States even after a large decrease in aid. In countries with a primary-product dependency on an item imported as aid, such as wheat, economic shocks can occur and push the country further into economic crisis. === Political dependency === Political dependency occurs when donors have too much influence in the governance of the receiving country. Many donors maintain a strong say in the government because of the country's reliance on their money, causing a decrease in the effectiveness and democratic quality of the government. This results in the receiving country's government making policy that the donor agrees with and supports, rather than what the people of the country desire. Government corruptibility increases as a result and inhibits reform of the government and political process in the country. These donors can include other countries or organizations with underlying intentions that may not be in favor of the people. Political dependency is an even stronger negative effect of aid dependency in countries where many of the problems stem from already corrupt politics and a lack of civil rights. For example, Zimbabwe and the Democratic Republic of the Congo both have extremely high aid dependency ratios and have experienced political turmoil.
The politics of the Democratic Republic of the Congo have involved civil war and regime change in the 21st century, and the country has one of the highest aid dependency ratios in Africa. As aid dependence can shift accountability away from the public and toward the relationship between state and donors, “presidentialism” can arise. Presidentialism is when the president and the cabinet within a political system hold the power in political decision-making. In a democracy, budgets and public investment plans are to be approved by parliament. It is common for donors to fund projects outside of this budget, which therefore go without parliamentary review. This further reinforces presidentialism and establishes practices that undermine democracy. Disputes over taxation and use of revenues are important in a democracy and can lead to better lives for citizens, but this cannot happen if citizens and parliaments don't know the complete proposed budget and spending priorities. Aid dependency also compromises ownership, which is marked by the ability of a government to implement its own ideas and policies. In aid-dependent countries, the interests and ideas of aid agencies start to take priority and therefore erode ownership. === Corruption === Aid-dependent countries rank worse in terms of level of corruption than countries that are not dependent. Foreign aid is a potential source of rents, and rent-seeking can manifest as increased public sector employment. As public firms displace private investment, there is less pressure on the government to remain accountable and transparent, a result of the weakened private sector. Aid assists corruption, which then fosters more corruption and creates a cycle. Foreign aid provides corrupt governments with free cash flow, which further facilitates the corruption. Corruption works against economic growth and development, holding these poor countries down. === Efforts to end aid dependence === Since 2000, aid dependency has decreased by about ⅓.
This can be seen in countries like Ghana, whose aid dependency decreased from 47% to 27%, as well as in Mozambique, where aid dependency decreased from 74% to 58%. Target areas for decreasing aid dependence include job creation, regional integration, and commercial engagement and trade. Long-term investment in agriculture and infrastructure is a key requirement for ending aid dependency, as it allows a country to slowly decrease the amount of food aid received and begin to develop its own agricultural economy and solve food insecurity. === Countering political corruption === Political corruption has been a strong force associated with maintaining dependency and preventing economic growth. During the Obama administration, Congress claimed that the anti-corruption criteria the Millennium Challenge Corporation (MCC) used were not strict enough and were one of the obstacles to decreasing aid dependence. Often, in countries with a high corruption perception index, aid money is taken by government officials in the public sector or by other corrupt individuals in the private sector. Withholding aid from countries where corruption is very prevalent has been a common tool used by organizations and governments to ensure funding is used properly, but also to encourage other countries to fix the corruption. === Other methods of aid === Foreign aid has proven useful in the long run when directed towards the appropriate sector and managed accordingly. Specific pairing between organizations and donors with similar goals has produced more success in decreasing dependency than the traditional form of international aid, which involves government-to-government communication. Botswana is a successful example of this. Botswana first began receiving aid in 1966.
In this case, Botswana decided which areas needed aid and found donors accordingly rather than simply accepting aid from other countries whose governments had a say in where the money would be distributed towards. Recipient-led cases such as Botswana are more effective partially because it negates the donor's desirability to report numbers on the efficiency of their programs (that often include short-term figures such as food distributed) and instead focuses more on long-term growth and development that may be directed more towards infrastructure, education, and job development. == See also == == References == Bibliography So, Alvin (1990). Social Change and Development: Modernization, Dependency, and World-Systems Theory. Newbury Park, London: SAGE Publications. Vernengo, Matias (2004). "Technology, Finance and Dependency: Latin American Radical Political Economy in Retrospect" (PDF). Archived from the original (PDF) on 17 March 2012. Working Paper No. 2004-06, University of Utah Dept. of Economics. Later published as:Vernengo, Matias (2006). "Technology, Finance, and Dependency: Latin American Radical Political Economy in Retrospect". Review of Radical Political Economics. 38 (4): 551–568. doi:10.1177/0486613406293220. S2CID 55837218. == Further reading == Amin S. (1976), 'Unequal Development: An Essay on the Social Formations of Peripheral Capitalism' New York: Monthly Review Press. Amin S. (1994c), 'Re-reading the postwar period: an intellectual itinerary' Translated by Michael Wolfers. New York: Monthly Review Press. Amin S. (1997b), 'Die Zukunft des Weltsystems. Herausforderungen der Globalisierung. Herausgegeben und aus dem Franzoesischen uebersetzt von Joachim Wilke' Hamburg: VSA. Amadi, Luke. 2012. “Africa, Beyond the New Dependency: A Political Economy.” African Journal of Political Science and International Relations 6(8):191–203. Andrade, Rogerio P. and Renata Carvalho Silva. n.d. 
“Doing Dissenting Economics in the Periphery: The Political Economy of Maria Da Conceição Tavares.” Bornschier V. (1996), 'Western society in transition' New Brunswick, N.J.: Transaction Publishers. Bornschier V. and Chase-Dunn C. (1985), 'Transnational Corporations and Underdevelopment' N.Y., N.Y.: Praeger. Boianovsky, Mauro and Ricardo Solis. 2014. “The Origins and Development of the Latin American Structuralist Approach to the Balance of Payments, 1944–1964.” Review of Political Economy 26(1):23–59. Cardoso, F. H. and Faletto, E. (1979), 'Dependency and development in Latin America'. University of California Press. Cesaratto, Sergio. 2015. “Balance of Payments or Monetary Sovereignty? In Search of the EMU’s Original Sin.” International Journal of Political Economy 44(2):142–56. Chilcote, Ronald H. 2009. “Trotsky and Development Theory in Latin America.” Critical Sociology 35(6):719–41. Cypher, James M. (2013). "Neodevelopmentalism vs. Neoliberalism: Differential Evolutionary Institutional Structures and Policy Response in Brazil and Mexico". Journal of Economic Issues. 47 (2): 391–400. doi:10.2753/JEI0021-3624470212. S2CID 153406707. Dávila-Fernández, Marwil and Adrianna Amado. n.d. “Conciliating Prebisch-Singer and Thirlwall: An Assessment of the Dynamics of Terms-of-Trade in a Balance-of-Payments-Constraint Growth Model.” https://web.archive.org/web/20220510222231/http://www.sseg.uniparthenope.it/Program_files/Davila-paper.pdf Garcia-Arias, Jorge; Fernandez-Huerga, Eduardo; Salvador, Ana (2013). "European Periphery Crises, International Financial Markets, and Democracy". American Journal of Economics and Sociology. 72 (4): 826–850. doi:10.1111/ajes.12031. Grinin, Leonid; Korotayev, Andrey; Tausch, Arno (2016). Economic Cycles, Crises, and the Global Periphery. Springer. doi:10.1007/978-3-319-41262-7. ISBN 978-3-319-41260-3. Kufakurinani, U., Kvangraven, I. H., Santana, F., Styve, M. D. (eds) (2017), Dialogues on Development.
Volume 1: Dependency, New York: Institute for New Economic Thinking. Henke, Holger (2000), 'Between Self-Determination and Dependency: Jamaica's Foreign Relations, 1972-1989' Kingston: University of the West Indies Press. Jalata, Asafa. 2013. “Colonial Terrorism, Global Capitalism and African Underdevelopment: 500 Years of Crimes Against African Peoples.” The Journal of Pan-African Studies 5(9):1–43. Kay, Cristóbal. 2005. “André Gunder Frank: From the ‘Development of Underdevelopment’ to the ‘World System.’” Development and Change 36(6):1177–83. Kay, Cristóbal. 2011. “Andre Gunder Frank: ‘Unity in Diversity’ from the Development of Underdevelopment to the World System.” New Political Economy 16(4):523–38. Kohler, Gernot, et al. Globalization : Critical Perspectives. Nova Science Publishers, New York, 2003. With contributions by Samir Amin, Immanuel Wallerstein, Christopher Chase-Dunn, Kimmo Kiljunen, Arno Tausch, Patrick Bond, Andre Gunder Frank, Robert J. S. Ross, et al. Pre-publication download of Chapter 5: The European Union: global challenge or global governance? 14 world system hypotheses and two scenarios on the future of the Union, pages 93 - 196 Arno Tausch at http://edoc.vifapol.de/opus/volltexte/2012/3587/pdf/049.pdf Archived 2021-09-11 at the Wayback Machine. Kohler G. and Tausch A. (2002) Global Keynesianism: Unequal exchange and global exploitation. Huntington NY, Nova Science. Lavoie, Marc. 2015. “The Eurozone Crisis: A Balance-of-Payments Problem or a Crisis Due to a Flawed Monetary Design?” International Journal of Political Economy 44(2):157–60. Marini, Ruy Mauro (2022) The Dialectics of Dependency Monthly Review Press, New York. Olutayo, Akinpelu O. and Ayokunle O. Omobowale. 2007. “Capitalism, Globalisation and the Underdevelopment Process in Africa: History in Perpetuity.” Africa Development 32(2). 
Osorio, Jaime and Reyes, Cristobal (2024) Labour Super-Exploitation, Unequal Exchange and Capital Reproduction: Writings on Marxist Dependency Theory ibidem Verlag, Hannover, Stuttgart. Puntigliano, Andrés Rivarola and Örjan Appelqvist. 2011. “Prebisch and Myrdal: Development Economics in the Core and on the Periphery.” Journal of Global History 6(01):29–52. Sunkel O. (1966), 'The Structural Background of Development Problems in Latin America' Weltwirtschaftliches Archiv, 97, 1: pp. 22 ff. Sunkel O. (1973), 'El subdesarrollo latinoamericano y la teoria del desarrollo' Mexico: Siglo Veintiuno Editores, 6a edicion. Yotopoulos P. and Sawada Y. (1999), Free Currency Markets, Financial Crises And The Growth Debacle: Is There A Causal Relationship? Archived 2010-07-18 at the Wayback Machine, Revised November 1999, Stanford University, USA, and University of Tokyo. Yotopoulos P. and Sawada Y. (2005), Exchange Rate Misalignment: A New test of Long-Run PPP Based on Cross-Country Data (CIRJE Discussion Paper CIRJE-F-318), February 2005, Faculty of Economics, University of Tokyo. Tarhan, Ali. 2013. “Financial Crises and Center-Periphery Capital Flows.” Journal of Economic Issues 47(2):411–18. Vernengo, Matías and David Fields. 2016. “DisORIENT: Money, Technological Development and the Rise of the West.” Review of Radical Political Economics 48(4):562–68. == External links == Centro Argentino de Estudios Internacionales ECLAC/CEPAL Santiago Archived 2012-03-08 at the Wayback Machine Revista Entelequia University of Texas Inequality Project
Wikipedia/Dependency_theory
Non-representational theory is a theory developed in human geography, largely through the work of Nigel Thrift (Warwick University). It draws on social theory, geographical research, and the 'embodied experience.' == Definition == Instead of studying and representing social relationships, non-representational theory focuses upon practices – how human and nonhuman formations are enacted or performed – not simply on what is produced. "First, it valorizes those processes that operate before … conscious, reflective thought … [and] second, it insists on the necessity of not prioritizing representations as the primary epistemological vehicles through which knowledge is extracted from the world". Recent studies have examined a wide range of activities including dance, musical performance, walking, gardening, rave, listening to music and children's play. == Post-structuralist origins == This is a post-structuralist theory inspired in part by the ideas of the physicist-philosopher Niels Bohr, and by thinkers such as Michel Foucault, Gilles Deleuze, Félix Guattari, Bruno Latour, Michel Serres and Karen Barad, and by phenomenologists such as Martin Heidegger and Maurice Merleau-Ponty. More recently it has taken up views from political science (including ideas about radical democracy) and anthropological discussions of the material dimensions of human life. It parallels the conception of "hybrid geographies" developed by Sarah Whatmore. == Criticism == Critics have suggested that Thrift's use of the term "non-representational theory" is problematic, and that other non-representational theories could be developed. Richard G. Smith said that Baudrillard's work could be considered a "non-representational theory", for example, which has fostered some debate. In 2005, Hayden Lorimer (Glasgow University) said that the term "more-than-representational" was preferable. == References == == Further reading == Macpherson, H.
(2010), Non‐Representational Approaches to Body–Landscape Relations. Geography Compass, 4: 1-13. doi:10.1111/j.1749-8198.2009.00276.x
Wikipedia/Non-representational_theory
The Logic of Modern Physics is a 1927 philosophy of science book by American physicist and Nobel laureate Percy Williams Bridgman. The book is notable for explicitly identifying, analyzing, and explaining operationalism for the first time, and coining the term operational definition. Widely read by scholars in the social sciences, it had a huge influence in the 1930s and 1940s, and its major influence on the field of psychology in particular surpassed even that on methodology in physics, for which it was originally intended. == History == Pragmatic philosophers like Charles Sanders Peirce in the 1870s had already advanced solutions to the related ontological problems. Sir Arthur Eddington had also discussed notions similar to operationalization in 1920, before Bridgman. Bridgman's formulation, however, became the most influential. In 1955 the variant operationism was described by A. Cornelius Benjamin. == Influence == Operationalism can be considered a variation on the positivist theme, and, arguably, a very powerful and influential one. The book was widely read by scholars in the social sciences, where it had a huge influence in the 1930s and 1940s. The main influence has been in psychology (behaviorism), where it has been even greater than that on the methodology of physics, for which the book was originally intended. Examples of the influence on psychology in the 1930s and 1940s include Stanley Smith Stevens (The Operational Basis of Psychology and The Operational Definition of Psychological Concepts), and Clark L. Hull (The Principles of Behavior: An Introduction to Behavior Theory). Since then, it has been the central influence on the official epistemology governing psychological method for the whole century.
== See also == Edward C. Tolman Heisenberg uncertainty principle Henry Schultz Herbert A. Simon Talcott Parsons == Notes and references == == External links == Full text of The Logic of Modern Physics at the Internet Archive
Wikipedia/The_Logic_of_Modern_Physics
Communication Monographs is a quarterly peer-reviewed academic journal covering research on human communication. The journal is published by Taylor & Francis on behalf of the National Communication Association. Communication Monographs publishes original scholarship that contributes to the understanding of human communication. Articles in Communication Monographs should endeavor to ask questions about the diverse and complex issues that interest communication scholars. The journal especially welcomes questions that bridge boundaries traditionally separating scholars within the communication discipline and that address issues of clear theoretical, conceptual, methodological, and/or social importance. Diverse approaches to addressing and answering these questions, including theoretical argument, quantitative and qualitative empirical research, and rhetorical and textual analysis, as well as acknowledgement of the often tentative and partial nature of any answers, are welcomed. Approaches to answering questions should be clearly relevant to the questions asked, rigorous in terms of both argument and method, cognizant of alternative interpretations, and contextualized within the wider body of communication scholarship. == Abstracting and indexing == The journal is abstracted and indexed in == External links == Official website
Wikipedia/Communication_Monographs
Quare theory was created by E. Patrick Johnson in 2001 to promote the voices of queer people of color. Quare theory is similar to queer theory; both are forms of critical theory that focus on the study and theorizing of queer identities and actions. Johnson believed that within queer theory there was an erasure or minimization of the voices of queer people of color, and he created quare theory to uplift the voices that queer theory historically had not. Within queer theory there is "a significant theoretical gap": queer theorists often silence or lessen the voices of queer people of color. == History == Quare theory was created by E. Patrick Johnson, a scholar and artist who studies gender, race, and sexuality performance. Johnson has written several books in these areas, focusing on gender, race, and sexuality performance through "modes of scholarly and artistic production". He is the Annenberg University Professor at Northwestern University, where he teaches performance studies and African American studies, and is also Dean of Northwestern's School of Communication. In 2020, Johnson was inducted into the American Academy of Arts and Sciences. Johnson proposed quare theory to fill the gap left by queer theory. His essay on quare studies proposes to address queer behavior and history from the perspectives of people of color. The term quare was inspired by Johnson's grandmother: when she said the word queer in her accent, it sounded like quare. == Definition == Quare theory is focused on "the racialized bodies, experiences, and knowledge of" queer people of color. It is a perspective meant to shed light upon marginalized voices. The theory "narrows the gap between theory and practice" and highlights the performance aspect of the body.
Quare theory focuses on uplifting and addressing the needs of queer people "across issues of race, gender, class, and other subject positions". == Quare vs queer theory == Quare theory is "both a counter theory" and "a counternarrative to queer theory". In his essay on quare theory, Johnson argues that queer theory, whether intentionally or not, does not include the voices of queer people of color. With quare studies, Johnson intended to increase the exposure of queer people of color and their issues. Quare studies explores different identities, focusing on queer people from "racialized and class knowledges". Quare studies fills in the gaps that queer theory left empty. == E. Patrick Johnson's works relating to quare theory == Books: Honeypot: Black Southern Women Who Love Women, University of North Carolina Press, 2019. Black. Queer. Southern. Women.: An Oral History, University of North Carolina Press, 2019. Sweet Tea: Black Gay Men of the South—An Oral History, University of North Carolina Press, 2008. Appropriating Blackness: Performance and the Politics of Authenticity. Duke University Press, 2003. Edited Collections: Blacktino Queer Performance (with Ramon Rivera-Servera). Duke University Press, 2016. No Tea, No Shade: New Writings in Black Queer Studies. Duke University Press, 2016. Cultural Struggles: Performance, Ethnography, Praxis. Edited collection of essays by Dwight Conquergood. University of Michigan Press, 2013. solo/black/woman: scripts, interviews, essays. (with Ramon Rivera-Servera), Northwestern University Press, 2013. Black Queer Studies: A Critical Anthology. (with Mae G. Henderson), Duke University Press, 2005. Journal Articles: "Put a Little Honey in My Sweet Tea: Oral History as Quare Performance." Women's Studies Quarterly 44.3/4 (Fall/Winter 2016): 51–67. "Pleasure and Pain in Black Queer Oral History and Performance." (with Jason Ruiz) QED: A Journal of GLBTQ Worldmaking 1.2 (Summer 2014): 160 – 170.
"'Quare' Studies Or (Almost) Everything I Know About Queer Studies I Learned From My Grandmother." Text and Performance Quarterly 21 (January 2001): 1-25. Reprinted in Readings on Rhetoric and Performance. Ed. Stephen Olbrys Gencarella and Phaedra C. Pezzullo. State College, PA: Strata, 2010. 233–257. The Ashgate Research Companion to Queer Theory. Ed. Noreen Giffney and Michael O'Rourke. Farnham, England: Ashgate Publishing Company, 2009. 451–469. Sexualities and Communication in Everyday Life: A Reader. Ed. Karen Lovaas and Mercilee Jenkins. Thousand Oaks, CA: Sage Publications, 2006. 69–86, 297–300. Black Queer Studies: A Critical Anthology. Ed. E. Patrick Johnson and Mae G. Henderson. Durham: Duke University Press, 2005. 124–157. "Feeling the Spirit in the Dark: Expanding Notions of the Sacred in the African American Gay Community." Callaloo 21.2 (Winter/Spring 1998): 399–416. Reprinted in The Greatest Taboo: Homosexuality in Black Communities. Ed. Delroy Constantine-Simms. Los Angeles: Alyson Publications, 2000. 88–109. == See also == E. Patrick Johnson Queer theory Critical theory == References ==
Wikipedia/Quare_theory
Productive forces, productive powers, or forces of production (German: Produktivkräfte) is a central idea in Marxism and historical materialism. In Karl Marx and Friedrich Engels' own critique of political economy, it refers to the combination of the means of labor (tools, machinery, land, infrastructure, and so on) with human labour power. Marx and Engels probably derived the concept from Adam Smith's reference to the "productive powers of labour" (see e.g. chapter 8 of The Wealth of Nations (1776)), although the German political economist Friedrich List also mentions the concept of "productive powers" in The National System of Political Economy (1841). All those forces which are applied by people in the production process (body and brain, tools and techniques, materials, resources, quality of workers' cooperation, and equipment) are encompassed by this concept, including those management and engineering functions technically indispensable for production (as contrasted with social control functions). Human knowledge can also be a productive force. Together with the social and technical relations of production, the productive forces constitute a historically specific mode of production. == Labor == Karl Marx emphasized that, with few exceptions, means of labour are not a productive force unless they are actually operated, maintained, and conserved by living human labour. Without applying living human labour, their physical condition and value would deteriorate, depreciate, or be destroyed (an example would be a ghost town or capital depreciation due to strike action). Simultaneously, technological developments which serve as means of production would not exist without what Marx refers to as "general intellect," the human innovation and industry which motivates industrial development.
Capital itself, being one of the factors of production, comes to be viewed in capitalist society as a productive force in its own right, independent from labour, a subject with "a life of its own". Indeed, Marx sees the essence of what he calls "the capital relation" as being summarised by the circumstance that "capital buys labour", i.e. the power of property ownership to command human energy and labour-time, and thus of inanimate "things" to exert an autonomous power over people. What disappears from view is that the power of capital depends in the last instance on human cooperation. "The production of life, both of one's own in labour and of fresh life in procreation... appears as a double relationship: on the one hand as a natural, on the other as a social relationship. By social we understand the co-operation of several individuals, no matter under what conditions, in what manner and to what end. It follows from this that a certain mode of production, or industrial stage, is always combined with a certain mode of co-operation, or social stage, and this mode of co-operation is itself a “productive force.” The productive power of cooperation comes to be viewed as the productive power of capital, because it is capital which forcibly organises people, rather than people organising capital. Marx regarded this as a supreme reification. Unlike British classical economics, Marxian economics classifies financial capital as being an element of the relations of production, rather than the factors or forces of production ("not a thing, but a social relation between persons, established by the instrumentality of things"). == Destructive forces == Marx and Engels did not believe that human history featured a continuous growth of the productive forces. Rather, the development of the productive forces was characterised by social conflicts. 
Some productive forces destroyed other productive forces, sometimes productive techniques were lost or destroyed, and sometimes productive forces could be turned into destructive forces: "How little highly developed productive forces are safe from complete destruction, given even a relatively very extensive commerce, is proved by the Phoenicians, whose inventions were for the most part lost for a long time to come through the ousting of this nation from commerce, its conquest by Alexander and its consequent decline. Likewise, for instance, glass-painting in the Middle Ages. Only when commerce has become world commerce, and has as its basis large-scale industry, when all nations are drawn into the competitive struggle, is the permanence of the acquired productive forces assured. (...) Competition soon compelled every country that wished to retain its historical role to protect its manufactures [sic] by renewed customs regulations (the old duties were no longer any good against big industry) and soon after to introduce big industry under protective duties. Big industry universalised competition in spite of these protective measures (it is practical free trade; the protective duty is only a palliative, a measure of defence within free trade), established means of communication and the modern world market, subordinated trade to itself, transformed all capital into industrial capital, and thus produced the rapid circulation (development of the financial system) and the centralisation of capital. By universal competition it forced all individuals to strain their energy to the utmost. It destroyed as far as possible ideology, religion, morality, etc. and where it could not do this, made them into a palpable lie. It produced world history for the first time, insofar as it made all civilised nations and every individual member of them dependent for the satisfaction of their wants on the whole world, thus destroying the former natural exclusiveness of separate nations. 
It made natural science subservient to capital and took from the division of labour the last semblance of its natural character. It destroyed natural growth in general, as far as this is possible while labour exists, and resolved all natural relationships into money relationships. In the place of naturally grown towns it created the modern, large industrial cities which have sprung up overnight. Wherever it penetrated, it destroyed the crafts and all earlier stages of industry. It completed the victory of the commercial town over the countryside. [Its first premise] was the automatic system. [Its development] produced a mass of productive forces, for which private [property] became just as much a fetter as the guild had been for manufacture and the small, rural workshop for the developing craft. These productive forces received under the system of private property a one-sided development only, and became for the majority destructive forces; moreover, a great multitude of such forces could find no application at all within this system. (...) from the conception of history we have sketched we obtain these further conclusions: (1) In the development of productive forces there comes a stage when productive forces and means of intercourse are brought into being, which, under the existing relationships, only cause mischief, and are no longer forces of production but forces of destruction (machinery and money); and connected with this a class is called forth, which has to bear all the burdens of society without enjoying its advantages, which, ousted from society, is forced into the most decided antagonism to all other classes; a class which forms the majority of all members of society, and from which emanates the consciousness of the necessity of a fundamental revolution, the communist consciousness, which may, of course, arise among the other classes too through the contemplation of the situation of this class. (...) 
Both for the production on a mass scale of this communist consciousness, and for the success of the cause itself, the changing of men on a mass scale is necessary, a change which can only take place in a practical movement, a revolution; this revolution is necessary, therefore, not only because the ruling class cannot be overthrown in any other way, but also because the class overthrowing it, can only in a revolution succeed in ridding itself of all the muck of ages, and become fitted to found society anew. (From The German Ideology) == Marxist–Leninist definition in the Soviet Union == The Institute of Economics of the Academy of Sciences of the U.S.S.R. textbook (1957, p. xiv) says that "[t]he productive forces reflect the relationship of people to the objects and forces of nature used for the production of material wealth." (italics added) While productive forces are a human activity, the concept of productive forces includes the concept that technology mediates the human-nature relationship. Productive forces do not include the subject of labor (the raw materials or materials from nature being worked on). Productive forces are not the same thing as the means of production. Marx identified three components of production: human labor, subject of labor, and means of labor (1967, p. 174). Productive forces are the union of human labor and the means of labor; means of production are the union of the subject of labor and the means of labor (Institute of Economics of the Academy of Sciences of the U.S.S.R., 1957, p. xiii). On the other hand, The Great Soviet Encyclopedia (1969–1978) states: Society's principal productive forces are people—the participants in social production, or the workers and the toiling masses in general (K. Marx and F. Engels, vol. 46, part 1, p. 403; V. I. Lenin, Poln. sobr. soch., 5th ed., vol. 38, p. 359).
<…> Through the purposeful expenditure of labor power in labor activity, human beings “objectify” or embody themselves in the material world. The material elements of the productive forces (the means of production and the means of consumption) are the product of human reason and labor. The means of production include the means of labor, which transmit human influence to nature, and the objects of labor, to which human labor is applied. The most important components of the means of labor are the instruments of labor (for example, tools, devices, and machines). (From Productive forces. — The Great Soviet Encyclopedia: in 30 volumes. — Moscow: «Soviet Encyclopedia», 1969–1978.; English web-version of the article [2]; original version in Russian [3]) According to this, the productive forces have the following structure: People (human labour power) Means (the material elements of the productive forces) Means of production Means of labour Instruments of labour Objects of labour (also known as Subject of labour) Means of consumption In the USSR, Marxism served as the core philosophical paradigm and platform and continued to develop as a science, so different views, hypotheses, and approaches were widely discussed, tested, and refined over time. == Reification of technology == Other interpretations, sometimes influenced by postmodernism and the concept of commodity fetishism, have by contrast emphasized the reification of the powers of technology, said to occur by the separation of technique from the producers, and by falsely imputing human powers to technology as an autonomous force, the effect being a perspective of inevitable and unstoppable technological progress operating beyond any human control, and impervious to human choices. In turn, this is said to have the effect of naturalising and legitimating social arrangements produced by people, by asserting that they are technically inevitable.
The error here seems to be that social relations between people are confused and conflated with technical relations between people and things, and object relations between things; but this error is said to be a spontaneous result of the operation of a universal market and the process of commercialization. == Productivity == Marx's concept of productive forces also has some relevance for discussions in economics about the meaning and measurement of productivity. Modern economics theorises productivity in terms of the marginal product of the factors of production. Marx theorises productivity within the capitalist mode of production in terms of the social and technical relations of production, with the concept of the organic composition of capital and the value product. He suggests there is no completely neutral view of productivity possible; how productivity is defined depends on the values and interests people have. Thus, different social classes have different notions of productivity reflecting their own station in life, and giving rise to different notions of productive and unproductive labour. == Chinese contexts == In 1984, Deng Xiaoping declared "the fundamental task for the socialist stage is to develop the productive forces". For Deng, "only by constantly developing the productive forces can a country gradually become strong and prosperous, with a rising standard of living." Deng Xiaoping in 1988 described science and technology as the primary productive force.: 100  This idea was incorporated into Deng Xiaoping Theory.: 100  == References == Karl Marx, The Poverty of Philosophy Karl Marx, The German Ideology Karl Marx, "The Trinity Formula", chapter 48 in volume 3 of Marx's Capital. Josef V. Stalin, Dialectical and Historical Materialism. G. A. Cohen, Karl Marx's Theory of History: A Defence. Perry Anderson, Arguments within English Marxism. Isaac I. Rubin, Essays on Marx's Theory of value. 
Bertell Ollman, Alienation: Marx's Conception of Man in Capitalist Society. Kostas Axelos, Alienation, Praxis and Techne in the Thought of Karl Marx. Peter L. Berger, Pyramids of Sacrifice. John Kenneth Galbraith, The New Industrial State. Jacques Ellul, The Technological Society. Leo Kofler, Technologische Rationalität im Spätkapitalismus. Anwar Shaikh, "Laws of Production and Laws of Algebra: The Humbug Production Function", in The Review of Economics and Statistics, Volume 56(1), February 1974, pp. 115–120. Francisco Louça and Christopher Freeman, As Time Goes By; From the Industrial Revolutions to the Information Revolution. David F. Noble, Progress Without People: In Defense of Luddism Institute of Economics of the Academy of Sciences of the U.S.S.R. (1957). Political Economy: A Textbook. London: Lawrence and Wishart. Marx, Karl (1867 | 1967). Capital Vol. I. New York: International Publishers. Specific == External links ==
Wikipedia/Productive_forces
Psychoanalytic theory is the theory of personality organization and the dynamics of personality development relating to the practice of psychoanalysis, a clinical method for treating psychopathology. First laid out by Sigmund Freud in the late 19th century (particularly in his 1899 book The Interpretation of Dreams), psychoanalytic theory has undergone many refinements since his work. Psychoanalytic theory came to full prominence in the last third of the twentieth century as part of the flow of critical discourse regarding psychological treatments after the 1960s, long after Freud's death in 1939. Freud had ceased his analysis of the brain and his physiological studies and shifted his focus to the study of the psyche and to treatment using free association and the phenomena of transference. His study emphasized the recognition of childhood events that could influence the mental functioning of adults. His examination of the genetic and then the developmental aspects gave psychoanalytic theory its characteristics. == Definition == Psychoanalytic and psychoanalytical are used in English. The latter is the older term, and at first simply meant 'relating to the analysis of the human psyche.' But with the emergence of psychoanalysis as a distinct clinical practice, both terms came to describe that. Although both are still used, today the normal adjective is psychoanalytic. Psychoanalysis is defined in the Oxford English Dictionary as A therapeutic method, originated by Sigmund Freud, for treating mental disorders by investigating the interaction of conscious and unconscious elements in the patient's mind and bringing repressed fears and conflicts into the conscious mind, using techniques such as dream interpretation and free association. Also: a system of psychological theory associated with this method. == The beginnings == Freud began his studies on psychoanalysis in collaboration with Dr.
Josef Breuer, most notably in relation to the case study of Anna O. Anna O. was subject to a number of psychosomatic disturbances, such as not being able to drink out of fear. Breuer and Freud found that hypnosis was a great help in discovering more about Anna O. and her treatment. Freud frequently referred to the study on Anna O. in his lectures on the origin and development of psychoanalysis. Observations in the Anna O. case led Freud to theorize that the problems faced by hysterical patients could be associated with painful childhood experiences that could not be recalled. The influence of these lost memories shaped the feelings, thoughts, and behaviors of patients. These studies contributed to the development of the psychoanalytic theory. == The unconscious == In psychoanalytic theory, the unconscious mind consists of ideas and drives that have been subject to the mechanism of repression: anxiety-producing impulses in childhood are barred from consciousness, but do not cease to exist, and exert a constant pressure in the direction of consciousness. However, the content of the unconscious is only knowable to consciousness through its representation in a disguised or distorted form, by way of dreams and neurotic symptoms, as well as in slips of the tongue and jokes. The psychoanalyst seeks to interpret these conscious manifestations in order to understand the nature of the repressed. In psychoanalytic terms, the unconscious does not include all that is not conscious, but rather that which is actively repressed from conscious thought. Freud viewed the unconscious as a repository for socially unacceptable ideas, anxiety-producing wishes or desires, traumatic memories, and painful emotions put out of consciousness by the mechanism of repression. Such unconscious mental processes can only be recognized through analysis of their effects in consciousness. 
Unconscious thoughts are not directly accessible to ordinary introspection, but they are capable of partially evading the censorship mechanism of repression in a disguised form, manifesting, for example, as dream elements or neurotic symptoms. Dreams and symptoms are supposed to be capable of being "interpreted" during psychoanalysis, with the help of methods such as free association, dream analysis, and analysis of verbal slips. == Personality structure == In Freud's model, the psyche consists of three elements: the id, the ego, and the superego. The id is the aspect of personality that is driven by internal and basic drives and needs, such as hunger, thirst, and the drive for sex, or libido. The id acts in accordance with the pleasure principle. Due to the instinctual quality of the id, it is impulsive and unaware of the implications of actions. The superego is driven by the morality principle. It enforces the morality of social thought and action on an intrapsychic level. It employs morality, judging wrong and right and using guilt to discourage socially unacceptable behavior. The ego is driven by the reality principle. The ego seeks to balance the conflicting aims of the id and superego, by trying to satisfy the id's drives in ways that are compatible with reality. The ego is how we view ourselves: it is what we refer to as 'I' (Freud's word is the German ich, which simply means 'I'). == Defense mechanisms == The ego balances demands of the id, the superego, and of reality to maintain a healthy state of consciousness, where there is only minimal intrapsychic conflict. It thus reacts to protect the individual from stressors and from anxiety by distorting internal or external reality to a lesser or greater extent. This prevents threatening unconscious thoughts and material from entering the consciousness. 
The ten different defence mechanisms initially enumerated by Anna Freud are: repression, regression, reaction formation, isolation of affect, undoing, projection, introjection, turning against the self, reversal into the opposite, and sublimation. In the same work, however, she details other manoeuvres such as identification with the aggressor and intellectualisation that would later come to be considered defence mechanisms in their own right. Furthermore, this list has been greatly expanded upon by other psychoanalysts, with some authors claiming to enumerate in excess of one hundred defence mechanisms. == Psychology theories == === Psychosexual development === Psychosexual development is Freud's account of the development of the personality (psyche). It is a stage theory which holds that progress occurs through stages as the libido is directed to different body parts. The different stages, listed in order of progression, are Oral, Anal, Phallic (Oedipus complex), Latency, and Genital. The Genital stage is achieved if people meet all their needs throughout the other stages with enough available sexual energy. Individuals who do not meet their needs in a given stage become fixated, or "stuck", in that stage. === Neo-analytic theory === Freud's theory and work with psychosexual development led to the Neo-Analytic/Neo-Freudians, who also believed in the importance of the unconscious, dream interpretations, defense mechanisms, and the integral influence of childhood experiences, but who had objections to the theory as well. They do not support the idea that personality development stops at age 6; instead, they believe development spreads across the lifespan. They extended Freud's work to encompass more influence from the environment and the importance of conscious thought alongside the unconscious. The most important theorists are Erik Erikson (psychosocial development), Anna Freud, Carl Jung, Alfred Adler and Karen Horney, along with the school of object relations. 
Erikson's psychosocial development theory is based on eight stages of development. The stages are trust vs. mistrust, autonomy vs. shame, initiative vs. guilt, industry vs. inferiority, identity vs. confusion, intimacy vs. isolation, generativity vs. stagnation, and integrity vs. despair. These are important to psychoanalytic theory because they describe the different stages that people go through in life. Each stage has a major impact on life outcomes, since people face a conflict at each stage, and whichever route they take will have certain outcomes. == Criticisms == Some claim that the theory is lacking in empirical data and too focused on pathology. Other criticisms are that the theory lacks consideration of culture and its influence on personality. Psychoanalytic theory comes from Freud and is focused on childhood. This might be an issue, since many believe that studies of children can be inconclusive. One major concern is whether an observed personality trait will be a lifelong occurrence or whether the child will shed it later in life. == Application to the arts and humanities == Psychoanalytic theory is a major influence in Continental philosophy and in aesthetics in particular. Freud is sometimes considered a philosopher. The psychoanalyst Jacques Lacan, and the philosophers Michel Foucault and Jacques Derrida, have written extensively on how psychoanalysis informs philosophical analysis. Other philosophers, such as Alain Badiou and Rafael Holmberg, have argued that the meaning of psychoanalysis for philosophy was not immediately clear, but that the two have come to reciprocally define each other. When analyzing literary texts, psychoanalytic theory is sometimes used (often specifically with regard to the motives of the author and the characters) to reveal purported concealed meanings or to purportedly better understand the author's intentions. 
== See also == Freud's psychoanalytic theories
Wikipedia/Psychoanalytic_theory
Since 2020, efforts have been made, particularly by conservatives, to challenge critical race theory (CRT) in schools in the United States. Following the 2020 protests of the murders of Ahmaud Arbery and George Floyd, and the killing of Breonna Taylor, school districts began to introduce additional curricula and create diversity, equity, and inclusion (DEI) positions to address "disparities stemming from race, economics, disabilities and other factors". These measures were met with criticism from conservatives, particularly those in the Republican Party. Political scientist Jennifer Victor of George Mason University has described this as part of a cycle of backlash against progress toward racial equality and equity. Outspoken critics of critical race theory include U.S. president Donald Trump, conservative activist Christopher Rufo, various Republican officials, and conservative commentators on Fox News and right-wing talk radio shows. Movements have arisen from the controversy; in particular, the No Left Turn in Education movement, which has been described as one of the largest groups targeting school boards regarding critical race theory. In response to the assertion that CRT was being taught in public schools, dozens of states have introduced bills that limit what schools can teach regarding race, American history, politics, and gender. == Background == Critical race theory (CRT) is a cross-disciplinary intellectual and social movement of civil-rights scholars and activists who seek to examine the intersection of race, society, and law in the United States and to challenge mainstream American liberal approaches to racial justice. Conservative activism and efforts to censor curricula have resulted in the introduction of legislation banning the teaching of critical race theory in schools in many states across the United States. == United States == In the run-up to and aftermath of the 2020 U.S. 
presidential election, opposition to CRT was adopted as a campaign theme by president Donald Trump and various conservative commentators on Fox News and right-wing talk radio shows. In an interview on Fox in September 2020, conservative activist Christopher Rufo strongly denounced critical race theory. After appearing on Fox, Rufo was invited to a series of meetings with Trump. Trump then publicly denounced critical race theory in a speech on September 17, 2020, and announced the formation of the 1776 Commission to promote "patriotic education". Trump also issued an executive order directing agencies of the U.S. federal government to cancel funding for programs that mention "white privilege" or "critical race theory", on the basis that it constituted "divisive, un-American propaganda" and that it was "racist". The most outspoken critics of CRT include Trump, Rufo, and Republican Party officials. According to The Washington Post, CRT became a "flash point" in the culture wars in the United States, and is used as "a catchall phrase for nearly any examination of systemic racism" by conservative lawmakers and activists. === Elected officials === Trump's messaging during the 2020 U.S. presidential election campaign and its aftermath strongly attacked critical race theory. In December 2020, Trump appointed former Mississippi Governor Phil Bryant as a member of the 1776 Commission, which would produce a report in response to The New York Times' 1619 Project. On January 18, 2021, The 1776 Report was submitted in the form of a 41-page "national plan" for a "patriotic education" as a rebuttal to the 1619 Project. The commission also criticized what it alleged were CRT's theoretical underpinnings: Italian Marxist Antonio Gramsci, Herbert Marcuse, and the Frankfurt School, identity politics, and Howard Zinn. 
In contrast, the Trump White House described The 1776 Report as the "definitive chronicle of the American founding, a powerful description of the effect the principles of the Declaration of Independence have had on this Nation's history, and a dispositive rebuttal of reckless 're-education' attempts that seek to reframe American history around the idea that the United States is not an exceptional country but an evil one." The commission was dissolved on January 21 in an executive order signed by President Joe Biden on his first day in office. In August 2021, Republican senator Tom Cotton introduced an amendment to the 2021 budget reconciliation package that would prohibit the use of federal funds to promote CRT in pre-K programs and K-12 schools; it passed 50 to 49. Cotton's Stop CRT Act was introduced in July 2021. Rufo praised Cotton's actions, saying that the "fight against CRT has gone national" and Cotton was "leading the way." Jim Pillen won the Republican primary race for governor in the 2022 Nebraska gubernatorial election with an election campaign based on his opposition to critical race theory as well as his stance against abortion rights. Republican candidates Glenn Youngkin and Jason Miyares have also campaigned against CRT. On his first day as governor of Virginia, Youngkin signed executive orders barring the teaching of critical race theory in public schools. === Advocacy groups === Opposition to what was purported to be critical race theory has been adopted as a major theme by several conservative think tanks and pressure groups, including The Heritage Foundation, the Idaho Freedom Foundation, the American Legislative Exchange Council (ALEC), and organizations funded by the Koch brothers. Rufo, a senior fellow at the Manhattan Institute, has been one of the most active critics of CRT, saying that it is anti-American, poses "an existential threat to the United States", and had "pervaded every aspect of the federal government". 
In 2021 he wrote on Twitter, "The goal is to have the public read something crazy in the newspaper and immediately think 'critical race theory'" and "We have decodified the term and will recodify it to annex the entire range of cultural constructions that are unpopular with Americans." The advocacy group No Left Turn in Education has been described by NBC News as "one of the largest groups targeting school boards" regarding critical race theory. Media Matters for America has described No Left Turn in Education as one of the "leading groups fearmongering about the teaching of critical race theory in schools". The article said that the group and its founder, Elana Yaron Fishbein, frequently "used toxic and bigoted rhetoric on social media and in right-wing media to downplay CRT". In his 2022 book, How to Raise an Antiracist, American radical activist Ibram X. Kendi described how Fishbein created No Left Turn in Education in the summer of 2020. Fishbein had pulled her children out of Gladwyne Elementary School and sent the superintendent of Lower Merion School District (LMSD) an email on June 18, 2020, challenging the LMSD's decision to introduce additional lessons in "cultural proficiency" in the wake of the murder of George Floyd; as non-white students launched a campaign calling for "antiracist education", Fishbein "rejected the premise of antiracism, CRT, comprehensive sex education (CSE), and climate change". Her movement was relatively small initially, but took off when she began appearing as a guest on Tucker Carlson's prime-time show in September 2020. Fishbein describes the movement as a grassroots parental organization that uses veteran GOP activists' playbook to enact change on school boards. === Mass media === Conservative commentators on Fox News and right-wing talk radio shows have been strongly critical of CRT. 
Right-wing media outlets weaponized CRT in advance of the 2021 off-year and 2022 midterm elections. Media Matters for America reported that in June 2021, the Fox News network mentioned "critical race theory" a record high of 901 times. Fox News also promoted No Left Turn in Education. American cultural critic James A. Lindsay, known mainly for his role in the grievance studies affair, published Race Marxism: The Truth About Critical Race Theory and Practice in February 2022, in which he criticized critical race theory. In his book, he cited numerous extracts from texts on critical race theory as proof of CRT's flaws. In February 2021, William A. Jacobson, a conservative blogger and law professor at Cornell University, launched an online database of colleges across the United States teaching what he calls "critical race training", in order to enable parents to avoid those schools. === Public attitudes === The Economist and Reuters have conducted polls on how much the general public understands CRT, a "once-obscure academic concept", and they found that most people are unfamiliar with CRT and misunderstand it. Those who support CRT promote the idea that it is an "intellectual tool set developed by legal scholars for examining systemic racism". CRT originated in legal studies and was intended for legal scholars and academics. The Economist, based on YouGov data from 2021, said that 50% of Americans thought they had a "good idea" of what critical race theory was, and most people thought it was bad for America. However, The Economist asserted that the attitudes and beliefs of 70% of Americans actually "chime" with CRT, namely that racism is a significant social problem in the United States. The claim that CRT makes is that racism is "woven into the U.S. legal system and ingrained in its primary institutions", according to Reuters. 
Further, according to a survey conducted by The Economist, "a majority" of adult Americans believes that racism exists in the US Congress, in American legal structures, financial institutions, and in organizations and agencies, including the police force. A Reuters 2021 national opinion survey found that 57% of American adults said that they were not familiar with CRT. Of those who did claim familiarity with CRT, Reuters found that follow-up responses to specific questions about its tenets were informed by "misconceptions about critical race theory that have been largely circulating among conservative media outlets". When asked true-false questions about CRT history and teachings, only 5% of those who said they were familiar with CRT could provide the correct answers. === Public school boards === Republicans focused on banning CRT from being used in public schools across the United States. By mid-summer 2021 conservative groups were bringing the battle over CRT to school boards. In Southlake, Texas, in Tarrant County, a group of students of color at Carroll High School sought to address alleged racism after reported incidents dating back to at least 2018, and helped form the Southlake Anti-Racism Coalition (SARC). Dissenting parents formed the Southlake Families PAC and fought against them, endorsing a mayoral candidate and candidates for the school board and the city council. The PAC-endorsed candidates won with about 70% of the vote. When they voted down the call for incorporating cultural awareness into the curriculum, the PAC wrote on Twitter, "Critical Race Theory ain't coming here. This is what happens when good people stand up and say, not in my town, not on my watch." On their website, the PAC wrote: "CRT is a theoretical framework which views society as dominated by white supremacy and categorizes people as 'privileged' or 'oppressed' based on their skin color....It also teaches kids to hate America. 
Ask yourself who in their right mind would want this taught in public schools?" One of the first suspensions related to critical race theory took place on September 1, 2021, in Colleyville in Tarrant County. James Whitfield, the first Black principal of Colleyville Heritage High School, was suspended for allegedly promoting CRT; Whitfield repeatedly denied the allegation. Parents and dozens of teachers pleaded with the school district's board of trustees to reinstate Whitfield, a popular principal. Students held walkouts to support the principal. In the summer of 2020, Whitfield had written an open letter sharing his concerns over the murders of Ahmaud Arbery and George Floyd, and the killing of Breonna Taylor. In response, a former school board candidate, Stetson Clark, spoke up at a July school board meeting and accused Whitfield of promoting CRT. A few people attending the meeting called out, "Fire him." According to Dallas-based television station WFAA, by February 2022, some Christian pastors were fighting back against what they call the "far-right playbook to take down Texas public schools", saying it is "sheer destructive chaos" that has resulted in an "obscure term called Critical Race Theory...consum[ing] parents who believe it is being taught in Texas classrooms." It has also resulted in the resignations of several school district superintendents, including in the Dallas and Fort Worth Independent School Districts (ISDs). On June 16, 2022, ProPublica and Frontline published an article on how a group of vocal and organized anti-CRT White parents in the Cherokee County School District (CCSD) in Georgia had targeted Cecelia Lewis, a Black educator who in early 2021 had been offered CCSD's first diversity, equity and inclusion (DEI)-focused administrative position. Some of the parents researched and targeted the educator and succeeded in making her next job impossible, which led to her resignation. 
=== State-level legislation === In December 2020, the conservative nonprofit organization American Legislative Exchange Council (ALEC), which works with state legislators to draft and share model acts, facilitated a workshop entitled "Against Critical Theory's Onslaught: Reclaiming Education and the American Dream", with Rufo as a featured guest and 31 state legislators in attendance. ALEC provides a forum for collaboration on model bills, helping state legislators draft legislation that other states can also modify and introduce as bills. In early 2021, Republican-backed bills were introduced to restrict teaching about race, ethnicity, or slavery in public schools in several states, including Idaho, Iowa, Oklahoma, Tennessee and Texas. Several of these bills specifically mention "critical race theory" or single out the 1619 Project. CRT is taught at the university level, and public school teachers do not generally use the phrase "critical race theory" or its legal frameworks. In mid-April 2021, a bill was introduced in the Idaho Legislature that would effectively ban any educational entity from teaching or advocating "sectarianism", including critical race theory or other programs involving social justice. On May 4, 2021, the bill was signed into law by Governor Brad Little. On June 10, 2021, the Florida Board of Education unanimously voted to ban public schools from teaching critical race theory at the urging of governor Ron DeSantis. As of July 2021, 10 U.S. states had introduced bills or taken other steps that would restrict teaching critical race theory, and 26 others were in the process of doing so. In June 2021, the American Association of University Professors, the American Historical Association, the Association of American Colleges and Universities, and PEN America released a joint statement stating their opposition to such legislation, and by August 2021, 167 professional organizations had signed onto the statement. 
In August 2021, the Brookings Institution recorded that eight states—Idaho, Oklahoma, Tennessee, Texas, Iowa, New Hampshire, Arizona, and South Carolina—had passed regulation on the issue, though it also noted that none of the bills that passed, with the exception of Idaho's, actually contained the words "critical race theory". Brookings also noted that these laws often extend beyond race to discussions of gender. Timothy D. Snyder, historian and professor at Yale University, has called these new state laws memory laws: "government actions designed to guide public interpretation of the past". Early memory laws were intended to protect victim groups, for example from revisionist attempts by Holocaust deniers, but more recently they have been used by Russia to protect "the feelings of the powerful", then by Donald Trump's 1776 Report in January 2021, followed by Republican-led legislatures submitting these bills. Snyder called the Idaho version "Kafkaesque in its censorship: It affirms freedom of speech and then bans divisive speech." From January 2021 through February 2022, 35 states introduced 137 bills that limit what "schools can teach with regard to race, American history, politics, sexual orientation and gender identity". PEN America, an American nonprofit association of writers "dedicated to free speech" that is affiliated with the International Freedom of Expression Exchange, has been monitoring this legislation. Jeffrey Sachs, who is tracking the legislation, said that the "recent flurry" of bills means that the classroom has become a "minefield" for educators who want to teach about "slavery, Jim Crow laws or the Holocaust". An April 2022 article in Education Week said that 42 states had either introduced legislation or "taken other steps" to restrict "teaching critical race theory" and, "more broadly, limit how teachers can discuss racism and sexism in class." 
The first state to ban CRT was Idaho, where a bill introduced in mid-April 2021 was signed by Governor Brad Little on May 4. By May 2021, multiple state legislatures had introduced bills restricting the teaching of critical race theory (CRT) in public schools. Bills were passed in 14 states, all of which had both Republican-majority legislatures and Republican governors. Several of these bills specifically mention "critical race theory" or single out the New York Times' 1619 Project. The Texas state legislature, which is predominantly Republican, banned teachers from using the 1619 Project as part of coursework. The 1619 Project revisits the role of African Americans in American history by reframing the consequences of slavery in the United States. Texas House Bill 3979, which was authored by Senator Bryan Hughes (R-Mineola) and others, became law in December 2021, limiting the way in which Texas schools can teach about race and racism, as well as other issues. The bill declares that teachers should avoid teaching the following concepts: "(1) one race or sex is inherently superior to the other; (2) an individual is inherently racist, sexist or oppressive because of their own race or sex; (3) an individual should be treated unfairly because of their race or sex; (4) members of certain groups should not disrespect individuals on the basis of race, sex, or religion; (5) an individual's morality is based on their race or sex; (6) an individual bears responsibility for actions committed in the past by people of the same race or sex; (7) an individual should be ashamed or guilty because of their race or sex; (8) meritocracy is racist or sexist." Teachers also no longer have any obligation to undertake any training on how to deal with racism in the classroom. 
The list is based on the "divisive concepts" listed in Trump's September 28, 2020, executive order. On June 10, 2021, the Florida Board of Education banned CRT, which it characterized as the view that racism is embedded in American society and its legal systems in order to uphold white supremacy. The board's vote, which was encouraged by governor Ron DeSantis, was unanimous. In April 2022, when Republican Governor Brian Kemp signed bills banning CRT, he said that the state was protecting parents' fundamental rights to direct their children's education by preventing classrooms in Georgia from becoming "pawns to those who indoctrinate our kids with their partisan political agendas." === Religious organizations === Discourse around CRT has been divisive for many churches. The Southern Baptist Convention in a 2019 resolution stated that "[c]ritical race theory and intersectionality [...] can aid in evaluating a variety of human experiences". In 2020 six SBC seminary presidents declared critical race theory to be 'incompatible' with the SBC's statement of faith. Consequently, several pastors have left the SBC. In November 2020, student leaders of Cru, a student Christian organization, wrote a letter to Cru's president associating CRT with "unbiblical ideas that have led us to disunity". In an American Association of Christian Counselors talk in 2021 entitled "The Five Greatest Global Epidemics", evangelist Josh McDowell cited critical race theory as the first epidemic. He stated that he did not "believe Blacks, African Americans, and many other minorities have equal opportunity". On Twitter (now X) he later clarified and apologised for some of his comments, and maintained that "Racism has kept equality from being achieved in our nation". 
== Other countries == In countries outside of the United States, the teaching of critical race theory and white privilege has also been controversial. === Australia === In Australia, the conservative Coalition government supported a Senate motion by Pauline Hanson to ban the teaching of critical race theory in the Australian National Curriculum. The Senate motion occurred during a review of the Australian National Curriculum, and disregarded the fact that the curriculum document contained no reference to critical race theory. Such a pre-emptive move has been linked to transnational culture wars between the UK, US and Australia. In June 2021, following media reports that the proposed national curriculum was "preoccupied with the oppression, discrimination and struggles of Indigenous Australians", the Australian Senate approved a motion tabled by right-wing senator Pauline Hanson calling on the federal government to reject CRT, despite it not being included in the curriculum. Despite this, CRT is gaining increasing popularity in Australian academic circles as a lens for investigating Indigenous issues and studies, Islamophobia, and Black Africans' experiences. === France === In January 2022, the French minister of education Jean-Michel Blanquer called for "combat against an intellectual frame originating from American universities [...] which seeks to essentialise communities and identities, which is something that goes against our republican model". === The Netherlands === In 2021, 34 members of the 150-seat House of Representatives were in favor of removing critical race theory from the curriculum. Representative Caroline van der Plas said in a debate on 25 January 2023: We will need to continue to have the conversation about [our] history. We will have to keep looking for one another's stories. 
I am of the belief we should have a debate here about what is thought, felt and discussed in society [in the Netherlands], not based on what comes blowing over from [the United States]. What happens at universities in [the United States] blows over to [the Netherlands]. It lands in the student life of Amsterdam and in the grachtengordel. But outside of [the Randstad], outside the A10, people are not concerned with this at all. Those people are not occupied with terms such as white fragility, critical race theory or decolonization. So I would like to make a call to have the Dutch debate about slavery, and not the American debate. === United Kingdom === In the United Kingdom, educators were warned that teaching white privilege would be breaking the law. Conservatives within the UK government began to criticize CRT in late 2020. Equalities Minister Kemi Badenoch, who is of Nigerian descent, said during a parliamentary debate to mark Black History Month, "We do not want to see teachers teaching their pupils about white privilege and inherited racial guilt [...] Any school which teaches these elements of critical race theory, or which promotes partisan political views such as defunding the police without offering a balanced treatment of opposing views, is breaking the law." In an open letter, 101 writers of the Black Writers' Guild denounced Badenoch for remarks about popular anti-racism books such as White Fragility and Why I'm No Longer Talking to White People About Race, made in an interview in The Spectator, in which she said, "many of these books—and, in fact, some of the authors and proponents of critical race theory—actually want a segregated society". The anti-CRT group Color Us United promised to "battle" The Salvation Army over the latter's guidance pamphlet titled "Let's Talk About Racism".
In September 2023, an Employment Tribunal ruled that opposition to critical race theory, combined with support for Martin Luther King's attitude towards race, was a philosophical belief protected under the Equality Act 2010. == See also == Anti-bias curriculum Judicial aspects of race in the United States Racism in the United States == Notes == == References == == Works cited ==
Wikipedia/2020s_controversies_around_critical_race_theory
In social science and economics, corporate capitalism is a capitalist marketplace characterized by the dominance of hierarchical and bureaucratic corporations. == Overview == In the developed world, corporations dominate the marketplace, comprising 50% or more of all businesses. Businesses that are not corporations share the same bureaucratic structure as corporations, but there is usually a sole owner or a group of owners who are personally liable for bankruptcy and criminal charges relating to their business; corporations, by contrast, have limited liability. Corporations are usually called public entities or publicly traded entities when parts of their business can be bought in the form of shares on the stock market. This is done as a way of raising capital to finance the investments of the corporation. The shareholders appoint the executives of the corporation, who run the corporation via a hierarchical chain of power, in which the bulk of investor decisions are made at the top and have effects on those beneath them. == Criticisms == Corporate capitalism has been criticized for the amount of power and influence corporations and large business interest groups have over government policy, including the policies of regulatory agencies, and for their influence on political campaigns. Many social scientists have criticized corporations for failing to act in the interests of the people, and their existence seems to circumvent the principles of democracy, which assumes equal power relations between individuals in a society. Dwight D.
Eisenhower criticized the notion of the confluence of corporate power and de facto fascism, but nevertheless brought attention to the "conjunction of an immense military establishment and a large arms industry" (the military–industrial complex) in his 1961 Farewell Address to the Nation, and stressed "the need to maintain balance in and among national programs—balance between the private and the public economy, balance between cost and hoped for advantage". == See also == Capitalist mode of production Capitalist state Corporation Corporatocracy Criticism of capitalism == References == == External links == Vitali, Stefania; Glattfelder, James B.; Battiston, Stefano (October 26, 2011). Montoya, Alejandro Raul Hernandez (ed.). "The Network of Global Corporate Control". PLOS ONE. 6 (10). Public Library of Science (PLoS): e25995. arXiv:1107.5728. Bibcode:2011PLoSO...625995V. doi:10.1371/journal.pone.0025995. ISSN 1932-6203. PMC 3202517. PMID 22046252. "Revealed – the capitalist network that runs the world". New Scientist. Retrieved August 17, 2017.
Wikipedia/Corporate_capitalism
Nomothetic and idiographic are terms used by the Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each corresponding to a different intellectual tendency and to a different branch of academia. To say that Windelband supported a strict dichotomy between the branches, however, is a misunderstanding of his thought: for him, any branch of science and any discipline can be handled by both methods, as they offer two complementary points of view. Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena in general. Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena. == Use in the social sciences == The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subjects are unique individuals (the idiographic perspective) who nevertheless have certain general properties or behave according to general rules (the nomothetic perspective). Often, nomothetic approaches are quantitative and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. the Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D.A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M.T. Conner 1986, R.P.J. Freeman 1993 and O. Sharpe 2005).
Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data. In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899. Theodore Millon stated that when spotting and diagnosing personality disorders, first clinicians start with the nomothetic perspective and look for various general scientific laws; then when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits. In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?). In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context. 
== See also == Nomological == References == == Further reading == Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford. Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
Wikipedia/Nomothetic_and_idiographic
Karl Marx's theory of alienation describes the separation and estrangement of people from their work, their wider world, their human nature, and their selves. Alienation is a consequence of the division of labour in a capitalist society, wherein a human being's life is lived as a mechanistic part of a social class. The theoretical basis of alienation is that a worker invariably loses the ability to determine life and destiny when deprived of the right to think (conceive) of themselves as the director of their own actions; to determine the character of these actions; to define relationships with other people; and to own those items of value from goods and services, produced by their own labour. Although the worker is an autonomous, self-realised human being, as an economic entity this worker is directed to goals and diverted to activities that are dictated by the bourgeoisie—who own the means of production—in order to extract from the worker the maximum amount of surplus value in the course of business competition among industrialists. The theory, while found throughout Marx's writings, is explored most extensively in his early works, particularly the Economic and Philosophic Manuscripts of 1844, and in his later working notes for Capital, the Grundrisse. Marx's theory draws heavily from Georg Wilhelm Friedrich Hegel, and from The Essence of Christianity (1841) by Ludwig Feuerbach. Max Stirner extended Feuerbach's analysis in The Ego and its Own (1845), claiming that even the idea of 'humanity' is itself an alienating concept. Marx and Friedrich Engels responded to these philosophical propositions in The German Ideology (1845). 
== Two forms of alienation == In his writings from the early 1840s, Karl Marx uses the German words Entfremdung ("alienation" or "estrangement", derived from 'fremd', which means "alien") and Entäusserung ("externalisation" or "alienation", which alludes to the idea of relinquishment or surrender) to suggest an unharmonious or hostile separation between entities that naturally belong together. The concept of alienation has two forms: "subjective" and "objective". Alienation is "subjective" when human individuals feel "estranged" or do not feel at home in the modern social world. By this account, alienation consists in an individual's experience of his or her life as meaningless, or of himself or herself as worthless. "Objective" alienation, by contrast, makes no reference to the beliefs or feelings of human beings. Rather, human beings are objectively alienated when they are hindered from developing their essential human capacities. For Marx, objective alienation is the cause of subjective alienation: individuals experience their lives as lacking meaning or fulfilment because modern society does not promote the deployment of their human capacities. Marx derives this concept from Georg Wilhelm Friedrich Hegel, whom he credits with significant insight into the basic structure of the modern social world and how it is disfigured by alienation. Hegel's view is that, in the modern social world, objective alienation has already been vanquished, as the institutions of the rational or modern state enable individuals to fulfill themselves. Hegel believes that the family, civil society, and the political state facilitate people's actualization, both as individuals and as members of a community. Nonetheless, there still exists widespread subjective alienation, where people feel estranged from the modern social world, or do not recognize modern society as a home.
Hegel's project is not to reform or change the institutions of the modern social world, but to change the way in which society is understood by its members. Marx shares Hegel's belief that subjective alienation is widespread, but denies that the modern state enables individuals to actualize themselves. Marx instead takes widespread subjective alienation to indicate that objective alienation has not yet been overcome. == Dimensions of alienated labour == Marx stated that in a capitalist society, workers are alienated from their labour - they cannot decide on their own productive activities, nor can they use or own the value of what they produce.: 155  In the "Notes on James Mill" (1844), Marx explained alienation thus: Let us suppose that we had carried out production as human beings . . . In my production I would have objectified my individuality, its specific character, and, therefore, enjoyed not only an individual manifestation of my life during the activity, but also, when looking at the object, I would have the individual pleasure of knowing my personality to be objective, visible to the senses, and, hence, a power beyond all doubt. ... Our products would be so many mirrors in which we saw reflected our essential nature.: 86  In the first manuscript of the Economic and Philosophic Manuscripts of 1844, Marx identifies four interrelated dimensions of alienated labour: alienated labour alienates the worker, first, from the product of his labour; second, from his own activity; third, from what Marx, following Feuerbach, calls species-being; and fourth, from other human beings. === From a worker's product === Marx begins with an account of man's alienation from the products of his labour. In work, a worker objectifies his labour in the object that he produces. The objectification of a worker's labour is simultaneously its alienation. The worker loses control of the product to the owner of the means of production, the capitalist. 
The product's sale by the capitalist further reinforces the capitalist's power of wealth over the worker. The worker thus relates to the product as an alien object, which dominates and enslaves him. The products of his labour constitute a separate world of objects, which is alien to him. The worker creates an object, which appears to be his property. However, he now becomes its property. Where in earlier historical epochs, one person ruled over another, now the thing rules over the person, the product over the producer. The design of the product and how it is produced are determined, not by the producers who make it (the workers), nor by the consumers of the product (the buyers), but by the capitalist class who besides accommodating the worker's manual labour also accommodate the intellectual labour of the engineer and the industrial designer who create the product in order to shape the taste of the consumer to buy the goods and services at a price that yields a maximal profit. Aside from the workers having no control over the design-and-production protocol, alienation (Entfremdung) broadly describes the conversion of labour (work as an activity), which is performed to generate a use value (the product), into a commodity, which—like products—can be assigned an exchange value. That is, the capitalist gains control of the manual and intellectual workers and the benefits of their labour, with a system of industrial production that converts this labour into concrete products (goods and services) that benefit the consumer. Moreover, the capitalist production system also reifies labour into the "concrete" concept of "work" (a job), for which the worker is paid wages— at the lowest-possible rate— that maintain a maximum rate of return on the capitalist's investment capital; this is an aspect of exploitation. 
Furthermore, with such a reified system of industrial production, the profit (exchange value) generated by the sale of the goods and services (products) that could be paid to the workers is instead paid to the capitalist classes: the functional capitalist, who manages the means of production; and the rentier capitalist, who owns the means of production. === From a worker's productive activity === In the capitalist mode of production, the generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive motions that offer the worker little psychological satisfaction for "a job well done." By means of commodification, the labor power of the worker is reduced to wages (an exchange value); the psychological estrangement (Entfremdung) of the worker results from the unmediated relation between his productive labour and the wages paid to him for the labour. The worker is alienated from the means of production via two forms: wage compulsion and the imposed production content. The worker is bound to unwanted labour as a means of survival, labour is not "voluntary but coerced" (forced labour). The worker is only able to reject wage compulsion at the expense of their life and that of their family. The distribution of private property in the hands of wealth owners, combined with government enforced taxes compel workers to labour. In a capitalist world, our means of survival is based on monetary exchange, therefore we have no other choice than to sell our labour power and consequently be bound to the demands of the capitalist. The worker "[d]oes not feel content but unhappy, does not develop freely his physical and mental energy but mortifies his body and ruins his mind. The worker therefore only feels himself outside his work, and in his work feels outside himself;" "[l]abor is external to the worker,": 74  it is not a part of their essential being. 
During work, the worker is miserable, unhappy and drained of their energy; work "mortifies his body and ruins his mind." The production content, direction and form are imposed by the capitalist. The worker is controlled and told what to do: since they do not own the means of production, they have no say in production, for "labour is external to the worker, i.e. it does not belong to his essential being.": 74  A person's mind should be free and conscious; instead it is controlled and directed by the capitalist, for "the external character of labour for the worker appears in the fact that it is not his own but someone else's, that it does not belong to him, that in it he belongs, not to himself, but to another.": 74  This means he cannot freely and spontaneously create according to his own directive, as labour's form and direction belong to someone else. === From a worker's Gattungswesen (species-being) === The Gattungswesen ('species-being' or 'human nature') of individuals is not discrete (separate and apart) from their activity as a worker, and as such species-essence also comprises all of innate human potential as a person. Conceptually, in the term species-essence, the word species describes the intrinsic human mental essence that is characterised by a "plurality of interests" and "psychological dynamism," whereby every individual has the desire and the tendency to engage in the many activities that promote mutual human survival and psychological well-being, by means of emotional connections with other people and with society. The psychic value of a human consists in being able to conceive (think) of the ends of their actions as purposeful ideas, which are distinct from the actions required to realise a given idea. That is, humans are able to objectify their intentions by means of an idea of themselves as "the subject" and an idea of the thing that they produce, "the object."
Conversely, unlike a human being, an animal does not objectify itself as "the subject" nor its products as ideas, "the object," because an animal engages in directly self-sustaining actions that have neither a future intention, nor a conscious intention. Whereas a person's Gattungswesen does not exist independently of specific, historically conditioned activities, the essential nature of a human being is actualised when an individual— within their given historical circumstance— is free to subordinate their will to the internal demands they have imposed upon themselves by their imagination and not the external demands imposed upon individuals by other people. ==== Relations of production ==== Whatever the character of a person's consciousness (will and imagination), societal existence is conditioned by their relationships with the people and things that facilitate survival, which is fundamentally dependent upon cooperation with others, thus, a person's consciousness is determined inter-subjectively (collectively), not subjectively (individually), because humans are a social animal. In the course of history, to ensure individual survival societies have organised themselves into groups who have different, basic relationships to the means of production. One societal group (class) owned and controlled the means of production while another societal class worked the means of production and in the relations of production of that status quo the goal of the owner-class was to economically benefit as much as possible from the labour of the working class. In the course of economic development when a new type of economy displaced an old type of economy—agrarian feudalism superseded by mercantilism, in turn superseded by the Industrial Revolution—the rearranged economic order of the social classes favoured the social class who controlled the technologies (the means of production) that made possible the change in the relations of production. 
Likewise, there occurred a corresponding rearrangement of the human nature (Gattungswesen) and the system of values of the owner-class and of the working-class, which allowed each group of people to accept and to function in the rearranged status quo of production-relations. Despite the ideological promise of industrialisation—that the mechanisation of industrial production would raise the mass of the workers from a brutish life of subsistence existence to honourable work—the division of labour inherent to the capitalist mode of production thwarted the human nature (Gattungswesen) of the worker and so rendered each individual into a mechanistic part of an industrialised system of production, from being a person capable of defining their value through direct, purposeful activity. Moreover, the near-total mechanisation and automation of the industrial production system would allow the (newly) dominant bourgeois capitalist social class to exploit the working class to the degree that the value obtained from their labour would diminish the ability of the worker to materially survive. As a result of this exploitation, when the proletarian working-class become a sufficiently developed political force, they will effect a revolution and re-orient the relations of production to the means of production—from a capitalist mode of production to a communist mode of production. In the resultant communist society, the fundamental relation of the workers to the means of production would be equal and non-conflictual because there would be no artificial distinctions about the value of a worker's labour; the worker's humanity (Gattungswesen) thus respected, men and women would not become alienated. In the communist socio-economic organisation, the relations of production would operate the mode of production and employ each worker according to their abilities and benefit each worker according to their needs. 
Hence, each worker could direct their labour to productive work suitable to their own innate abilities, rather than be forced into a narrowly defined, minimum-wage "job" meant to extract maximal profit from individual labour as determined by and dictated under the capitalist mode of production. In the classless, collectively-managed communist society, the exchange of value between the objectified productive labour of one worker and the consumption benefit derived from that production will not be determined by or directed to the narrow interests of a bourgeois capitalist class, but instead will be directed to meet the needs of each producer and consumer. Although production will be differentiated by the degree of each worker's abilities, the purpose of the communist system of industrial production will be determined by the collective requirements of society, not by the profit-oriented demands of a capitalist social class who live at the expense of the greater society. Under the collective ownership of the means of production, the relation of each worker to the mode of production will be identical and will assume the character that corresponds to the universal interests of the communist society. The direct distribution of the fruits of the labour of each worker to fulfil the interests of the working class—and thus to an individual's own interest and benefit— will constitute an un-alienated state of labour conditions, which restores to the worker the fullest exercise and determination of their human nature. === From other workers === Capitalism reduces the labour of the worker to a commercial commodity that can be traded in the competitive labour-market, rather than as a constructive socio-economic activity that is part of the collective common effort performed for personal survival and the betterment of society. 
In a capitalist economy, the businesses who own the means of production establish a competitive labour-market meant to extract from the worker as much labour (value) as possible in the form of capital. The capitalist economy's arrangement of the relations of production provokes social conflict by pitting worker against worker in a competition for "higher wages", thereby alienating them from their mutual economic interests; the effect is a false consciousness, which is a form of ideological control exercised by the capitalist bourgeoisie through its cultural hegemony. Furthermore, in the capitalist mode of production the philosophic collusion of religion in justifying the relations of production facilitates the realisation and then worsens the alienation (Entfremdung) of the worker from their humanity; it is a socio-economic role independent of religion being "the opiate of the masses". == Philosophical significance and influences == The concept of alienation does not originate with Marx. Marx's two main influences in his use of the term are Georg Wilhelm Friedrich Hegel and Ludwig Feuerbach. === Georg Wilhelm Friedrich Hegel === For Hegel, alienation consists in an "unhappy consciousness". By this term, Hegel means a misunderstood form of Christianity, or a Christianity that hasn't been interpreted according to Hegel's own pantheism. In The Phenomenology of Spirit (1807), Hegel described the stages in the development of the human Geist ('spirit'), by which men and women progress from ignorance to knowledge of the self and of the world. Developing Hegel's human-spirit proposition, Marx said that those poles of idealism— "spiritual ignorance" and "self-understanding"— are replaced with material categories, whereby "spiritual ignorance" becomes "alienation" and "self-understanding" becomes man's realisation of his Gattungswesen (species-essence). 
=== Ludwig Feuerbach === The middle-period writings of Ludwig Feuerbach, where he critiques Christianity and philosophy, are preoccupied with the problem of alienation. In these works, Feuerbach argues that an inappropriate separation of individuals from their essential human nature is at the heart of Christianity. Feuerbach believes the alienation of modern individuals consists in their holding false beliefs about God. God is not an objective being, but is instead a projection of man's own essential predicates. Christian belief entails the sacrifice, the practical denial or repression, of essential human characteristics. Feuerbach characterises his own work as having a therapeutic goal – healing the painful separation at the heart of alienation. === Entfremdung and the theory of history === In Part I: "Feuerbach – Opposition of the Materialist and Idealist Outlook" of The German Ideology (1846), Karl Marx said the following: Things have now come to such a pass that the individuals must appropriate the existing totality of productive forces, not only to achieve self-activity, but also, merely, to safeguard their very existence. That humans psychologically require the life activities that lead to their self-actualisation as persons remains a consideration of secondary historical relevance because the capitalist mode of production eventually will exploit and impoverish the proletariat until compelling them to social revolution for survival. Yet, social alienation remains a practical concern, especially among the contemporary philosophers of Marxist humanism. In The Marxist-Humanist Theory of State-Capitalism (1992), Raya Dunayevskaya discusses and describes the existence of the desire for self-activity and self-actualisation among wage-labour workers struggling to achieve the elementary goals of material life in a capitalist economy.
=== Entfremdung and social class === In Chapter 4 of The Holy Family (1845), Marx said that capitalists and proletarians are equally alienated, but that each social class experiences alienation in a different form: The propertied class and the class of the proletariat present the same human self-estrangement. But the former class feels at ease and strengthened in this self-estrangement, it recognises estrangement as its own power, and has in it the semblance of a human existence. The class of the proletariat feels annihilated, this means that they cease to exist in estrangement; it sees in it its own powerlessness and in the reality of an inhuman existence. It is, to use an expression of Hegel, in its abasement, the indignation at that abasement, an indignation to which it is necessarily driven by the contradiction between its human nature and its condition of life, which is the outright, resolute and comprehensive negation of that nature. Within this antithesis, the private property-owner is therefore the conservative side, and the proletarian the destructive side. From the former arises the action of preserving the antithesis, from the latter the action of annihilating it. == Criticism == In discussion of "aleatory materialism" (matérialisme aléatoire) or "materialism of the encounter," French philosopher Louis Althusser criticised a teleological (goal-oriented) interpretation of Marx's theory of alienation because it rendered the proletariat as the subject of history; an interpretation tainted with the absolute idealism of the "philosophy of the subject," which he criticised as the "bourgeois ideology of philosophy". 
== See also == Commodity fetishism – Concept in Marxist analysis Critique of political economy – Social critique Cultural evolution – Evolutionary theory of social change Theories of class consciousness and reification by György Lukács The Society of the Spectacle – 1967 book by Guy Debord Disenchantment – Cultural rationalization and devaluation of religion apparent in modern society == Footnotes == == References == == Further reading == == External links == Warburton, Nigel. 19 January 2015. "Karl Marx on Alienation," narrated by Gillian Anderson. A History of Ideas. UK: BBC Radio 4.
Wikipedia/Marx's_theory_of_alienation
Methodenstreit (German for "method dispute"), in intellectual history beyond German-language discourse, was an economics controversy that commenced in the 1880s and persisted for more than a decade, between that field's Austrian School and the (German) Historical School. The debate concerned the place of general theory in social science and the use of history in explaining the dynamics of human action. It also touched on policy and political issues, including the roles of the individual and state. Nevertheless, methodological concerns were uppermost and some early members of the Austrian School also defended a form of welfare state, as prominently advocated by the Historical School. When the debate opened, Carl Menger developed the Austrian School's standpoint, and Gustav von Schmoller defended the approach of the Historical School. (In German-speaking countries, the original of this Germanism is not specific to the one controversy, which is likely to be specified as Methodenstreit der Nationalökonomie, i.e. "Methodenstreit of national economics".) == History == === Background === The Historical School contended that economists could develop new and better social laws from the collection and study of statistics and historical materials, and distrusted theories not derived from historical experience. Thus, the German Historical School focused on specific dynamic institutions as the largest variable in changes in political economy. The Historical School were themselves reacting against materialist determinism, the idea that human action could, and would (once science advanced enough), be explained as physical and chemical reactions.
The Austrian School, beginning with the work of Carl Menger in the 1860s, argued against this in Grundsätze der Volkswirtschaftslehre (English title: Principles of Economics), contending that economics was the work of philosophical logic and could only ever be about developing rules from first principles—seeing human motives and social interaction as far too complex to be amenable to statistical analysis—and purporting to deduce universally valid precepts from human actions. === Menger and the German Historical School === The first move was when Carl Menger attacked Schmoller and the German Historical School, in his 1883 book Investigations into the Method of the Social Sciences, with Special Reference to Political Economics (Untersuchungen über die Methode der Socialwissenschaften, und der politischen Ökonomie insbesondere). Menger thought the best method of studying economics was through reason and finding general theories which applied to broad areas. Menger, as did the other Austrians, concentrated upon the subjective, atomistic nature of economics. He argued that the foundations for economics were built upon the assumptions of self-interest, evaluation on the margin, and incomplete knowledge. He said aggregative, collective ideas could not have adequate foundation unless they rested upon individual components. The direct attack on the German Historical School led Schmoller to respond quickly with an unfavourable and quite hostile review of Menger's book. Menger accepted the challenge and replied in a passionate pamphlet, written in the form of letters to a friend, in which he (according to Hayek) "ruthlessly demolished Schmoller's position". The encounter between the masters was soon imitated by their disciples. A degree of hostility not often equaled in scientific controversy developed.
== Consequences == The term "Austrian school of economics" came into existence as a result of the Methodenstreit, when Schmoller used it in an unfavourable review of one of Menger's later books, intending to convey an impression of backwardness and obscurantism of Austria compared to the more modern Prussians. A serious consequence of the hostile debate was that Schmoller went so far as to declare publicly that members of the "abstract" school were unfit to fill a teaching position in a German university, and his influence was quite sufficient to make this equivalent to a complete exclusion of all adherents to Menger's doctrines from academic positions in Germany. The result was that even thirty years after the close of the controversy Germany was still less affected by the new ideas now spreading elsewhere, than any other academically important country in the world. == See also == Economic methodology Philosophy of mathematics Philosophy of science Positive economics Unreasonable ineffectiveness of mathematics Positivismusstreit Werturteilsstreit == References == == External links == Principles of Economics by Carl Menger Investigations into the Method of the Social Sciences with Special Reference to Economics by Carl Menger Epistemological Problems of Economics by Ludwig von Mises The Historical Setting of the Austrian School of Economics by Ludwig von Mises
Wikipedia/Methodenstreit
Queer theory is a field of post-structuralist critical theory that emerged in the early 1990s out of queer studies (formerly often known as gay and lesbian studies) and women's studies. The term "queer theory" is broadly associated with the study and theorization of gender and sexual practices that exist outside of heterosexuality, and which challenge the notion that heterosexuality is what is normal. Following social constructivist developments in sociology, queer theorists are often critical of what they consider essentialist views of sexuality and gender. Instead, they study those concepts as social and cultural phenomena, often through an analysis of the categories, binaries, and language in which they are said to be portrayed. Scholars associated with the development of queer theory are French post-structuralist philosopher Michel Foucault, and American feminist authors Gloria Anzaldúa, Eve Kosofsky Sedgwick, and Judith Butler. == History == Informal use of the term "queer theory" began with Gloria Anzaldúa and other scholars in the 1990s, themselves influenced by the work of French post-structuralist philosopher Michel Foucault, who viewed sexuality as socially constructed and rejected identity politics. Queer theory's roots can also be traced back to activism, with the reclaiming of the derogatory term "queer" as an umbrella term for those who do not identify with heteronormativity in the 1980s. This would continue on in the 1990s, with Queer Nation's use of "queer" in their protest chants, such as "We're here! We're queer! Get used to it!" Teresa de Lauretis organized the first queer theory conference in 1990. According to David Halperin, an early queer theorist, de Lauretis' usage was somewhat controversial at first, as she chose to combine the word "queer" which was just starting to be used in a "gay-affirmative sense by activists, street kids, and members of the art world," and the word "theory" which was seen as very academically weighty. 
In the early 1990s, the term started to become legitimized in academia. As an academic discipline, queer theory itself was developed by American academics Judith Butler at the University of California, Berkeley, and Eve Kosofsky Sedgwick at Duke University. Other early queer theorists include Michael Warner, Lauren Berlant, and Adrienne Rich. Feminist literary criticism laid groundwork by linking gender and textual interpretation. Foundational works like Sedgwick's Epistemology of the Closet (1990) drew on literary and philosophical traditions to examine the homo/heterosexual binary. Sedgwick, D. A. Miller, Leo Bersani, and other queer literary critics have analyzed themes such as the closet, shame, and power in narratives. == Definition == The term "queer" itself intentionally remains loosely defined in order to encompass the difficult-to-categorize spectrum of gender, sexuality and romantic attraction. Similarly, queer theory remains difficult to objectively define, as academics from various disciplines have contributed varying understandings of the term. At its core, queer theory relates to queer people, their lived experience and how their lived experience is culturally or politically perceived, specifically referring to the marginalization of queer people. This thinking is then applied to various fields of study. Queer theory and politics necessarily celebrate transgression in the form of visible difference from norms. These 'norms' are then exposed to be norms, not natures or inevitabilities. Gender and sexual identities are seen, in much of this work, to be demonstrably defiant definitions and configurations. In an influential essay, Michael Warner argued that queerness is defined by what he called "heteronormativity"; those ideas, narratives and discourses which suggest that heterosexuality is the default, preferred, or normal mode of sexual orientation.
Warner stated that while many thinkers had been theorising sexuality from a non-heterosexual perspective for perhaps a century, queerness represented a distinctive contribution to social theory for precisely this reason. Lauren Berlant and Warner further developed these ideas in their seminal essay, "Sex in Public". According to Warner, critics such as Edward Carpenter, Guy Hocquenghem and Jeffrey Weeks had already emphasised the "necessity of thinking about sexuality as a field of power, as a historical mode of personality, and as the site of an often critical utopian aim".: 3  Whereas the terms "homosexual", "gay" or "lesbian" which they used signified particular identities with stable referents (i.e. to a certain cultural form, historical context, or political agenda whose meanings can be analysed sociologically), the word "queer" is instead defined in relation to a range of practices, behaviours and issues that have meaning only in their shared contrast to categories which are alleged to be "normal". Such a focus highlights the indebtedness of queer theory to the concept of normalisation found in the sociology of deviance, particularly through the work of Michel Foucault, who studied the normalisation of heterosexuality in his work The History of Sexuality. In The History of Sexuality, Foucault argues that repressive structures in society police the discourse concerning sex and sexuality, which is thus relegated to the private sphere. As a result, heterosexuality is normalized while homosexuality (or queerness) is stigmatized. Foucault then points out that this imposed secrecy has made sexuality a phenomenon that needs to be frequently confessed and examined. Foucault's work is particularly important to queer theory in that he describes sexuality as a phenomenon that "must not be thought of as a kind of natural given which power tries to hold in check" but rather "a historical construct."
Judith Butler extends this idea of sexuality as a social construct to gender identity in Gender Trouble: Feminism and the Subversion of Identity, where they theorize that gender is not a biological reality but rather something that is performed through repeated actions. Because this definition of queerness does not have a fixed reference point, Judith Butler has described the subject of queer theory as a site of "collective contestation". They suggest that "queer" as a term should never be "fully owned, but always and only redeployed, twisted, queered from a prior usage and in the direction of urgent and expanding political purposes". While proponents argue that this flexibility allows for the constant readjustment of queer theory to accommodate the experiences of people who face marginalisation and discrimination on account of their sexuality and gender, critics allege that such a "subjectless critique", as it is often called, runs the risk of abstracting cultural forms from their social structure, political organization, and historical context, reducing social theory to a mere "textual idealism". == Analysis of same-sex partnerships == Queer theory deals with the micro level (the identity of the individual person), the meso level (the individual in their immediate groups such as family, friends, and work), and the macro level (the larger context of society, culture, politics, policies and law). Accordingly, queer theory not only examines the communities surrounding the queer people, but also the communities they form. Same-sex living communities have a significant priority in the formation of a queer theory. The standard work of Andreas Frank, Committed Sensations, highlights comprehensively the life situation of coming out, homosexuality and same-sex communities to the millennium. == Queer theory and communication studies == As an interdisciplinary concept, queer theory is applied to different disciplines, including communication studies and research. 
It was introduced to the field of communication through Jeffrey Ringer's Queer Words, Queer Images: Communication and the Construction of Homosexuality in 1994, which offered a queer perspective to communication research findings. Queer theory has also contributed to communication research by challenging the heteronormative society's notions of what's considered deviant and taboo—what is considered normative and non-normative. == Queering family communication == Queer theory's interdisciplinarity is evident in its application in and critique of family communication. One of the criticisms regarding family communication is its focus on "mainstream" families, often focusing on heterosexual parents and children. Although more studies on family communication have started to include nontraditional families, critical rhetorical scholar Roberta Chevrette argues that researchers continue to look at nontraditional families, including families with openly queer members, from a heteronormative lens. That is, when studying LGBTQ+ families, many scholars continue to compare these families to their cis-heterosexual counterparts' norms. As Chevrette writes, "Queering family communication requires challenging ideas frequently taken for granted and thinking about sexual identities as more than check marks." Chevrette describes four ways that scholars can "queer" family communication: (1) revealing the biases and heteronormative assumptions in family communication; (2) challenging the treatment of sexuality and queerness as a personal and sensitive topic reserved for the private sphere rather than the public; (3) interpreting identity as a socially constructed phenomenon and sexuality as being fluid in order to expose the ways gender roles and stereotypes are reinforced by notions of identity and sexuality as being fixed; and (4) emphasizing intersectionality and the importance of studying different identity markers in connection with each other. 
== Queer theory in philosophy == In the field of philosophy, queer theory falls under an adjacent category to critical disability theory and feminist theory for their similar approaches in defending communities discriminated against by questioning a societal status quo. Although all three are distinct fields of study, they all work towards a common activist goal of inclusion. Critical disability theory is a comprehensive term that is used to observe, discuss and question how people marginalized due to a difference in their social context (such as physical or mental disability as well as any other difference that would cause them to be othered in society) are treated in society. == Lens for power == Queer theory is the lens used to explore and challenge how scholars, activists, artistic texts, and the media perpetuate gender- and sex-based binaries, and its goal is to undo hierarchies and fight against social inequalities. Due to controversy about the definition of queer, including whether the word should even be defined at all or should be left deliberately open-ended, there are many disagreements and often contradictions within queer theory. In fact, some queer theorists, like Berlant and Warner and Butler, have warned that defining it or conceptualizing it as an academic field might only lead to its inevitable misinterpretation or destruction, since its entire purpose is to critique academia rather than become a formal academic domain itself. Fundamentally, queer theory does not construct or defend any particular identity, but instead, grounded in post-structuralism and deconstruction, it works to actively critique heteronormativity, exposing and breaking down traditional assumptions that sexual and gender identities are presumed to be heterosexual or cisgender. == Intersectionality and queer theory == The concept of queer theory has emerged from multiple avenues that challenge the definition of normality.
However, institutions often tend to prioritize one marginalized group over others, resulting in limited social change. As activist Charlene A. Carruthers describes in her book Unapologetic, it is important to imagine "alternative economics, alternative family structures, or something else entirely" from an imagination of cross-sectional communities – such as her stance as a Black queer feminist. Imagination is a crucial aspect of queer theory. It is a tool for creating new worlds that are currently not viable for under-represented or oppressed communities, prompting a transformative stance to current norms. An intersectional approach decentralizes queer theory and thus shifts power to a more radical set of narratives, aligning with the definition of queerness itself: challenging prominent, white, and heterosexual discourses. According to critical theorist Daniel J. Gil De Lamadrid, intersectionality can be used to examine how queer identity is racialized as normatively white, and the intersectional stigma and resistance that comes from such racialization. Intersectionality recognizes that complex identities and social categories form from "structured multiple oppression." Therefore, the personal identities of intersectional people are inherently political. Groups such as the Human Rights Campaign have previously employed this understanding in formal rights advocacy for queer legal protection. However, queer theorists and activists like Lisa Duggan have noted that such groups prioritize the voices of some groups over others by focusing on specific identities like "gay middle-class men" rather than complex and intersectional ones. They have emphasized the importance of intersectionality in queer discourse and activism. New directions in queer intersectionality include Jones' "euphorias" studies showing intersectional differences in diverse LGBTIQA+ peoples' experiences of happiness. 
Specifically, Jones found that happiness was often used as a reward for performance of intersectional normativity; those who were lesbian and yet also cisgender and mothers were more likely to experience euphoric moments even in discriminatory settings. However, LGBTIQA+ people who had "other othered" identities such as disabilities were less likely to report experiencing euphoria. Jones argues being euphorically queer should not presume typical happiness narrative arcs and should make room for negativity; queer diverse people will need to critique society and critique critique of society but can still be euphoric about being queer and intersectional. == Criticisms == A recurring criticism of queer theory, which often employs sociological jargon, is that it is written, according to Brent Pickett, by a "small ideologically oriented elite" and possesses an evident social class bias. It is not only class biased but also, in practice, only really referred to at universities and colleges. Likewise, Ros Coward writing in The Guardian, says that advocates of queer theory are like other elite academics that engage in obscurantism with their use of jargon to protect their field from outside criticism and fail to deconstruct their own role and perspective as academics at elite institutions. According to Joshua Gamson, due to its engagement in social deconstruction, it is nearly impossible for queer theory to talk about a "lesbian" or "gay" subject, as all social categories are denaturalized and reduced to discourse. Thus, according to Adam Isaiah Green, a professor at the University of Toronto, queer theory can only examine discourses and not subjectivities. Green further argues that queer theory might be doing a disservice to the study of queer people for, among other reasons, unduly doing away with categories of sexuality and gender that had an explanatory role in their original context. 
He argues that for instance the lesbians documented in Cherry Grove, Fire Island chose to identify specifically as either "ladies", "dykes" or "postfeminists" for generational, ethnic and class reasons. While they have a shared sexuality, flattening their diversity of identity, culture and expression to "the lesbian community" might be undue and hide the social contingencies that queer theory purports to foreground (race, class, ethnicity, gender). Rosemary Hennessy argues that queer theory's focus on cultural and discursive representations of sexuality often ignores or minimizes the materialist feminist emphasis on capitalism and patriarchy. While queer theory critiques identity as a fixed category, it may fail to account for how systemic structures shape sexual identities and oppression beyond cultural representations. For some feminists, queer theory undermines feminism by blurring the boundaries between gendered social classes, which it explains as personal choices rather than consequences of social structures. Bruno Perreau, the Cynthia L. Reed Professor of French Studies at the Massachusetts Institute of Technology, discusses various facets of the French response to queer theory, from the mobilization of activists and the seminars of scholars to the emergence of queer media and translations. Perreau sheds new light on events around gay marriage in France, where opponents of the 2013 law saw queer theory as a threat to the French family. Perreau questions the return of French Theory to France from the standpoint of queer theory, thereby exploring the way France conceptualizes America. By examining mutual influences across the Atlantic, he seeks to reflect on changes in the idea of national identity in France and the United States, offering insight on recent attempts to theorize the notion of "community" in the wake of Maurice Blanchot's work.
Perreau offers in his book a theory of minority politics that considers an ongoing critique of norms as the foundation of citizenship, in which a feeling of belonging arises from regular reexamination of it. In their work Cynical Theories, scholars Helen Pluckrose and James A. Lindsay claim queer theory has a largely unscientific view on biology and objective reality as an intentional feature. They state that, "queer theory is a political project and its aim is to disrupt". As such within it, "there can be absolutely no quarter given to any discourse—even matters of scientific fact—that could be interpreted as promoting biological essentialism." Thus, according to them, queer theory knowingly misrepresents biological facts and research, especially on intersex people, to conflate them with completely unrelated issues concerning constructed gender identities such as transgender. == Queer theory in online discourse == One of the ways queer theory has made its way into online discourse is through the popularity of Adrienne Rich's 1980 essay "Compulsory Heterosexuality and Lesbian Existence". Rich's theory regarding compulsory heterosexuality (or comp-het)—the socio-cultural expectation that women must be attracted to men and desire a romantic heterosexual relationship—inspired the creation of the "lesbian masterdoc", a 30-page Google Document originally written in 2018 by Anjeli Luz, a Tumblr user who was in the midst of questioning her own sexuality as a teenager. Katelyn McKenna and John Bargh's studies of online groups consisting of marginalized groups found an interesting phenomenon called "identity demarginalization" — how participation in a group consisting of people with shared marginalized identity can lead to a higher level of self-acceptance, which could lead to eventually coming out to their friends and family. Online groups and interactions also contribute to normalizing queerness and challenging heteronormativity by serving as a networked counterpublic. 
Sarah Jackson, Moya Bailey, and Brooke Foucault Welles' discourse analysis of the hashtag #GirlsLikeUs shows how trans women have used the hashtag to build community in ways that normalize being trans and offer counter-narratives to the often stereotypical and caricatured portrayal of trans people's lives in popular mainstream media. == See also == Queer archaeology Queer of color critique Queer theology Quare theory Neuroqueer theory == References == == External links == Media related to Queer theory at Wikimedia Commons
Wikipedia/Queer_theory
Theoretical linguistics is a term in linguistics that, like the related term general linguistics, can be understood in different ways. Both can be taken as a reference to the theory of language, or the branch of linguistics that inquires into the nature of language and seeks to answer fundamental questions as to what language is, or what the common ground of all languages is. The goal of theoretical linguistics can also be the construction of a general theoretical framework for the description of language. Another use of the term depends on the organisation of linguistics into different sub-fields. The term 'theoretical linguistics' is commonly juxtaposed with applied linguistics. This perspective implies that the aspiring language professional, e.g. a student, must first learn the theory, i.e. the properties of the linguistic system, or what Ferdinand de Saussure called internal linguistics. This is followed by practice, or studies in the applied field. The dichotomy is not fully unproblematic because language pedagogy, language technology and other aspects of applied linguistics also include theory. Similarly, the term general linguistics is used to distinguish core linguistics from other types of study. However, because college and university linguistics is largely distributed among the institutes and departments of a relatively small number of national languages, some larger universities also offer courses and research programmes in 'general linguistics' which may cover exotic and minority languages, cross-linguistic studies and various other topics outside the scope of the main philological departments. == Fields of linguistics proper == When the concept of theoretical linguistics is taken to refer to core or internal linguistics, it means the study of the parts of the language system. This traditionally means phonology, morphology, syntax and semantics. Pragmatics and discourse can also be included; delimitation varies between institutions.
Furthermore, Saussure's definition of general linguistics consists of the dichotomy of synchronic and diachronic linguistics, thus including historical linguistics as a core issue. == Linguistic theories == There are various frameworks of linguistic theory which include a general theory of language and a general theory of linguistic description. Current humanistic approaches include theories within structural linguistics and functional linguistics. In addition to the humanistic approaches of structural linguistics and functional linguistics, the field of theoretical linguistics encompasses other frameworks and perspectives. Evolutionary linguistics is one such framework that investigates the origins and development of language from an evolutionary and cognitive perspective. It incorporates various models within generative grammar, which seeks to explain language structure through formal rules and transformations. Cognitive linguistics and cognitive approaches to grammar, on the other hand, focus on the relationship between language and cognition, exploring how language reflects and influences our thought processes. == See also == Theoretical Linguistics – journal Course in General Linguistics == References ==
Wikipedia/Linguistic_theory
Critical historiography approaches the history of art, literature or architecture from a critical theory perspective. Critical historiography has been used by various scholars in recent decades to emphasize the ambiguous relationship between the past and the writing of history. Specifically, it is used as a method by which one understands the past and can be applied in various fields of academic work. == Concept == While historiography is concerned with the theory and history of historical writing, including the study of the developmental trajectory of history as a discipline, critical historiography addresses how historians or historical authors have been influenced by their own groups and loyalties. Here, there is an assumption that historical sources should not be taken at face value and have to be examined critically according to scholarly criteria. A critique of historiography warns against a tendency to focus on past greatness so that it opposes the present, as demonstrated in the emphasis on dead traditions that paralyze present life. This view holds that critical historiography can also condemn the past and reveal the effects of repression and mistaken possibilities, among others. For instance, there is the case of the counter discourse to the so-called hegemonic epistemologies that previously defined and dominated the Black experience in America. Some authors trace the origin of this field to nineteenth-century Germany, particularly to Leopold von Ranke, one of the proponents of the concept of Wissenschaft, meaning "critical history" or "scientific history", which viewed historiography as a rigorous, critical inquiry. For instance, in the application of Wissenschaft to the study of Judaism, it is maintained that there is an implied criticism of the stand of those advocating Orthodoxy. It is said to reveal the tendency of nationalist historians to favor the pious affirmation of the orthodox in attempts to restore pride in Jewish history.
A type of critical historiography can be seen in the work of Harold Bloom. In A Map of Misreading, Bloom argued that poets should not be seen as autonomous agents of creativity, but rather as part of a history that transcends their own production and that to a large degree gives it shape. The historian can try to stabilize poetic production so as to better understand the work of art, but can never completely extract the historical subject from history. Also among those who argue for the primacy of historiography is the architectural historian Mark Jarzombek. The focus of this work is on disciplinary production rather than poetic production, as was the case with Bloom. Since psychology – which became a more or less official science in the 1880s – is now so pervasive and yet so difficult to pinpoint, Jarzombek argued, the traditional dualism of subjectivity and objectivity has become not only highly ambiguous, but also the site of a complex negotiation that needs to take place between the historian and the discipline. The issue, for Jarzombek, is particularly poignant in the fields of art and architectural history, the principal subject of the book. Pierre Nora's notion of "ego-histories" also moves in the direction of critical historiography due to its interest in the ambiguous relationship between the present, the past, and the writing of history as well as the interactions of the fields of history, literary studies, and anthropology. The idea of these "ego-histories" is to bring into focus the relationship between the personality of historians and their life choices in the process of writing history. The goal is to obtain the link between the history produced by the historian and the history of which he is a product. It is also proposed that, in architecture, critical historiography involves a strategic choice to approach the position of architecture within the given Symbolic order.
This is demonstrated in the way Kenneth Frampton and Manfredo Tafuri associated Marxism with the Frankfurt School's critical theory. A critique of critical historiography cites the risk of judging the realities of the past by the yardstick of what is true in the present, with the result that the past becomes illusory and identity can be obscured. == References == Harold Bloom, The Anxiety of Influence: A Theory of Poetry. New York: Oxford University Press, 1973; 2nd ed., 1997. Harold Bloom, A Map of Misreading. New York: Oxford University Press, 1975. Mark Jarzombek, "Critical Historiography," in The Psychologizing of Modernity. Cambridge University Press, 2002. Pierre Nora, Essais d'ego-histoire. Gallimard, 1987.
Wikipedia/Critical_historiography
The theory of structuration is a social theory of the creation and reproduction of social systems that is based on the analysis of both structure and agents (see structure and agency), without giving primacy to either. Furthermore, in structuration theory, neither micro- nor macro-focused analysis alone is sufficient. The theory was proposed by sociologist Georges Gurvitch and later refined by Anthony Giddens, most significantly in The Constitution of Society, which examines phenomenology, hermeneutics, and social practices at the inseparable intersection of structures and agents. Its proponents have adopted and expanded this balanced position. Though the theory has received much criticism, it remains a pillar of contemporary sociological theory. == Premises and origins == Sociologist Anthony Giddens adopted a post-empiricist frame for his theory, as he was concerned with the abstract characteristics of social relations. This leaves each level more accessible to analysis via the ontologies which constitute the human social experience: space and time ("and thus, in one sense, 'history'").: 3  His aim was to build a broad social theory which viewed "[t]he basic domain of study of the social sciences... [as] neither the experience of the individual actor, nor the existence of any form of societal totality, but social practices ordered across space and time.": 189  His focus on abstract ontology accompanied a general and purposeful neglect of epistemology and detailed research methodology, consistent with other types of pragmatism. Giddens used concepts from objectivist and subjectivist social theories, discarding objectivism's focus on detached structures, which lacked regard for humanist elements, and subjectivism's exclusive attention to individual or group agency without consideration for socio-structural context. 
He critically engaged classical nineteenth- and early twentieth-century social theorists such as Auguste Comte, Karl Marx, Max Weber, Émile Durkheim, Alfred Schutz, Robert K. Merton, Erving Goffman, and Jürgen Habermas. Thus, in many ways, structuration was "an exercise in clarification of logical issues.": viii  Structuration drew on other fields, as well: "He also wanted to bring in from other disciplines novel aspects of ontology that he felt had been neglected by social theorists working in the domains that most interested him. Thus, for example, he enlisted the aid of geographers, historians and philosophers in bringing notions of time and space into the central heartlands of social theory.": 16  Giddens hoped that a subject-wide "coming together" might occur which would involve greater cross-disciplinary dialogue and cooperation, especially between anthropologists, social scientists and sociologists of all types, historians, geographers, and even novelists. Believing that "literary style matters", he held that social scientists are communicators who share frames of meaning across cultural contexts through their work by utilising "the same sources of description (mutual knowledge) as novelists or others who write fictional accounts of social life.": 285  Structuration differs from its historical sources. Unlike structuralism, it sees the reproduction of social systems not "as a mechanical outcome, [but] rather ... as an active constituting process, accomplished by, and consisting in, the doings of active subjects.": 121  Unlike Althusser's concept of agents as "bearers" of structures, structuration theory sees them as active participants. Unlike the philosophy of action and other forms of interpretative sociology, structuration focuses on structure rather than production exclusively. 
Unlike Saussure's production of an utterance, structuration sees language as a tool from which to view society, not as the constitution of society—parting with structural linguists such as Claude Lévi-Strauss and generative grammar theorists such as Noam Chomsky. Unlike post-structuralist theory, which put similar focus on the effects of time and space, structuration does not recognise only movement, change and transition. Unlike functionalism, in which structures and their virtual synonyms, "systems", comprise organisations, structuration sees structures and systems as separate concepts. Unlike Marxism, structuration avoids an overly restrictive concept of "society" and Marxism's reliance on a universal "motor of history" (i.e. class conflict), its theories of societal "adaptation", and its insistence on the working class as universal class and socialism as the ultimate form of modern society. Finally, "structuration theory cannot be expected to furnish the moral guarantees that critical theorists sometimes purport to offer.": 16  == Main ideas == === Duality of structure === Giddens observed that in social analysis, the term structure referred generally to "rules and resources" and more specifically to "the structuring properties allowing the 'binding' of time-space in social systems". These properties make it possible for similar social practices to exist across time and space and lend them "systemic" form.: 17  Agents—groups or individuals—draw upon these structures to perform social actions through embedded memory, called memory traces. Memory traces are thus the vehicle through which social actions are carried out. Structure is also, however, the result of these social practices. Thus, Giddens conceives of the duality of structure as being: ...the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. 
Structure enters simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution.: 5  Giddens uses "the duality of structure" (i.e. material/ideational, micro/macro) to emphasize structure's nature as both medium and outcome. Structures exist both internally within agents as memory traces that are the product of phenomenological and hermeneutic inheritance: 27  and externally as the manifestation of social actions. Similarly, social structures contain agents and/or are the product of past actions of agents. Giddens holds this duality, alongside "structure" and "system," in addition to the concept of recursiveness, as the core of structuration theory.: 17  His theory has been adopted by those with structuralist inclinations, but who wish to situate such structures in human practice rather than to reify them as an ideal type or material property. (This is different, for example, from actor–network theory which appears to grant a certain autonomy to technical artifacts.) Social systems have patterns of social relation that change over time; the changing nature of space and time determines the interaction of social relations and therefore structure. Hitherto, social structures or models were either taken to be beyond the realm of human control (the positivistic approach) or held to be created by action (the interpretivist approach). The duality of structure emphasizes that these are different sides of the same central question of how social order is created. Gregor McLennan suggested renaming this process "the duality of structure and agency", since both aspects are involved in using and producing social actions.: 322  === Cycle of structuration === The duality of structure is essentially a feedback–feedforward process whereby agents and structures mutually enact social systems, and social systems in turn become part of that duality. Structuration thus recognizes a social cycle. 
In examining social systems, structuration theory examines structure, modality, and interaction. The "modality" (discussed below) of a structural system is the means by which structures are translated into actions. ==== Interaction ==== Interaction is the agent's activity within the social system, space and time. "It can be understood as the fitful yet routinized occurrence of encounters, fading away in time and space, yet constantly reconstituted within different areas of time-space.": 86  Rules can affect interaction, as originally suggested by Goffman. "Frames" are "clusters of rules which help to constitute and regulate activities, defining them as activities of a certain sort and as subject to a given range of sanctions.": 87  Frames are necessary for agents to feel "ontological security," the trust that everyday actions have some degree of predictability. Whenever individuals interact in a specific context, they address—without any difficulty and in many cases without conscious acknowledgement—the question: "What is going on here?" Framing is the practice by which agents make sense of what they are doing. ==== Routinization ==== Structuration theory is centrally concerned with order as "the transcending of time and space in human social relationships". Institutionalized action and routinization are foundational in the establishment of social order and the reproduction of social systems. Routine persists in society, even during social and political revolutions, where daily life is greatly deformed, "as Bettelheim demonstrates so well, routines, including those of an obnoxious sort, are re-established.": 87  Routine interactions become institutionalized features of social systems via tradition, custom and/or habit, but this is no easy societal task and it "is a major error to suppose that these phenomena need no explanation. 
On the contrary, as Goffman (together with ethnomethodology) has helped to demonstrate, the routinized character of most social activity is something that has to be 'worked at' continually by those who sustain it in their day-to-day conduct." Therefore, routinized social practices do not stem from coincidence, "but [are] the skilled accomplishments of knowledgeable agents.": 26  Trust and tact are essential for the existence of a "basic security system, the sustaining (in praxis) of a sense of ontological security, and [thus] the routine nature of social reproduction which agents skilfully organize. The monitoring of the body, the control and use of face in 'face work'—these are fundamental to social integration in time and space.": 86  ==== Explanation ==== When I utter a sentence I draw upon various syntactical rules (sedimented in my practical consciousness of the language) in order to do so. These structural features of the language are the medium whereby I generate the utterance. But in producing a syntactically correct utterance I simultaneously contribute to the reproduction of the language as a whole. ...The relation between moment and totality for social theory... [involves] a dialectic of presence and absence which ties the most minor or trivial forms of social action to structural properties of the overall society, and to the coalescence of institutions over long stretches of historical time.: 24  Thus, even the smallest social actions contribute to the alteration or reproduction of social systems. Social stability and order are not permanent; agents always possess a dialectic of control (discussed below) which allows them to break away from normative actions. Depending on the social factors present, agents may cause shifts in social structure. The cycle of structuration is not a defined sequence; it is rarely a direct succession of causal events. 
Structures and agents are both internal and external to each other, mingling, interrupting, and continually changing each other as feedbacks and feedforwards occur. Giddens stated, "The degree of 'systemness' is very variable. ...I take it to be one of the main features of structuration theory that the extension and 'closure' of societies across space and time is regarded as problematic.": 165  The use of "patriot" in political speech reflects this mingling, borrowing from and contributing to nationalistic norms and supporting structures such as a police state, from which it in turn gains impact. === Structure and society === Structures are the "rules and resources" embedded in agents' memory traces. Agents call upon their memory traces of which they are "knowledgeable" to perform social actions. "Knowledgeability" refers to "what agents know about what they do, and why they do it." Giddens divides memory traces (structures-within-knowledgeability) into three types: Domination (power): Giddens also uses "resources" to refer to this type. "Authoritative resources" allow agents to control persons, whereas "allocative resources" allow agents to control material objects. Signification (meaning): Giddens suggests that meaning is inferred through structures. Agents use existing experience to infer meaning. For example, the meaning of living with mental illness comes from contextualized experiences. Legitimation (norms): Giddens sometimes uses "rules" to refer to either signification or legitimation. An agent draws upon these stocks of knowledge via memory to inform him or herself about the external context, conditions, and potential results of an action. When an agent uses these structures for social interactions, they are called modalities and present themselves in the forms of facility (domination), interpretive scheme/communication (signification) and norms/sanctions (legitimation). 
Thus, he distinguishes between overall "structures-within-knowledgeability" and the more limited and task-specific "modalities" on which these agents subsequently draw when they interact. The duality of structure means that structures enter "simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution.": 5  "Structures exist paradigmatically, as an absent set of differences, temporally "present" only in their instantiation, in the constituting moments of social systems.": 64  Giddens draws upon structuralism and post-structuralism in theorizing that structures and their meaning are understood by their differences. === Agents and society === Giddens' agents follow previous psychoanalytic work done by Sigmund Freud and others. Agency, as Giddens calls it, is human action. To be human is to be an agent (though not all agents are human). Agency is critical to both the reproduction and the transformation of society. Another way to explain this concept is by what Giddens calls the "reflexive monitoring of actions." "Reflexive monitoring" refers to agents' ability to monitor their actions and those actions' settings and contexts. Monitoring is an essential characteristic of agency. Agents subsequently "rationalize," or evaluate, the success of those efforts. All humans engage in this process, and expect the same from others. Through action, agents produce structures; through reflexive monitoring and rationalization, they transform them. To act, agents must be motivated, must be knowledgeable, must be able to rationalize the action, and must reflexively monitor the action. Agents, while bounded in structure, draw upon their knowledge of that structural context when they act. However, actions are constrained by agents' inherent capabilities and their understandings of available actions and external limitations. Practical consciousness and discursive consciousness inform these abilities. 
Practical consciousness is the knowledgeability that an agent brings to the tasks required by everyday life, which is so integrated as to be hardly noticed. Reflexive monitoring occurs at the level of practical consciousness. Discursive consciousness is the ability to verbally express knowledge. Alongside practical and discursive consciousness, Giddens recognizes actors as having reflexive, contextual knowledge, and that habitual, widespread use of knowledgeability makes structures become institutionalized. Agents rationalize, and in doing so, link the agent and the agent's knowledgeability. Agents must coordinate ongoing projects, goals, and contexts while performing actions. This coordination is called reflexive monitoring and is connected to ethnomethodology's emphasis on agents' intrinsic sense of accountability. The factors that can enable or constrain an agent, as well as how an agent uses structures, are known as capability constraints. These include age; cognitive/physical limits on performing multiple tasks at once; the physical impossibility of being in multiple places at once; available time; and the relationship between movement in space and movement in time. Location offers a particular type of capability constraint. Examples include: locale; regionalization (political or geographical zones, or rooms in a building); presence (do other actors participate in the action? see co-presence); and, more specifically, physical presence (are other actors physically nearby?). Agents are always able to engage in a dialectic of control, able to "intervene in the world or to refrain from such intervention, with the effect of influencing a specific process or state of affairs.": 14  In essence, agents experience inherent and contrasting amounts of autonomy and dependence; agents can always either act or not. === Methodology === Structuration theory is relevant to research, but does not prescribe a methodology and its use in research has been problematic. 
Giddens intended his theory to be abstract and theoretical, informing the hermeneutic aspects of research rather than guiding practice. Giddens wrote that structuration theory "establishes the internal logical coherence of concepts within a theoretical network.": 34  Giddens criticized many researchers who used structuration theory for empirical research, critiquing their "en bloc" use of the theory's abstract concepts in a burdensome way. "The works applying concepts from the logical framework of structuration theory that Giddens approved of were those that used them more selectively, 'in a spare and critical fashion.'": 2  Giddens and followers used structuration theory more as "a sensitizing device". Structuration theory allows researchers to focus on any structure or concept individually or in combination. In this way, structuration theory prioritizes ontology over epistemology. In his own work, Giddens focuses on production and reproduction of social practices in some context. He looked for stasis and change, agent expectations, relative degrees of routine, tradition, behavior, and creative, skillful, and strategic thought simultaneously. He examined spatial organization, intended and unintended consequences, skilled and knowledgeable agents, discursive and tacit knowledge, dialectic of control, actions with motivational content, and constraints. Structuration theorists conduct analytical research of social relations, rather than organically discovering them, since they use structuration theory to reveal specific research questions, though that technique has been criticized as cherry-picking. Giddens preferred strategic conduct analysis, which focuses on contextually situated actions. It employs detailed accounts of agents' knowledgeability, motivation, and the dialectic of control. 
== Criticisms and additions == Though structuration theory has received critical expansion since its origination, Giddens' concepts remained pivotal for later extension of the theory, especially the duality of structure. === Strong structuration === Rob Stones argued that many aspects of Giddens' original theory had little place in its modern manifestation. Stones focused on clarifying its scope, reconfiguring some concepts and inserting new ones, and refining methodology and research orientations. Strong structuration: Places its ontology more in situ than abstractly. Introduces the quadripartite cycle, which details the elements in the duality of structure. These are: external structures as conditions of action; internal structures within the agent; active agency, "including a range of aspects involved when agents draw upon internal structures in producing practical action";: 9  and outcomes (as both structures and events). Increases attention to epistemology and methodology. Ontology supports epistemology and methodology by prioritising: the question-at-hand; appropriate forms of methodological bracketing; distinct methodological steps in research; and "[t]he specific combinations of all the above in composite forms of research.": 189  Discovers the "meso-level of ontology between the abstract, philosophical level of ontology and the in-situ, ontic level." Strong structuration allows varied abstract ontological concepts in experiential conditions. Focuses on the meso-level at the temporal and spatial scale. Conceptualises independent causal forces and irresistible causal forces, which take into account how external structures, internal structures, and active agency affect agent choices (or lack of them). "Irresistible forces" are the connected concepts of a horizon of action with a set of "actions-in-hand" and a hierarchical ordering of purposes and concerns. An agent is affected by external influences. 
This aspect of strong structuration helps reconcile an agent's dialectic of control and his/her more constrained set of "real choices." === Post-structuration and dualism === Margaret Archer objected to the inseparability of structure and agency in structuration theory. She proposed a notion of dualism rather than "duality of structure". She primarily examined structural frameworks and the action within the limits allowed by those conditions. She adopted a realist ontology and called her methodology analytical dualism. Archer maintained that structure precedes agency in social structure reproduction and in analytical importance, and that the two should be analysed separately. She emphasised the importance of temporality in social analysis, dividing it into four stages: structural conditioning, social interaction, its immediate outcome and structural elaboration. Thus her analysis considered embedded "structural conditions, emergent causal powers and properties, social interactions between agents, and subsequent structural changes or reproductions arising from the latter." Archer criticised structuration theory for denying time and place because of the inseparability of structure and agency. Nicos Mouzelis reconstructed Giddens' original theories. Mouzelis kept Giddens' original formulation of structure as "rules and resources." However, he was considered a dualist, because he argued for dualism to be as important in social analysis as the duality of structure. Mouzelis reexamined human social action at the "syntagmatic" (syntactic) level. He claimed that the duality of structure does not account for all types of social relationships. Duality of structure works when agents do not question or disrupt rules, and interaction resembles "natural/performative" actions with a practical orientation. However, in other contexts, the relationship between structure and agency can resemble dualism more than duality, such as in systems that are the result of powerful agents. 
In these situations, rules are not viewed as resources, but are in states of transition or redefinition, where actions are seen from a "strategic/monitoring orientation.": 28  In this orientation, dualism shows the distance between agents and structures. He called these situations "syntagmatic duality". For example, a professor can change the class he or she teaches, but has little capability to change the larger university structure. "In that case, syntagmatic duality gives way to syntagmatic dualism.": 28  This implies that systems are the outcome, but not the medium, of social actions. Mouzelis also criticised Giddens' lack of consideration for social hierarchies. John Parker built on Archer's and Mouzelis's support for dualism to propose a theoretical reclamation of historical sociology and macro-structures using concrete historical cases, claiming that dualism better explained the dynamics of social structures. Similarly, Robert Archer developed and applied analytical dualism in his critical analysis of the impact of New Managerialism on education policy in England and Wales during the 1990s, and in organization theory. === John B. Thompson === Though he agreed with the soundness and overall purposes of Giddens' most expansive structuration concepts (i.e., against dualism and for the study of structure in concert with agency), John B. Thompson ("a close friend and colleague of Giddens at Cambridge University"): 46  wrote one of the most widely cited critiques of structuration theory. His central argument was that it needed to be more specific and more consistent both internally and with conventional social structure theory. Thompson focused on problematic aspects of Giddens' concept of structure as "rules and resources," focusing on "rules". He argued that Giddens' concept of rule was too broad. 
Thompson claimed that Giddens presupposed a criterion of importance in contending that rules are a generalizable enough tool to apply to every aspect of human action and interaction; "on the other hand, Giddens is well aware that some rules, or some kinds or aspects of rules, are much more important than others for the analysis of, for example, the social structure of capitalist societies.": 159  He found the term to be imprecise and to not designate which rules are more relevant for which social structures. Thompson used the example of linguistic analysis to point out the need for a prior framework with which to analyse, for example, the social structure of an entire nation. While semantic rules may be relevant to social structure, to study them "presupposes some structural points of reference which are not themselves rules, with regard to which [of] these semantic rules are differentiated": 159  according to class, sex, region and so on. He called this structural differentiation. Rules affect variously situated individuals differently. Thompson gave the example of a private school which restricts enrollment and thus participation. Thus rules—in this case, restrictions—"operate differentially, affecting unevenly various groups of individuals whose categorization depends on certain assumptions about social structures.": 159  The isolated analysis of rules does not incorporate differences among agents. Thompson claimed that Giddens offered no way of formulating structural identity. Some "rules" are better conceived of as broad inherent elements that define a structure's identity (e.g., Henry Ford and Harold Macmillan are "capitalistic"). These agents may differ, but have important traits in common due to their "capitalistic" identity. 
Thompson theorized that these traits were not rules in the sense that a manager could draw upon a "rule" to fire a tardy employee; rather, they were elements which "limit the kinds of rules which are possible and which thereby delimit the scope for institutional variation.": 160  It is necessary to outline the broader social system to be able to analyze agents, actors, and rules within that system. Thus Thompson concluded that Giddens' use of the term "rules" is problematic. "Structure" is similarly objectionable: "But to adhere to this conception of structure, while at the same time acknowledging the need for the study of 'structural principles,' 'structural sets' and 'axes of structuration,' is simply a recipe for conceptual confusion.": 163  Thompson proposed several amendments. He requested sharper differentiation between the reproduction of institutions and the reproduction of social structure. He proposed an altered version of the structuration cycle. He defined "institutions" as "characterized by rules, regulations and conventions of various sorts, by differing kinds and quantities of resources and by hierarchical power relations between the occupants of institutional positions.": 165  Agents acting within institutions and conforming to institutional rules and regulations or using institutionally endowed power reproduce the institution. "If, in so doing, the institutions continue to satisfy certain structural conditions, both in the sense of conditions which delimit the scope for institutional variation and the conditions which underlie the operation of structural differentiation, then the agents may be said to reproduce social structure.": 165  Thompson also proposed adding a range of alternatives to Giddens' conception of constraints on human action. He pointed out the paradoxical relationship between Giddens' "dialectic of control" and his acknowledgement that constraints may leave an agent with no choice. 
He demanded that Giddens better show how wants and desires relate to choice. Giddens replied that a structural principle is not equivalent to rules, and pointed to his definition from A Contemporary Critique of Historical Materialism: "Structural principles are principles of organisation implicated in those practices most 'deeply' (in time) and 'pervasively' (in space) sedimented in society",: 54  and described structuration as a "mode of institutional articulation": 257  with emphasis on the relationship between time and space and a host of institutional orderings including, but not limited to, rules. Ultimately, Thompson concluded that the concept of structure as "rules and resources" in an elemental and ontological way resulted in conceptual confusion. Many theorists supported Thompson's argument that an analysis "based on structuration's ontology of structures as norms, interpretative schemes and power resources radically limits itself if it does not frame and locate itself within a more broadly conceived notion of social structures.": 51  === Change === Sewell provided a useful summary that included one of the theory's less specified aspects: the question "Why are structural transformations possible?" He claimed that Giddens overrelied on rules and modified Giddens' argument by re-defining "resources" as the embodiment of cultural schemas. He argued that change arises from the multiplicity of structures, the transposable nature of schemas, the unpredictability of resource accumulation, the polysemy of resources and the intersection of structures.: 20  The existence of multiple structures implies that the knowledgeable agents whose actions produce systems are capable of applying different schemas to contexts with differing resources, contrary to the conception of a universal habitus (learned dispositions, skills and ways of acting). 
He wrote that "Societies are based on practices that derived from many distinct structures, which exist at different levels, operate in different modalities, and are themselves based on widely varying types and quantities of resources. ...It is never true that all of them are homologous.": 16  Originally from Bourdieu, transposable schemas can be "applied to a wide and not fully predictable range of cases outside the context in which they were initially learned." That capacity "is inherent in the knowledge of cultural schemas that characterizes all minimally competent members of society.": 17  Agents may modify schemas even though their use does not predictably accumulate resources. For example, the effect of a joke is never quite certain, but a comedian may alter it based on the amount of laughter it garners regardless of this variability. Agents may interpret a particular resource according to different schemas. E.g., a commander could attribute his wealth to military prowess, while others could see it as a blessing from the gods or a coincidental initial advantage. Structures often overlap, confusing interpretation (e.g., the structure of capitalist society includes production from both private property and worker solidarity). === Technology === This theory was adapted and augmented by researchers interested in the relationship between technology and social structures, such as information technology in organizations. DeSanctis and Poole proposed an "adaptive structuration theory" with respect to the emergence and use of group decision support systems. In particular, they chose Giddens' notion of modalities to consider how technology is used with respect to its "spirit". "Appropriations" are the immediate, visible actions that reveal deeper structuration processes and are enacted with "moves". Appropriations may be faithful or unfaithful, be instrumental and be used with various attitudes. 
Wanda Orlikowski applied the duality of structure to technology: "The duality of technology identifies prior views of technology as either objective force or as socially constructed product–as a false dichotomy.": 13  She compared this to previous models (the technological imperative, strategic choice, and technology as a trigger) and considered the importance of meaning, power, norms, and interpretive flexibility. Orlikowski later replaced the notion of embedded properties with that of enactment (use). The "practice lens" shows how people enact structures which shape their use of the technology they employ in their practices. While Orlikowski's work focused on corporations, it is equally applicable to the technology cultures that have emerged in smaller community-based organizations, and can be adapted through the gender sensitivity lens in approaches to technology governance. Workman, Ford and Allen rearticulated structuration theory as structuration agency theory for modeling socio-biologically inspired structuration in security software. Software agents join humans to engage in social actions of information exchange, giving and receiving instructions, responding to other agents, and pursuing goals individually or jointly. === Four flows model === The four flows model of organizing is grounded in structuration theory. McPhee and Pamela Zaug (2001) identify four communication flows that collectively perform key organizational functions and distinguish organizations from less formal social groups: Membership negotiation—socialization, but also identification and self-positioning; Organizational self-structuring—reflexive, especially managerial, structuring and control activities; Activity coordination—interacting to align or adjust local work activities; Institutional positioning in the social order of institutions—mostly external communication to gain recognition and inclusion in the web of social transactions.
=== Group communication === Poole, Seibold, and McPhee wrote that "group structuration theory": 3  provides "a theory of group interaction commensurate with the complexities of the phenomenon.": 116  The theory attempts to integrate macrosocial theories with the study of individuals and small groups, and to avoid the binary categorization of groups as either "stable" or "emergent". Waldeck et al. concluded that the theory needs to better predict outcomes, rather than merely explain them. Decision rules support decision-making, which produces a communication pattern that can be directly observed. Research has not yet examined the "rational" function of group communication and decision-making (i.e., how well it achieves goals), nor structural production or constraints. Researchers must empirically demonstrate the recursivity of action and structure, examine how structures stabilize and change over time due to group communication, and may want to integrate argumentation research. === Public relations === Falkheimer claimed that integrating structuration theory into public relations (PR) strategies could result in a less agency-driven business, return theoretical focus to the role of power structures in PR, and reject massive PR campaigns in favor of a more "holistic understanding of how PR may be used in local contexts both as a reproductive and [transformational] social instrument.": 103  Falkheimer portrayed PR as a method of communication and action whereby social systems emerge and reproduce. Structuration theory reinvigorates the study of space and time in PR theory. Applied structuration theory may emphasize community-based approaches, storytelling, rituals, and informal communication systems. Moreover, structuration theory integrates all organizational members in PR actions, integrating PR into all organizational levels rather than confining it to a separate office.
Finally, structuration reveals interesting ethical considerations relating to whether a social system should transform. === COVID-19 and structure === The COVID-19 pandemic has had a huge impact on society since its beginning. When investigating those impacts, many researchers found structuration theory helpful in explaining the resulting social change. Oliver (2021) used "a theoretical framework derived from Giddens' structuration theory to analyze societal information cultures, concentrating on information and health literacy perspectives." This framework focused on "the three modalities of structuration, i.e., interpretive schemes, resources, and norms." In Oliver's research, those three modalities are "resources", "information freedom" and "formal and informal concepts and rules of behavior". After analyzing the frameworks of four countries, Oliver and his research team concluded: "All our case studies show a number of competing information sources – from traditional media and official websites to various social media platforms used by both the government and the general public – that complicate the information landscape in which we all try to navigate what we know, and what we do not yet know, about the pandemic." In research interpreting how the remote work environment changed during COVID-19 in South Africa, Walter (2020) applied structuration theory because "it addresses the relationship between actors (or persons) and social structures and how these social structures ultimately realign and conform to the actions of actors"; moreover, "these social structures from Giddens's structuration theory assist people to navigate through everyday life." Zvokuomba (2021) also used Giddens' theory of structuration "to reflect at the various levels of fragilities within the context of COVID-19 lockdown measures."
One example in the research is that "theory of structuration and agency point to situations when individuals and groups of people either in compliance or defiance of community norms and rules of survival adopt certain practices." During the pandemic, the researchers pointed out, "reverting to the traditional midwifery became a pragmatic approach to a problem." One example supporting this point: "As medical centers were partly closed, with no basic medication and health staff, the only alternative was [to] seek traditional medical services." === Business and structure === Structuration theory can also be used to explain business-related issues, including operating, managing, and marketing. Clifton Scott and Karen Myers (2010) studied how the duality of structure can explain shifts in members' actions during membership negotiations in an organization; this is an example of how structure evolves through the interaction of a group of people. Another case study, by Dutta (2016) and his research team, shows how models shift because of the actions of individuals. The article examines the relationship between CEOs' behavior and companies' cross-border acquisitions. This case also demonstrates one of the major dimensions of the duality of structure, the sense of power held by the CEO. The authors found that the process follows the theory of the duality of structure: when the CEO is overconfident and the company's resources are limited, the process of cross-border acquisition is likely to differ from before. Elaine J. Yuan's (2011) research focused on a certain demographic of people under the structure. Yuan studied Chinese TV shows and audiences' tastes in them, concluding that in the relationship between audiences and TV show producers, audience behavior exhibits higher-order patterns. Pavlou and Majchrzak argued that research on business-to-business e-commerce portrayed technology as overly deterministic.
The authors employed structuration theory to re-examine outcomes such as economic/business success as well as trust, coordination, innovation, and shared knowledge. They looked beyond technology into organizational structure and practices, and examined the effects on the structure of adapting to new technologies. The authors held that technology needs to be aligned and compatible with the existing "trustworthy": 179  practices and organizational and market structure. The authors recommended measuring long-term adaptations using ethnography, monitoring and other methods to observe causal relationships and generate better predictions. == See also == == References == == External links == Anthony Giddens' The Constitution of Society: An Outline of the Theory of Structuration. Giddens' most comprehensive work on structuration theory. Available in part for free online via Google Books. This book is intended to provide an accessible introduction to Giddens' work and also to situate structuration theory in the context of other approaches. Available in part for free online via Google Books. A critical assessment of Giddens' entire body of work. Available in part for free online via Google Books. Social Theory for Beginners. Available in part for free online via Google Books. Anthony Giddens: The theory of structuration - Theory.org.uk. A video on YouTube detailing the structure of structuration theory as contrasted with Talcott Parsons's action theory.
Wikipedia/Structuration_theory
Critical race theory (CRT) is an academic field focused on the relationships between social conceptions of race and ethnicity, social and political laws, and mass media. CRT also considers racism to be systemic in various laws and rules, not based only on individuals' prejudices. The word critical in the name is an academic reference to critical theory, not criticizing or blaming individuals. CRT is also used in sociology to explain social, political, and legal structures and power distribution as through a "lens" focusing on the concept of race, and experiences of racism. For example, the CRT conceptual framework examines racial bias in laws and legal institutions, such as highly disparate rates of incarceration among racial groups in the United States. A key CRT concept is intersectionality—the way in which different forms of inequality and identity are affected by interconnections among race, class, gender, and disability. Scholars of CRT view race as a social construct with no biological basis. One tenet of CRT is that disparate racial outcomes are the result of complex, changing, and often subtle social and institutional dynamics, rather than explicit and intentional prejudices of individuals. CRT scholars argue that the social and legal construction of race advances the interests of white people at the expense of people of color, and that the liberal notion of U.S. law as "neutral" plays a significant role in maintaining a racially unjust social order, where formally color-blind laws continue to have racially discriminatory outcomes. CRT began in the United States in the post–civil rights era, as 1960s landmark civil rights laws were being eroded and schools were being re-segregated. 
With racial inequalities persisting even after civil rights legislation and color-blind laws were enacted, CRT scholars in the 1970s and 1980s began reworking and expanding critical legal studies (CLS) theories on class, economic structure, and the law to examine the role of US law in perpetuating racism. CRT, a framework of analysis grounded in critical theory, originated in the mid-1970s in the writings of several American legal scholars, including Derrick Bell, Alan Freeman, Kimberlé Crenshaw, Richard Delgado, Cheryl Harris, Charles R. Lawrence III, Mari Matsuda, and Patricia J. Williams. CRT draws on the work of thinkers such as Antonio Gramsci, Sojourner Truth, Frederick Douglass, and W. E. B. Du Bois, as well as the Black Power, Chicano, and radical feminist movements from the 1960s and 1970s. Academic critics of CRT argue it is based on storytelling instead of evidence and reason, rejects truth and merit, and undervalues liberalism. Since 2020, conservative US lawmakers have sought to ban or restrict the teaching of CRT in primary and secondary schools, as well as relevant training inside federal agencies. Advocates of such bans argue that CRT is false, anti-American, villainizes white people, promotes radical leftism, and indoctrinates children. Advocates of bans on CRT have been accused of misrepresenting its tenets and of having the goal to broadly silence discussions of racism, equality, social justice, and the history of race. == Definitions == In his introduction to the comprehensive 1995 publication of critical race theory's key writings, Cornel West described CRT as "an intellectual movement that is both particular to our postmodern (and conservative) times and part of a long tradition of human resistance and liberation." Law professor Roy L. Brooks defined critical race theory in 1994 as "a collection of critical stances against the existing legal order from a race-based point of view". 
Gloria Ladson-Billings, who—along with co-author William Tate—had introduced CRT to the field of education in 1995, described it in 2015 as an "interdisciplinary approach that seeks to understand and combat race inequity in society." Ladson-Billings wrote in 1998 that CRT "first emerged as a counterlegal scholarship to the positivist and liberal legal discourse of civil rights." In 2021, Khiara Bridges, a law professor and author of the textbook Critical Race Theory: A Primer, defined critical race theory as an "intellectual movement", a "body of scholarship", and an "analytical toolset for interrogating the relationship between law and racial inequality." The 2021 Encyclopaedia Britannica described CRT as an "intellectual and social movement and loosely organized framework of legal analysis based on the premise that race is not a natural, biologically grounded feature of physically distinct subgroups of human beings but a socially constructed (culturally invented) category that is used to oppress and exploit people of colour." == Tenets == Scholars of CRT say that race is not "biologically grounded and natural"; rather, it is a socially constructed category used to oppress and exploit people of color; and that racism is not an aberration, but a normalized feature of American society. According to CRT, negative stereotypes assigned to members of minority groups benefit white people and increase racial oppression. Individuals can belong to a number of different identity groups. The concept of intersectionality—one of CRT's main concepts—was introduced by legal scholar Kimberlé Crenshaw. Derrick Albert Bell Jr. (1930 – 2011), an American lawyer, professor, and civil rights activist, wrote that racial equality is "impossible and illusory" and that racism in the US is permanent. 
According to Bell, civil-rights legislation will not on its own bring about progress in race relations; alleged improvements or advantages to people of color "tend to serve the interests of dominant white groups", in what Bell called "interest convergence". These changes do not typically affect—and at times even reinforce—racial hierarchies. This is representative of the shift in the 1970s, in Bell's re-assessment of his earlier desegregation work as a civil rights lawyer. He was responding to the Supreme Court's decisions that had resulted in the re-segregation of schools. The concept of standpoint theory became particularly relevant to CRT when it was expanded to include a black feminist standpoint by Patricia Hill Collins. First introduced by feminist sociologists in the 1980s, standpoint theory holds that people in marginalized groups, who share similar experiences, can bring a collective wisdom and a unique voice to discussions on decreasing oppression. In this view, insights into racism can be uncovered by examining the nature of the US legal system through the perspective of the everyday lived experiences of people of color. According to Encyclopedia Britannica, tenets of CRT have spread beyond academia and are used to deepen understanding of socio-economic issues such as "poverty, police brutality, and voting rights violations", that are affected by the ways in which race and racism are "understood and misunderstood" in the United States. 
== Common themes == Richard Delgado and Jean Stefancic published an annotated bibliography of CRT references in 1993, listing works of legal scholarship that addressed one or more of the following themes: "critique of liberalism"; "storytelling/counterstorytelling and 'naming one's own reality'"; "revisionist interpretations of American civil rights law and progress"; "a greater understanding of the underpinnings of race and racism"; "structural determinism"; "race, sex, class, and their intersections"; "essentialism and anti-essentialism"; "cultural nationalism/separatism"; "legal institutions, critical pedagogy, and minorities in the bar"; and "criticism and self-criticism". When Gloria Ladson-Billings introduced CRT into education in 1995, she cautioned that its application required a "thorough analysis of the legal literature upon which it is based". === Critique of liberalism === First and foremost to CRT legal scholars in 1993 was their "discontent" with the way in which liberalism addressed race issues in the US. They critiqued "liberal jurisprudence", including affirmative action, color-blindness, role modeling, and the merit principle. Specifically, they claimed that the liberal concept of value-neutral law contributed to maintenance of the US's racially unjust social order. An example questioning foundational liberal conceptions of Enlightenment values, such as rationalism and progress, is Rennard Strickland's 1986 Kansas Law Review article, "Genocide-at-Law: An Historic and Contemporary View of the Native American Experience". In it, he "introduced Native American traditions and world-views" into law school curriculum, challenging the entrenchment at that time of the "contemporary ideas of progress and enlightenment".
He wrote that US laws that "permeate" the everyday lives of Native Americans were in "most cases carried out with scrupulous legality" but still resulted in what he called "cultural genocide". In 1993, David Theo Goldberg described how countries that adopt classical liberalism's concepts of "individualism, equality, and freedom"—such as the United States and European countries—conceal structural racism in their cultures and languages, citing terms such as "Third World" and "primitive".: 6–7  In 1988, Kimberlé Williams Crenshaw traced the origins of the New Right's use of the concept of color-blindness from 1970s neoconservative think tanks to the Ronald Reagan administration in the 1980s. She described how prominent figures such as neoconservative scholars Thomas Sowell and William Bradford Reynolds, who served as Assistant Attorney General for the Civil Rights Division from 1981 to 1988, called for "strictly color-blind policies". Sowell and Reynolds, like many conservatives at that time, believed that the goal of equality of the races had already been achieved, and therefore the race-specific civil rights movement was a "threat to democracy". The color-blindness logic used in "reverse discrimination" arguments in the post-civil rights period is informed by a particular viewpoint on "equality of opportunity", as adopted by Sowell, in which the state's role is limited to providing a "level playing field" rather than promoting an equal distribution of resources. Crenshaw claimed that "equality of opportunity" in antidiscrimination law can have both an expansive and a restrictive aspect. Crenshaw wrote that formally color-blind laws continue to have racially discriminatory outcomes.
According to her, this use of formal color-blindness rhetoric in claims of reverse discrimination, as in the 1978 Supreme Court ruling on Bakke, was a response to the way in which the courts had aggressively imposed affirmative action and busing during the Civil Rights era, even on those who were hostile to those issues. In 1990, legal scholar Duncan Kennedy described the dominant approach to affirmative action in legal academia as "colorblind meritocratic fundamentalism". He called for a postmodern "race consciousness" approach that included "political and cultural relations" while avoiding "racialism" and "essentialism". Sociologist Eduardo Bonilla-Silva describes this newer, subtle form of racism as "color-blind racism", which uses frameworks of abstract liberalism to decontextualize race, naturalize outcomes such as segregation in neighborhoods, attribute certain cultural practices to race, and cause "minimization of racism". In his influential 1984 article, Delgado challenged the liberal concept of meritocracy in civil rights scholarship. He questioned how the top articles in most well-established journals were all written by white men. === Storytelling/counterstorytelling and "naming one's own reality" === This refers to the use of narrative (storytelling) to illuminate and explore lived experiences of racial oppression. One of the prime tenets of liberal jurisprudence is that people can create appealing narratives to think and talk about greater levels of justice. Delgado and Stefancic call this the empathic fallacy—the belief that it is possible to "control our consciousness" by using language alone to overcome bigotry and narrow-mindedness. They examine how people of color, considered outsiders in mainstream US culture, are portrayed in media and law through stereotypes and stock characters that have been adapted over time to shield the dominant culture from discomfort and guilt.
For example, slaves in the 18th-century Southern States were depicted as childlike and docile; Harriet Beecher Stowe adapted this stereotype through her character Uncle Tom, depicting him as a "gentle, long-suffering", pious Christian. Following the American Civil War, the African-American woman was depicted as a wise, care-giving "Mammy" figure. During the Reconstruction period, African-American men were stereotyped as "brutish and bestial", a danger to white women and children. This was exemplified in Thomas Dixon Jr.'s novels, used as the basis for the epic film The Birth of a Nation, which celebrated the Ku Klux Klan and lynching. During the Harlem Renaissance, African-Americans were depicted as "musically talented" and "entertaining". Following World War II, when many Black veterans joined the nascent civil rights movement, African Americans were portrayed as "cocky [and] street-smart", the "unreasonable, opportunistic" militant, the "safe, comforting, cardigan-wearing" TV sitcom character, and the "super-stud" of blaxploitation films. The empathic fallacy informs the "time-warp aspect of racism", where the dominant culture can see racism only through the hindsight of a past era or distant land, such as South Africa. Through centuries of stereotypes, racism has become normalized; it is a "part of the dominant narrative we use to interpret experience". Delgado and Stefancic argue that speech alone is an ineffective tool to counter racism, since the system of free expression tends to favor the interests of powerful elites and to assign responsibility for racist stereotypes to the "marketplace of ideas". In the decades following the passage of civil rights laws, acts of racism had become less overt and more covert—invisible to, and underestimated by, most of the dominant culture. 
Since racism makes people feel uncomfortable, the empathic fallacy helps the dominant culture to mistakenly believe that it no longer exists, and that dominant images, portrayals, stock characters, and stereotypes—which usually portray minorities in a negative light—provide them with a true image of race in America. Based on these narratives, the dominant group has no need to feel guilty or to make an effort to overcome racism, as it feels "right, customary, and inoffensive to those engaged in it", while self-described liberals who uphold freedom of expression can feel virtuous while maintaining their own superior position. === Standpoint epistemology === This is the view that members of racial minority groups have a unique authority and ability to speak about racism. This is seen as undermining dominant narratives relating to racial inequality, such as legal neutrality and personal responsibility or bootstrapping, through valuable first-hand accounts of the experience of racism. === Revisionist interpretations of American civil rights law and progress === Interest convergence is a concept introduced by Derrick Bell in his 1980 Harvard Law Review article, "Brown v. Board of Education and the Interest-Convergence Dilemma". In this article, Bell described how he re-assessed the impact of the hundreds of NAACP LDF de-segregation cases he won from 1960 to 1966 and how he began to believe that in spite of his sincerity at the time, anti-discrimination law had not resulted in improving Black children's access to quality education. He listed and described how Supreme Court cases had gutted civil rights legislation, which had resulted in African-American students continuing to attend all-black schools that lacked adequate funding and resources. In examining these Supreme Court cases, Bell concluded that the only civil-rights legislation that was passed coincided with the self-interest of white people, which Bell termed interest convergence. 
One of the best-known examples of interest convergence is the way in which American geopolitics during the Cold War in the aftermath of World War II was a critical factor in the passage of civil rights legislation by both Republicans and Democrats. Bell described this in numerous articles, including the aforementioned, and it was supported by the research and publications of legal scholar Mary L. Dudziak. In her journal articles and her 2000 book Cold War Civil Rights—based on newly released documents—Dudziak provided detailed evidence that it was in the interest of the United States to quell the negative international press about treatment of African-Americans, since the majority of the populations of the newly decolonized countries the US was trying to attract to Western-style democracy were not white. The US sought to promote liberal values throughout Africa, Asia, and Latin America to prevent the Soviet Union from spreading communism. Dudziak described how the international press widely circulated stories of segregation and violence against African-Americans. Coverage of the Moore's Ford lynchings, in which a World War II veteran was among those lynched, was particularly widespread. American allies followed stories of American racism through the international press, and the Soviets used stories of racism against Black Americans as a vital part of their propaganda. Dudziak performed extensive archival research in the US Department of State and Department of Justice and concluded that US government support for civil-rights legislation "was motivated in part by the concern that racial discrimination harmed the United States' foreign relations". When the National Guard was called in to prevent nine African-American students from integrating the Little Rock Central High School, the international press covered the story extensively.
The then-Secretary of State told President Dwight Eisenhower that the Little Rock situation was "ruining" American foreign policy, particularly in Asia and Africa. The US's ambassador to the United Nations told President Eisenhower that as two-thirds of the world's population was not white, he was witnessing their negative reactions to American racial discrimination. He suspected that the US "lost several votes on the Chinese communist item because of Little Rock." === Intersectional theory === This refers to the examination of race, sex, class, national origin, and sexual orientation, and how their intersections play out in various settings, such as how the needs of a Latina are different from those of a Black male, and whose needs are promoted. These intersections provide a more holistic picture for evaluating different groups of people. Intersectionality is a response to identity politics insofar as identity politics does not take into account the different intersections of people's identities. === Essentialism vs. anti-essentialism === Delgado and Stefancic write, "Scholars who write about these issues are concerned with the appropriate unit for analysis: Is the black community one, or many, communities? Do middle- and working-class African-Americans have different interests and needs? Do all oppressed peoples have something in common?" This is a look at the ways that oppressed groups may share in their oppression but also have different needs and values that need to be analyzed differently. It is a question of how groups can be essentialized or are unable to be essentialized. From an essentialist perspective, one's identity consists of an internal "essence" that is static and unchanging from birth, whereas a non-essentialist position holds that "the subject has no fixed or permanent identity." Racial essentialism diverges into biological and cultural essentialism, where subordinated groups may endorse one over the other. 
"Cultural and biological forms of racial essentialism share the idea that differences between racial groups are determined by a fixed and uniform essence that resides within and defines all members of each racial group. However, they differ in their understanding of the nature of this essence." Subordinated communities may be more likely to endorse cultural essentialism, as it provides a basis of positive distinction for establishing a cumulative resistance and a means to assert their identities and advocate for their rights. By comparison, biological essentialism may be unlikely to resonate with marginalized groups because historically dominant groups have used genetics and biology to justify racism and oppression. Essentialism is the idea of a singular, shared experience among a specific group of people. Anti-essentialism, on the other hand, holds that various other factors can affect a person's being and overall life experience. The race of an individual is viewed more as a social construct that does not necessarily dictate the outcome of their life circumstances. Race is viewed as "a social and historical construction, rather than an inherent, fixed, essential biological characteristic." Anti-essentialism "forces a destabilization in the very concept of race itself…" The results of this destabilization vary with the analytic focus, falling into two general categories: "... consequences for the analytic concepts of racial identity or racial subjectivity." === Structural determinism, and race, sex, class, and their intersections === This refers to the exploration of how "the structure of legal thought or culture influences its content" in a way that determines social outcomes. Delgado and Stefancic cited "empathic fallacy" as one example of structural determinism: the "idea that our system, by reason of its structure and vocabulary, cannot redress certain types of wrong."
They interrogate the absence of terms such as intersectionality, anti-essentialism, and jury nullification in standard legal reference research tools in law libraries. === Cultural nationalism/separatism === This refers to the exploration of more radical views that argue for separation and reparations as a form of foreign aid (including black nationalism). === Legal institutions, critical pedagogy, and minorities in the bar === Camara Phyllis Jones defines institutionalized racism as "differential access to the goods, services, and opportunities of society by race. Institutionalized racism is normative, sometimes legalized and often manifests as inherited disadvantage. It is structural, having been absorbed into our institutions of custom, practice, and law, so there need not be an identifiable offender. Indeed, institutionalized racism is often evident as inaction in the face of need, manifesting itself both in material conditions and in access to power. With regard to the former, examples include differential access to quality education, sound housing, gainful employment, appropriate medical facilities, and a clean environment." === Black–white binary === The black–white binary is a paradigm identified by legal scholars through which racial issues and histories are typically articulated within a racial binary between black and white Americans. The binary largely governs how race has been portrayed and addressed throughout US history. Critical race theorists Richard Delgado and Jean Stefancic argue that anti-discrimination law has blindspots for non-black minorities due to its language being confined within the black–white binary. == Applications and adaptations == Scholars of critical race theory have focused, with some particularity, on the issues of hate crime and hate speech. In response to the opinion of the US Supreme Court in the hate speech case of R.A.V. v. City of St. 
Paul (1992), in which the Court struck down an anti-bias ordinance as applied to a teenager who had burned a cross, Mari Matsuda and Charles Lawrence argued that the Court had paid insufficient attention to the history of racist speech and the actual injury produced by such speech. Critical race theorists have also argued in favor of affirmative action. They propose that so-called merit standards for hiring and educational admissions are not race-neutral and that such standards are part of the rhetoric of neutrality through which whites justify their disproportionate share of resources and social benefits. In his 2009 article "Will the Real CRT Please Stand Up: The Dangers of Philosophical Contributions to CRT", Curry distinguished between the original CRT key writings and what is being done in the name of CRT by a "growing number of white feminists". The new CRT movement "favors narratives that inculcate the ideals of a post-racial humanity and racial amelioration between compassionate (Black and White) philosophical thinkers dedicated to solving America's race problem." They are interested in discourse (i.e., how individuals speak about race) and the theories of white Continental philosophers, over and against the structural and institutional accounts of white supremacy which were at the heart of the realist analysis of racism introduced in Derrick Bell's early works, and articulated through such African-American thinkers as W. E. B. Du Bois, Paul Robeson, and Judge Robert L. Carter. == History == === Early years === Although the term critical race theory originated in the study of law, the subject emerges from the broader frame of critical theory in how it analyzes power structures in society, whatever laws may be in effect. In their 1998 article, "Critical Race Theory: Past, Present, and Future", Delgado and Stefancic trace the origins of CRT to the early writings of Derrick Albert Bell Jr.
including his 1976 Yale Law Journal article, "Serving Two Masters" and his 1980 Harvard Law Review article entitled "Brown v. Board of Education and the Interest-Convergence Dilemma". In the 1970s, as a professor at Harvard Law School, Bell began to critique, question and re-assess the civil rights cases he had litigated in the 1960s to desegregate schools following the Brown v. Board of Education decision. This re-assessment became the "cornerstone of critical race theory". Delgado and Stefancic, who together wrote Critical Race Theory: An Introduction in 2001, described Bell's "interest convergence" as a "means of understanding Western racial history". The focus on desegregation after the 1954 Supreme Court decision in Brown—declaring school segregation unconstitutional—left "civil-rights lawyers compromised between their clients' interests and the law". The concern of many Black parents—for their children's access to better education—was being eclipsed by the interests of litigators who wanted a "breakthrough" in their "pursuit of racial balance in schools". In 1995, Cornel West said that Bell was "virtually the lone dissenter" writing in leading law reviews who challenged basic assumptions about how the law treated people of color. In his Harvard Law Review articles, Bell cites the 1964 Hudson v. Leake County School Board case which the NAACP Legal Defense and Educational Fund (NAACP LDF) won, mandating that the all-white school board comply with desegregation. At that time it was seen as a success. By the 1970s, white parents were removing their children from the desegregated schools and enrolling them in segregation academies. Bell came to believe that he had been mistaken in 1964 when, as a young lawyer working for the LDF, he had convinced Winson Hudson, who was the head of the newly formed local NAACP chapter in Harmony, Mississippi, to fight the all-white Leake County School Board to desegregate schools.
She and the other Black parents had initially sought LDF assistance to fight the board's closure of their school—one of the historic Rosenwald Schools for Black children. Bell explained to Hudson that—following Brown—the LDF could not fight to keep a segregated Black school open; they would have to fight for desegregation. In 1964, Bell and the NAACP had believed that resources for desegregated schools would be increased and Black children would access higher quality education, since White parents would insist on better quality schools; by the 1970s, Black children were again attending segregated schools and the quality of education had deteriorated. Bell began to work for the NAACP LDF shortly after the Montgomery bus boycott and the ensuing 1956 Supreme Court ruling in Browder v. Gayle that the Alabama and Montgomery bus segregation laws were unconstitutional. From 1960 to 1966 Bell successfully litigated 300 civil rights cases in Mississippi. Bell was inspired by Thurgood Marshall, who had been one of the two leaders of a decades-long legal campaign starting in the 1930s, in which they filed hundreds of lawsuits to reverse the "separate but equal" doctrine announced by the Supreme Court's decision in Plessy v. Ferguson (1896). The Court ruled that racial segregation laws enacted by the states were not in violation of the United States Constitution as long as the facilities for each race were equal in quality. The Plessy decision provided the legal mandate at the federal level to enforce Jim Crow laws that had been introduced by white Southern Democrats starting in the 1870s for racial segregation in all public facilities, including public schools. The Court's 1954 Brown decision—which held that the "separate but equal" doctrine is unconstitutional in the context of public schools and educational facilities—severely weakened Plessy. The Supreme Court concept of constitutional colorblindness in regard to case evaluation began with Plessy.
Before Plessy, the Court considered color as a determining factor in many landmark cases, which reinforced Jim Crow laws. Bell's 1960s civil rights work built on Justice Marshall's groundwork begun in the 1930s. It was a time when the legal branch of the civil rights movement was launching thousands of civil rights cases. It was a period of idealism for the civil rights movement. At Harvard, Bell developed new courses that studied American law through a racial lens. He compiled his own course materials, which were published in 1970 under the title Race, Racism, and American Law. He became Harvard Law School's first Black tenured professor in 1971. During the 1970s, the courts were enforcing affirmative action programs and busing—mandating busing to achieve racial integration in school districts that rejected desegregation. In response, 1970s neoconservative think tanks—hostile to these two issues in particular—developed a color-blind rhetoric to oppose them, claiming they represented reverse discrimination. When Bakke won the landmark 1978 Supreme Court case Regents of the University of California v. Bakke by using the argument of reverse racism, Bell's skepticism that racism would end increased. Justice Lewis F. Powell Jr. held that the "guarantee of equal protection cannot mean one thing when applied to one individual and something else when applied to a person of another color." In a 1979 article, Bell asked if there were any groups of the White population that would be willing to suffer any disadvantage that might result from the implementation of a policy to rectify harms to Black people resulting from slavery, segregation, or discrimination. Bell resigned in 1980 because of what he viewed as the university's discriminatory practices, became dean of the University of Oregon School of Law and later returned to Harvard as a visiting professor.
While he was absent from Harvard, his supporters organized protests against Harvard's lack of racial diversity in the curriculum, in the student body, and in the faculty. The university had rejected student requests, saying no sufficiently qualified black instructor existed. Legal scholar Randall Kennedy writes that some students had "felt affronted" by Harvard's choice to employ an "archetypal white liberal... in a way that precludes the development of black leadership". One of these students was Kimberlé Crenshaw, who had chosen Harvard in order to study under Bell; she was introduced to his work at Cornell. Crenshaw organized the student-led initiative to offer an alternative course on race and law in 1981—based on Bell's course and textbook—where students brought in visiting professors, such as Charles Lawrence, Linda Greene, Neil Gotanda, and Richard Delgado, to teach chapter-by-chapter from Race, Racism, and American Law. Critical race theory emerged as an intellectual movement with the organization of this boycott; CRT scholars included graduate law students and professors. Alan Freeman was a founding member of the Critical Legal Studies (CLS) movement that hosted forums in the 1980s. CLS legal scholars challenged claims to the alleged value-neutral position of the law. They criticized the legal system's role in generating and legitimizing oppressive social structures which contributed to maintaining an unjust and oppressive class system. Delgado and Stefancic cite the work of Alan Freeman in the 1970s as formative to critical race theory. In his 1978 Minnesota Law Review article Freeman reinterpreted, through a critical legal studies perspective, how the Supreme Court oversaw civil rights legislation from 1953 to 1969 under the Warren Court. He criticized the narrow interpretation of the law which denied relief for victims of racial discrimination. 
In his article, Freeman describes two perspectives on the concept of racial discrimination: that of victim or perpetrator. Racial discrimination to the victim includes both objective conditions and the "consciousness associated with those objective conditions". To the perpetrator, racial discrimination consists only of actions without consideration of the objective conditions experienced by the victims, such as the "lack of jobs, lack of money, lack of housing". Only those individuals who could prove they were victims of discrimination were deserving of remedies. By the late 1980s, Freeman, Bell, and other CRT scholars left the CLS movement claiming it was too narrowly focused on class and economic structures while neglecting the role of race and race relations in American law. === Emergence as a movement === In 1989, Kimberlé Crenshaw, Neil Gotanda, and Stephanie Phillips organized a workshop at the St. Benedict Center in Madison, Wisconsin entitled "New Developments in Critical Race Theory". The organizers coined the term "Critical Race Theory" to signify an "intersection of critical theory and race, racism and the law." Delgado later reflected that the convent, with its austere rooms and crucifixes, was "an odd place for a bunch of Marxists." Afterward, legal scholars began publishing a higher volume of works employing critical race theory, including more than "300 leading law review articles" and books.: 108  In 1990, Duncan Kennedy published his article on affirmative action in legal academia in the Duke Law Journal, and Anthony E. Cook published his article "Beyond Critical Legal Studies" in the Harvard Law Review. In 1991, Patricia Williams published The Alchemy of Race and Rights, while Derrick Bell published Faces at the Bottom of the Well in 1992.: 124  Cheryl I. Harris published her 1993 Harvard Law Review article "Whiteness as Property" in which she described how passing led to benefits akin to owning property. 
In 1995, two dozen legal scholars contributed to a major compilation of key writings on CRT. By the early 1990s, key concepts and features of CRT had emerged. Bell had introduced his concept of "interest convergence" in his 1980 Harvard Law Review article. He developed the concept of racial realism in a 1992 series of essays and the book Faces at the Bottom of the Well: The Permanence of Racism. He said that Black people needed to accept that civil rights era legislation would not on its own bring about progress in race relations; anti-Black racism in the US was a "permanent fixture" of American society; and equality was "impossible and illusory" in the US. Crenshaw introduced the term intersectionality in 1989. In 1995, pedagogical theorists Gloria Ladson-Billings and William F. Tate began applying the critical race theory framework in the field of education. In their 1995 article Ladson-Billings and Tate described the role of the social construction of white norms and interests in education. They sought to better understand inequities in schooling. Scholars have since expanded work to explore issues including school segregation in the US; relations between race, gender, and academic achievement; pedagogy; and research methodologies. As of 2002, over 20 American law schools and at least three non-American law schools offered critical race theory courses or classes. Critical race theory is also applied in the fields of education, political science, women's studies, ethnic studies, communication, sociology, and American studies. Other movements developed that apply critical race theory to specific groups. These include the Latino-critical (LatCrit), queer-critical, and Asian-critical movements. These continued to engage with the main body of critical theory research, over time developing independent priorities and research methods. CRT has also been taught internationally, including in the United Kingdom (UK) and Australia.
According to educational researcher Mike Cole, the main proponents of CRT in the UK include David Gillborn, John Preston, and Namita Chakrabarty. === Philosophical foundations === CRT scholars draw on the work of Antonio Gramsci, Sojourner Truth, Frederick Douglass, and W. E. B. Du Bois. Bell shared Paul Robeson's belief that "Black self-reliance and African cultural continuity should form the epistemic basis of Blacks' worldview." Their writing is also informed by 1960s and 1970s movements such as Black Power, Chicano, and radical feminism. Critical race theory shares many intellectual commitments with critical theory, critical legal studies, feminist jurisprudence, and postcolonial theory. University of Connecticut philosopher Lewis Gordon, who has focused on postcolonial phenomenology and on race and racism, wrote that CRT is notable for its use of postmodern poststructural scholarship, including an emphasis on "subaltern" or "marginalized" communities and the "use of alternative methodology in the expression of theoretical work, most notably their use of 'narratives' and other literary techniques". Standpoint theory, which has been adopted by some CRT scholars, emerged from the first wave of the women's movement in the 1970s. The main focus of feminist standpoint theory is epistemology—the study of how knowledge is produced. The term was coined by Sandra Harding, an American feminist theorist, and developed by Dorothy Smith in her 1989 publication, The Everyday World as Problematic: A Feminist Sociology. Smith wrote that by studying how women socially construct their own everyday life experiences, sociologists could ask new questions. Patricia Hill Collins introduced black feminist standpoint—a collective wisdom of those who have similar perspectives in society—which sought to heighten awareness of these marginalized groups and provide ways to improve their position in society.
Critical race theory draws on the priorities and perspectives of both critical legal studies (CLS) and conventional civil rights scholarship, while also sharply contesting both of these fields. UC Davis School of Law legal scholar Angela P. Harris describes critical race theory as sharing "a commitment to a vision of liberation from racism through right reason" with the civil rights tradition. It deconstructs some premises and arguments of legal theory and simultaneously holds that legally constructed rights are incredibly important. CRT scholars disagreed with the CLS anti-legal-rights stance and did not wish to "abandon the notions of law" completely; CRT legal scholars acknowledged that some legislation and reforms had helped people of color. As described by Derrick Bell, critical race theory in Harris' view is committed to "radical critique of the law (which is normatively deconstructionist) and... radical emancipation by the law (which is normatively reconstructionist)". University of Edinburgh philosophy professor Tommy J. Curry says that by 2009, the CRT perspective on race as a social construct was accepted by "many race scholars" as a "commonsense view" that race is not "biologically grounded and natural." Social construct is a term from social constructivism, whose roots can be traced to the early science wars, instigated in part by Thomas Kuhn's 1962 The Structure of Scientific Revolutions. Ian Hacking, a Canadian philosopher specializing in the philosophy of science, describes how social construction has spread through the social sciences. He cites the social construction of race as an example, asking how race could be "constructed" better. == Criticism == === Academic criticism === According to the Encyclopaedia Britannica, aspects of CRT have been criticized by "legal scholars and jurists from across the political spectrum."
Criticism of CRT has focused on its emphasis on storytelling, its critique of the merit principle and of objective truth, and its thesis of the voice of color. As reported by Britannica, critics say it contains a "postmodernist-inspired skepticism of objectivity and truth" and has a tendency to interpret "any racial inequity or imbalance ... as proof of institutional racism and as grounds for directly imposing racially equitable outcomes in those realms". Proponents of CRT have also been accused of treating even well-meaning criticism of CRT as evidence of latent racism. In a 1997 book, law professors Daniel A. Farber and Suzanna Sherry criticized CRT for basing its claims on personal narrative and for its lack of testable hypotheses and measurable data. CRT scholars including Crenshaw, Delgado, and Stefancic responded that such critiques represent dominant modes within social science which tend to exclude people of color. Delgado and Stefancic wrote: "In these realms [social science and politics], truth is a social construct created to suit the purposes of the dominant group." Farber and Sherry have also argued that anti-meritocratic tenets in critical race theory, critical feminism, and critical legal studies may unintentionally lead to antisemitic and anti-Asian implications. They write that the success of Jews and Asians within what critical race theorists posit to be a structurally unfair system may lend itself to allegations of cheating and advantage-taking. In response, Delgado and Stefancic write that there is a difference between criticizing an unfair system and criticizing individuals who perform well inside that system. Philosopher John Gray writes that CRT "projects a particular American history onto all of humankind". 
=== Public controversies === Critical race theory has stirred controversy in the United States for promoting the use of narrative in legal studies, advocating "legal instrumentalism" as opposed to ideal-driven uses of the law, and encouraging legal scholars to promote racial equity. Before 1993, the term "critical race theory" was not part of public discourse. In the spring of that year, conservatives launched a campaign led by Clint Bolick to portray Lani Guinier—then-President Bill Clinton's nominee for Assistant Attorney General for Civil Rights—as a radical because of her connection to CRT. Within months, Clinton withdrew the nomination, describing the effort to stop Guinier's appointment as "a campaign of right-wing distortion and vilification". This was part of a wider conservative strategy to shift the Supreme Court in their favor. Amy E. Ansell writes that the logic of legal instrumentalism reached wide public reception in the O.J. Simpson murder case when attorney Johnnie Cochran "enacted a sort of applied CRT", selecting an African-American jury and urging them to acquit Simpson in spite of the evidence against him—a form of jury nullification. Legal scholar Jeffrey Rosen calls this the "most striking example" of CRT's influence on the US legal system. Law professor Margaret M. Russell responded to Rosen's assertion in the Michigan Law Review, saying that Cochran's "dramatic" and "controversial" courtroom "style and strategic sense" in the Simpson case resulted from his decades of experience as an attorney; it was not significantly influenced by CRT writings. In 2010, a Mexican-American studies program in Tucson, Arizona, was halted because of a state law forbidding public schools from offering race-conscious education in the form of "advocat[ing] ethnic solidarity instead of the treatment of pupils as individuals". Certain books, including a primer on CRT, were banned from the curriculum. 
Matt de la Peña's young-adult novel Mexican WhiteBoy was banned for "containing 'critical race theory'" according to state officials. The ban on ethnic-studies programs was later deemed unconstitutional on the grounds that the state showed discriminatory intent: "Both enactment and enforcement were motivated by racial animus", federal Judge A. Wallace Tashima ruled. == Subfields == Within critical race theory, various sub-groupings focus on issues and nuances unique to particular ethno-racial and/or marginalized communities. This includes the intersection of race with disability, ethnicity, gender, sexuality, class, or religion. Examples include disability critical race studies (DisCrit), critical race feminism (CRF), Jewish Critical Race Theory (HebCrit), Black Critical Race Theory (Black Crit), Latino critical race studies (LatCrit), Asian American critical race studies (AsianCrit), South Asian American critical race studies (DesiCrit), Quantitative Critical Race Theory (QuantCrit), Queer Critical Race Theory (QueerCrit), and American Indian critical race studies or Tribal critical race theory (sometimes called TribalCrit). CRT methodologies have also been applied to the study of white immigrant groups. CRT has spurred some scholars to call for a second wave of whiteness studies, which is now a small offshoot known as Second Wave Whiteness (SWW). Critical race theory has also begun to spawn research that looks at understandings of race outside the United States. === Disability critical race theory === Another offshoot field is disability critical race studies (DisCrit), which combines disability studies and CRT to focus on the intersection of disability and race. === Latino critical race theory === Latino critical race theory (LatCRT or LatCrit) is a research framework that outlines the social construction of race as central to how people of color are constrained and oppressed in society.
Race scholars developed LatCRT as a critical response to the "problem of the color line" first explained by W. E. B. Du Bois. While CRT focuses on the Black–White paradigm, LatCRT has moved to consider other racial groups, mainly Chicana/Chicanos, as well as Latinos/as, Asians, Native Americans/First Nations, and women of color. In Critical Race Counterstories along the Chicana/Chicano Educational Pipeline, Tara J. Yosso discusses how the constraint of POC can be defined. Looking at the differences between Chicana/o students, the tenets that separate such individuals are: the intercentricity of race and racism, the challenge of dominant ideology, the commitment to social justice, the centrality of experiential knowledge, and the interdisciplinary perspective. LatCRT's main focus is to advocate social justice for those living in marginalized communities (specifically Chicana/os), who are constrained by structural arrangements that disadvantage people of color: arrangements in which social institutions carry out the dispossession, disenfranchisement, and discrimination of minority groups. In an attempt to give voice to those who are victimized, LatCRT has created two common themes: First, CRT proposes that white supremacy and racial power are maintained over time, a process in which the law plays a central role. Different racial groups lack the voice to speak in this civil society, and, as such, CRT has introduced a new critical form of expression, called the voice of color. The voice of color consists of narratives and storytelling monologues used as devices for conveying personal racial experiences. These are also used to counter metanarratives that continue to maintain racial inequality. Therefore, the experiences of the oppressed are important aspects of developing a LatCRT analytical approach. Not since the rise of slavery, these scholars argue, has an institution so fundamentally shaped the life opportunities of those who bear the label of criminal.
Secondly, LatCRT work has investigated the possibility of transforming the relationship between law enforcement and racial power, as well as pursuing a project of achieving racial emancipation and anti-subordination more broadly. Its body of research is distinct from general critical race theory in that it emphasizes immigration theory and policy, language rights, and accent- and national origin-based forms of discrimination. CRT values the experiential knowledge of people of color and draws explicitly on these lived experiences as data, presenting research findings through storytelling, chronicles, scenarios, narratives, and parables. === Asian critical race theory === Asian critical race theory looks at the influence of race and racism on Asian Americans and their experiences in the US education system. Like Latino critical race theory, Asian critical race theory is distinct from the main body of CRT in its emphasis on immigration theory and policy. === Tribal critical race theory === Critical Race Theory evolved in the 1970s in response to Critical Legal Studies. Tribal Critical Theory (TribalCrit) focuses on stories and values oral data as a primary source of information. TribalCrit builds on the idea that White supremacy and imperialism underpin US policies toward Indigenous peoples. In contrast with CRT, it argues that colonization rather than racism is endemic to society. A key tenet of TribalCrit is that Indigenous people exist within a US society that both politicizes and racializes them, placing them in a "liminal space" where Indigenous self-representation is at odds with how others perceive them. TribalCrit argues that ideas of culture, information, and power take on new importance when inspected through a Native lens.
TribalCrit rejects goals of assimilation in US educational institutions, and argues that understanding the lived realities of Indigenous peoples is dependent on comprehending tribal philosophies, beliefs, traditions, and visions for the future. === Critical philosophy of race === The Critical Philosophy of Race is inspired by both Critical Legal Studies and Critical Race Theory's use of interdisciplinary scholarship. Both CLS and CRT explore the covert nature of mainstream use of "apparently neutral concepts, such as merit or freedom." == See also == Anti-bias curriculum Anti-subordination principle Cultural Marxism conspiracy theory Cultural hegemony Institutional or systemic racism Judicial aspects of race in the United States Racism in the United States Slavery in the United States White privilege Systemic racism Culture war Identity politics Whiteness studies
Wikipedia/Critical_race_theory
Edward Palmer Thompson (3 February 1924 – 28 August 1993) was an English historian, writer, socialist and peace campaigner. He is best known for his historical work on the radical movements in the late 18th and early 19th centuries, in particular The Making of the English Working Class (1963). In 1966, Thompson coined the term "history from below" to describe his approach to social history, which became one of the most consequential developments within the global history discipline. History from below arose from the Communist Party Historians Group and its work to popularise historical materialism. Thompson's work is considered by some to have been among the most important contributions to social history in the latter half of the twentieth century, with a global impact, including on scholarship in Asia and Africa. In a 2011 poll by History Today magazine, he was named the second most important historian of the previous 60 years, behind only Fernand Braudel. == Early life == E. P. Thompson was born in Oxford to Methodist missionary parents: his father, Edward John Thompson (1886–1946), was a poet and admirer of the Nobel Prize–winning poet Rabindranath Tagore. His older brother was William Frank Thompson (1920–1944), a British officer in the Second World War, who was captured and shot while aiding the Bulgarian anti-fascist partisans. Edward Thompson and his mother wrote There is a Spirit in Europe: A Memoir of Frank Thompson (1947). This out-of-print memoir was re-released by Brittunculi Records & Books in 2024. Thompson would later write another book about his brother, published posthumously in 1996. Thompson attended two private schools, The Dragon School in Oxford and Kingswood School in Bath. Like many of his generation, he left school in 1941 to fight in the Second World War. He served in a tank unit in the Italian campaign, including at the fourth battle of Cassino. After his military service, he studied at Corpus Christi College, Cambridge, where he joined the Communist Party of Great Britain.
In 1946, Thompson formed the Communist Party Historians Group with Christopher Hill, Eric Hobsbawm, Rodney Hilton, Dona Torr, and others. In 1952 they launched the journal Past and Present. == Scholarship == === 1950s: William Morris === Thompson's first major work of scholarship was his biography of William Morris, written while he was a member of the Communist Party. Subtitled From Romantic to Revolutionary, it was part of an effort by the Communist Party Historians' Group, inspired by Torr, to emphasise the domestic roots of Marxism in Britain at a time when the Communist Party was under attack for always following the Moscow line. It was also an attempt to take Morris back from the critics who for more than 50 years had emphasised his art and downplayed his politics. Although Morris's political work is well to the fore, Thompson also used his literary talents to comment on aspects of Morris's work, such as his early Romantic poetry, which had previously received relatively little consideration. As Thompson noted in his preface to the second edition (1976), the first edition (1955) appears to have received relatively little attention from the literary establishment because of its then-unfashionable Marxist point of view. However, the somewhat rewritten second edition was much better received. After Nikita Khrushchev's "secret speech" to the 20th Congress of the Communist Party of the Soviet Union in 1956, which revealed that the Soviet party leadership had long been aware of Stalin's crimes, Thompson (with John Saville and others) started a dissident publication inside the CP, called The Reasoner. Six months later, he and most of his comrades left the party in disgust at the Soviet invasion of Hungary. But Thompson remained what he called a "socialist humanist". 
With Saville and others, he set up the New Reasoner, a journal that sought to develop a democratic socialist alternative to what its editors considered the ossified official Marxism of the Communist and Trotskyist parties and the managerialist cold war social democracy of the Labour Party and its international allies. The New Reasoner was the most important organ of what became known as the "New Left", an informal movement of dissident leftists closely associated with the nascent movement for nuclear disarmament in the late 1950s and early 1960s. The New Reasoner combined with the Universities and Left Review to form New Left Review in 1960, though Thompson and others fell out with the group around Perry Anderson who took over the journal in 1962. The fashion ever since has been to describe the Thompson et al. New Left as "the first New Left" and the Anderson et al. group, which by 1968 had embraced Tariq Ali and various Trotskyists, as the second. === Early-1960s: The Making of the English Working Class === Thompson's most influential work was and remains The Making of the English Working Class, published in 1963 while he was working at the University of Leeds. The massive book, over 800 pages, was a watershed in the foundation of the field of social history. By exploring the ordinary cultures of working people through their previously ignored documentary remains, Thompson told the forgotten history of the first working-class political left in the world in the late-18th and early-19th centuries. Reflecting on the importance of the book for its 50th anniversary, Emma Griffin explained that Thompson "uncovered details about workshop customs and rituals, failed conspiracies, threatening letters, popular songs, and union club cards. He took what others had regarded as scraps from the archive and interrogated them for what they told us about the beliefs and aims of those who were not on the winning side. 
Here, then, was a book that rambled over aspects of human experience that had never before had their historian. The Making of the English Working Class had a profound effect on the shape of British historiography, and still endures as a staple on university reading lists more than 50 years after its first publication in 1963. Writing for the Times Higher Education in 2013, Robert Colls recalled the power of Thompson's book for his generation of young British leftists: I bought my first copy in 1968 – a small, fat bundle of Pelican with a picture of a Yorkshire miner on the front – and I still have it, bandaged up and exhausted by the years of labour. From the first of its 900-odd pages, I knew, and my friends at the University of Sussex knew, that this was something else. We talked about it in the bar and on the bus and in the refectory queue. Imagine that: young male students more interested in a book than in gooseberry tart and custard. In his preface to this book, E.P. Thompson set out his approach to writing history from below: "I am seeking to rescue the poor stockinger, the Luddite cropper, the 'obsolete' hand-loom weaver, the 'Utopian' artisan, and even the deluded follower of Joanna Southcott, from the enormous condescension of posterity. Their crafts and traditions may have been dying. Their hostility to the new industrialism may have been backward-looking. Their communitarian ideals may have been fantasies. Their insurrectionary conspiracies may have been foolhardy. But they lived through these times of acute social disturbance, and we did not. Their aspirations were valid in terms of their own experience; and, if they were casualties of history, they remain, condemned in their own lives, as casualties." Thompson's thought was also original and significant because of the way he defined "class."
To Thompson, class was not a structure, but a relationship: And class happens when some men, as a result of common experiences (inherited or shared), feel and articulate the identity of their interests as between themselves, and as against other men whose interests are different from (and usually opposed to) theirs. The class experience is largely determined by the productive relations into which men are born—or enter involuntarily. Class-consciousness is the way in which these experiences are handled in cultural terms: embodied in traditions, value-systems, ideas, and institutional forms. If the experience appears as determined, class-consciousness does not. We can see a logic in the responses of similar occupational groups undergoing similar experiences, but we cannot predicate any law. Consciousness of class arises in the same way in different times and places, but never in just the same way. By re-defining class as a relationship that changed over time, Thompson proceeded to demonstrate how class was worthy of historical investigation. He opened the gates for a generation of labour historians, such as David Montgomery and Herbert Gutman, who made similar studies of the American working classes. A major work of research and synthesis, the book was also important in historiographical terms: with it, Thompson demonstrated the power of a historical Marxism rooted in the experience of real flesh-and-blood workers. Thompson wrote the book while living in Siddal, Halifax, West Yorkshire and based some of the work on his experiences with the local Halifax population. In later essays, Thompson has emphasized that crime and disorder were characteristic responses of the working and lower classes to the oppressions imposed upon them. He argues that crime was defined and punished primarily as an activity that threatened the status, property and interests of the elites. 
England's lower classes were kept under control by large-scale execution, transportation to the colonies, and imprisonment in horrible hulks of old warships. There was no interest in reforming the culprits, the goal being to deter through extremely harsh punishment. === Late-1960s: Time, Work-Discipline, and Industrial Capitalism === Time discipline, as it pertains to sociology and anthropology, is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Thompson authored Time, Work-Discipline, and Industrial Capitalism, published in 1967, which posits that reliance on clock-time is a result of the European Industrial Revolution and that neither industrial capitalism nor the creation of the modern state would have been possible without the imposition of synchronic forms of time and work discipline. An accurate and precise record of time was not kept prior to the industrial revolution. The new clock-time imposed by government and capitalist interests replaced earlier, collective perceptions of time—such as natural rhythms of time like sunrise, sunset, and seasonal changes—that Thompson believed flowed from the collective wisdom of human societies. However, although it is likely that earlier views of time were imposed by religious and other social authorities prior to the industrial revolution, Thompson's work identified time discipline as an important concept for study within the social sciences. Thompson addresses the development of time as a measurement that has value and that can be controlled by social structures. As labor became more mechanized during the industrial revolution, time became more precise and standardized. 
Factory work changed the relationship that the capitalist and laborers had with time and the clock; clock time became a tool for social control. Capitalist interests demanded that the work of laborers be monitored accurately, to ensure that the cost of labor yielded the maximum benefit to the capitalist. == Post-academia == Thompson left the University of Warwick in protest at its commercialisation, as documented in the book Warwick University Limited (1971). He continued to teach and lecture as a visiting professor, particularly in the United States. However, he increasingly worked as a freelance writer, contributing many essays to New Society, Socialist Register and historical journals. In 1978, he published The Poverty of Theory, which attacked the structural Marxism of Louis Althusser and his followers in Britain around New Left Review (saying: "...all of them are Geschichtenscheissenschlopff, unhistorical shit"). The title echoes that of Karl Marx's 1847 polemic against Pierre-Joseph Proudhon, The Poverty of Philosophy, and that of philosopher Karl Popper's The Poverty of Historicism. Thompson's polemic provoked a book-length response from Perry Anderson entitled Arguments Within English Marxism. During the late 1970s, Thompson acquired a large public audience as a critic of what he perceived as the then Labour government's disregard of civil liberties; his writings from this time are collected in Writing By Candlelight (1980). From 1981 onward, Thompson was a frequent contributor to the American magazine The Nation. From 1980, Thompson was the most prominent intellectual of the revived movement for nuclear disarmament, revered by activists throughout the world. In Britain, his pamphlet Protest and Survive, a parody of the government leaflet Protect and Survive, played a major role in the revived strength of the Campaign for Nuclear Disarmament.
Just as important, Thompson was, with Ken Coates, Mary Kaldor and others, an author of the 1980 Appeal for European Nuclear Disarmament, calling for a nuclear-free Europe from Poland to Portugal, which was the founding document of European Nuclear Disarmament. END was both a Europe-wide campaign that comprised a series of large public conferences (the END Conventions), and a small British pressure group. Thompson played a key role in both END and CND throughout the 1980s, speaking at many public meetings, corresponding with hundreds of fellow activists and sympathetic intellectuals, and doing committee work. He had a particularly important part in opening a dialogue between the west European peace movement and dissidents in Soviet-dominated eastern Europe, particularly in Hungary and Czechoslovakia, for which he was denounced as a tool of American imperialism by the Soviet authorities. He wrote dozens of polemical articles and essays during this period, which are collected in the books Zero Option (1982) and The Heavy Dancers (1985). He also wrote an extended essay attacking the ideologists on both sides of the cold war, Double Exposure (1985), and edited a collection of essays opposing Ronald Reagan's Strategic Defense Initiative, Star Wars (1985). An excerpt from a speech given by Thompson featured in the computer game Deus Ex Machina (1984). Thompson's own haunting recitation of his 1950 poem of "apocalyptic expectation", "The Place Called Choice", appeared on the 1984 vinyl recording "The Apocalypso", by Canadian pop group Singing Fools, released by A&M Records. During the 1980s Thompson was also invited by Michael Eavis, who founded a local branch of CND, to speak at the Glastonbury Festival on several occasions after it became a fundraising event for the organisation: Thompson's speech at the 1983 edition of the festival, where he declared that the audience were part of an "alternative nation" of "inventors, writers...
theatre, musicians" opposed to Margaret Thatcher and the tradition of "moneymakers and imperialists" which he identified her with, was named by Eavis as the best speech ever made at the festival. === 1990s: William Blake === The last book Thompson finished was Witness Against the Beast: William Blake and the Moral Law (1993). The product of years of research and published shortly after his death, it shows how far Blake was inspired by dissident religious ideas rooted in the thinking of the most radical opponents of the monarchy during the English civil war. == Legacy and criticism == Thompson was one of the principal intellectuals of the Communist Party of Great Britain. Although he left the party in 1956 due to its suppression of open debate over the Soviet invasion of Hungary, he continued to refer to himself as a "historian in the Marxist tradition", calling for a rebellion against Stalinism as a prerequisite for the restoration of communists' "confidence in our own revolutionary perspectives". Thompson played a key role in the first New Left in Britain in the late 1950s. He was a vociferous left-wing socialist critic of the Labour governments of 1964–70 and 1974–79, and an early and constant supporter of the Campaign for Nuclear Disarmament, becoming during the 1980s the leading intellectual light of the movement against nuclear weapons in Europe. Although Thompson left the Communist Party of Great Britain, he remained committed to Marxist ideals. Leszek Kołakowski wrote a very harsh criticism of Thompson in his 1974 essay "My Correct Views on Everything", accusing Thompson of intellectual dishonesty in minimizing the brutalities of communism and placing abstract principles over real-world consequences. Tony Judt considered this rejoinder so authoritative that he claimed that "no one who reads it will ever take E.P. Thompson seriously again". 
Kołakowski's portrait of Thompson elicited some protests from readers and other left-wing journals came to Thompson's defence. On the 50th anniversary of the landmark publication of The Making of the English Working Class, several journalists celebrated E.P. Thompson as one of the pre-eminent historians of his day. As Marxist history became less fashionable in the face of the adaptation of discourse-focused approaches inspired by the linguistic turn and post-structuralism in the 1980s, Thompson's work was subjected to critique by fellow historians. Joan Wallach Scott argued that Thompson's approach in The Making of the English Working Class was androcentric, and ignored the centrality of gender in the construction of class identities, with the sphere of paid labour in which economic class was rooted being understood as inherently male and privileged over the feminised domestic realm. Sheila Rowbotham, also a feminist historian and a friend of E.P. and Dorothy Thompson, has argued that Scott's critique was ahistorical, given that the book was published in 1963, before the second-wave feminist movement had fully developed a theoretical gender perspective. In a 2020 interview, Rowbotham acknowledged that "there was not a great deal of reference to women in The Making... But at the time it seemed like there were a lot of references to women, because we had to read people like J. H. Plumb — history in which there were really absolutely no women at all", and suggested that Thompson limited his writing about women in deference to his wife, for whom women's history was a key area of research interest. Rowbotham did acknowledge that whilst they supported the emancipation of women, the Thompsons had mixed feelings about the contemporary second-wave feminist movement, regarding it as too middle class. 
Barbara Winslow, who studied under Thompson and named him as "the most important academic influence on my life", similarly acknowledged that whilst "he was not politically sympathetic to the women's liberation movement, in part because he thought it was an American import, he was not hostile to women students or their feminist research agendas", and argued that early women's history in the 1960s primarily focused on "writing women into history", with more sophisticated feminist theoretical approaches only arriving later. Gareth Stedman Jones claimed that the conception of the role of experience in The Making of the English Working Class embodied the idea of a direct link between social being and social consciousness, ignoring the importance of discourse as a means of mediating between the two, enabling people to develop a political understanding of the world and orientating them to political action. Marc Steinberg argued that Stedman Jones' interpretation of Thompson's perspective was "reductionist", with Thompson understanding the relationship between experience and consciousness as a "complex dialectical relationship". Wade Matthews argued in 2013: Numerous books, special collections, and journal articles on E.P. Thompson's scholarly work and legacy appeared soon after his death in 1993. Since then, however, interest in Thompson has waned. The reasons for this are perhaps easily enough summarized. Today, Thompson's histories are viewed as old-fashioned, while his socialist politics are believed extinct. Class is considered neither a fruitful concept of historical analysis nor an appropriate basis for an emancipatory politics. Nuclear weapons proliferate, but no anti-nuclear movement grows up alongside their proliferation. Civil liberties are a minority, and increasingly "radical," interest in the age of the "war on terror." Internationalism, as ideology and practice, is the preserve of capital not labour. 
At the beginning of the twenty-first century, then, Thompson seems out of place. ...certainly part of his distinctiveness lay in his literary style and tone. But it also lay in the moral quality which undergirded his histories and his political interventions. Part of that quality was the "glimpses of other possibilities of human nature, other ways of behaving" that they gave us. In this way, as Stefan Collini has suggested, Thompson is perhaps more relevant than he ever was. == Personal life == In 1948 Thompson married Dorothy Towers, whom he met at Cambridge. A fellow left-wing historian, she wrote studies on women in the Chartist movement, and the biography Queen Victoria: Gender and Power; she was Professor of History at the University of Birmingham. The Thompsons had three children, the youngest of whom is the award-winning children's writer, Kate Thompson. After four years of declining health, Thompson died at his home in Upper Wick, Worcestershire, on 28 August 1993, aged 69. == Honours == A blue plaque to the Thompsons was erected by the Halifax Civic Trust. == Selected works == William Morris: Romantic to Revolutionary. London: Lawrence & Wishart, 1955. "Socialist Humanism," The New Reasoner, vol. 1, no. 1 (Summer 1957), pp. 105–143. "The New Left," The New Reasoner, whole no. 9 (Summer 1959), pp. 1–17. The Making of the English Working Class London: Victor Gollancz (1963); 2nd edition with new postscript, Harmondsworth: Penguin, 1968, third edition with new preface 1980. "Time, work-discipline and industrial capitalism." Past & Present, vol 38, no. 1 (1967), pp. 56–97. "The moral economy of the English crowd in the eighteenth century." Past & Present, vol. 50, no. 1 (1971), pp. 76–136. Whigs and Hunters: The Origin of the Black Act, London: Allen Lane, 1975. Albion's Fatal Tree: Crime and Society in Eighteenth Century England. (Editor.) London: Allen Lane, 1975. The Poverty of Theory and Other Essays, London: Merlin Press, 1978. 
Writing by Candlelight, London: Merlin Press, 1980. Zero Option, London: Merlin Press, 1982. Double Exposure, London: Merlin Press, 1985. The Heavy Dancers, London: Merlin Press, 1985. The Sykaos Papers, London: Bloomsbury, 1988. Customs in Common: Studies in Traditional Popular Culture, London: Merlin Press, 1991. Witness Against the Beast: William Blake and the Moral Law, Cambridge: Cambridge University Press, 1993. Alien Homage: Edward Thompson and Rabindranath Tagore, Delhi: Oxford University Press, 1993. Making History: Writings on History and Culture, New York: New Press, 1994. Beyond the Frontier: The Politics of a Failed Mission, Bulgaria 1944, Rendlesham: Merlin, 1997. The Romantics: England in a Revolutionary Age, Woodbridge: Merlin Press, 1997. Collected Poems, Newcastle upon Tyne: Bloodaxe, 1999. == See also == Communist Party Historians Group The New Reasoner Postpositivism Cultural studies == References == == Further reading == Anderson, Perry (1980). Arguments within English Marxism (2nd ed.). London: Verso. ISBN 9780860917274. Berger, Stefan, and Christian Wicke. "‘… two monstrous antagonistic structures’: E. P. Thompson’s Marxist Historical Philosophy and Peace Activism during the Cold War." in Marxist Historical Cultures and Social Movements during the Cold War (Palgrave Macmillan, Cham, 2019) pp. 163-185. Bess, M. D., "E. P. Thompson: the historian as activist", American Historical Review, vol. 98 (1993), pp. 19–38. https://doi.org/10.1086/ahr/98.1.19 Best, Geoffrey, "The Making of the English Working Class [review]", The Historical Journal, vol. 8, no. 2 (1965), pp. 271–81. Blackburn, Robin (September–October 1993). "Edward Thompson and the New Left". New Left Review. I (201): 3–25. Clevenger, Samuel M. "Culturalism, EP Thompson and the polemic in British cultural studies." Continuum 33.4 (2019): 489-500. Davis, Madeleine; Morgan, Kevin, "'Causes that were lost'? Fifty years of E. P. 
Thompson's The Making of the English Working Class as contemporary history", Contemporary British History, vol. 28, no. 4 (2014), pp. 374–81. Delius, Peter. "E.P. Thompson,‘social history’, and South African historiography, 1970–90." Journal of African History 58.1 (2017): 3-17. Dworkin, Dennis, Cultural Marxism in Postwar Britain: History, the New Left, and the Origins of Cultural Studies (Durham, NC: Duke University Press, 1997). Eastwood, D., "History, politics and reputation: E. P. Thompson reconsidered", History, vol. 85, no. 280 (2000), pp. 634–54. Efstathiou, Christos. "E.P. Thompson's concept of class formation and its political implications: Echoes of popular front radicalism in The making of the English working class." Contemporary British History 28.4 (2014): 404-421. Efstathiou, Christos. "E.P. Thompson, the Early New Left and the Fife Socialist League." Labour History Review 81.1 (2016): 25-48. online Efstathiou, Christos. E.P. Thompson: A Twentieth Century Romantic, (London: Merlin Press, 2015). ISBN 9780850367157 Epstein, James. "Among the Romantics: EP Thompson and the Poetics of Disenchantment." Journal of British Studies 56.2 (2017): 322-350. Fieldhouse, Roger and Taylor, Richard (Eds.) (2014) E. P. Thompson and English Radicalism, Manchester: Manchester University Press. ISBN 9780719088216 Flewers, Paul. "E.P. Thompson’s Investigation of Stalinism: An Unrealised Project." Critique 45.4 (2017): 549-582. Fuchs, Christian. "Revisiting the Althusser/EP Thompson-controversy: towards a Marxist theory of communication." Communication and the Public 4.1 (2019): 3-20 online. Hall, Stuart, "Life and times of the first New Left", New Left Review, 2nd series, vol. 59 (2010), 177–96. Hempton, D., and Walsh, J., "E. P. Thompson and Methodism", in Mark A. Noll (ed.), God and Mammon: Protestants, Money and The Market, 1790–1860 (Oxford University Press, 2002), pp. 99–120. Hobsbawm, Eric (Winter 1994). "E. P. Thompson". Radical History Review. 1994 (58): 157–159. 
doi:10.1215/01636545-1994-58-157. Hobsbawm, Eric, "Edward Palmer Thompson (1924–1993)", Proceedings of the British Academy, vol. 90 (1996), pp. 521–39. Hyslop, Jonathan. "The Experience of War and the Making of a Historian: E.P. Thompson on Military Power, the Colonial Revolution and Nuclear Weapons." South African Historical Journal 68.3 (2016): 267-285 online. Johnson, Richard (Autumn 1978). "Edward Thompson, Eugene Genovese and Socialist-humanist History". History Workshop Journal. 6 (1): 79–100. doi:10.1093/hwj/6.1.79. Kaye, Harvey J. (1984). The British Marxist Historians. Cambridge: Polity Press. ISBN 9780333662434. Kaye, Harvey J.; McClelland, Keith, eds. (1990). E.P. Thompson: Critical Perspectives. London: Polity Press. ISBN 9780745602387. Kenny, Michael. "E.P. Thompson: last of the English radicals?" Political Quarterly 88.4 (2017): 579-588. Kenny, Michael, The First New Left: British Intellectuals after Stalin (London: Lawrence & Wishart, 1995). online Kołakowski, Leszek (1974). "My correct views on everything: A rejoinder to Edward Thompson's 'Open letter to Leszek Kołakowski'". Socialist Register. 11. Monthly Review Press. Litwak, Howard (28 April 1981). "END Game: The European View - A Talk With E. P. Thompson". The Boston Phoenix. Retrieved 9 March 2024. Lynd, Staughton (2014). Doing History from the Bottom Up: On E.P. Thompson, Howard Zinn, and Rebuilding the Labor Movement from Below. Chicago: Haymarket Books. ISBN 9781608463886. McCann, Gerard. Theory and History: The Political Thought of E. P. Thompson (Routledge, 2019). McIlroy, John. "Another look at E. P. Thompson and British Communism, 1937–1955." Labor History 58.4 (2017): 506-539. online McWilliam, Rohan, "Back to the future: E. P. Thompson, Eric Hobsbawm and the remaking of nineteenth-century British history", Social History, vol. 39, no. 2 (2014), pp. 149–59. Matthews, Wade. "Remaking EP Thompson."
Labour/Le Travail 72#1 (2013): 253–278, online Merrill, Michael (1984) [1976], "Interview with E. P. Thompson", in Abelove, H. (ed.), Visions of History, Manchester, UK: Manchester University Press, pp. 5–25, ISBN 9780394722009. Merrill, Michael (Winter 1994). "E. P. Thompson: In Solidarity". Radical History Review. 1994 (58): 152–156. doi:10.1215/01636545-1994-58-153. Millar, Kathleen M. "Introduction: Reading twenty-first-century capitalism through the lens of EP Thompson." Focaal 2015.73 (2015): 3-11 online. Palmer, Bryan D. "Paradox and polemic; argument and awkwardness: Reflections on E.P. Thompson." Contemporary British History 28.4 (2014): 382-403. Palmer, Bryan D. (1981). The Making of E. P. Thompson: Marxism, Humanism, and History. Toronto, Canada: New Hogtown Press. ISBN 9780919940178. Palmer, Bryan D. (1994). E. P. Thompson: Objections and Oppositions. London: Verso. ISBN 9781859840702. Rule, John G.; Malcolmson, Robert W. (1993). Protest and Survival: Essays for E. P. Thompson. London: Merlin. Sandoica, Elena Hernández. "Still Reading Edward P. Thompson." Culture & History Digital Journal 6.1 (2017): e009-e009. online Scott, Joan Wallach, "Women in The Making of the English Working Class", in Scott, Joan Wallach, Gender and the Politics of History (New York: Columbia University Press, 1988), pp. 68–92. Shenk, Timothy. "'I Am No Longer Answerable for Its Actions': EP Thompson After Moral Economy." Humanity: An International Journal of Human Rights, Humanitarianism, and Development 11.2 (2020): 241-246 excerpt. Steinberg, Marc W., "'A way of struggle': Reformations and affirmations of E. P. Thompson's class analysis in the light of postmodern theories of language", British Journal of Sociology, vol. 48, no. 3 (1997), pp. 471–492. Taylor, Jonathan R. P. There is a Spirit in Europe: A Memoir of Frank Thompson, 80 Years On. Brittunculi Records & Books (Lulu imprint). A reissue of the memoir first published by E. P. Thompson at Victor Gollancz in 1947 (Fanfare Press, London), commemorating his older brother, the poet and SOE officer Frank Thompson, executed in Bulgaria in 1944. ISBN 9781304479525. Todd, Selina, "Class, experience and Britain's twentieth century", Social History, vol. 49, no. 4 (2014), pp. 489–508. del Valle Alcalá, Roberto. "A multitude of hopes: Humanism and subjectivity in E.P. Thompson and Antonio Negri" Culture, Theory and Critique 54.1 (2013): 74-87 online. Webb, W. L. (Winter 1994). "A Thoroughly English Dissident". Radical History Review. 1994 (58): 160–164. doi:10.1215/01636545-1994-58-160. Stuart White (2 August 2013). "The dignity of dissent: E.P. Thompson and One Nation Labour". openDemocracy. Retrieved 13 September 2024. Winant, Gabriel, et al. "Introduction: The Global E.P. Thompson." International Review of Social History 61.1 (2016): 1-9 online. == External links == E. P. Thompson on marxists.org archive E. P. Thompson in discussion with C. L. R. James, 1983 on YouTube. E. P. Thompson at the March 1977 SSRC Seminar on Models of Social Change on YouTube. E. P. and Dorothy Thompson family website, now hosted on the Verso Books website. E.P. Thompson talking to Andrew Whitehead in 1991 about his association with the Communist Party. Works by or about E. P. Thompson at the Internet Archive
Wikipedia/The_Poverty_of_Theory
In praxeology, methodological dualism is an epistemological position which states that it is necessary, based on our current state of knowledge and understanding, to use a different method in analysing the actions of human beings than the methods of the natural sciences (such as physics, chemistry, physiology, etc.). This position is based on the presupposition that humans differ radically from other objects in the external world. Namely, humans purposefully aim at chosen ends and employ chosen means to attain them (i.e. humans act), whereas other objects in nature, such as sticks, stones, and atoms, do not. Methodological dualism is not a metaphysical or ontological doctrine, and refrains from making such judgments. == Overview == Ludwig von Mises' insistence on methodological dualism was a reaction against "the 'methodological monism' preached by behaviorists and positivists who [saw] no basic reason to approach human behavior and social phenomena differently from the way natural scientists approach molecular behavior and physical phenomena." Mises states that the sciences of human action deal with ends and means, with volition, with meaning and understanding, with "thoughts, ideas, and judgments of value". Action is the purposive use of chosen means for the attainment of chosen ends, and ideas, beliefs, and judgments of value (called mental phenomena) determine the choice of both means and ends. Thus, these mental phenomena occupy a central position in the sciences of human action for, as Mises argues, "acts of choosing are determined by thoughts and ideas." In arguing for methodological dualism, Mises states that because the natural sciences have not yet determined "how definite external events [...] produce within the human mind definite ideas, value judgments, and volitions", this ignorance splits our knowledge into two distinct fields: the "realm of external events" on the one hand, and the "realm of human thought and action" on the other.
Thus Mises' conception of the sciences of human action (i.e. praxeology and thymology) is based on this methodological dualism. Mises argues that because we are ourselves thinking and acting beings we can reflect, through introspection, on the meaning of action, of intention and volition, of ends and means, and on our ideas, beliefs, and judgments of value. This kind of reflective knowledge, Mises insists, is knowledge from within us, "is our own because we are men", whereas we are not stones or atoms and so we cannot reflect on what it means to be these things. == See also == Behavioral economics Cognitive science Hard and soft science Methodological individualism == References ==
Wikipedia/Methodological_dualism
In computer science, a tree is a widely used abstract data type that represents a hierarchical tree structure with a set of connected nodes. Each node in the tree can be connected to many children (depending on the type of tree), but must be connected to exactly one parent, except for the root node, which has no parent (i.e., the root node is the top-most node in the tree hierarchy). These constraints mean there are no cycles or "loops" (no node can be its own ancestor), and also that each child can be treated like the root node of its own subtree, making recursion a useful technique for tree traversal. In contrast to linear data structures, many trees cannot be represented by relationships between neighboring nodes in a single straight line; each connection between two adjacent nodes (a parent and one of its children) is called an edge or link. Binary trees are a commonly used type, which constrain the number of children for each parent to at most two. When the order of the children is specified, this data structure corresponds to an ordered tree in graph theory. A value or pointer to other data may be associated with every node in the tree, or sometimes only with the leaf nodes, which have no child nodes. The abstract data type (ADT) can be represented in a number of ways, including a list of parents with pointers to children, a list of children with pointers to parents, or a list of nodes and a separate list of parent-child relations (a specific type of adjacency list). Representations might also be more complicated, for example using indexes or ancestor lists for performance. Trees as used in computing are similar to but can be different from mathematical constructs of trees in graph theory, trees in set theory, and trees in descriptive set theory. == Terminology == A node is a structure which may contain data and connections to other nodes, sometimes called edges or links.
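A minimal sketch of such a node structure in Python; all names here (`TreeNode`, `add_child`, and so on) are my own illustrative choices, not a standard API:

```python
class TreeNode:
    """A rooted-tree node: one value, one optional parent, ordered children."""

    def __init__(self, value):
        self.value = value
        self.parent = None       # exactly one parent, or None for the root
        self.children = []       # ordered list of child nodes

    def add_child(self, child):
        """Attach `child` below this node and return it."""
        child.parent = self
        self.children.append(child)
        return child

    def is_root(self):
        return self.parent is None

    def is_leaf(self):
        return not self.children

# Each child is itself the root of its own subtree, so recursion
# over `children` visits the whole tree.
root = TreeNode("A")
b = root.add_child(TreeNode("B"))
c = root.add_child(TreeNode("C"))
d = b.add_child(TreeNode("D"))
```

Because every non-root node keeps a single `parent` reference, the exactly-one-parent constraint above is mirrored directly in the data layout, and no cycles arise as long as `add_child` is only called on freshly created nodes.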
Each node in a tree has zero or more child nodes, which are below it in the tree (by convention, trees are drawn with descendants going downwards). A node that has a child is called the child's parent node (or superior). All nodes have exactly one parent, except the topmost root node, which has none. A node might have many ancestor nodes, such as the parent's parent. Child nodes with the same parent are sibling nodes. Typically siblings have an order, with the first one conventionally drawn on the left. Some definitions allow a tree to have no nodes at all, in which case it is called empty. An internal node (also known as an inner node, inode for short, or branch node) is any node of a tree that has child nodes. Similarly, an external node (also known as an outer node, leaf node, or terminal node) is any node that does not have child nodes. The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). Thus the root node has depth zero, leaf nodes have height zero, and a tree with only a single node (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (a tree with no nodes, if such are allowed) has height −1. Each non-root node can be treated as the root node of its own subtree, which includes that node and all its descendants. Other terms used with trees:
Neighbor: Parent or child.
Ancestor: A node reachable by repeatedly proceeding from child to parent.
Descendant: A node reachable by repeatedly proceeding from parent to child. Also known as subchild.
Degree: For a given node, its number of children. A leaf, by definition, has degree zero.
Degree of tree: The maximum degree of a node in the tree.
Distance: The number of edges along the shortest path between two nodes.
Level: The level of a node is the number of edges along the unique path between it and the root node. This is the same as depth.
Width: The number of nodes in a level.
Breadth: The number of leaves.
Complete tree: A tree with every level filled, except possibly the last, which is filled from left to right.
Forest: A set of one or more disjoint trees.
Ordered tree: A rooted tree in which an ordering is specified for the children of each vertex.
Size of a tree: The number of nodes in the tree.

== Common operations ==

Enumerating all the items
Enumerating a section of a tree
Searching for an item
Adding a new item at a certain position on the tree
Deleting an item
Pruning: removing a whole section of a tree
Grafting: adding a whole section to a tree
Finding the root for any node
Finding the lowest common ancestor of two nodes

=== Traversal and search methods ===

Stepping through the items of a tree, by means of the connections between parents and children, is called walking the tree, and the action is a walk of the tree. Often, an operation might be performed when a pointer arrives at a particular node. A walk in which each parent node is traversed before its children is called a pre-order walk; a walk in which the children are traversed before their respective parents is called a post-order walk; a walk in which a node's left subtree, then the node itself, and finally its right subtree are traversed is called an in-order traversal. (This last scenario, referring to exactly two subtrees, a left subtree and a right subtree, assumes specifically a binary tree.) A level-order walk effectively performs a breadth-first search over the entirety of a tree; nodes are traversed level by level, where the root node is visited first, followed by its direct child nodes and their siblings, followed by its grandchild nodes and their siblings, etc., until all nodes in the tree have been traversed.

== Representations ==

There are many different ways to represent trees.
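As a concrete illustration, one common representation stores each node as a record with pointers to its children, over which the traversal orders described above are straightforward to implement. The following is a minimal Python sketch (the class and function names are our own, not part of any standard library):

```python
from collections import deque

class Node:
    """A binary tree node: a value plus pointers to up to two children."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def pre_order(node):
    """Parent before its children."""
    if node is None:
        return []
    return [node.value] + pre_order(node.left) + pre_order(node.right)

def in_order(node):
    """Left subtree, then the node itself, then the right subtree."""
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

def post_order(node):
    """Children before their parent."""
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.value]

def level_order(root):
    """Breadth-first: visit nodes level by level, starting from the root."""
    out, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        out.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return out

# A small example tree:   F
#                        / \
#                       B   G
#                      / \   \
#                     A   D   I
root = Node("F", Node("B", Node("A"), Node("D")), Node("G", None, Node("I")))
print(pre_order(root))    # ['F', 'B', 'A', 'D', 'G', 'I']
print(in_order(root))     # ['A', 'B', 'D', 'F', 'G', 'I']
print(post_order(root))   # ['A', 'D', 'B', 'I', 'G', 'F']
print(level_order(root))  # ['F', 'B', 'G', 'A', 'D', 'I']
```

Note how the three depth-first walks recurse on subtrees (each child is the root of its own subtree), while the level-order walk instead uses a queue, as a breadth-first search does.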
In working memory, nodes are typically dynamically allocated records with pointers to their children, their parents, or both, as well as any associated data. If of a fixed size, the nodes might be stored in a list. Nodes and relationships between nodes might be stored in a separate special type of adjacency list. In relational databases, nodes are typically represented as table rows, with indexed row IDs facilitating pointers between parents and children. Nodes can also be stored as items in an array, with relationships between them determined by their positions in the array (as in a binary heap). A binary tree can be implemented as a list of lists: the head of a list (the value of the first term) is the left child (subtree), while the tail (the list of second and subsequent terms) is the right child (subtree). This can be modified to allow values as well, as in Lisp S-expressions, where the head (value of the first term) is the value of the node, the head of the tail (value of the second term) is the left child, and the tail of the tail (list of third and subsequent terms) is the right child. Ordered trees can be naturally encoded by finite sequences, for example with natural numbers.

== Type theory ==

As an abstract data type, the abstract tree type T with values of some type E is defined, using the abstract forest type F (list of trees), by the functions:

value: T → E
children: T → F
nil: () → F
node: E × F → T

with the axioms:

value(node(e, f)) = e
children(node(e, f)) = f

In terms of type theory, a tree is an inductive type defined by the constructors nil (empty forest) and node (tree with root node with given value and children).

== Mathematical terminology ==

Viewed as a whole, a tree data structure is an ordered tree, generally with values attached to each node.
Concretely, it is (if required to be non-empty) a rooted tree with the "away from root" direction (a narrower term is an "arborescence"), meaning a directed graph whose underlying undirected graph is a tree (any two vertices are connected by exactly one simple path), with a distinguished root (one vertex is designated as the root), which determines the direction on the edges (arrows point away from the root; given an edge, the node that the edge points from is called the parent and the node that the edge points to is called the child), together with an ordering on the child nodes of a given node, and a value (of some data type) at each node. Often trees have a fixed (more properly, bounded) branching factor (outdegree), particularly always having two child nodes (possibly empty, hence at most two non-empty child nodes), hence a "binary tree". Allowing empty trees makes some definitions simpler, some more complicated: a rooted tree must be non-empty, hence if empty trees are allowed the above definition instead becomes "an empty tree or a rooted tree such that ...". On the other hand, empty trees simplify defining fixed branching factor: with empty trees allowed, a binary tree is a tree such that every node has exactly two children, each of which is a (possibly empty) tree.
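This last definition, with empty trees allowed, maps directly onto the list-of-lists encoding mentioned under Representations: a binary tree is either the empty list or a three-element list holding a value and two subtrees. A minimal Python sketch (the helper names are our own):

```python
# A binary tree is either the empty tree [] or [value, left, right],
# where left and right are themselves (possibly empty) binary trees.
EMPTY = []

def node(value, left=EMPTY, right=EMPTY):
    """Build a tree whose root holds `value` with the given subtrees."""
    return [value, left, right]

def size(tree):
    """Number of nodes in the tree; the empty tree has size 0."""
    if not tree:
        return 0
    _, left, right = tree
    return 1 + size(left) + size(right)

def height(tree):
    """Length of the longest downward path; by convention the
    empty tree has height -1, so a single node has height 0."""
    if not tree:
        return -1
    _, left, right = tree
    return 1 + max(height(left), height(right))

#        1
#       / \
#      2   3
#     /
#    4
t = node(1, node(2, node(4)), node(3))
print(size(t))    # 4
print(height(t))  # 2
```

Every node here has exactly two children, each a (possibly empty) tree, so the recursive functions need only two cases: the empty tree, and a value with two subtrees.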
== Applications ==

Trees are commonly used to represent or manipulate hierarchical data in applications such as:

File systems: the directory structure used to organize subdirectories and files (symbolic links create non-tree graphs, as do multiple hard links to the same file or directory), and the mechanism used to allocate and link blocks of data on the storage device
Class hierarchies or "inheritance trees" showing the relationships among classes in object-oriented programming; multiple inheritance produces non-tree graphs
Abstract syntax trees for computer languages
Natural language processing: parse trees, modeling utterances in a generative grammar, and dialogue trees for generating conversations
Document Object Models ("DOM trees") of XML and HTML documents
Search trees, which store data in a way that makes an efficient search algorithm possible via tree traversal (a binary search tree is a type of binary tree)
Representing sorted lists of data
Computer-generated imagery: space partitioning (including binary space partitioning) and digital compositing
Storing Barnes–Hut trees used to simulate galaxies
Implementing heaps
Nested set collections
Hierarchical taxonomies such as the Dewey Decimal Classification, with sections of increasing specificity
Hierarchical temporal memory
Genetic programming
Hierarchical clustering

Trees can be used to represent and manipulate various mathematical structures, such as:

Paths through an arbitrary node-and-edge graph (including multigraphs), by making multiple nodes in the tree for each graph node used in multiple paths
Any mathematical hierarchy

Tree structures are often used for mapping the relationships between things, such as:

Components and subcomponents, which can be visualized in an exploded-view drawing
Subroutine calls, used to identify which subroutines in a program call other subroutines non-recursively
Inheritance of DNA among species by evolution, of source code by software projects (e.g.
Linux distribution timeline), of designs in various types of cars, etc.
The contents of hierarchical namespaces

JSON and YAML documents can be thought of as trees, but are typically represented by nested lists and dictionaries.

== See also ==

Distributed tree search
Category:Trees (data structures) (catalogs types of computational trees)

== Further reading ==

Donald Knuth. The Art of Computer Programming: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.3: Trees, pp. 308–423.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.4: Representing rooted trees, pp. 214–217. Chapters 12–14 (Binary Search Trees, Red–Black Trees, Augmenting Data Structures), pp. 253–320.

== External links ==

Description from the Dictionary of Algorithms and Data Structures
Wikipedia/Tree_(computer_science)
Truthmaker theory is "the branch of metaphysics that explores the relationships between what is true and what exists". The basic intuition behind truthmaker theory is that truth depends on being. For example, a perceptual experience of a green tree may be said to be true because there actually is a green tree. But if there were no tree there, it would be false. So the experience by itself does not ensure its truth or falsehood; that depends on something else. Expressed more generally, truthmaker theory is the thesis that "the truth of truthbearers depends on the existence of truthmakers". A perceptual experience is the truthbearer in the example above. Various representational entities, like beliefs, thoughts or assertions, can act as truthbearers. Truthmaker theorists are divided about what type of entity plays the role of truthmaker; popular candidates include states of affairs and tropes. Truthmaker maximalism is the thesis that every truth has a truthmaker. An alternative view is truthmaker atomism, the thesis that only atomic sentences have truthmakers. Truthmaker atomism remains true to the basic intuition that truth depends on being by holding that the truth of molecular sentences depends on the truth of atomic sentences, whose truth in turn depends on being. All non-maximalist positions accept that there are truthmaker gaps: truths without truthmakers. Opponents have tried to disprove truthmaker theory by showing that there are so-called deep truthmaker gaps: truthbearers that not only lack a truthmaker but whose truths do not even depend on being. Various principles governing the truthmaking relation have been proposed in order to make the intuitions about the role and nature of truthmaking explicit. Truthmaker theory is closely related to the correspondence theory of truth, but not identical to it.
Truthmaker theory has been applied to various fields in metaphysics, often with the goal of exposing ontological cheaters: theorists who are committed to certain beliefs but do not or cannot account for the existence of a truthmaker for these beliefs.

== Overview ==

In Truth-Makers (1984), Kevin Mulligan, Peter Simons and Barry Smith introduced the truth-maker idea as a contribution to the correspondence theory of truth. Logically atomic empirical sentences such as "John kissed Mary" have truthmakers, typically events or tropes corresponding to the main verbs of the sentences in question. Mulligan et al. explore extensions of this idea to sentences of other sorts, but they do not embrace any position of truthmaker maximalism, according to which every truthbearer has a truthmaker. This maximalist position leads to philosophical difficulties, such as the question of what the truthmaker for an ethical, modal or mathematical truthbearer could be. Someone who is deeply enough committed to truthmakers and who simultaneously doubts that a truthmaker could be found for a certain kind of truthbearer will simply deny that that truthbearer could be true. Those who find the Parmenidean insight sufficiently compelling often take it to be a particularly enlightening metaphysical pursuit to search for truthmakers of these kinds of propositions. Another difficulty for the claim that every truthbearer has a truthmaker concerns negations of existential propositions (or, equivalently, universal propositions). Consider, for example, the true proposition that there are no unicorns: proposed truthmakers include the totality of all things, or some worldly state of affairs such as x1's not being a unicorn, x2's not being a unicorn, ..., and everything's being x1, or x2, or ... (the latter suggestion is due to Richard M. Gale).
David Lewis has proposed a more moderate version of the truthmaker theory on which truthmakers are only required for positive propositions (e.g., there must be a truthmaker for the proposition that there are horses, but not for the equally true proposition that there are no unicorns). What makes a negative proposition p true is the lack of a falsemaker for it, i.e., the lack of a truthmaker for the negation of p. Thus what makes it true that there are no unicorns is the lack of a truthmaker for the proposition that there are unicorns, i.e., the lack of unicorns. Truthmaker theorists differ as to what entities are the truthmakers of various truthbearers. Some say that the truthmaker of the proposition that Socrates is sitting (assuming Socrates is) is "Socrates' being seated" (whatever exactly that might turn out to be on the correct ontology) and in general the truthmaker of the truthbearer expressed by a sentence s can be denoted by the participial nominalization of s. Others will say that the truthmaker of the proposition that Socrates is sitting is just "Socrates" himself. In any case, the truthmaker is supposed to be something concrete, and on the first view is that whose existence is reported by the truthbearer and on the second view is that which the truthbearer is about. While the existence of truthmakers may seem an abstruse question, concrete instances are at the heart of a number of philosophical issues. Thus, J. L. Mackie has argued that the truthmakers of moral claims would be "queer entities", too strange to exist, and hence all moral claims are false. Alternatively, a divine command metaethicist may insist that the only possible candidate for a truthmaker of a moral claim is a command from a perfect God, and hence if moral claims are true and a truthmaker theory holds, then God exists. 
Thus the disagreement between various metaethical schools is in part a disagreement over what kinds of truthmakers moral claims would have if these claims were true and over whether such truthmakers exist.

== Truthmaker gaps ==

A truthmaker gap is a truth that lacks a truthmaker. Truthmaker maximalists hold that there are no truthmaker gaps: every truth has a truthmaker. Truthmaker non-maximalists, on the other hand, allow that some truths lack a truthmaker. Truthmaker non-maximalists still count as truthmaker theorists in the sense that they hold onto the core intuition of truthmaker theory that truth depends on being. Atomic truthmaker theories, which have their root in logical atomism, are examples of such a position. According to them, only atomic sentences have truthmakers. A sentence is atomic or simple if it does not have other sentences as proper parts. For example, "The sun is shining" is an atomic sentence while "The sun is shining and the wind is blowing" is a non-atomic or molecular sentence since it is made up of two sentences linked by the conjunction "and". In propositional calculus molecular sentences are composed through truth-functional logical connectives. Molecular sentences lack truthmakers according to atomic truthmaker theories and therefore constitute truthmaker gaps. But the fact that the truth values of molecular sentences depend on the truth values of their constituents (if only truth-functional connectives are allowed) ensures that truth still depends on being. This type of truthmaker gap has been called a "shallow" truthmaker gap. Shallow truthmaker gaps are contrasted with "deep" truthmaker gaps. Deep truthmaker gaps are truths that do not depend on being. They therefore pose a challenge to any type of truthmaker theory.
In terms of possible worlds, a deep truthmaker gap is a proposition that is true in one possible world and false in another where there is no difference between these two worlds besides the truth value of this proposition. Critics of truthmaker theory have tried to find deep truthmaker gaps in order to refute truthmaker theory in general.

== Truthmaking principles ==

Various principles governing the truthmaking relation have been proposed. They aim to make our intuitions about the role and nature of truthmaking explicit. The entailment principle states that if entity e is a truthmaker for proposition p and p entails proposition q, then e is also a truthmaker for q. The conjunction principle states that if entity e is a truthmaker for the conjunction of proposition p and proposition q, then e is also a truthmaker for p. The disjunction principle states that if entity e is a truthmaker for the disjunction of proposition p and proposition q, then e is either a truthmaker of p or a truthmaker of q. These principles seem intuitively to be true, but it has been shown that they lead to implausible conclusions when combined with other plausible principles.

== Relation to the correspondence theory of truth ==

The correspondence theory of truth states that truth consists in correspondence with reality. Or in the words of Thomas Aquinas: "A judgment is said to be true when it conforms to the external reality". Truthmaker theory is closely related to correspondence theory; some authors see it as a modern version of correspondence theory. The similarity between the two can be seen in the following example definitions:

Correspondence theory: David's belief that the sky is blue is true if and only if this belief stands in a correspondence-relation to the fact that the sky is blue.
Truthmaker theory: David's belief that the sky is blue is true if and only if this belief stands in a truthmaking-relation to the fact that the sky is blue.
But despite the obvious similarities, there are a few important differences between truthmaker theory and correspondence theory. For one, correspondence theory aims to give a substantive account or a definition of what truth is. Truthmaker theory, on the other hand, has the goal of determining how truth depends on being. So it presupposes the notion of truth instead of defining it. While it seems natural to combine truthmaker theory with a correspondence-conception of truth, this is not necessary. Another difference between the two theories is that correspondence is a symmetric relation while the truthmaking relation is asymmetric.

== Applications ==

Arguments based on truthmaker theory have been used in various fields to criticize so-called "ontological cheaters". An ontological cheater is someone who is committed to a certain belief but does not or cannot account for the existence of a truthmaker for this belief. If such a belief were true then its truth would be brute or free-floating: it would be disconnected from any underlying reality. This is opposed to the basic intuition behind truthmaker theory that truth depends on being. Defense strategies open to theorists accused of ontological cheating include denying that the proposition in question is true, denying the legitimacy of truthmaker theory as a whole, or finding a so-called "proxy" or "trace" within their preferred ontology. A proxy or trace, in this context, is an entity that can act as a truthmaker for the proposition in question even though it is not obvious that this proposition is about this entity. An example of such a strategy in actualism is to use actual but abstract objects as proxies for propositions about possible objects, whose existence is denied by actualism.

=== Presentism ===

One such criticism has been leveled against presentism. Presentism is the view that only the present exists, i.e. that past entities or events lack existence. Eternalism is the opposite of presentism.
It holds that past, present and future existents are equally real. Beliefs about the past and the future are very common, for example the belief that dinosaurs existed. Providing a truthmaker for this belief is quite straightforward for eternalists: they may claim that the dinosaurs themselves or facts about dinosaurs act as truthmakers. This is unproblematic since, for eternalists, past entities have regular existence. This strategy is not available to the presentists since they deny that past entities have existence. But there seem to be no obvious truthmaker candidates for this belief among the present entities. The presentist would have to be labeled an ontological cheater unless they can find a truthmaker within their ontology.

=== Phenomenalism ===

Phenomenalism has been subjected to a similar criticism. Phenomenalism is the view that only phenomena exist. It is opposed to the common sense intuition that the material objects we perceive exist independently of our perceptual experiences of them and that they even exist when not perceived. This includes, for example, the belief that valuables locked inside a safe do not cease to exist despite the fact that no one observes them in there, which would, of course, defeat the purpose of locking them inside in the first place. The phenomenalist faces the problem of how to account for the truth of this belief. A well-known solution to this problem comes from John Stuart Mill. Mill claimed that we can account for unperceived objects in terms of counterfactual conditionals: it is true that the valuables are in the safe because if someone looked inside, then this person would have a corresponding sensory impression. But this solution does not satisfy the truthmaker theorist since it still leaves open what the truthmaker for this counterfactual conditional is. It is not clear how such a truthmaker could be found within the phenomenalist ontology.
=== Actualism ===

Actualism is the view that everything there is, is actual, i.e. that only actual things have existence. Actualism contrasts with possibilism, the view that there are some entities that are merely possible. Actualists face the problem of how to account for the truthmakers of modal truths, like "it was possible for the Cuban Missile Crisis to escalate into a full-scale nuclear war", "there could have been purple cows" or "it is necessary that all cows are animals". Actualists have proposed various solutions, but there is no consensus as to which one is the best. A well-known account relies on the notion of possible worlds, conceived as actual abstract objects, for example as maximal consistent sets of propositions or of states of affairs. A set of propositions is maximal if, for any statement p, either p or not-p is a member. Possible worlds act as truthmakers for modal truths. For example, there is a possible world which is inhabited by purple cows. This world is a truthmaker for "there could have been purple cows". Cows are animals in all possible worlds that are inhabited by cows. So the worlds taken together act as the truthmaker of "it is necessary that all cows are animals". This account relies heavily on a logical notion of modality, since possibility and necessity are defined in terms of consistency. This dependency has prompted some philosophers to assert that no truthmakers at all are needed for modal truths, that modal truths are true "by default". This position involves abandoning truthmaker maximalism. An alternative solution to the problem of truthmakers for modal truths is based on the notion of "essence". Objects have their properties either essentially or accidentally. The essence of an object involves all the properties it has essentially. The essence of a thing defines its nature: what it fundamentally is.
On this type of account, the truthmaker for "it is necessary that all cows are animals" is that it belongs to the essence of cows to be animals. The truthmaker for "there could have been purple cows" is that color is not essential to cows. Some essentialist theories focus on object essences, i.e. that certain properties are essential to a specific object. Other essentialist theories focus on kind essences, i.e. that certain properties are essential to the kind or species of the object in question.

== See also ==

Slingshot argument

== Further reading ==

Armstrong, D. M. (2004). Truth and Truthmakers. Cambridge: Cambridge University Press. ISBN 0-521-54723-7
Beebee, H., & Dodd, J. (Eds.). (2005). Truthmakers: The Contemporary Debate. Oxford: Oxford University Press. ISBN 0-19-928356-7
Fine, Kit (2018). "Truthmaking and the Is–Ought Gap". Synthese, 1–28.
Lewis, David (2001). "Truthmaking and Difference-Making". Noûs 35 (4): 602–615.
MacBride, Fraser (2013). "Truthmakers". Stanford Encyclopedia of Philosophy.
Mulligan, K., Simons, P. M. and Smith, B. (1984). "Truth-Makers". Philosophy and Phenomenological Research, 44, 287–321.
Mulligan, K. (2007). "Two Dogmas of Truthmaking". Metaphysics and Truthmakers. Frankfurt: Ontos Verlag, 51–66.
Rodriguez-Pereyra, Gonzalo (2006). "Truthmakers". Philosophy Compass (1), 186–200.
Smith, B. (1999). "Truthmaker Realism". Australasian Journal of Philosophy, 77 (3), 274–291.

== External links ==

Truthmaker theory at the Indiana Philosophy Ontology Project
"Truth-makers", by Kevin Mulligan, Barry Smith, & Peter Simons, Philosophy and Phenomenological Research, 44 (1984), 287–321.
Wikipedia/Truthmaker_theory
Feminist metaphysics aims to question how inquiries and answers in the field of metaphysics have supported sexism. Feminist metaphysics overlaps with fields such as the philosophy of mind and philosophy of self. Feminist metaphysicians such as Sally Haslanger, Ásta, and Judith Butler have sought to explain the nature of gender in the interest of advancing feminist goals. Another aim of feminist metaphysics has been to provide a basis for feminist activism by explaining what unites women as a group. These accounts have historically centered on cisgender women, but philosophers such as Gayle Salamon, Talia Mae Bettcher and Robin Dembroff have sought to further explain the genders of transgender and non-binary people.

== Approaches ==

=== Social constructionism ===

Feminist metaphysicians have significantly influenced social ontology by developing tools to critique and understand social realities. Social constructionism emerged in feminism as a response to biological determinist claims of female inferiority. Existentialist philosopher Simone de Beauvoir argues in her seminal work The Second Sex that, although biological features distinguish men and women, these features neither cause nor justify the social conditions which disadvantage women. Beauvoir rejects explanations based on biology, psychoanalytic theory and historical materialism, advancing instead a phenomenological investigation influenced by Maurice Merleau-Ponty. The distinction between sex and gender in feminist theory is commonly attributed to Beauvoir. Later theorists would challenge the commitment to the pre-social existence of sex, arguing that sex is socially constructed as well as gender. For Monique Wittig, the division of bodies into sexes is the product of a heterosexual society:

There is but sex that is oppressed and sex that oppresses. It is oppression that creates sex and not the contrary.
The contrary would be to say that sex creates oppression, or to say that the cause (origin) of oppression is to be found in sex itself, in a natural division of the sexes preexisting (or outside of) society.

This is expanded by Judith Butler in Gender Trouble. Drawing on post-structuralist theory, Butler criticizes the dependence on a pre-discursive sex upon which gender would be constructed, instead proposing gender as a performative doing.

=== Psychoanalytic theory ===

Écriture féminine is a concept from psychoanalytic French feminism that emphasizes the connection between women's writing and their bodies. French feminists argue that Western thought suppresses female experiences and reinforces phallogocentrism, and propose deconstructing language through women's distinct bodily experiences. In This Sex Which Is Not One (1977), Luce Irigaray seeks to create a psychoanalytic narrative that incorporates Lacanian ideas while challenging their phallocentric elements. Irigaray contends that women can cultivate a sense of identity and sexuality without needing to conform to phallic ideals, and that the female body is multiplicitous.

=== Gender performativity ===

Judith Butler's theory of gender performativity can be seen as a means to show "the ways in which reified and naturalized conceptions of gender might be understood as constituted and, hence, capable of being constituted differently.": 520  Drawing from J. L. Austin's speech act theory, Butler suggests that gender is performative, meaning it comes into existence through repeated social practices, gestures, and discourses that reinforce norms of masculinity and femininity. This repetition creates the illusion of a stable gender identity, but Butler emphasizes that these performances are neither voluntary nor fixed; rather, they are shaped by cultural expectations and can be subverted through alternative performances.
Other influences include Friedrich Nietzsche, Michel Foucault, and Jacques Derrida.: 581  On Butler's hypothesis, the performative aspect of gender is perhaps most obvious in drag performance, which offers a rudimentary understanding of gender binaries in its emphasis on gender performance. Butler understands that drag cannot be regarded as an example of subjective or singular identity, where "there is a 'one' who is prior to gender, a one who goes to the wardrobe of gender and decides with deliberation which gender it will be today".: 21  Consequently, drag should not be considered the honest expression of its performer's intent. Rather, Butler suggests that what is performed "can only be understood through reference to what is barred from the signifier within the domain of corporeal legibility".: 24  According to Butler, gender performance is subversive because it is "the kind of effect that resists calculation", which is to say that signification is multiplicitous, that the subject is unable to control it, and so subversion is always occurring and always unpredictable.: 29  Moya Lloyd suggests that the political potential of gender performances can be evaluated relative to similar past acts in similar contexts. Conversely, Rosalyn Diprose lends a hard-line Foucauldian interpretation to her understanding of gender performance's political reach, as one's identity "is built on the invasion of the self by the gestures of others, who, by referring to other others, are already social beings".: 25  Diprose implies that the individual's will, and the individual performance, is always subject to the dominant discourse of an Other (or Others), so as to restrict the transgressive potential of performance to the inscription of simply another dominant discourse. Ásta acknowledges the strengths of Butler's metaphysics of sex and gender, but raises concerns about the role of biological constraints in the construction of sex.
She proposes a "conferralist" framework, where both sex and gender are socially constructed but subject to different constraints.

=== Female energy ===

Feminist theologian Mary Daly proposed in her work Gyn/Ecology (1978) the existence of a feminine nature that should be defended against "male barrenness". "Since female energy is essentially biophilic", she writes, "the female spirit/body is the primary target in this perpetual war of aggression against life. Gyn/Ecology is the reclaiming of life-loving female energy." Janice Raymond had Daly as her advisor when writing The Transsexual Empire (1979), in which she states: "It is not hard to understand why transsexuals want to become lesbian-feminists. They indeed have discovered where strong female energy exists and want to capture it."

== Problem of universals ==

In the context of feminist metaphysics, the problem of universals led to a division between gender realists and gender nominalists. Elizabeth Spelman identified in the 1980s a predominance of realism in Western feminist theory, which she accused of overlooking the differences between women. Nominalism has since become the hegemonic view.

== See also ==

Cultural feminism
Materialist feminism
Post-structural feminism
Social constructionism
Strategic essentialism

== Further reading ==

Battersby, Christine. The Phenomenal Woman: Feminist Metaphysics and the Patterns of Identity. New York: Routledge, 1998. ISBN 978-0-415-92035-3 OCLC 37742199
Howell, Nancy R. A Feminist Cosmology: Ecology, Solidarity, and Metaphysics. Amherst, N.Y.: Humanity Books, 2000. ISBN 978-1-573-92653-9 OCLC 36713191
Raschke, Debrah. Modernism, Metaphysics, and Sexuality. Selinsgrove: Susquehanna University Press, 2006. ISBN 978-1-575-91106-9 OCLC 63679917
Witt, Charlotte. Feminist Metaphysics: Explorations in the Ontology of Sex, Gender and the Self. Dordrecht: Springer, 2010. ISBN 978-9-048-137831 OCLC 695386850
Schües, Christina, Dorothea Olkowski, and Helen Fielding.
Time in Feminist Phenomenology. Bloomington: Indiana University Press, 2011. ISBN 978-0-253-00160-3 OCLC 747431814 Witt, Charlotte. The Metaphysics of Gender. New York: Oxford University Press, 2011. ISBN 978-0-199-74040-6 OCLC 706025098
Wikipedia/Feminist_metaphysics
Locus of control is the degree to which people believe that they, as opposed to external forces (beyond their influence), have control over the outcome of events in their lives. The concept was developed by Julian B. Rotter in 1954, and has since become an aspect of personality psychology. A person's "locus" (plural "loci", Latin for "place" or "location") is conceptualized as internal (a belief that one can control one's own life) or external (a belief that life is controlled by outside factors which the person cannot influence, or that chance or fate controls their lives). Individuals with a strong internal locus of control believe events in their life are primarily a result of their own actions: for example, when receiving an exam result, people with an internal locus of control tend to praise or blame themselves and their abilities. People with a strong external locus of control tend to praise or blame external factors such as the teacher or the difficulty of the exam. Locus of control has generated much research in a variety of areas in psychology. The construct is applicable to such fields as educational psychology, health psychology, industrial and organizational psychology, and clinical psychology. Debate continues over whether domain-specific or more global measures of locus of control will prove to be more useful in practical application. Careful distinctions should also be made between locus of control (a personality variable linked with generalized expectancies about the future) and attributional style (a concept concerning explanations for past outcomes), or between locus of control and concepts such as self-efficacy. Locus of control is one of the four dimensions of core self-evaluations – one's fundamental appraisal of oneself – along with neuroticism, self-efficacy, and self-esteem.
The concept of core self-evaluations was first examined by Judge, Locke, and Durham (1997), and it has since proven able to predict several work outcomes, specifically job satisfaction and job performance. In a follow-up study, Judge et al. (2002) argued that locus of control, neuroticism, self-efficacy, and self-esteem factors may have a common core. == History == Locus of control as a theoretical construct derives from Julian B. Rotter's (1954) social learning theory of personality. It is an example of a problem-solving generalized expectancy, a broad strategy for addressing a wide range of situations. In 1966 he published an article in Psychological Monographs which summarized over a decade of research (by Rotter and his students), much of it previously unpublished. In 1976, Herbert M. Lefcourt defined the perceived locus of control: "...a generalised expectancy for internal as opposed to external control of reinforcements". Attempts have been made to trace the genesis of the concept to the work of Alfred Adler, but its immediate background lies in the work of Rotter and his students. Early work on the topic of expectations about control of reinforcement had been performed in the 1950s by James and Phares (in unpublished doctoral dissertations supervised by Rotter at Ohio State University). Another Rotter student, William H. James, studied two types of "expectancy shifts":
Typical expectancy shifts, believing that success (or failure) would be followed by a similar outcome
Atypical expectancy shifts, believing that success (or failure) would be followed by a dissimilar outcome
Additional research led to the hypothesis that typical expectancy shifts were displayed more often by those who attributed their outcomes to ability, whereas those who displayed atypical expectancy were more likely to attribute their outcomes to chance.
This was interpreted as meaning that people could be divided into those who attribute to ability (an internal cause) versus those who attribute to luck (an external cause). Bernard Weiner argued that rather than ability-versus-luck, locus may relate to whether attributions are made to stable or unstable causes. Rotter (1975, 1989) has discussed problems and misconceptions in others' use of the internal-versus-external construct. == Personality orientation == Rotter (1975) cautioned that internality and externality represent two ends of a continuum, not an either/or typology. Internals tend to attribute outcomes of events to their own control. People who have an internal locus of control believe that the outcomes of their actions are results of their own abilities. Internals believe that their hard work will lead them to obtain positive outcomes. They also believe that every action has its consequence, which makes them accept that things happen and that it is up to them whether to take control or not. Externals attribute outcomes of events to external circumstances. A person with an external locus of control will tend to believe that their present circumstances are not the effect of their own influence, decisions, or control, and even that their own actions are a result of external factors, such as fate, luck, history, the influence of powerful forces, or individual or unspecified others (such as governmental entities; corporations; racial, religious, ethnic, or fraternal groups; sexes; political affiliations; outgroups; or even perceived individual personal antagonists), and/or a belief that the world is too complex for one to predict or influence its outcomes. Laying blame on others for one's own circumstances, with the implication that one is owed a moral or other debt, is an indicator of a tendency toward an external locus of control.
It should not be thought, however, that internality is linked exclusively with attribution to effort and externality with attribution to luck (as Weiner's work – see below – makes clear). This has obvious implications for differences between internals and externals in terms of their achievement motivation, suggesting that internal locus is linked with higher levels of need for achievement. Because they locate control outside themselves, externals tend to feel they have less control over their fate. People with an external locus of control tend to be more stressed and prone to clinical depression. Internals were believed by Rotter (1966) to exhibit two essential characteristics: high achievement motivation and low outer-directedness. This was the basis of the locus-of-control scale proposed by Rotter in 1966, although it was based on Rotter's belief that locus of control is a single construct. Since 1970, Rotter's assumption of unidimensionality has been challenged, with Levenson (for example) arguing that different dimensions of locus of control (such as beliefs that events in one's life are self-determined, organized by powerful others, or chance-based) must be separated. Weiner's early work in the 1970s suggested that, orthogonal to the internality-externality dimension, differences should be considered between those who attribute to stable and those who attribute to unstable causes. This new, dimensional theory meant that one could now attribute outcomes to ability (an internal stable cause), effort (an internal unstable cause), task difficulty (an external stable cause) or luck (an external, unstable cause). Although this was how Weiner originally saw these four causes, he has been challenged as to whether people see luck (for example) as an external cause, whether ability is always perceived as stable, and whether effort is always seen as changing. Indeed, in more recent publications (e.g.
Weiner, 1980) he uses different terms for these four causes (such as "objective task characteristics" instead of "task difficulty" and "chance" instead of "luck"). Psychologists since Weiner have distinguished between stable and unstable effort, knowing that in some circumstances effort could be seen as a stable cause (especially given the presence of words such as "industrious" in English). There is also a type of control that entails a mix of the internal and external types. People who have the combination of the two types of locus of control are often referred to as bi-locals. People who have bi-local characteristics are known to handle stress and cope with their diseases more efficiently through this mixture of internal and external loci of control. People who have this mix of loci of control can take personal responsibility for their actions and the consequences thereof while remaining capable of relying upon and having faith in outside resources; these characteristics correspond to the internal and external loci of control, respectively. == Measuring scales == The most widely used questionnaire to measure locus of control is the 23-item (plus six filler items), forced-choice scale of Rotter (1966). However, this is not the only questionnaire; Bialer's (1961) 23-item scale for children predates Rotter's scale. Also relevant to the locus-of-control scale are the Crandall Intellectual Ascription of Responsibility Scale (Crandall, 1965) and the Nowicki-Strickland Scale (Nowicki & Strickland 1973). One of the earliest psychometric scales to assess locus of control (using a Likert-type scale, in contrast to the forced-choice alternative measure in Rotter's scale) was that devised by W. H. James for his unpublished doctoral dissertation, supervised by Rotter at Ohio State University. Many measures of locus of control have appeared since Rotter's scale.
These were reviewed by Furnham and Steele (1993) and include those related to health psychology, industrial and organizational psychology and those specifically for children (such as the Stanford Preschool Internal-External Scale for three- to six-year-olds). Furnham and Steele (1993) cite data suggesting that the most reliable, valid questionnaire for adults is the Duttweiler scale. For a review of the health questionnaires cited by these authors, see "Applications" below. The Duttweiler (1984) Internal Control Index (ICI) addresses perceived problems with the Rotter scales, including their forced-choice format, susceptibility to social desirability and heterogeneity (as indicated by factor analysis). She also notes that, while other scales existed in 1984 to measure locus of control, "they appear to be subject to many of the same problems". Unlike the forced-choice format used on Rotter's scale, Duttweiler's 28-item ICI uses a Likert-type scale in which people must state whether they would rarely, occasionally, sometimes, frequently or usually behave as specified in each of 28 statements. The ICI assesses variables pertinent to internal locus: cognitive processing, autonomy, resistance to social influence, self-confidence and delay of gratification. A small (133 student-subject) validation study indicated that the scale had good internal consistency reliability (a Cronbach's alpha of 0.85). == Attributional style == Attributional style (or explanatory style) is a concept introduced by Lyn Yvonne Abramson, Martin Seligman and John D. Teasdale. This concept advances a stage further than Weiner, stating that in addition to the concepts of internality-externality and stability a dimension of globality-specificity is also needed. Abramson et al. believed that how people explained successes and failures in their lives related to whether they attributed these to internal or external factors, short-term or long-term factors, and factors that affected all situations or only specific ones.
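The internal-consistency figure quoted for the ICI validation above is a Cronbach's alpha, which compares the summed variances of the individual items with the variance of respondents' total scores. A minimal sketch of the standard formula in Python; the helper name and the toy response data are invented for illustration and are not taken from Duttweiler's study:

```python
# Cronbach's alpha: the textbook internal-consistency statistic for a
# multi-item scale such as the ICI. The response data below are
# hypothetical, not from any study discussed here.

def cronbach_alpha(items):
    """items: one list of scores per scale item; each inner list holds
    one score per respondent, all inner lists of equal length."""
    k = len(items)                       # number of items in the scale
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    # Each respondent's total score across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Toy data: 3 Likert-type items, 4 respondents, scored 1-5 (hypothetical).
scores = [
    [4, 5, 2, 3],
    [4, 4, 1, 3],
    [5, 5, 2, 2],
]
alpha = cronbach_alpha(scores)
```

An alpha near 1 means the items rise and fall together across respondents, which is what "good internal consistency reliability" conveys for a scale such as the ICI.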
The topic of attribution theory (introduced to psychology by Fritz Heider) has had an influence on locus of control theory, but there are important historical differences between the two models. Attribution theorists have been predominantly social psychologists, concerned with the general processes characterizing how and why people make the attributions they do, whereas locus of control theorists have been concerned with individual differences. Significant to the history of both approaches are the contributions made by Bernard Weiner in the 1970s. Before this time, attribution theorists and locus of control theorists had been largely concerned with divisions into external and internal loci of causality. Weiner added the dimension of stability-instability (and later controllability), indicating how a cause could be perceived as having been internal to a person yet still beyond the person's control. The stability dimension added to the understanding of why people succeed or fail after such outcomes. == Applications == Locus of control's best-known application may have been in the area of health psychology, largely due to the work of Kenneth Wallston. Scales to measure locus of control in the health domain were reviewed by Furnham and Steele in 1993. The best-known are the Health Locus of Control Scale and the Multidimensional Health Locus of Control Scale, or MHLC. The latter scale is based on the idea (echoing Levenson's earlier work) that health may be attributed to three sources: internal factors (such as self-determination of a healthy lifestyle), powerful others (such as one's doctor) or luck (an attribution considered problematic, since people who attribute their health to luck tend to disregard lifestyle advice and are difficult to help).
Some of the scales reviewed by Furnham and Steele (1993) relate to health in more specific domains, such as obesity (for example, Saltzer's (1982) Weight Locus of Control Scale or Stotland and Zuroff's (1990) Dieting Beliefs Scale), mental health (such as Wood and Letak's (1982) Mental Health Locus of Control Scale or the Depression Locus of Control Scale of Whiteman, Desmond and Price, 1987) and cancer (the Cancer Locus of Control Scale of Pruyn et al., 1988). In discussing applications of the concept to health psychology, Furnham and Steele refer to Claire Bradley's work, linking locus of control to the management of diabetes mellitus. Empirical data on health locus of control in a number of fields were reviewed by Norman and Bennett in 1995; they note that data on whether certain health-related behaviors are related to internal health locus of control have been ambiguous. They note that some studies found that internal health locus of control is linked with increased exercise, but cite other studies which found a weak (or no) relationship between exercise behaviors (such as jogging) and internal health locus of control. A similar ambiguity is noted for data on the relationship between internal health locus of control and other health-related behaviors (such as breast self-examination, weight control and preventive-health behavior). Of particular interest are the data cited on the relationship between internal health locus of control and alcohol consumption. Norman and Bennett note that some studies that compared alcoholics with non-alcoholics suggest alcoholism is linked to increased externality for health locus of control; however, other studies have linked alcoholism with increased internality. Similar ambiguity has been found in studies of alcohol consumption in the general, non-alcoholic population.
They are more optimistic in reviewing the literature on the relationship between internal health locus of control and smoking cessation, although they also point out that there are grounds for supposing that powerful-others and internal-health loci of control may be linked with this behavior. It is thought that, rather than being caused by one or the other, alcoholism is directly related to the strength of the locus, regardless of type, internal or external. They argue that a stronger relationship is found when health locus of control is assessed for specific domains than when general measures are taken. Overall, studies using behavior-specific health locus scales have tended to produce more positive results. These scales have been found to be more predictive of general behavior than more general scales, such as the MHLC scale. Norman and Bennett cite several studies that used health-related locus-of-control scales in specific domains (including smoking cessation, diabetes, tablet-treated diabetes, hypertension, arthritis, cancer, and heart and lung disease). They also argue that health locus of control is better at predicting health-related behavior if studied in conjunction with health value (the value people attach to their health), suggesting that health value is an important moderator variable in the health locus of control relationship. For example, Weiss and Larsen (1990) found an increased relationship between internal health locus of control and health when health value was assessed. Despite the importance Norman and Bennett attach to specific measures of locus of control, there are general textbooks on personality which cite studies linking internal locus of control with improved physical health, mental health and quality of life in people with diverse conditions: HIV, migraines, diabetes, kidney disease and epilepsy.
During the 1970s and 1980s, Whyte correlated locus of control with the academic success of students enrolled in higher-education courses. Students who were more internally controlled believed that hard work and focus would result in successful academic progress, and they performed better academically. Those students who were identified as more externally controlled (believing that their future depended upon luck or fate) tended to have lower academic-performance levels. Cassandra B. Whyte researched how control tendency influenced behavioral outcomes in the academic realm by examining the effects of various modes of counseling on grade improvements and the locus of control of high-risk college students. Rotter also looked at studies regarding the correlation between gambling and either an internal or external locus of control. For internals, gambling tends to be more reserved: when betting, they primarily focus on safe and moderate wagers. Externals, however, take more chances and, for example, bet more on a card or number that has not appeared for a certain period, under the notion that this card or number has a higher chance of occurring, a belief known as the gambler's fallacy. === Organizational psychology and religion === Other fields to which the concept has been applied include industrial and organizational psychology, sports psychology, educational psychology and the psychology of religion. Richard Kahoe has published work in the latter field, suggesting that intrinsic religious orientation correlates positively (and extrinsic religious orientation correlates negatively) with internal locus. Of relevance to both health psychology and the psychology of religion is the work of Holt, Clark, Kreuter and Rubio (2003) on a questionnaire to assess spiritual-health locus of control.
The authors distinguished between an active spiritual-health locus of control (in which "God empowers the individual to take healthy actions") and a more passive spiritual-health locus of control (where health is left up to God). In industrial and organizational psychology, it has been found that internals are more likely to take positive action to change their jobs (rather than merely talk about occupational change) than externals. Locus of control relates to a wide variety of work variables, with work-specific measures relating more strongly than general measures. In educational settings, some research has shown that students who were intrinsically motivated had processed reading material more deeply and had better academic performance than students with extrinsic motivation. === Consumer research === Locus of control has also been applied to the field of consumer research. For example, Martin, Veer and Pervan (2007) examined how the weight locus of control of women (i.e., beliefs about the control of body weight) influences how they react to female models in advertising of different body shapes. They found that women who believe they can control their weight ("internals") respond most favorably to slim models in advertising, and this favorable response is mediated by self-referencing. In contrast, women who feel powerless about their weight ("externals") self-reference larger-sized models, but only prefer larger-sized models when the advertisement is for a non-fattening product. For fattening products, they exhibit a similar preference for larger-sized models and slim models. The weight locus of control measure was also found to be correlated with measures for weight control beliefs and willpower. === Political ideology === Locus of control has been linked to political ideology. In the 1972 U.S.
presidential election, research on college students found that those with an internal locus of control were substantially more likely to register as a Republican, while those with an external locus of control were substantially more likely to register as a Democrat. A 2011 study surveying students at Cameron University in Oklahoma found similar results, although these studies were limited in scope. Consistent with these findings, Kaye Sweetser (2014) found that Republicans displayed significantly greater internal locus of control than Democrats and Independents. Those with an internal locus of control are more likely to be of higher socioeconomic status, and are more likely to be politically involved (e.g., following political news, joining a political organization). Those with an internal locus of control are also more likely to vote. == Familial origins == The development of locus of control is associated with family style and resources, cultural stability and experiences with effort leading to reward. Many internals have grown up with families modeling typical internal beliefs; these families emphasized effort, education, responsibility and thinking, and parents typically gave their children rewards they had promised them. In contrast, externals are typically associated with lower socioeconomic status. Societies experiencing social unrest increase the expectancy of being out-of-control; therefore, people in such societies become more external. The 1995 research of Schneewind suggests that "children in large single parent families headed by women are more likely to develop an external locus of control". Schultz and Schultz also claim that children in families where parents have been supportive and consistent in discipline develop an internal locus of control. At least one study has found that children whose parents had an external locus of control are more likely to attribute their successes and failures to external causes.
Findings from early studies on the familial origins of locus of control were summarized by Lefcourt: "Warmth, supportiveness and parental encouragement seem to be essential for development of an internal locus". However, causal evidence regarding how parental locus of control influences offspring locus of control (whether genetic, or environmentally mediated) is lacking. Locus of control becomes more internal with age. As children grow older, they gain skills which give them more control over their environment. However, whether this or biological development is responsible for changes in locus is unclear. == Age == Some studies showed that with age people develop a more internal locus of control, but other study results have been ambiguous. Longitudinal data collected by Gatz and Karel imply that internality may increase until middle age, decreasing thereafter. Noting the ambiguity of data in this area, Aldwin and Gilmer (2004) cite Lachman's claim that the evidence on age differences in locus of control is ambiguous. Indeed, there is evidence here that changes in locus of control in later life relate more visibly to increased externality (rather than reduced internality) if the two concepts are taken to be orthogonal. Evidence cited by Schultz and Schultz (2005) suggests that locus of control increases in internality until middle age. The authors also note that attempts to control the environment become more pronounced between ages eight and fourteen. Health locus of control refers to how people relate their health to their behavior, their health status, and how long they may take to recover from a disease. Locus of control can influence how people think and react towards their health and health decisions. People are exposed every day to potential diseases that may affect their health, and the way they approach that reality has much to do with their locus of control.
Older adults are often expected to experience progressive declines in their health; for this reason, it is believed that their health locus of control will be affected. This does not necessarily mean that their locus of control is affected negatively, but declining health in older adults can be accompanied by lower levels of internal locus of control. Age plays an important role in one's internal and external locus of control. When a young child and an older adult are compared on their levels of locus of control with regard to health, the older person will have more control over their attitude and approach to the situation. As people age, they become aware that events outside their own control happen and that other individuals can have control over their health outcomes. A study published in the journal Psychosomatic Medicine examined the health effect of childhood locus of control. Among 7,500 British adults followed from birth, those who had shown an internal locus of control at age 10 were less likely to be overweight at age 30. The children who had an internal locus of control also appeared to have higher levels of self-esteem. == Gender-based differences == As Schultz and Schultz (2005) point out, significant gender differences in locus of control have not been found for adults in the U.S. population. However, these authors also note that there may be specific sex-based differences for specific categories of items to assess locus of control; for example, they cite evidence that men may have a greater internal locus for questions related to academic achievement. A study by Takaki and colleagues (2006) focused on sex or gender differences in relation to internal locus of control and self-efficacy in hemodialysis patients and their compliance.
This study showed that women who had high internal locus of control were less compliant with regard to their health and medical advice compared with the men who participated in the study. Compliance is the degree to which a person's behavior, in this case the patient's, corresponds with medical advice; for example, a compliant patient correctly follows their doctor's advice. == Cross-cultural and regional issues == The question of whether people from different cultures vary in locus of control has long been of interest to social psychologists. Japanese people tend to be more external in locus-of-control orientation than people in the U.S.; however, differences in locus of control between different countries within Europe (and between the U.S. and Europe) tend to be small. As Berry et al. pointed out in 1992, ethnic groups within the United States have been compared on locus of control; African Americans in the U.S. are more external than whites when socioeconomic status is controlled. Berry et al. also pointed out in 1992 how research on other ethnic minorities in the U.S. (such as Hispanics) has been ambiguous. More on cross-cultural variations in locus of control can be found in Shiraev & Levy (2004). Research in this area indicates that locus of control has been a useful concept for researchers in cross-cultural psychology. On a less broad scale, Sims and Baumann explained how regions in the United States cope with natural disasters differently. The example they used was tornadoes. They "applied Rotter's theory to explain why more people have died in tornado[e]s in Alabama than in Illinois". They explain that after giving surveys to residents of four counties in both Alabama and Illinois, Alabama residents were shown to be more external in their way of thinking about events that occur in their lives. Illinois residents, however, were more internal.
Because Alabama residents had a more external way of processing information, they took fewer precautions prior to the appearance of a tornado. Those in Illinois, however, were more prepared, thus leading to fewer casualties. Later studies find that these geographic differences can be explained by differences in relational mobility. Relational mobility is a measure of how much choice individuals have in terms of whom to form relationships with, including friendships, romantic partnerships, and work relations. Relational mobility is low in cultures with a subsistence economy that requires tight cooperation and coordination, such as farming, while it is high in cultures based on nomadic herding and in urban industrial cultures. A cross-cultural study found that relational mobility is lowest in East Asian countries where rice farming is common, and highest in South American countries. == Self-efficacy == Self-efficacy refers to an individual's belief in their capacity to execute behaviors necessary to produce specific performance attainments. It is a related concept introduced by Albert Bandura, and has been measured by means of a psychometric scale. It differs from locus of control by relating to competence in circumscribed situations and activities (rather than more general cross-situational beliefs about control). Bandura has also emphasized differences between self-efficacy and self-esteem, using examples where low self-efficacy (for instance, in ballroom dancing) is unlikely to result in low self-esteem because competence in that domain is not very important (see valence) to an individual. Although individuals may have a high internal health locus of control and feel in control of their own health, they may not feel efficacious in performing a specific treatment regimen that is essential to maintaining their own health.
Self-efficacy plays an important role in one's health because when people feel that they have self-efficacy over their health conditions, those conditions become less of a stressor. Smith (1989) has argued that locus of control only weakly measures self-efficacy; "only a subset of items refer directly to the subject's capabilities". Smith noted that training in coping skills led to increases in self-efficacy, but did not affect locus of control as measured by Rotter's 1966 scale. == Stress == The previous section showed how self-efficacy can be related to a person's locus of control, and stress also has a relationship with these areas. Self-efficacy can be something that people use to deal with the stress that they face in their everyday lives. Some findings suggest that higher levels of external locus of control combined with lower levels of self-efficacy are related to higher illness-related psychological distress. People who report a more external locus of control also report more concurrent and future stressful experiences and higher levels of psychological and physical problems. These people are also more vulnerable to external influences and, as a result, become more responsive to stress. Military veterans with spinal cord injuries and post-traumatic stress have been studied with regard to locus of control and stress. Aging appears to be an important factor related to the severity of the PTSD symptoms experienced by patients following the trauma of war. Research suggests that patients with a spinal cord injury benefit from knowing that they have control over their health problems and their disability, which reflects the characteristics of having an internal locus of control. A study by Chung et al. (2006) focused on how post-traumatic stress responses to spinal cord injury varied depending on age.
The researchers tested different age groups including young adults, middle-aged, and elderly; the average age was 25, 48, and 65 for each group respectively. After the study, they concluded that age does not make a difference in how spinal cord injury patients respond to the traumatic events that happened. However, they did mention that age did play a role in the extent to which the external locus of control was used, and concluded that the young adult group demonstrated more external locus of control characteristics than the other age groups. == See also == == References == == Sources == Abramson, Lyn Y; Seligman, Martin E; Teasdale, John D (1978). "Learned helplessness in humans: Critique and reformulation". Journal of Abnormal Psychology. 87 (1): 49–74. doi:10.1037/0021-843X.87.1.49. PMID 649856. S2CID 2845204. Abramson, Lyn Y; Metalsky, Gerald I; Alloy, Lauren B (1989). "Hopelessness depression: A theory-based subtype of depression". Psychological Review. 96 (2): 358–372. doi:10.1037/0033-295X.96.2.358. S2CID 18511760. Aldwin, C.M.; Gilmer, D.F. (2004). Health, Illness and Optimal Ageing. London: Sage. ISBN 978-0-7619-2259-9. Allen, David G.; Weeks, Kelly P.; Moffitt, Karen R. (2005). "Turnover Intentions and Voluntary Turnover: The Moderating Roles of Self-Monitoring, Locus of Control, Proactive Personality, and Risk Aversion". Journal of Applied Psychology. 90 (5): 980–990. doi:10.1037/0021-9010.90.5.980. PMID 16162070. Anderson, Craig A; Jennings, Dennis L; Arnoult, Lynn H (1988). "Validity and utility of the attributional style construct at a moderate level of specificity". Journal of Personality and Social Psychology. 55 (6): 979–990. doi:10.1037/0022-3514.55.6.979. S2CID 144258995. Berry, J.W.; Poortinga, Y.H.; Segall, M.H.; Dasen, P.R. (1992). Cross-cultural Psychology: Research and Applications. Cambridge: Cambridge University Press. ISBN 978-0-521-37761-4. Buchanan, G.M.; Seligman, M.E.P., eds. (1997). 
Explanatory Style. NJ: Lawrence Erlbaum Associates. ISBN 978-0-8058-0924-4. Burns, Melanie O; Seligman, Martin E (1989). "Explanatory style across the life span: Evidence for stability over 52 years". Journal of Personality and Social Psychology. 56 (3): 471–477. doi:10.1037/0022-3514.56.3.471. PMID 2926642. Cutrona, Carolyn E; Russell, Dan; Jones, R. Dallas (1984). "Cross-situational consistency in causal attributions: Does attributional style exist?". Journal of Personality and Social Psychology. 47 (5): 1043–1058. doi:10.1037/0022-3514.47.5.1043. Duttweiler, Patricia C (1984). "The Internal Control Index: A Newly Developed Measure of Locus of Control". Educational and Psychological Measurement. 44 (2): 209–221. doi:10.1177/0013164484442004. S2CID 144130334. Furnham, Adrian; Steele, Howard (1993). "Measuring locus of control: A critique of general, children's, health- and work-related locus of control questionnaires". British Journal of Psychology. 84 (4): 443–479. doi:10.1111/j.2044-8295.1993.tb02495.x. PMID 8298858. Eisner, J.E. (1997). "The origins of explanatory style: Trust as a determinant of pessimism and optimism". In Buchanan, G.M.; Seligman, M.E.P. (eds.). Explanatory Style. NJ: Lawrence Erlbaum Associates. pp. 49–55. ISBN 978-0-8058-0924-4. Gong-Guy, Elizabeth; Hammen, Constance (1980). "Causal perceptions of stressful events in depressed and nondepressed outpatients". Journal of Abnormal Psychology. 89 (5): 662–669. doi:10.1037/0021-843X.89.5.662. PMID 7410726. Hock, Roger R. (2008). "Are you the master of your fate?". Forty Studies that Changed Psychology (PDF) (6th ed.). Pearson. pp. 192–199. ISBN 978-0-13-504507-7. Archived from the original (PDF) on 2018-10-24. Retrieved 2019-12-13. Holt, Cheryl L; Clark, Eddie M; Kreuter, Matthew W; Rubio, Doris M (2003). "Spiritual health locus of control and breast cancer beliefs among urban African American women". Health Psychology. 22 (3): 294–299. doi:10.1037/0278-6133.22.3.294. PMID 12790257. 
Johansson, Boo; Grant, Julia D.; Plomin, Robert; Pedersen, Nancy L.; Ahern, Frank; Berg, Stig; McClearn, Gerald E. (2001). "Health locus of control in late life: A study of genetic and environmental influences in twins aged 80 years and older". Health Psychology. 20 (1): 33–40. doi:10.1037/0278-6133.20.1.33. PMID 11199063. Kahoe, Richard D (1974). "Personality and achievement correlates of intrinsic and extrinsic religious orientations". Journal of Personality and Social Psychology. 29 (6): 812–818. doi:10.1037/h0036222. Lefcourt, Herbert M (1966). "Internal versus external control of reinforcement: A review". Psychological Bulletin. 65 (4): 206–220. doi:10.1037/h0023116. PMID 5325292. Lefcourt, H.M. (1976). Locus of Control: Current Trends in Theory and Research. NJ: Lawrence Erlbaum Associates. ISBN 978-0-470-15044-3. Maltby, J.; Day, L.; Macaskill, A. (2007). Personality, Individual Differences and Intelligence (1st ed.). Harlow: Pearson Prentice Hall. ISBN 978-0-13-129760-9. Meyerhoff, Michael K (2004). "Locus of Control". Pediatrics for Parents. 21 (10): 8. EBSCO 17453574. Norman, Paul D; Antaki, Charles (1988). "Real Events Attributional Style Questionnaire". Journal of Social and Clinical Psychology. 7 (2–3): 97–100. doi:10.1521/jscp.1988.7.2-3.97. Norman, P.; Bennett, P. (1995). "Health Locus of Control". In Conner, M.; Norman, P. (eds.). Predicting Health Behaviour. Buckingham: Open University Press. pp. 62–94. APA 1996-97268-003. Nowicki, Stephen; Strickland, Bonnie R (1973). "A locus of control scale for children". Journal of Consulting and Clinical Psychology. 40: 148–154. doi:10.1037/h0033978. S2CID 40029563. Peterson, Christopher; Semmel, Amy; von Baeyer, Carl; Abramson, Lyn Y; Metalsky, Gerald I; Seligman, Martin E. P (1982). "The attributional Style Questionnaire". Cognitive Therapy and Research. 6 (3): 287–299. doi:10.1007/BF01173577. S2CID 30737751. Robbins; Hayes (1997). "The role of causal attributions in the prediction of depression". 
In Buchanan, G.M.; Seligman, M.E.P. (eds.). Explanatory Style. NJ: Lawrence Erlbaum Associates. pp. 71–98. ISBN 978-0-8058-0924-4. Rotter, J.B. (1954). Social learning and clinical psychology. NY: Prentice-Hall. Rotter, Julian B (1966). "Generalized expectancies for internal versus external control of reinforcement". Psychological Monographs: General and Applied. 80 (1): 1–28. doi:10.1037/h0092976. PMID 5340840. S2CID 15355866. Rotter, Julian B (1975). "Some problems and misconceptions related to the construct of internal versus external control of reinforcement". Journal of Consulting and Clinical Psychology. 43: 56–67. doi:10.1037/h0076301. Rotter, Julian B (1990). "Internal versus external control of reinforcement: A case history of a variable". American Psychologist. 45 (4): 489–493. doi:10.1037/0003-066X.45.4.489. S2CID 41698755. Schultz, D.P.; Schultz, S.E. (2005). Theories of Personality (8th ed.). Wadsworth: Thomson. ISBN 978-0-534-62402-6. Sherer, Mark; Maddux, James E; Mercandante, Blaise; Prentice-Dunn, Steven; Jacobs, Beth; Rogers, Ronald W (1982). "The Self-Efficacy Scale: Construction and Validation". Psychological Reports. 51 (2): 663–671. doi:10.2466/pr0.1982.51.2.663. S2CID 144745134. Shiraev, E.; Levy, D. (2004). Cross-cultural Psychology: Critical Thinking and Contemporary Applications (2nd ed.). Boston: Pearson. ISBN 978-0-205-38612-3. Smith, Ronald E (1989). "Effects of coping skills training on generalized self-efficacy and locus of control". Journal of Personality and Social Psychology. 56 (2): 228–233. doi:10.1037/0022-3514.56.2.228. PMID 2926626. S2CID 14092752. Weiner, B., ed. (1974). Achievement Motivation and Attribution Theory. NY: General Learning Press. Weiner, B. (1980). Human Motivation. New York: Holt, Rinehart and Winston. Whyte, C. (1980). An Integrated Counseling and Learning Assistance Center. New Directions Sourcebook-Learning Assistance Centers. Jossey-Bass, Inc. Whyte, C. (1978). 
"Effective Counseling Methods for High-Risk College Freshmen". Measurement and Evaluation in Guidance. 6 (4): 198–200. doi:10.1080/00256307.1978.12022132. ERIC EJ177217. Xenikou, Athena; Furnham, Adrian; McCarrey, Michael (1997). "Attributional style for negative events: A proposition for a more reliable and valid measure of attributional style". British Journal of Psychology. 88: 53–69. doi:10.1111/j.2044-8295.1997.tb02620.x. == Bibliography == R. Gross, P. Humphreys, Psychology: The Science of Mind and Behaviour. Psychology Press, 1994, ISBN 978-0-340-58736-2. == External links == Locus of control: A class tutorial Spheres of Control Scale Attributional Style & Controllability
Wikipedia/Locus_of_control
In sociology, action theory is the theory of social action presented by the American theorist Talcott Parsons. Parsons established action theory to integrate the study of social action and social order with macro- and micro-level factors. In other words, he was trying to maintain the scientific rigour of positivism, while acknowledging the necessity of the "subjective dimension" of human action incorporated in hermeneutic types of sociological theorizing. Parsons sees motives as part of our actions. Therefore, he thought that social science must consider ends, purposes and ideals when looking at actions. Parsons placed his discussion within a higher epistemological and explanatory context of systems theory and cybernetics. == Action theory == Parsons' action theory is characterized by a system-theoretical approach, which integrated a meta-structural analysis with a voluntary theory. Parsons' first major work, The Structure of Social Action (1937), discussed the methodological and meta-theoretical premises for the foundation of a theory of social action. It argued that an action theory must be based on a voluntaristic foundation—claiming neither a sheer positivistic-utilitarian approach nor a sheer "idealistic" approach would satisfy the necessary prerequisites, and proposing an alternative, systemic general theory. Parsons shared positivism's desire for a general unified theory, not only for the social sciences but for the whole realm of action systems (in which Parsons included the concept of "living systems"). On the other hand, he departed from the positivists on the criteria for science, particularly on Auguste Comte's proposition that scientists must not look for the "ultimate ends" so as to avoid unanswerable metaphysical questions. Parsons maintained that, at least for the social sciences, a meaningful theory had to include the question of ultimate values, which, by their very nature and definition, included questions of metaphysics. 
As such, Parsons' theory stands at least with one foot in the sphere of hermeneutics and similar interpretive paradigms, which become particularly relevant when the question of "ends" must be considered within systems of action-orientation. In this respect, system theorists such as Parsons can be viewed as at least partially antipositivist. Parsons was not a functionalist per se, but an action theorist. In fact, he never used the term functionalism to refer to his own theory. Also, the term "structural functionalism", generally understood as a characterization of his theory, was used by Parsons only in a special context, to describe a particular stage in the methodological development of the social sciences. One of the main features of Parsons' approach to sociology was the way in which he stated that cultural objects form an autonomous type. This is one of the reasons why Parsons established a careful division between the cultural and social systems, a point he highlighted in a short statement that he wrote with Alfred Kroeber, and which is expressed in his AGIL paradigm. For Parsons, adaptation, goal attainment, integration and latency form the basic characteristics of social action, and could be understood as a fourfold function of a cybernetic system where the hierarchical order is L-I-G-A. The most metaphysical questions in his theory lay embedded in the concept of constitutive symbolization, which represented the pattern maintenance of the cultural system and was the cultural systemic equivalent of latent pattern maintenance through institutions like school and family (or, simply put, "L"). Later the metaphysical questions became more specified in the Paradigm of the Human Condition, which Parsons developed in the years before his death as an extension of the original AGIL theory. 
The separation of the cultural and social systems had various implications for the nature of the basic categories of the cultural system; in particular, it had implications for the way cognitive capital is perceived as a factor in history. In contrast to pragmatism, materialism, behaviorism and other anti-Kantian types of epistemological paradigms, which tended to regard the role of cognitive capital as identical with the basic rationalization processes in history, Parsons regarded this question as fundamentally different. Cognitive capital, Parsons maintained, is bound to passion and faith, and is entangled as a promotional factor in rationalization processes, but is neither absorbed by nor identical with these processes per se. == See also == Agency (sociology) Functional structuralism Social actions Skopos theory Structural functionalism Structure and agency Theory of structuration == References == == Sources == Parsons, Talcott; Shils, Edward (1951). Toward a General Theory of Action. Cambridge, Mass.: Harvard University Press. Parsons, Talcott (1978). Action Theory and the Human Condition. New York: Free Press. ISBN 9780029239902. Parsons, Talcott (1968). The structure of social action: a study in social theory with special reference to a group of recent European writers. New York: Free Press.
Wikipedia/Action_theory_(sociology)
Self-determination theory (SDT) is a macro theory of human motivation and personality regarding individuals' innate tendencies toward growth and innate psychological needs. It pertains to the motivation behind individuals' choices in the absence of external influences and distractions. SDT focuses on the degree to which human behavior is self-motivated and self-determined. In the 1970s, research on SDT evolved from studies comparing intrinsic and extrinsic motives and a growing understanding of the dominant role that intrinsic motivation plays in individual behavior. It was not until the mid-1980s, when Edward L. Deci and Richard Ryan wrote a book entitled Intrinsic Motivation and Self-Determination in Human Behavior, that SDT was formally introduced and accepted as having sound empirical evidence. Since the 2000s, research into practical applications of SDT has increased significantly. SDT is rooted in the psychology of intrinsic motivation, drawing upon the complexities of human motivation and the factors that foster or hinder autonomous engagement in activities. Intrinsic motivation refers to initiating an activity because it is interesting and satisfying to do so, as opposed to doing an activity to obtain an external goal (i.e., from extrinsic motivation). A taxonomy of motivations has been described based on the degree to which they are internalized. Internalization refers to the active attempt to transform an extrinsic motive into personally endorsed values and thus assimilate behavioral regulations that were originally external. Deci and Ryan later expanded on their early work, differentiating between intrinsic and extrinsic motivation, and proposed three main intrinsic needs involved in self-determination. According to Deci and Ryan, three basic psychological needs motivate self-initiated behavior and specify essential nutrients for individual psychological health and well-being. These needs are said to be universal and innate. 
The three needs are for autonomy, competence, and relatedness. == Self-determination theory == Humanistic psychology has been influential in the creation of SDT. Humanistic psychology is interested in looking at a person's psyche and personal achievement for self-efficacy and self-actualization. Whether or not an individual's self-efficacy and self-actualization are fulfilled can affect their motivation. To this day, it may be difficult for a parent, coach, mentor, or teacher to motivate and help others complete specific tasks and goals. SDT acknowledges the importance of the interconnection of intrinsic and extrinsic motivations as a means of motivation to achieve a goal. With this acknowledgment, SDT forms the belief that extrinsic motivations and the motivations of others, such as a therapist, may be beneficial. However, it is more important for people to find the "why" behind the desired goal within themselves. According to Sheldon et al., "Therapists who fully endorse self-determination principles acknowledge the limits of their responsibilities because they fully acknowledge that ultimately people must make their own choices" (2003, p. 125). One needs to determine their reasons for being motivated and reaching their goal. SDT comprises the organismic dialectical approach, which is a meta-theory, and a formal theory containing mini-theories focusing on the connection between extrinsic and intrinsic motivations within society and an individual. SDT is continually being developed as researchers incorporate the findings of more recent research. As SDT has developed, more mini-theories have been added to what was originally proposed by Deci and Ryan in 1985. Generally, SDT is described as having either five or six mini-theories. The main five mini-theories are cognitive evaluation theory, organismic integration theory, causality orientations theory, basic needs theory, and goal contents theory. 
The sixth mini-theory that some sources include in SDT is called relationship motivation theory. SDT centers around the belief that human nature shows persistent positive features, with people repeatedly showing effort, agency, and commitment in their lives, which the theory calls inherent growth tendencies. "Self-determination also has a more personal and psychology-relevant meaning today: the ability or process of making one’s own choices and controlling one’s own life." The use of one's personal agency to determine behavior and mindset will help an individual's choices. === Summary of the SDT mini-theories === Cognitive evaluation theory (CET): explains the relationship between internal motivation and external rewards. According to CET, when external rewards are controlling (that is, when they pressure individuals to act a certain way), they diminish internal motivation. On the other hand, when external motivations are informational and provide feedback about behaviors, they increase internal motivation. Organismic integration theory (OIT): suggests different types of extrinsic motivations and how they contribute to the socialization of the individual. This mini-theory suggests that people willingly participate in activities and behaviors that they do not find interesting or enjoyable because they are influenced by external motivators. The four types of extrinsic motivations proposed in this theory are external regulation, introjected regulation, identified regulation, and integrated regulation. Causality orientations theory (COT): explores individual differences in the way people motivate themselves in relation to their personality. COT suggests three orientations toward decision making, which are determined by identifying the motivational forces behind an individual's decisions. 
Individuals can have an autonomy orientation and make choices according to their own interests and values; they may have a control orientation and make decisions based on the different pressures that they experience from internal and external demands; or they may have an impersonal orientation, where they are overcome with feelings of helplessness accompanied by a belief that their decisions will not make a difference to the outcome of their lives. Basic needs theory (BNT): considers three psychological needs that are related to intrinsic motivation, effective functioning, high-quality engagement, and psychological well-being. The first psychological need is autonomy, or the belief that one can choose one's own behaviors and actions. The second psychological need is competence. In this sense, competence is when one is able to work effectively while mastering one's capacity to interact with the environment. The third psychological need proposed in basic needs theory is relatedness, or the need to form strong relationships or bonds with the people who are around an individual. Goal contents theory (GCT): compares the benefits of intrinsic goals to the negative outcomes of external goals in terms of psychological well-being. Key to this mini-theory is understanding what reasoning lies behind an individual's goals. Individuals who pursue goals as a way to satisfy their needs have intrinsic goals and over time experience need satisfaction, while those who pursue goals in search of validation have external goals and do not experience need satisfaction. Relationship motivation theory (RMT): examines the importance of relationships. This theory posits that high-quality relationships satisfy all three psychological needs described in BNT. Of the three needs, relatedness is impacted the most by high-quality relationships, but autonomy and competence are satisfied as well. 
This is because high quality relationships are able to provide individuals with a bond to another person while simultaneously reinforcing their needs for autonomy and competence. == Organismic dialectical perspective == The organismic dialectical perspective sees all humans as active organisms interacting with their environment. People are actively growing, striving to overcome challenges, and creating new experiences. While endeavoring to become unified from within, individuals also become part of social structures. SDT also suggests that people have innate psychological needs that are the basis for self-motivation and personality integration. Beyond this, people search for fulfillment in their 'meaning of life'. Discovering the meaning of life constitutes a distinctive desire someone has to find purpose and aim in their lives, which enhances their perception of themselves and their surroundings. Not only does SDT tend to focus on innate psychological needs, it also focuses on the pursuit of goals, the effects of success in those goals, and the outcome of goals. == Basic psychological needs == One mini-theory of SDT is basic psychological needs theory, which proposes three basic psychological needs that must be satisfied to foster well-being and health. These three psychological needs of autonomy, competence, and relatedness are generally universal (i.e., apply across individuals and situations). However, some needs may be more salient than others at certain times and be expressed differently based on time, culture, or experience. SDT identifies three innate needs that, if satisfied, allow optimal function and growth: === Autonomy === Desire to be causal agents of one's own life and act in harmony with one's integrated self; however, note that this does not mean being independent of others, but rather constitutes a feeling of overall psychological liberty and freedom of internal will. 
When a person is autonomously motivated, their performance, wellness, and engagement are heightened compared with when they are told what to do (controlled motivation). Deci found that offering people extrinsic rewards for behavior that is intrinsically motivated undermined the intrinsic motivation, as they grew less interested in it. Initially intrinsically motivated behavior becomes controlled by external rewards, which undermines autonomy. In further research by Amabile et al., other external factors also appear to cause a decline in such motivation. For example, it has been shown that deadlines restrict and control an individual, which decreases their intrinsic motivation in the process. Situations that give autonomy, as opposed to taking it away, also have a similar link to motivation. Studies looking at choice have found that increasing a participant's options and choices increases their intrinsic motivation. Direct evidence for the innate need comes from Lübbecke and Schnedler, who find that people are willing to pay money to have caused an outcome themselves. Additionally, satisfaction or frustration of autonomy impacts not only an individual's motivation, but also their growth. This satisfaction or frustration further affects behavior, leading to optimal well-being or, alternatively, to ill-being. === Competence === Seek to control the outcome and experience mastery. Deci found that giving people unexpected positive feedback on a task increases their intrinsic motivation to do it, because positive feedback fulfills people's need for competence. Additionally, SDT influences the fulfillment of meaning-making, well-being, and finding value within internal growth and motivation. Giving positive feedback on a task served only to increase people's intrinsic motivation and decreased extrinsic motivation for the task. 
Vallerand and Reid found negative feedback has the opposite effect (i.e., decreasing intrinsic motivation by taking away from people's need for competence). In a study conducted by Felnhofer et al., the level of competence and the way competence is attributed were judged with respect to age differences, gender, and attitude variances of individuals within a given society. Such differences between individuals can contribute to negative influences that decrease intrinsic motivation. === Relatedness === Will to interact with, be connected to, and experience caring for others. During a study on the relationship between infants' attachment styles, their exhibition of mastery-oriented behaviour, and their affect during play, Frodi, Bridges and Grolnick failed to find significant effects: "Perhaps somewhat surprising was the finding that the quality of attachment assessed at 12 months failed to significantly predict either mastery motivation, competence, or affect 8 months later, when other investigators have demonstrated an association between similar constructs ..." Yet they note that larger sample sizes could be able to uncover such effects: "A comparison of the secure/stable and the insecure/stable groups, however, did suggest that the secure/stable group was superior to the insecure/stable groups on all mastery-related measures. Obviously, replications of all the attachment-motivation relations are needed with different and larger samples." 
Deci and Ryan claim that there are three essential elements of the theory: (1) humans are inherently proactive with their potential and mastery of their inner forces (such as drives and emotions); (2) humans have an inherent tendency toward growth, development, and integrated functioning; and (3) optimal development and actions are inherent in humans but do not happen automatically. An additional study focusing on the relatedness of adolescents found that connection to other individuals predisposed behaviors according to whether relatedness was satisfied or frustrated. The fulfillment or dissatisfaction of relatedness either promotes necessary psychological functioning or undermines developmental growth through deprivation. Across both study examples, the essential need for nurturing from a social environment goes beyond obvious and simple interactions for adolescents and promotes the actualization of inherent potential. If this happens, there are positive consequences (e.g. well-being and growth) but if not, there are negative consequences (e.g. dissatisfaction and deprivation). SDT emphasizes humans' natural growth toward positive motivation, development, and personal fulfillment. However, this growth is thwarted if the basic needs go unfulfilled. Although thwarting of an individual's basic needs might occur, recent studies argue that such thwarting has its own influence on well-being. == Motivations == SDT claims to offer a different approach to motivation, considering what motivates a person at any given time, rather than viewing motivation as a single concept. SDT makes distinctions between different types of motivation and what results from them. White and deCharms proposed that the need for competence and autonomy is the basis of intrinsic motivation and behaviour. This idea is a link between people's basic needs and their motivations. 
=== Intrinsic motivation === Intrinsic motivation is the natural, inherent drive to seek out challenges and new possibilities that SDT associates with cognitive and social development. Cognitive evaluation theory (CET) is a sub-theory of SDT that specifies factors explaining intrinsic motivation and variability in it, and looks at how social and environmental factors help or hinder intrinsic motivations. CET focuses on the needs of competence and autonomy. CET is offered as an explanation of the phenomenon known as motivational "crowding out". It claims that social-context events like feedback on work or rewards lead to feelings of competence and so enhance intrinsic motivation. Deci found positive feedback enhanced intrinsic motivation and negative feedback diminished it. Vallerand and Reid went further and found that these effects were being mediated by perceived control. Autonomy, however, must accompany competence for people to see their behaviours as self-determined by intrinsic motivation. For this to happen, there must be immediate contextual support for both needs, or inner resources based on prior development. CET and intrinsic motivation are also linked to relatedness through the hypothesis that intrinsic motivation flourishes if linked with a sense of security and relatedness. Grolnick and Ryan found lower intrinsic motivation in children who believed their teachers to be uncaring or cold, and so not fulfilling their relatedness needs. There is an interesting correlation between intrinsic motivation and educational performance, according to Augustyniak et al. They studied intrinsic motivation in second-year medical students and discovered that students with lower intrinsic motivation had lower test scores and overall grades. They also noted these students lacked interest and enjoyment in their studies. 
They suggest that it may be beneficial to identify when a student lacks intrinsic motivation at a younger age, so that it can be developed as they grow up. === Extrinsic motivation === Extrinsic motivation comes from external sources. Deci and Ryan developed organismic integration theory (OIT) as a sub-theory of SDT to explain the different ways extrinsically motivated behaviour is regulated. OIT details the different forms of extrinsic motivation and the contexts in which they come about. The context of such motivation concerns the SDT theory, as these contexts affect whether the motivations are internalised and so integrated into the sense of self. OIT describes four different types of extrinsic motivations that often vary in terms of their relative autonomy: Externally regulated behaviour: the least autonomous; it is performed because of external demand or possible reward. Such actions can be seen to have an externally perceived locus of control. Introjected regulation of behaviour: describes taking on regulations of behaviour without fully accepting said regulations as one's own. Deci and Ryan claim such behaviour normally represents regulation by contingent self-esteem, citing ego involvement as a classic form of introjection. This is the kind of behaviour where people feel motivated to demonstrate ability in order to maintain self-worth. While this is internally driven, introjected behavior has an externally perceived locus of causality; that is, it is not perceived as coming from one's self. Since the causality of the behavior is perceived as external, the behavior is considered non-self-determined. Regulation through identification: a more autonomously driven form of extrinsic motivation. It involves consciously valuing a goal or regulation so that said action is accepted as personally important. Integrated regulation: the most autonomous kind of extrinsic motivation. 
Occurring when regulations are fully assimilated with self so they are included in a person's self-evaluations and beliefs on personal needs. Because of this, integrated motivations share qualities with intrinsic motivation but are still classified as extrinsic because the goals that are trying to be achieved are for reasons extrinsic to the self, rather than the inherent enjoyment or interest in the task. Extrinsically motivated behaviours can be integrated into self. OIT proposes that internalization is more likely to occur when there is a sense of relatedness. Ryan, Stiller and Lynch found that children internalize school's extrinsic regulations when they feel secure and cared for by parents and teachers. Internalisation of extrinsic motivation is also linked to competence. OIT suggests that feelings of competence in activities should facilitate internalisation of said actions. Autonomy is particularly important when trying to integrate its regulations into a person's sense of self. If an external context allows a person to integrate regulation—they must feel competent, related and autonomous. They must also understand the regulation in terms of their other goals to facilitate a sense of autonomy. This was supported by Deci, Eghrari, Patrick and Leone who found in laboratory settings if a person was given a meaningful reason for uninteresting behaviour along with support for their sense of autonomy and relatedness they internalized and integrated their behaviour. == Individual differences == SDT argues that needs are innate but can be developed in a social context or learned from various life experiences and outside influences. Some people develop stronger needs than others, creating individual differences in the needs of people, whether it be autonomy, relatedness, or competence. However, individual differences within the theory focus on concepts resulting from the degree to which needs have been satisfied or not satisfied. 
This has the potential to lead to either need satisfaction or need frustration. Depending on which is reached, there can be either positive or negative outcomes, which vary from individual to individual and with what their needs may be. Within SDT there are two general individual-difference concepts, causality orientations and life goals, which are discussed in further detail below.

=== Causality orientations ===
Causality orientations are motivational orientations that refer to the way people interact with and adapt to an environment and regulate their behavior in response to these adaptations; in other words, they describe the extent to which people experience feelings related to self-determination across many settings. SDT distinguishes three orientations: autonomous, controlled and impersonal. These orientations help to explain the consequences of interactions with the environment, and the orientation an individual holds dictates how that person will adapt. Autonomous orientations result from satisfaction of the basic needs. An individual's interactions with the environment will be oriented towards trying to satisfy those needs, and they will adapt their behaviors in response to the environment they find themselves in. Certain environments may require more heightened and conscious effort to achieve those needs, while others may not. Either way, the individual has oriented themselves and their behaviors, whether consciously or subconsciously, towards achieving their basic needs. Strong controlled orientations result from satisfaction of the competence and relatedness needs but not autonomy, and are linked to regulation through both internal and external contingencies. This causes rigid functioning and diminished well-being. Impersonal orientations come from failure to fulfill all three needs, which leads to poor functioning and ill-being.
According to self-determination theory, each individual has each of these orientations to some extent, which makes it possible to predict their psychological and behavioral outcomes. When needs are satisfied, this has been shown to improve vitality, life satisfaction, and positive affect. On the other hand, need frustration can lead to more negative outcomes, such as emotional exhaustion. The causality orientations may have various and unique impacts on an individual's motivation. In one study, participants were presented with a puzzle and asked to put it together; researchers found that those who were more oriented towards autonomy put more time into solving the puzzle than their counterparts. Feedback was also an important contributing factor to the success and motivation of the participants.

=== Life goals ===
Life goals are long-term goals that people use to guide their activities. They may fit into a variety of different categories and vary from person to person. The period of time a particular goal takes will also differ depending on its nature: some goals may take decades while others may take a couple of years, and there have even been instances where a goal lasts a lifetime and is not fully achieved until the individual passes away. These goals can be divided into two separate categories:
- Intrinsic aspirations: life goals like affiliation, generativity and personal development.
- Extrinsic aspirations: life goals like wealth, fame and attractiveness.
Several studies on this subject have found intrinsic goals to be associated with greater health, well-being and performance. Intrinsic motivation has also been shown to be a better motivator, especially in relation to long-term goals, as it leaves all motivation on an internal basis; it does not rely on external factors, which are typically temporary, to provide the necessary drive to complete a task.
Intrinsic aspirations relate to values rather than to material things or things with material manifestations, which fits the examples provided. These life goals can also be related back to the needs that are stronger for the individual and that they are more motivated to satisfy. For example, the goal of affiliation fits the need for relatedness, while wealth fits more under the need for competence.

=== The connection ===
Both of these aspects can be related to many important parts of a person's life. The causality orientations held by an individual will have an impact on their life goals, including the type of goal and whether they will be able to achieve it. An example of this is job engagement and its relationship to the number of resources available to employees. The researchers conducting this study found that "the autonomous and impersonal orientations were shown to moderate the relationship between job resources and work engagement; the positive relationship was weaker for both highly autonomy-oriented and highly impersonal-oriented individuals. The interaction between controlled orientation and job resources was insignificant." So those in these work environments will have various life goals related to their work and, depending on their orientation, may be better able to navigate the various aspects of how well they can perform their job. Learned helplessness may even come into play in how motivated individuals may be.

== Classic studies ==

=== Deci (1971): External rewards on intrinsic motivation ===
Deci studied the effect of extrinsic rewards on intrinsic motivation in two laboratory experiments and a field experiment. Based on the results of earlier animal and human studies on intrinsic motivation, the author explored two possibilities. In the first two experiments he looked at the effect of extrinsic rewards in terms of a decrease in intrinsic motivation to perform a task.
Earlier studies had shown contradictory or inconclusive findings regarding a decrease in performance on a task following an external reward. The third experiment was based on findings of developmental learning theorists and looked at whether a different type of reward enhances intrinsic motivation to participate in an activity.

==== Experiment I ====
This experiment tested the hypothesis that if an individual is intrinsically motivated to perform an activity, introduction of an extrinsic reward decreases the degree of intrinsic motivation to perform the task. Twenty-four undergraduate psychology students participated in the first laboratory experiment and were assigned to either an experimental (n = 12) or a control group (n = 12). Each group participated in three sessions conducted on three different days. During the sessions, participants worked on a Soma cube puzzle, which the experimenters assumed was an activity college students would be intrinsically motivated to do. The puzzle could be put together to form numerous different configurations. In each session, the participants were shown four different configurations drawn on a piece of paper and were asked to use the puzzle to reproduce the configurations while they were being timed. The first and third sessions of the experimental condition were identical to those of the control group, but in the second session the participants in the experimental condition were given a dollar for completing each puzzle within the time limit. In the middle of each session, the experimenter left the room for eight minutes and the participants were told that they were free to do whatever they wanted during that time, while the experimenter observed them. The amount of time spent working on the puzzle during this free-choice period was used to measure motivation.
As Deci expected, when the external reward was introduced during session two, the participants spent more time working on the puzzles during the free-choice period than in session one, and when the external reward was removed in the third session, the time spent working on the puzzle dropped below that of the first session. All subjects reported finding the task interesting and enjoyable at the end of each session, providing evidence for the experimenter's assumption that the task was intrinsically motivating for the college students. The study showed some support for the experimenter's hypothesis, and a trend towards a decrease in intrinsic motivation was seen after money was provided to the participants as an external reward.

==== Experiment II ====
The second experiment was a field experiment, similar to laboratory Experiment I, but conducted in a natural setting. Eight student workers were observed at a college biweekly newspaper. Four of the students served as a control group and worked on Fridays; the experimental group worked on Tuesdays. The control and experimental group students were not aware that they were being observed. The 10-week observation was divided into three time periods. The task in this study required the students to write headlines for the newspaper. During "Time 2", the students in the experimental group were given 50 cents for each headline they wrote. At the end of Time 2, they were told that the newspaper could no longer pay them 50 cents per headline, as it had run out of the money allocated for that purpose, and they were not paid for headlines during Time 3. The speed of task completion (headlines) was used as a measure of motivation in this experiment, and absences were used as a measure of attitudes. To assess the stability of the observed effect, the experimenter observed the students again (Time 4) for two weeks, with a gap of five weeks between Time 3 and Time 4.
Due to absences, changes in assignment, etc., motivation data was not available for all students. The results of this experiment were similar to those of Experiment I: monetary reward was found to decrease the intrinsic motivation of the students, supporting Deci's hypothesis.

==== Experiment III ====
Experiment III was also conducted in the laboratory and was identical to Experiment I in all respects except for the kind of external reward provided to the students in the experimental condition during session two. In this experiment, verbal praise was used as an extrinsic reward. The experimenter hypothesized that a different type of reward, i.e., social approval in the form of verbal reinforcement and positive feedback for performing a task that a person is intrinsically motivated to perform, enhances the degree of intrinsic motivation, even after the extrinsic reward is removed. The results of Experiment III confirmed the hypothesis: the students' performance increased significantly during the third session in comparison to the first, showing that verbal praise and positive feedback enhance performance on tasks that a person is initially intrinsically motivated to perform. This provides evidence that verbal praise as an external reward increases intrinsic motivation. The author explained the differences between the two types of external reward as having different effects on intrinsic motivation. When a person is intrinsically motivated to perform a task and money is introduced for working on the task, the individual cognitively re-evaluates the importance of the task; the intrinsic motivation to perform the task (because the individual finds it interesting) shifts to extrinsic motivation, and the primary focus changes from enjoying the task to gaining financial reward.
However, when verbal praise is provided in a similar situation, it increases intrinsic motivation, as it is not evaluated as being controlled by external factors and the person sees the task as enjoyable and performed autonomously. The increase in intrinsic motivation is explained by positive reinforcement and an increase in perceived control over performing the task.

=== Pritchard et al. (1977): Evaluation of Deci's hypothesis ===
Pritchard et al. conducted a similar study to evaluate Deci's hypothesis regarding the role of extrinsic rewards in decreasing intrinsic motivation. Participants were randomly assigned to two groups, and a chess-problem task was used. Data was collected in two sessions.

==== Session I ====
Participants were asked to complete a background questionnaire that included questions on the amount of time the participant played chess during the week, the number of years the participant had been playing chess, the amount of enjoyment the participant got from playing the game, etc. The participants in both groups were then told that the experimenter needed to enter the information into the computer and that for the next 10 minutes they were free to do whatever they liked. The experimenter left the room for 10 minutes. The room had similar chess-problem tasks on the table, along with some magazines, and coffee was made available to participants if they chose to have it. The time spent on the chess-problem task, observed by the experimenter through a one-way mirror during the 10-minute break, was used as a measure of intrinsic motivation. After the experimenter returned, the experimental group was told that there was a monetary reward for the participant who could work on the most chess problems in the given time, and that the reward was for this session only and would not be offered during the next session. The control group was not offered a monetary reward.
==== Session II ====
The second session was the same for the two groups: after a filler task, the experimenter left the room for 10 minutes and the time participants spent on the chess-problem task was observed. The experimental group was reminded that there was no reward for the task this time. After both sessions the participants were required to respond to questionnaires evaluating the task, i.e., the degree to which they found it interesting; both groups reported that they found the task interesting. The results showed that the experimental group exhibited a significant decrease in time spent on the chess-problem task during the 10-minute free period from session one to session two in comparison with the group that was not paid, thus confirming the hypothesis presented by Deci that a contingent monetary reward for an activity decreases the intrinsic motivation to perform that activity. Other studies conducted around this time focused on other types of rewards, as well as other external factors, that play a role in decreasing intrinsic motivation.

== New developments ==
Principles of SDT have been applied in many domains of life, e.g., job demands, parenting, teaching, health (including willingness to be vaccinated), morality, and technology design. Additionally, SDT research has been widely applied to the field of sports.

=== Exercise and physical activity ===
Murcia and colleagues looked at the influence of peers on enjoyment in exercise. Specifically, they looked at the effect of the motivational climate generated by peers on exercisers by analyzing data collected through questionnaires and rating scales. The assessment included an evaluation of motivational climate, basic psychological need satisfaction, levels of self-determination and self-regulation (amotivation and external, introjected, identified, and intrinsic regulation), and the level of satisfaction and enjoyment in exercising.
Data analysis revealed that when peers are supportive and emphasize cooperation, effort, and personal improvement, the climate influences variables like basic psychological needs, motivation, and enjoyment. The task climate positively predicted the three basic psychological needs (competence, autonomy, and relatedness) and positively predicted self-determined motivation. Task climate and the resulting self-determination were also found to positively influence the level of enjoyment that exercisers experienced during the activity. Behzadnia and colleagues studied how physical education teachers' autonomy support versus control related to students' wellness, knowledge, performance, and intentions to persist at physical activity beyond physical education classes. The study concluded that "perceived autonomy support was positively related to the positive outcomes via need satisfaction and frustration and autonomous motivation, and that perceptions of teachers' control were related to students' ill-being (positively) and knowledge (negatively) through need frustration." Identified regulation was found to be more consistently associated with regular physical activity than other forms of autonomous motivation, such as intrinsic regulation, which may be triggered by pleasure derived from the activity itself. This may be explained by physical activity often involving more mundane or repetitive actions. More recent studies suggest that different types of motivation regulate different intensities of physical activity, which may be context-dependent. For example, a higher frequency of vigorous physical activity was associated with autonomous motivation but not with controlled motivation in a study in rural Uganda. In an urban disadvantaged South African population, however, an association was found between moderate physical activity and autonomous motivation, but not with vigorous physical activity.
The latter study also found the association between the basic psychological needs and more autonomous forms of motivation to differ across contexts.

=== Awareness ===
Awareness has always been associated with autonomous functioning; however, SDT researchers only recently incorporated the concept of mindfulness and its relationship with autonomous functioning and emotional well-being into their studies. Brown and Ryan conducted a series of five experiments to study mindfulness, which they defined as open, undivided attention to what is happening within and around oneself. From their experiments, the authors concluded that when people act mindfully, their actions are consistent with their values and interests. It is also possible that being autonomous and performing an action because it is enjoyable to oneself increases mindful attention to one's actions.

=== Vitality and self-regulation ===
Another area of interest for SDT researchers is the relationship between subjective vitality and self-regulation. Ryan and Deci define vitality as energy available to the self, either directly or indirectly, from basic psychological needs. This energy allows individuals to act autonomously. Many theorists have posited that self-regulation depletes energy, but SDT researchers have proposed and demonstrated that only controlled regulation depletes energy; autonomous regulation can actually be vitalizing. Ryan and colleagues used SDT to explain the effect of weekends on the well-being of the adult working population. The study determined that people felt higher well-being on weekends due to greater feelings of autonomy and feeling closer to others (i.e., relatedness) in weekend activities.
=== Education ===
In a study by Hyungshim Jang, two different theoretical models of motivation were used to explain why an externally provided rationale for doing a particular assignment often helps a student's motivation, engagement, and learning during relatively uninteresting learning activities. Undergraduate students (N = 136; 108 women, 28 men) worked on a relatively uninteresting short lesson after receiving or not receiving a rationale. Students who received the rationale showed greater interest, work ethic, and determination. Structural equation modeling was used to test three alternative explanatory models of why the rationale produced such benefits:
- an identified-regulation model based on SDT;
- an interest-regulation model based on research into interest-enhancing strategies;
- an additive model that integrated both models.
The data fit all three models, but only the SDT model helped students engage and learn. The findings show the role that externally provided rationales can play in helping students generate the motivation they need to engage in and learn from uninteresting but personally important material. The importance of these findings for educators is that when teachers try to promote students' motivation during relatively uninteresting learning activities, they can do so successfully by promoting the value of the task. Teachers can help students value what they deem "uninteresting" by providing a rationale that identifies the lesson's otherwise hidden value, helps students understand why the lesson is genuinely worth their effort, and communicates why the lesson can be expected to be useful to them. An example of SDT in education is Sudbury schools, wherein students decide for themselves how to spend their days. In these schools, students of all ages determine what they do and when, how, and where they do it.
This freedom is at the heart of the school; it belongs to the students as a right not to be violated. The fundamental premises of the school are simple: that all people are curious by nature; that the most efficient, long-lasting, and profound learning takes place when started and pursued by the learner; that all people are creative if they are allowed to develop their unique talents; that age-mixing among students promotes growth in all members of the group; and that freedom is essential to the development of personal responsibility. In practice, this means that students initiate all their own activities and create their own environments. The physical plant, the staff, and the equipment are there for the students to use as the need arises. The school provides a setting in which students are independent, are trusted, and are treated as responsible people, and a community in which students are exposed to the complexities of life within the framework of a participatory democracy. Sudbury schools do not perform or offer evaluations, assessments, or recommendations, asserting that they do not rate people and that the school is not a judge; comparing students to each other, or to some standard that has been set, is for them a violation of the student's right to privacy and to self-determination. Students decide for themselves how to measure their progress as self-starting learners through a process of self-evaluation: real lifelong learning and the proper educational evaluation for the 21st century, they argue.

=== Alcohol use ===
According to SDT, individuals who attribute their actions to external circumstances rather than internal mechanisms are far more likely to succumb to peer pressure. In contrast, individuals who consider themselves autonomous tend to be initiators of actions rather than followers.
Research examining the relationship between SDT and alcohol use among college students has indicated that individuals who attribute their decisions to external circumstances show greater alcohol consumption and drinking as a function of social pressure. For instance, in a study conducted by Knee and Neighbors, external factors in individuals who claim not to be motivated by internal factors were found to be associated with drinking for extrinsic reasons and with stronger perceptions of peer pressure, which in turn was related to heavier alcohol use. Given the evidence suggesting a positive association between externally oriented motivation and drinking, and the potential role of perceived social influence in this association, understanding the precise nature of this relationship seems important. Further, it may be hypothesized that the relationship between self-determination and drinking is mediated to some extent by the perceived approval of others.

=== Healthy eating ===
Self-determination theory offers an explanatory framework for predicting healthy eating and other dietary behaviors. Research on SDT in the domain of eating regulation is still in its early stages, and most of these studies were conducted in high-income settings. In support of SDT, a recent study of an urban township population in South Africa found that the frequency of fruit, vegetable, and non-refined starch intake was positively associated with identified regulation and negatively associated with introjected regulation among people with prediabetes. The same study found perceived competence and relatedness to be positively associated with identified regulation and negatively associated with introjected regulation. In more concrete wording, individuals who experience support from friends or family and who feel competent in maintaining a healthy diet are more likely to become motivated by their own values, such as having good health.
Motivation linked to pressure from others or to feelings of guilt or shame is negatively associated with maintaining a healthy diet.

== Motivational interviewing ==
Motivational interviewing (MI) is a popular approach to positive behavioral change. Used initially in the area of addiction (Miller & Rollnick, 2002), it is now used for a wider range of issues. It is a client-centered method that does not persuade or coerce patients to change, but instead attempts to explore and resolve their ambivalent feelings, allowing them to choose for themselves whether to change or not. Markland, Ryan, Tobin, and Rollnick believe that SDT provides a framework for how and why MI works. They believe that MI provides an autonomy-supportive atmosphere, which allows clients to find their own source of motivation and achieve their own success (in terms of overcoming addiction). Patients randomly assigned to an MI treatment group found the setting to be more autonomy-supportive than those in a regular support group.

== Environmental behaviors ==
Several studies have explored the link between SDT and environmental behaviors, both to determine the role of intrinsic motivation in the performance of environmental behavior and to account for the lack of success of current intervention strategies.

== Consumer behavior ==
Self-determination theory identifies a basic psychological need for autonomy as a central feature for understanding effective self-regulation and well-being. As adopting transformative services increases both individual and collective well-being, research has to delve more deeply into the origins of consumers' motivations. For this reason, researchers aim to augment the understanding of how different types of motivation determine consumers' intention to adopt transformative services.
They examine whether SDT can help foster more sustainable food choices by taking a closer look at the relationship between food-related types of motivation and different aspects of meat consumption, based on a survey among 1083 consumers in the Netherlands.

=== Motivation toward the environment scale ===
Environmental attitudes and knowledge are not good predictors of behavior; SDT suggests that motivation can predict behavior performance. Pelletier et al. (1998) constructed the Motivation Toward the Environment Scale (MTES), which consists of 24 statements (four for each type of motivation on the SDT motivation scale: intrinsic, integrated, identified, introjected, external, and amotivation) responding to the question 'Why are you doing things for the environment?'. Each item is scored on a 1–7 Likert scale. Utilizing the MTES, Villacorta (2003) demonstrated a correlation between environmental concerns and intrinsic motivation, together with peer and parental support; further, intrinsically motivated behaviors tend to persist longer.

=== Environmental motivation ===
Pelletier et al. (1999) show that four personal beliefs (helplessness, strategy, capacity, and effort) lead to greater amotivation, while self-determination has an inverse relationship with amotivation. The Amotivation Toward the Environment Scale measures the four reasons for amotivation through answers to the question 'Why are you not doing things for the environment?'. Participants rate 16 statements in total (four in each category of amotivation) on a 1–7 Likert scale.

=== Intervention strategies ===
Intervention strategies have to be effective in bridging the gap between attitudes and behaviors. Monetary incentives, persuasive communication, and convenience are often successful in the short term, but when the intervention is removed, the behavior is discontinued. In the long run, such intervention strategies are therefore expensive and difficult to maintain.
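As a rough illustration of how a scale like the MTES described above is typically scored, the sketch below averages each subscale's four 1–7 ratings. The `score_mtes` helper and the example ratings are hypothetical and for illustration only; they are not the published scoring key or items from Pelletier et al.

```python
# Hypothetical sketch of scoring an MTES-style instrument:
# 24 items, four per subscale, each rated on a 1-7 Likert scale.
# Subscale names follow the article's description; the grouping of
# items into subscales here is illustrative, not the official key.

SUBSCALES = ["intrinsic", "integrated", "identified",
             "introjected", "external", "amotivation"]

def score_mtes(responses):
    """Average the four 1-7 ratings belonging to each subscale.

    `responses` maps subscale name -> list of four Likert ratings.
    """
    scores = {}
    for name in SUBSCALES:
        items = responses[name]
        if len(items) != 4 or not all(1 <= r <= 7 for r in items):
            raise ValueError(f"{name}: expected four ratings in 1..7")
        scores[name] = sum(items) / 4
    return scores

# Example (made-up) respondent:
example = {
    "intrinsic":   [6, 7, 6, 5],
    "integrated":  [5, 5, 6, 6],
    "identified":  [6, 6, 5, 7],
    "introjected": [3, 4, 2, 3],
    "external":    [2, 2, 3, 1],
    "amotivation": [1, 1, 2, 1],
}
print(score_mtes(example)["intrinsic"])  # 6.0
```

Averaging (rather than summing) keeps every subscale on the same 1–7 range, so profiles like "high intrinsic, low amotivation" can be read off directly.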
SDT explains that environmental behavior that is not intrinsically motivated is not persistent. On the other hand, when self-determination is high, behavior is more likely to occur repeatedly. The importance of intrinsic motivation is particularly apparent with more difficult behaviors: while these are less likely to be performed in general, people with high intrinsic motivation perform them more frequently than people with low intrinsic motivation. Subjects scoring high on intrinsic motivation and supporting ecological well-being also reported a high level of happiness. According to Osbaldiston and Sheldon (2003), autonomy perceived by an individual leads to an increased frequency of environmental behavior. In their study, 162 university students chose an environmental goal and performed it for a week. Perceived autonomy, success in performing the chosen behavior, and future intention to continue were measured. The results suggested that people with a higher degree of self-perceived autonomy successfully perform behaviors and are more likely to do so in the long term. Based on the connection between SDT and environmental behaviors, Pelletier et al. suggest that successful interventions should emphasize self-determined motivation for performing environmental behaviors.

== Industrial and organizational psychology ==
SDT has been applied to industrial and organizational psychology.

== Criticism ==
Despite extensive research, SDT also has its critics. Steven Reiss (2017) points, among other things, to the lack of a clear definition of intrinsic and extrinsic motivation, unreliability of measurement, and inadequately designed experiments.

== See also ==
Digital self-determination
Industrial and organizational psychology
Positive psychology

== Additional Resources ==
Rochester Psychology: SDT

== References ==
Wikipedia/Self-determination_theory
Prolegomena to Any Future Metaphysics That Will Be Able to Present Itself as a Science (German: Prolegomena zu einer jeden künftigen Metaphysik, die als Wissenschaft wird auftreten können) is a book by the German philosopher Immanuel Kant, published in 1783, two years after the first edition of his Critique of Pure Reason. One of Kant's shorter works, it contains a summary of the Critique's main conclusions, sometimes by arguments Kant had not used in the Critique. Kant characterizes his more accessible approach here as an "analytic" one, as opposed to the Critique's "synthetic" examination of successive faculties of the mind and their principles. The book is also intended as a polemic. Kant was disappointed by the poor reception of the Critique of Pure Reason, and here he repeatedly emphasizes the importance of its critical project for the very existence of metaphysics as a science. The final appendix contains a response to an unfavorable review of the Critique. == Contents == === Introduction === Kant declared that the Prolegomena are for the use of both learners and teachers as a heuristic way to discover a science of metaphysics. Unlike other sciences, metaphysics has not yet attained universal and permanent knowledge. There are no standards to distinguish truth from error. Kant asked, "Can metaphysics even be possible?" David Hume investigated the problem of the origin of the concept of causality. Is the concept of causality truly independent of experience or is it learned from experience? Hume mistakenly attempted to derive the concept of causality from experience. He thought that causality was really based on seeing two objects that were always together in past experience. If causality is not dependent on experience, however, then it may be applied to metaphysical objects, such as an omnipotent God or an immortal soul.
Kant claimed to have logically deduced how causality and other pure concepts originate from human understanding itself, not from experiencing the external world. Unlike the Critique of Pure Reason, which was written in the synthetic style, Kant wrote the Prolegomena using the analytical method. He divided the question regarding the possibility of metaphysics as a science into three parts. In so doing, he investigated the three problems of the possibility of pure mathematics, pure natural science, and metaphysics in general. His result allowed him to determine the bounds of pure reason and to answer the question regarding the possibility of metaphysics as a science. === Preamble on the peculiarities of all metaphysical knowledge === § 1. On the sources of metaphysics Metaphysical principles are a priori in that they are not derived from external or internal experience. Metaphysical knowledge is philosophical cognition that comes from pure understanding and pure reason. § 2. Concerning the kind of knowledge which can alone be called metaphysical a. On the distinction between analytical and synthetical judgments in general Analytical judgments are explicative. They express nothing in the predicate but what has already been actually thought in the concept of the subject. Synthetical judgments are expansive. The predicate contains something that is not actually thought in the concept of the subject. It amplifies knowledge by adding something to the subject's concept. b. The common principle of all analytical judgments is the law of contradiction The predicate of an affirmative analytical judgment is already contained in the concept of the subject, of which it cannot be denied without contradiction. All analytical judgments are a priori. c. Synthetical judgments require a principle that is different from the law of contradiction. 1. Judgments of experience are always synthetical. Analytical judgments are not based on experience. 
They are based merely on the subject's concept. 2. Mathematical judgments are all synthetical. Pure mathematical knowledge is different from all other a priori knowledge. It is synthetical and cannot be known from mere conceptual analysis. Mathematics requires the intuitive construction of concepts. This intuitive construction implies an a priori view of the concept constructed in the mind. In the Critique of Pure Reason, Kant elaborates on this, explaining that "to construct a concept means to exhibit a priori the intuition corresponding to it." Arithmetical sums are the result of the addition of intuited counters; Kant's own example is 7 + 5 = 12, a sum that cannot be found by merely analyzing the concepts of seven and five but only by counting in intuition. Geometrical concepts, such as "shortest distance," are known only through exhibiting the concept in pure intuition. 3. Metaphysical judgments, properly so called, are all synthetical. Concepts and judgments pertaining to metaphysics may be analytical. These may not be metaphysical but can be combined to make a priori, synthetical, metaphysical judgments. For example, the analytical judgment "substance only exists as subject" can be used to make the judgment "all substance is permanent," which is a synthetical and properly metaphysical judgment. § 3. A remark on the general division of judgment into analytical and synthetical. This division is critical but has not been properly recognized by previous philosophers. § 4. The general question of the Prolegomena: Is metaphysics at all possible? The Critique of Pure Reason investigates this question synthetically. In it, an abstract examination of the concepts of the sources of pure reason results in knowledge of the actual science of metaphysics. The Prolegomena, on the other hand, starts with the known fact that there is actual synthetic a priori metaphysical knowledge of pure mathematics and pure natural science. From this knowledge, analytically, we arrive at the sources of the possibility of metaphysics. § 5. The general problem: How is knowledge from pure reason possible?
By using the analytical method, we start from the fact that there are actual synthetic a priori propositions and then inquire into the conditions of their possibility. In so doing, we learn the limits of pure reason. === Part one of the main transcendental problem. How is pure mathematics possible? === § 6. Mathematics consists of synthetic a priori knowledge. How was it possible for human reason to produce such a priori knowledge? If we understand the origins of mathematics, we might know the basis of all knowledge that is not derived from experience. § 7. All mathematical knowledge consists of concepts that are derived from intuitions. These intuitions, however, are not based on experience. § 8. How is it possible to intuit anything a priori? How can the intuition of the object occur before the experience of the object? § 9. My intuition of an object can occur before I experience an object if my intuition contains only the mere form of sensory experience. § 10. We can intuit things a priori only through the mere form of sensuous intuition. In so doing, we can only know objects as they appear to us, not as they are in themselves, apart from our sensations. Mathematics is not an analysis of concepts. Mathematical concepts are constructed from a synthesis of intuitions. Geometry is based on the pure intuition of space. The arithmetical concept of number is constructed from the successive addition of units in time. Pure mechanics uses time to construct motion. Space and time are pure a priori intuitions. They are the mere forms of our sensations and exist in us prior to all of our intuitions of objects. Space and time are a priori knowledge of a sensed object as it appears to an observer. § 11. The problem of a priori intuition is solved. The pure a priori intuition of space and time is the basis of empirical a posteriori intuition. Synthetic a priori mathematical knowledge refers to empirically sensed objects. 
A priori intuition relates to the mere form of sensibility; it makes the appearance of objects possible. The a priori form of a phenomenal object is space and time. The a posteriori matter of a phenomenal object is sensation, which is not affected by pure, a priori intuition. The subjective a priori pure forms of sensation, namely space and time, are the basis of mathematics and of all of the objective a posteriori phenomena to which mathematics refers. § 12. The concept of pure, a priori intuition can be illustrated by geometrical congruence, the three-dimensionality of space, and the boundlessness of infinity. These cannot be shown or inferred from concepts. They can only be known through pure intuition. Pure mathematics is possible because we intuit space and time as the mere form of phenomena. § 13. The difference between similar things which are not congruent cannot be made intelligible by understanding and thinking about any concept. They can only be made intelligible by being intuited or perceived. For example, the difference of chirality is of this nature. So, also, is the difference seen in mirror images. Right hands and ears are similar to left hands and ears. They are not, however, congruent. These objects are not things as they are apart from their appearance. They are known only through sensuous intuition. The form of external sensible intuition is space. Time is the form of internal sense. Time and space are mere forms of our sense intuition and are not qualities of things in themselves apart from our sensuous intuition. Remark I. Pure mathematics, including pure geometry, has objective reality when it refers to objects of sense. Pure mathematical propositions are not creations of imagination. They are necessarily valid of space and all of its phenomenal objects because a priori mathematical space is the foundational form of all a posteriori external appearance. Remark II. Berkeleian Idealism denies the existence of things in themselves.
The Critique of Pure Reason, however, asserts that it is uncertain whether or not external objects are given, and we can only know their existence as a mere appearance. Unlike Locke's view, space is also known as a mere appearance, not as a thing existing in itself. Remark III. Sensuous knowledge represents things only in the way that they affect our senses. Appearances, not things as they exist in themselves, are known through the senses. Space, time, and all appearances in general are mere modes of representation. Space and time are ideal, subjective, and exist a priori in all of our representations. They apply to all of the objects of the sensible world because these objects exist as mere appearances. Such objects are not dreams or illusions, though. The difference between truth and dreaming or illusion depends on the connection of representations according to rules of true experience. A false judgment can be made if we take a subjective representation as being objective. All the propositions of geometry are true of space and all of the objects that are in space. Therefore, they are true of all possible experience. If space is considered to be the mere form of sensibility, the propositions of geometry can be known a priori concerning all objects of external intuition. === Part two of the main transcendental problem. How is pure natural science possible? === § 14. An observer can't know anything about objects that exist in themselves, apart from being observed. Things in themselves cannot be known a priori because this would be a mere analysis of concepts. Neither can the nature of things in themselves be known a posteriori. Experience can never give laws of nature that describe how things in themselves must necessarily exist completely apart from an observer's experience. § 15. The universal science of nature contains a pure science of nature, as well as an empirical science of nature. 
The pure science of nature is a priori and expresses laws to which nature must necessarily conform. Two of its principles are "substance is permanent" and "every event has a cause." How is it possible that there are such a priori universal laws of nature? § 16. There is a priori knowledge of nature that precedes all experience. This pure knowledge is actual and can be confirmed by natural experience. We are not concerned with any so-called knowledge that cannot be verified by experience. § 17. The a priori conditions that make experience possible are also the sources of the universal laws of nature. How is this possible? § 18. Judgments of experience are empirical judgments that are valid for external objects. They require special pure concepts which have originated in the pure understanding. All judging subjects will agree on their experience of the object. When a perception is subsumed under these pure concepts, it is changed into objective experience. On the other hand, all empirical judgments that are only valid for the one judging subject are judgments of mere perception. These judgments of perception are not subsumed under a pure concept of the understanding. § 19. We cannot immediately and directly know an object as it is apart from the way that it appears. However, if we say that a judgment must be valid for all observers, then we are making a valid statement about an object. Judgments of experience are valid judgments about an object because they necessarily connect everyone's perceptions of the object through the use of a pure concept of the understanding. § 20. A judgment of perception is a connection of perceptions in a subject's mind. For example, "When the sun shines on a stone, the stone becomes warm." A judgment of perception has no necessary universality and therefore no objective validity. A judgment of perception can become a judgment of experience, as in "The sun warms the stone."
This occurs when the subject's perceptions are connected according to the form of a pure concept of the understanding. These pure concepts of the understanding are the general forms that any object must assume in order to be experienced. § 21. The forms of judgments in general, of the pure concepts abstracted from them, and of the universal principles of natural science are set out in three tables: the logical table of judgments, the transcendental table of concepts of the understanding, and the pure physiological table of universal principles of the science of nature. (The tables themselves are not reproduced in this summary.) § 21a. This Prolegomena is a critique of the understanding and it discusses the form and content of experience. It is not an empirical psychology that is concerned with the origin of experience. Experience consists of sense perceptions, judgments of perception, and judgments of experience. A judgment of experience includes what experience in general contains. This kind of judgment results when a sense perception and a judgment of perception are unified by a concept that makes the judgment necessary and valid for all perceivers. § 22. The senses intuit. The understanding thinks, or judges. Experience is generated when a concept of the understanding is added to a sense perception. The pure concepts of the understanding are concepts under which all sense perceptions must be subsumed [subsumirt] before they can be used in judgments of experience. A synthesis of perception then becomes necessary, universally valid, and representative of an experienced object. § 23. Pure a priori principles of possible experience bring mere phenomenal appearances under pure concepts of the understanding. This makes the empirical judgment valid in reference to an external object. These principles are universal laws of nature which are known before any experience. This solves the second question "How is the pure science of nature possible?". A logical system consists of the forms of all judgments in general.
A transcendental system is made up of the pure concepts which are the conditions of all synthetical, necessary judgments. A physical system, which is a universal and pure science of nature, contains pure principles of all possible experience. § 24. The first physical principle of pure understanding subsumes all spatial and temporal phenomenal appearances under the concept of quantity. All appearances are extensive magnitudes. It is the principle of the axioms of intuition. The second physical principle subsumes sensation under the concept of quality. All sensations exhibit a degree, or intensive magnitude, of sensed reality. This is the principle of the anticipations of perception. § 25. In order for a relationship between appearances to be valid as an objective experience, it must be formulated in accordance with an a priori concept. The concepts of substance/accident, cause/effect, and action/reaction (community) constitute a priori principles that turn subjective appearances into objective experiences. The concept of substance relates appearances to existence. The concepts of cause and community relate appearances to other appearances. The principles that are made of these concepts are the real, dynamical [Newtonian] laws of nature. Appearances are related to experience in general as being possible, actual, or necessary. Judgments of experience, that are thought or spoken, are formulated by using these modes of expression. § 26. The table of the Universal Principles of Natural Science is perfect and complete. Its principles are limited only to possible experience. The principle of the axioms of intuition states that appearances in space and time are thought of as quantitative, having extensive magnitude. The principle of the anticipations of perception states that an appearance's sensed reality has degree, or intensive magnitude. 
The principles of the analogies of experience state that perceptual appearances, not things in themselves, are thought of as experienced objects, in accordance with a priori rules of the understanding. § 27. Hume wrote that we cannot rationally comprehend cause and effect (causality). Kant added that we also cannot rationally comprehend substance and accident (subsistence) or action and reaction (community). Yet he denied that these concepts are derived from experience. He also denied that their necessity was false and merely an illusion resulting from habit. These concepts and the principles that they constitute are known before experience and are valid when they are applied to the experience of objects. § 28. We cannot know anything about the relations of things in themselves or of mere appearances. When we speak or think about objects of experience, however, they must necessarily have the relations of subsistence, causality, and community. These concepts constitute the principles of the possibility of our experience. § 29. With regard to causality, we start with the logical form of a hypothetical judgment. We can make a subjective judgment of perception and say, "If the sun shines long enough on a body, then the body will become warm." This, however, is an empirical rule that is valid merely of appearances in one consciousness. If I want to make an objective, universally valid hypothetical judgment, however, I must make it in the form of causality. As such, I say, "The sun is the cause of heat." This is a universal and necessary law that is valid for the possibility of objective experience. Experience is the valid knowledge of the way that appearances succeed each other as objects. This knowledge is expressed in the form of a hypothetical [if/then] judgment. The concept of causality refers to thoughts and statements about the way that successive appearances and perceptions are universally and necessarily experienced as objects, in any consciousness. § 30. 
The principles that contain the reference of the pure concepts of the understanding to the sensed world can only be used to think or speak of experienced objects, not things in themselves. These pure concepts are not derived from experience. Experience is derived from these pure concepts. This solves Hume's problem regarding the pure concept of causality. Pure mathematics and pure natural science can never refer to anything other than mere appearances. They can only represent either (1) that which makes experience in general possible, or (2) that which must always be capable of being represented in some possible particular experience. § 31. By this method, we have gained definite knowledge with reference to metaphysics. Unscientific researchers could also say that we can never reach, with our reason, beyond experience. They, however, have no grounds for their assertion. § 32. Former philosophers claimed that the sensible world was an illusion. The intelligible world, they said, was real and actual. Critical philosophy, however, acknowledges that objects of sense are mere appearances, but they are usually not illusions. They are appearances of a thing in itself, which cannot be directly known. Our pure concepts [causality, subsistence, etc.] and pure intuitions [space, time] refer only to objects of possible sense experience. They are meaningless when referred to objects that cannot be experienced. § 33. Our pure concepts of the understanding are not derived from experience and they also contain strict necessity, which experience never attains. As a result, we are tempted to use them to think and speak about objects of thought that transcend experience. This is a transcendent and illegitimate use. § 34. Unlike empirical concepts, which are grounded on sense perceptions, the pure concepts of the understanding are based on schemata. This is explained in the Critique of Pure Reason, A 137 ff. The objects thus produced occur only in experience. 
In the Critique, A 236 ff., it is explained that nothing that is beyond experience can be meaningfully thought by using the pure concepts without sense perception. § 35. The understanding, which thinks, should never wander beyond the bounds of experience. It keeps the imagination in check. The impossibility of thinking about unnatural beings should be demonstrated with scientific certainty. § 36. The constitution of our five senses and the way that they provide data makes nature possible materially, as a totality of appearances in space and time. The constitution of our understanding makes nature possible formally, as a totality of rules that regulate appearances in order for them to be thought of as connected in experience. We derive the laws of nature from the conditions of their necessary unity in one consciousness. We can know, before any experience, the universal laws of nature because they are derived from our sensibility and understanding. Nature and the possibility of experience in general are the same. The understanding does not derive its a priori laws from nature. The understanding prescribes laws to nature. § 37. The necessary laws of nature that we seem to discover in perceived objects have actually been derived from our own understanding. § 38. According to the law of gravitation, attraction decreases in inverse proportion to the square of the distance, that is, in proportion as the spherical surface over which the force spreads increases. Is this law found in space itself? No, it is found in the way that the understanding knows space. The understanding is the origin of the universal order of nature. It comprehends all appearances under its own laws. In so doing, it produces the form by which all experienced objects that appear to us are necessarily subject to its laws. § 39. Appendix to pure natural science. On the system of the categories. The Kantian categories constitute a complete, necessary system of concepts and thus lead to comprehension.
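The inverse-square relation that § 38 invokes can be stated explicitly; this is a modern restatement, not Kant's own notation. A force spreading uniformly from a point is diluted over the surface of a sphere of radius r, so its strength F falls off with the surface area:

```latex
F(r)\;\propto\;\frac{1}{4\pi r^{2}}
\qquad\Longrightarrow\qquad
\frac{F(r_{1})}{F(r_{2})}=\left(\frac{r_{2}}{r_{1}}\right)^{2}.
```

Doubling the distance thus quarters the force. Kant's point in § 38 is that this lawfulness reflects the geometry of the sphere, i.e., how the understanding knows space, rather than a property read off from things in themselves.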
These concepts constitute the form of connection between the concepts that occur in all empirical knowledge. To make a table of pure concepts, a distinction was made between the pure elementary concepts of the sensibility and those of the understanding. The former are space and time. The latter are the pure concepts or categories. The list is complete, necessary, and certain because it is based on a principle or rule. This principle is that thinking in general is judging. A table of the functions of judgments, when applied to objects in general, becomes a table of pure concepts of the understanding. These concepts, and only these, are our whole knowledge of things by pure understanding. These pure concepts are logical functions and do not, by themselves, produce a concept of an object. To do so, they need to be based on sensuous intuition. Their use is limited to experience. The systematic table of categories is used as a clue in the investigation of complete metaphysical knowledge. It was used in the Critique as a pattern for research on, among other things, the soul (A 344), the universe (A 415), and nothingness (A 292). === Part three of the main transcendental problem. How is metaphysics in general possible? === § 40. The truth or the objective reality of the concepts that are used in metaphysics cannot be discovered or confirmed by experience. Metaphysics is subjectively actual because its problems occur to everyone as a result of the nature of their reason. How, however, is metaphysics objectively possible? The concepts of reason are transcendent because they are concerned with the absolute totality of all possible experience. Reason doesn't know when to stop asking "Why?" Such an absolute totality cannot be experienced. The corresponding objects of the necessary Ideas of reason cannot be given in experience and are misleading illusions.
Only through self-knowledge can reason prevent the consideration of the immanent, subjective, guiding Ideas as being transcendent objects. § 41. In order to establish metaphysics as a science, a clear distinction must be made between the categories (pure concepts of the understanding) and the Ideas (pure concepts of reason). § 42. The concepts of the understanding appear in experience. They are confirmed by experience. On the other hand, the transcendent concepts of reason cannot be confirmed or refuted by experience because they don't appear in experience. Reason must introspectively investigate itself in order to avoid errors, illusions, and dialectical problems. § 43. The origin of the transcendental Ideas is the three forms of syllogism that reason uses in its activity. The first Idea is based on the categorical syllogism. It is the psychological Idea of the complete substantial subject. This Idea results in a paralogism, or unwittingly false dialectical reasoning. The second Idea is based on the hypothetical syllogism. It is the cosmological Idea of the complete series of conditions. This Idea results in an antinomy, or contradiction. The third Idea is based on the disjunctive syllogism. It is the theological Idea of the complete complex of everything that is possible. This Idea results in the dialectical problem of the Ideal. In this way, reason and its claims are completely and systematically considered. § 44. The Ideas of reason are useless, and even detrimental, to the understanding of nature. Is the soul a simple substance? Did the world have a beginning or did it always exist? Did a Supreme Being design nature? Reason, however, can help to make understanding complete. To do this, reason's Ideas are thought of as though they are known objects. § 45. Prefatory Remark to the Dialectic of Pure Reason. Reason continues to ask "why?" and will not be satisfied until a final thing in itself is experienced and understood. This, however, is a deceitful illusion.
This transcendent and unbounded abuse of knowledge must be restrained by toilsome, laborious scientific instruction. I. The Psychological Ideas (wrongly use Reason beyond experience) § 46. Substance (subject) cannot be known. Only accidents (predicates) can be known. Substance is a mere Idea, not an object. Pure reason, however, wrongly wants to know the subject of every predicate. Every subject, however, is a predicate for yet another subject, and so on as far as our knowledge of predicates extends. We can never know an ultimate subject or absolute substance. We seem to have an ego, though, which is a thinking subject for our thoughts. The ego, however, is not known. It is only a conceptless feeling of an existence and a representation of something that is related to all thinking. § 47. We can call this thinking self, or soul, a substance. We can say that it is an ultimate subject that is not the predicate of yet another subject. Substances, though, are permanent. If we cannot prove that the soul is permanent, then it is an empty, insignificant concept. The synthetical a priori proposition "the thinking subject is permanent" can only be proved if it is an object of experience. § 48. Substances can be said to be permanent only if we are going to associate them with possible or actual experience. We can never think of substances as independent of all experience. The soul, or thinking substance, cannot be proved to be permanent and immortal, because death is the end of experience. Only living beings can have experiences. We cannot prove anything about a person's thinking substance (soul) after the person dies. § 49. We know only appearances, not things in themselves. Actual bodies are external appearances in space. My soul, self, or ego is an internal appearance in time. Bodies, as appearances of my outer sense, do not exist apart from my thoughts.
I myself, as an appearance of my inner sense, do not exist apart from being my representation in time and cannot be known to be immortal. Space and time are forms of my sensibility and whatever exists in them is a real appearance that I experience. These appearances are connected in space and time according to universal laws of experience. Anything that cannot be experienced in space or time is nothing to us and does not exist for us. II. The Cosmological Ideas (wrongly use Reason beyond experience) § 50. The Cosmological Idea is cosmological because it is concerned with sensually experienced objects and it is an Idea because the ultimate condition which it seeks can never be experienced. Because its objects can be sensed, the Cosmological Idea wouldn't usually be considered to be a mere Idea. However, it outruns experience when it seeks the ultimate condition for all conditioned objects. In so doing, it is a mere Idea. § 51. There are four Cosmological Ideas. They mistakenly refer to the completeness, which can never be experienced, of a series of conditions. Pure reason makes four kinds of contradictory assertions about these Ideas. These antinomies result from the nature of human reason and cannot be avoided. 1. Thesis: The world has a temporal and spatial beginning or limit. Antithesis: The world does not have a temporal and spatial beginning or limit. 2. Thesis: Everything in the world consists of something that is simple. Antithesis: Everything in the world does not consist of something that is simple. 3. Thesis: There are causes in the world that are, themselves, free and uncaused. Antithesis: There are no causes in the world that are, themselves, free and uncaused. 4. Thesis: In the series of causes in the world, there is a necessary, uncaused being. Antithesis: In the series of causes in the world, there is not a necessary, uncaused being. § 52a. This conflict between thesis and antithesis cannot be resolved dogmatically. Both are supported by proofs.
The conflict results when an observer considers a phenomenon (an observed occurrence) to be a thing in itself (an observed occurrence without an observer). § 52b. The falsehood of mere Ideas, which cannot be experienced, cannot be discovered by reference to experience. The hidden dialectic of the four natural Ideas of pure reason, however, reveals their false dogmatism. Reason's assertions are based on universally admitted principles while contrary assertions are deduced from other universally acknowledged principles. Contradictory assertions are both false when they are based on a self-contradictory concept. There is no middle between the two false contradictory assertions and therefore nothing is thought by the self-contradictory concept on which they are based. § 52c. Experienced objects exist, in the way that they appear, only in experience. They do not exist, in the way that they appear, apart from a spectator's thoughts. In the first two antinomies, both the thesis and the antithesis are false because they are founded on a contradictory concept. With regard to the first antinomy, I cannot say that the world is infinite or finite. Infinite or finite space and time are mere Ideas and can never be experienced. With regard to the second antinomy, I cannot say that a body consists of an infinite or a finite number of simple parts. The division, into simple parts, of an experienced body reaches only as far as the possible experience reaches. § 53. The first two antinomies were false because they considered an appearance to be a thing-in-itself (a thing as it is apart from being an appearance). In the last two antinomies, due to a misunderstanding, an appearance was mistakenly opposed to a thing-in-itself. The theses are true of the world of things-in-themselves, or the intelligible world. The antitheses are true of the world of appearances, or the phenomenal world.
In the third antinomy, the contradiction is resolved if we realize that natural necessity is a property of things only as mere appearances, while freedom is attributed to things–in–themselves. An action of a rational being has two aspects or states of being: (1) as an appearance, it is an effect of some previous cause and is a cause of some subsequent effect, and (2) as a thing–in–itself it is free or spontaneous. Necessity and freedom can both be predicated of reason. In the world of appearances, motives necessarily cause actions. On the other hand, rational Ideas and maxims, or principles of conduct, command what a reasonable being ought to do. All actions of rational beings, as appearances, are strictly determined by causality. The same actions are free when the rational being acts as a thing–in–itself in accordance with mere practical reason. The fourth antinomy is solved in the same way as the third. Nowhere in the world of sense experiences and appearances is there an absolutely necessary being. The whole world of sense experiences and appearances, however, is the effect of an absolutely necessary being which can be thought of as a thing–in–itself which is not in the world of appearances. § 54. This antinomy or self–conflict of reason results when reason applies its principles to the sensible world. The antinomy cannot be prevented as long as objects (mere appearances) of the sensible world are considered to be things–in–themselves (objects apart from the way that they appear). This exposition of the antinomy will allow the reader to combat the dialectical illusions that result from the nature of pure reason. III. The Theological Idea § 55. This Idea is that of a highest, most perfect, primeval, original Being. From this Idea of pure reason, the possibility and actuality of all other things is determined. The Idea of this Being is conceived in order for all experience to be comprehended in an orderly, united connection.
It is, however, a dialectical illusion that results when we assume that the subjective conditions of our thinking are the objective conditions of objects in the world. The theological Idea is a hypothesis that was made in order to satisfy reason. It mistakenly became a dogma. § 56. General Remark on the Transcendental Ideas The psychological, cosmological, and theological Ideas are nothing but pure concepts of reason. They cannot be experienced. All questions about them must be answerable because they are only principles that reason has originated from itself in order to achieve complete and unified understanding of experience. The Idea of a whole of knowledge according to principles gives knowledge a systematic unity. The unity of reason's transcendental Ideas has nothing to do with the object of knowledge. The Ideas are merely for regulative use. If we try to use these Ideas beyond experience, a confusing dialectic results. === Conclusion. On the determination of the bounds of pure reason === § 57. We cannot know things in themselves, that is, things as they are apart from being experienced. However, things in themselves may exist and there may be other ways of knowing them, apart from our experience. We must guard against assuming that the limits of our reason are the limits of the possibility of things in themselves. To do this, we must determine the boundary of the use of our reason. We want to know about the soul. We want to know about the size and origin of the world, and whether we have free will. We want to know about a Supreme Being. Our reason must stay within the boundary of appearances but it assumes that there can be knowledge of the things–in–themselves that exist beyond that boundary. Mathematics and natural science stay within the boundary of appearances and have no need to go beyond. The nature of reason is that it wants to go beyond appearances and wants to know the basis of appearances. Reason never stops asking "why?"
Reason won't rest until it knows the complete condition for the whole series of conditions. Complete conditions are thought of as being the transcendental Ideas of the immaterial Soul, the whole world, and the Supreme Being. In order to think about these beings of mere thought, we symbolically attribute sensuous properties to them. In this way, the Ideas mark the bounds of human reason. They exist at the boundary because we speak and think about them as if they possess the properties of both appearances and things–in–themselves. Why is reason predisposed to metaphysical, dialectical inferences? In order to strengthen morality, reason has a tendency to be unsatisfied with physical explanations that relate only to nature and the sensible world. Reason uses Ideas that are beyond the sensible world as analogies of sensible objects. The psychological Idea of the Soul is a deterrent from materialism. The cosmological Ideas of freedom and natural necessity, as well as the magnitude and duration of the world, serve to oppose naturalism, which asserts that mere physical explanations are sufficient. The theological Idea of God frees reason from fatalism. § 58. We cannot know the Supreme Being absolutely or as it is in itself. We can know it as it relates to us and to the world. By means of analogy, we can know the relationship between God and us. The relationship can be like the love of a parent for a child, or of a clock–maker for his clock. We know, by analogy, only the relationship, not the unknown things that are related. In this way, we think of the world as if it was made by a Supreme Rational Being. === Solution of the general question of the Prolegomena. How is metaphysics possible as a science? === Metaphysics, as a natural disposition of reason, is actual. Yet metaphysics itself leads to illusion and dialectical argument. 
In order for metaphysics to become a science, a critique of pure reason must systematically investigate the role of a priori concepts in understanding. The mere analysis of these concepts does nothing to advance metaphysics as a science. A critique is needed that will show how these concepts relate to sensibility, understanding, and reason. A complete table must be provided, as well as an explanation of how they result in synthetic a priori knowledge. This critique must strictly demarcate the bounds of reason. Reliance on common sense or statements about probability will not lead to a scientific metaphysics. Only a critique of pure reason can show how reason investigates itself and can be the foundation of metaphysics as a complete, universal, and certain science. === Appendix === ==== How to make metaphysics as a science actual ==== An accurate and careful examination of the one existing critique of pure reason is needed. Otherwise, all pretensions to metaphysics must be abandoned. The existing critique of pure reason can be evaluated only after it has been investigated. The reader must ignore for a while the consequences of the critical researches. The critique's researches may be opposed to the reader's metaphysics, but the grounds from which the consequences derive can be examined. Several metaphysical propositions mutually conflict with each other. There is no certain criterion of the truth of these metaphysical propositions. This results in a situation that requires that the present critique of pure reason must be investigated before it can be judged as to its value in making metaphysics an actual science. ==== Pre-judging the Critique of Pure Reason ==== Kant was motivated to write this Prolegomena after reading what he judged to be a shallow and ignorant review of his Critique of Pure Reason. The review was published anonymously in a journal and was written by Garve with many edits and deletions by Feder. 
Kant's Critique was dismissed as "a system of transcendental or higher idealism." This made it seem as though it was an account of things that exist beyond all experience. Kant, however, insisted that his intent was to restrict his investigation to experience and the knowledge that makes it possible. Among other mistakes, the review claimed that Kant's table and deduction of the categories were "common well–known axioms of logic and ontology, expressed in an idealistic manner." Kant believed that his Critique was a major statement regarding the possibility of metaphysics. He tried to show in the Prolegomena that all writing about metaphysics must stop until his Critique was studied and accepted or else replaced by a better critique. Any future metaphysics that claims to be a science must account for the existence of synthetic a priori propositions and the dialectical antinomies of pure reason. ==== Proposals as to an investigation of the Critique of Pure Reason upon which a judgment may follow ==== Kant proposed that his work be tested in small increments, beginning with the basic assertions. The Prolegomena can be used as a general outline to be compared to the Critique. He was not satisfied with certain parts of the Critique and suggested that the discussions in the Prolegomena be used to clarify those sections. The unsatisfactory parts were the deduction of the categories and the paralogisms of pure reason in the Critique. If the Critique and the Prolegomena are studied and revised by a united effort by thinking people, then metaphysics may finally become scientific. In this way, metaphysical knowledge can be distinguished from false knowledge. Theology will also be benefited because it will become independent of mysticism and dogmatic speculation. == Reception == Lewis White Beck claimed that the chief interest of the Prolegomena to the student of philosophy is "the way in which it goes beyond and against the views of contemporary positivism". 
He wrote: "The Prolegomena is, moreover, the best of all introductions to that vast and obscure masterpiece, the Critique of Pure Reason. ... It has an exemplary lucidity and wit, making it unique among Kant's greater works and uniquely suitable as a textbook of the Kantian philosophy." Ernst Cassirer asserted that "the Prolegomena inaugurates a new form of truly philosophical popularity, unrivaled for clarity and keenness". Schopenhauer, in 1819, declared that the Prolegomena was "the finest and most comprehensible of Kant's principal works, which is far too little read, for it immensely facilitates the study of his philosophy". As a teenager Ernst Mach read and was inspired by the Prolegomena, before later reaching the conclusion that the 'thing-in-itself' was "just an illusion". == Notes == == Sources == Kant, Immanuel (1999). Critique of Pure Reason (The Cambridge Edition of the Works of Immanuel Kant). Translated and edited by Paul Guyer and Allen W. Wood. Cambridge University Press. ISBN 978-0-5216-5729-7. == External links == Prolegomena to Any Future Metaphysics, English translation by Gary Hatfield Contains the Prolegomena, English translation by Jonathan Bennett Original German text Prolegomena to Any Future Metaphysics public domain audiobook at LibriVox
Wikipedia/Prolegomena_to_Any_Future_Metaphysics
Self-control is an aspect of inhibitory control, one of the core executive functions. Executive functions are cognitive processes that are necessary for regulating one's behavior in order to achieve specific goals. Defined more independently, self-control is the ability to regulate one's emotions, thoughts, and behavior in the face of temptations and impulses. Self-control is often likened to a muscle: acts of self-control expend a limited resource. In the short term, overuse of self-control leads to the depletion of that resource. However, in the long term, the use of self-control can strengthen and improve the ability to control oneself over time. Self-control is also a key concept in the general theory of crime, a major theory in criminology. The theory was developed by Michael Gottfredson and Travis Hirschi in their book A General Theory of Crime (1990). Gottfredson and Hirschi define self-control as the differentiating tendency of individuals to avoid criminal acts independent of the situations in which they find themselves. Individuals with low self-control tend to be impulsive, inconsiderate towards others, risk takers, short-sighted, and nonverbally oriented. About 70% of the variance in questionnaire data operationalizing one construct of self-control was found to be genetic. == As a virtue == Classically, the virtue of self-control (or "willpower") was called continence, and contrasted with the vice of akrasia, or incontinence. In certain contexts, self-control manifested as other virtues: in frightening situations as courage, or in the face of aggravating situations as good temper. Christians may describe the struggle with akrasia as a battle between spirit (which is inclined to God) and flesh (which is mired in sin). Jesus, as his crucifixion approached, acknowledged the challenge his disciples faced when he found them sleeping instead of praying; he stated, "the spirit indeed is willing, but the flesh is weak".
Paul the Apostle, in his letter to the Romans, complained, "I do not understand my own actions. For I do not do what I want, but I do the very thing I hate.... I know that the good does not dwell within me, that is, in my flesh. For the desire to do the good lies close at hand, but not the ability". St. Augustine wrote in his Confessions, "As a youth I prayed, 'Give me chastity and continence, but not right away.'" The related virtue of temperance, or sophrosyne, has been discussed by philosophers and religious thinkers from Plato and Aristotle to the present day. One of the earliest and most well-known examples of self-control as a virtue was Aristotle's virtue of temperance, which concerns having a well-chosen and well-regulated set of desires. The vices associated with Aristotle's temperance are self-indulgence (deficiency) and insensibility (excess). Deficiency and excess here refer to how much temperance one has: a deficiency of temperance leads to overindulgence, while an excess of temperance leads to insensibility or unreasonable control. Aristotle suggested this analogy: The intemperate person is like a city with bad laws; the person without self-control is like a city that has good laws on the books but that does not enforce them. == Research == === Counteractive === Desire is an affectively charged motivation toward a certain object, person, or activity, often, but not limited to, one associated with pleasure or relief from displeasure. Desires differ in their intensity and longevity. A desire becomes a temptation when it impacts or enters the individual's area of self-control, if the behavior resulting from the desire conflicts with an individual's values or other self-regulatory goals. A limitation to research on desire is that people desire different things.
In research into what people desire in real-world settings, 7,827 self-reports of desires were collected over one week, including differences in desire frequency and strength, degree of conflict between desires and other goals, and the likelihood of resisting desire and success of the resistance. The most common and strongly experienced desires are those related to bodily needs like eating, drinking, and sleeping. Self-control dilemmas occur when long-term goals clash with short-term outcomes. Counteractive Self-Control Theory states that when presented with such a dilemma, we lessen the significance of the instant rewards while momentarily increasing the importance of our overall values. When asked to rate the perceived appeal of different snacks before making a decision, people valued health bars over chocolate bars. However, when asked to do the rankings after having chosen a snack, there was no significant difference in appeal. Further, when college students completed a questionnaire prior to their course registration deadline, they ranked leisure activities as less important and enjoyable than when they filled out the survey after the deadline passed. The stronger and more available the temptation is, the harsher the devaluation will be. One of the most common self-control dilemmas involves the desire for unhealthy or unneeded food consumption versus the desire to maintain long-term health. An indication of unneeded food could also be over-expenditure on certain types of consumption such as eating away from home. Not knowing how much to spend, or overspending one's budget on eating out, can be a symptom of a lack of self-control. Experiment participants rated a new snack as significantly less healthy when it was described as very tasty compared to when they heard it was just slightly tasty.
Without knowing anything else about a food, the mere suggestion of good taste triggered counteractive self-control and prompted them to devalue the temptation in the name of health. Further, when presented with the strong temptation of one large bowl of chips, participants both perceived the chips to be higher in calories and ate less of them than did participants who faced the weak temptation of three smaller chip bowls, even though both conditions represented the same amount of chips overall. Weak temptations are falsely perceived to be less unhealthy, so self-control is not triggered and desirable actions are more often engaged in; this supports the counteractive self-control theory. Weak temptations present more of a challenge to overcome than strong temptations, because they appear less likely to compromise long-term values. === Satiation === The decrease in an individual's liking of and desire for a substance following repeated consumption of that substance is known as satiation. Satiation rates when eating depend on interactions of trait self-control and healthiness of the food. After eating equal amounts of either clearly healthy (raisins and peanuts) or unhealthy (M&Ms and Skittles) snack foods, people who scored higher on trait self-control tests reported feeling significantly less desire to eat more of the unhealthy foods than they did the healthy foods. Those with low trait self-control satiated at the same pace regardless of health value. Further, when reading a description emphasizing the sweet flavor of their snack, participants with higher trait self-control reported a decrease in desire faster than they did after hearing a description of the healthy benefits of their snack. Once again, those with low self-control satiated at the same rate regardless of the description. Perceived unhealthiness of the food alone, regardless of actual health level, relates to faster satiation, but only for people with high trait self-control. 
=== Construal levels === High-level construal thinking, in which individuals "are obliged to infer additional details of content, context, or meaning in the actions and outcomes that unfold around them", views goals and values in a global, abstract sense, whereas low-level construals emphasize concrete, definitive ideas and categorizations. Different construal levels determine our activation of self-control in response to temptations. One technique for inducing high-level construals is asking an individual a series of "why?" questions that lead to increasingly abstracted responses, whereas low-level construals are induced by "how?" questions leading to increasingly concrete answers. When taking an Implicit Association Test, people with induced high-level construals are significantly faster at associating temptations (such as candy bars) with "bad", and healthy choices (such as apples) with "good" than those in the low-level condition. Those with induced higher-level construals also show a significantly increased likelihood of choosing an apple for a snack over a candy bar. Even in a person who is not exercising any conscious or active self-control effort, temptations can be dampened merely by inducing high-level construals. Abstraction of high-level construals may remind people of their large-scale values, such as a healthy lifestyle, which deemphasizes the current tempting situation.
Human self-control research is typically modeled by using a token economy system: a behavioral program in which individuals in a group can earn tokens for a variety of desirable behaviors and can cash in the tokens for various backup, positive reinforcers.: 305  The difference in research methodologies with humans using tokens or conditioned reinforcers versus non-humans using sub-primary forces suggested procedural artifacts as a possible suspect. One procedural difference was in the delay in the exchange period: Non-human subjects can and most likely would access their reinforcement immediately; human subjects had to wait for an "exchange period" in which they could exchange their tokens for money, usually at the end of the experiment. When this was done with non-human subjects (pigeons), they responded much like humans in that males showed much less control than females. Logue, who is discussed more below, points out that in her study on self-control it was boys who responded with less self-control than girls. She says that in adulthood, for the most part, the sexes equalize on their ability to exhibit self-control. This could imply a human's ability to exert more self-control as they mature and become aware of the consequences associated with impulsivity. This suggestion is further examined below. Most of the research in the field of self-control assumes that self-control is, in general, better than impulsiveness. As a result, almost all research done on this topic is from this standpoint; very rarely is impulsiveness the more adaptive response in experimental design. Some in the field of developmental psychology think of self-control in a way that takes into account that sometimes impulsiveness is the more adaptive response. In their view, a normal individual should have the capacity to be either impulsive or controlled depending on which is the most adaptive. However, there is comparatively less research conducted along these lines. 
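The token-economy procedure described above can be sketched as a minimal simulation. The behavior names, token values, and exchange rate below are illustrative assumptions, not figures from any cited study; the point is only the two-stage structure of conditioned reinforcers (tokens) exchanged later for a backup reinforcer:

```python
# Minimal sketch of a token economy: tokens are earned immediately for
# desirable behaviors (conditioned reinforcers) and cashed in for a backup
# reinforcer only at the delayed exchange period. All values are illustrative.

EARNINGS = {"completed_task": 2, "cooperation": 1}  # tokens per behavior (assumed)
EXCHANGE_RATE = 0.25                                # dollars per token (assumed)

class Participant:
    def __init__(self, name):
        self.name = name
        self.tokens = 0

    def emit(self, behavior):
        """Earn tokens immediately after a desirable behavior."""
        self.tokens += EARNINGS.get(behavior, 0)

    def exchange(self):
        """At the exchange period, trade all tokens for the backup reinforcer."""
        payout = self.tokens * EXCHANGE_RATE
        self.tokens = 0
        return payout

p = Participant("subject_1")
p.emit("completed_task")
p.emit("completed_task")
p.emit("cooperation")
print(p.tokens)      # 5 tokens accumulated during the session
print(p.exchange())  # 1.25 (backup reinforcer, e.g. money, delivered at the end)
```

The delay between `emit` and `exchange` is exactly the procedural difference the paragraph above identifies: non-human subjects typically receive reinforcement immediately, with no exchange period at all.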
Self-control has been theorized to be a measurable variable in humans, although there are many different tests and means of measuring it. In the worst circumstances people with the most self-control and resilience have the best chance of defying the odds they are faced with, such as poverty, bad schooling, unsafe communities, etc. Those at a disadvantage but with high self-control go on to higher education, professional jobs, and better psychosocial outcomes, although there is conflicting evidence on health impacts later in adulthood. The psychological phenomenon known as "John Henryism" posits that when goal-oriented, success-minded people strive ceaselessly in the absence of adequate support and resources, they can—like the eponymous 19th-century folk hero who fell dead of an aneurysm after besting a steam-powered drill in a railroad-spike-driving competition—work themselves to death (or toward it). In the 1980s, socio-epidemiologist Sherman James found that black Americans in North Carolina suffered disproportionately from heart disease and strokes. He suggested "John Henryism" as the cause of this phenomenon. === Alternatives === Using compassion, gratitude, and healthy pride to create positive emotional motivation can be less stressful, less vulnerable to rationalization, and more likely to succeed than the traditional strategy of using logic and willpower to suppress behavior that resonates emotionally. Philosopher Immanuel Kant, at the beginning of one of his main works, "Groundwork of the Metaphysics of Morals", mentions the term "Selbstbeherrschung"—self-control—in a way such that it does not play a key role in his account of virtue. He argues instead that qualities such as self-control and moderation of affect and passions are mistakenly taken to be absolutely good (G 4: 394).
In his defense of a solid universal morality, he also saw compassion as a weak and misguided sentiment: "Such benevolence is called soft-heartedness and should not occur at all among human beings", he said of it. Distancing himself from his previous positions on self-control, he points out that such qualities can have only instrumental value: they can promote the good will and make its work easier, but they can also have bad effects. To distinguish between morals and self-control, Kant mentions the example of the cruel Roman Dictator Lucius Cornelius Sulla Felix: despite his maxims being morally incorrect, Sulla had self-control because he steadfastly followed those maxims (A 7: 293). Sulla lacks the two levels of moral self-control that are constitutive of virtue (our ability to adopt moral maxims, abstracted from sense impressions; and our ability to follow these maxims). His lack of virtue is primarily explained by his failure to compel himself to adopt moral maxims. According to Kant, self-control is merely a kind of instrument for following already-adopted maxims. As a result, even when closer attention is paid to self-control, its role in adopting morally correct maxims remains neglected in Kant's secondary literature. == Skinner's survey of techniques == B.F. Skinner's Science and Human Behavior provides a survey of nine categories of self-control methods. === Physical restraint and physical aid === The manipulation of the environment to make some responses easier to physically execute and others more difficult illustrates this principle. This can be physical guidance: the application of physical contact to induce an individual to go through the motions of a desired behavior. This can also be a physical prompt.
Examples of this include clapping one's hand over one's own mouth, placing one's hand in one's pocket to prevent fidgeting, and using a 'bridge' hand position to steady a pool shot; these all represent physical methods to affect behavior.: 231  === Changing the stimulus === Manipulating the occasion for behavior may change behavior as well. Removing distractions that induce undesired actions or adding a prompt to induce them are examples. Hiding temptation and leaving reminders are two more.: 233  The need to hide temptation is a result of temptation's effect on the mind. A common theme among studies of desire is an investigation of the underlying cognitive processes of a craving for an addictive substance, such as nicotine or alcohol. In order to better understand the cognitive processes involved, the Elaborated Intrusion (EI) theory of craving was developed. According to EI, craving persists because individuals develop mental images of the coveted substance that are themselves pleasurable, but which also increase their awareness of deficit. The result is a vicious circle of desire, imagery, and preparation to satisfy the desire. This quickly escalates into greater expression of the imagery that incorporates working memory, interferes with performance on simultaneous cognitive tasks, and strengthens the emotional response. Essentially the mind is consumed by the craving for a desired substance, and this craving in turn interrupts any concurrent cognitive tasks. A craving for nicotine or alcohol is an extreme case, but EI theory also applies to more ordinary motivations and desires. === Depriving and satiating === Deprivation is the time in which an individual does not receive a reinforcer; satiation occurs when an individual has received a reinforcer to such a degree that it temporarily has no reinforcing power.: 40  If we deprive ourselves of a stimulus, the value of that reinforcement increases.
For example, if a person has been deprived of food, they may go to extreme measures to get that food, such as stealing. On the other hand, if a person eats a large meal, they may no longer be enticed by the reinforcement of dessert. One may manipulate one's own behavior by affecting states of deprivation or satiation. By skipping a meal before a free dinner one may more effectively capitalize on the free meal. By eating a healthy snack beforehand the temptation to eat free "junk food" is reduced.: 235  Imagery is important in desire cognition during a state of deprivation. One study divided smokers into two groups: The control group was instructed to continue smoking as usual until they arrived at the laboratory, where they were then asked to read a multisensory neutral script (one not related to a craving for nicotine). The experimental group, however, was asked to abstain from smoking before coming to the laboratory in order to induce craving, and upon their arrival were told to read a multisensory urge-induction script intended to intensify their nicotine craving. After the participants finished reading the script they rated their craving for cigarettes. Next they formulated visual or auditory images when prompted with verbal cues such as "a game of tennis" or "a telephone ringing". After this task the participants again rated their craving for cigarettes. The study found that the craving experienced by the abstaining smokers was decreased to the control group's level by visual imagery but not by auditory imagery alone. That mental imagery served to reduce the level of craving in smokers suggests that it can be used as a method of self-control during times of deprivation. === Manipulating emotional conditions === Manipulating emotional conditions can induce certain ways of responding. One example of this can be seen in theatre. Actors often elicit tears from their own painful memories if it is necessary for the character they are playing to cry.
One may read a letter or book, listen to music, or watch a movie, in order to get in the proper state of mind for a certain event or function. Additionally, considering an activity either as "work" or as "fun" can have an effect on the difficulty of self-control. To analyze the possible effects of the cognitive transformation of an object on desire, a study was conducted on 71 undergraduate students, all of whom were familiar with a particular chocolate product. The participants were randomly assigned to one of three groups: the control condition, the consummatory condition, and the nonconsummatory transformation condition. Each group was then given three minutes to complete their assigned task. The participants in the control condition were told to read a neutral article, about a location in South America, that was devoid of any words associated with food consumption. Those in the consummatory condition were instructed to imagine as clearly as possible how consuming the chocolate would taste and feel. The participants in the nonconsummatory transformation condition were told to imagine as clearly as possible odd settings or uses for the chocolate. Next, all the participants underwent a manipulation task that required them to rate their mood on a five-point scale in response to ten items they viewed. Following the manipulation task, participants completed automatic evaluations that measured their reaction time to six different images of the chocolate, each of which was paired with a positive or a negative stimulus. The results showed that the participants instructed to imagine the consumption of the chocolate demonstrated higher automatic evaluations toward the chocolate than did the participants told to imagine odd settings or uses for the chocolate, and participants in the control condition fell in-between the two experimental conditions. This indicates that the manner in which one considers an item influences how much it is desired.
=== Using aversive stimulation === Aversive stimulation is used as a means of increasing or decreasing the likelihood of target behavior. An aversive stimulus is sometimes referred to as a "punisher" or an "aversive". Closely related to the idea of a punisher is the concept of punishment. Punishment occurs when, in some situation, a person does something that is immediately followed by a punisher; that person is then less likely to do the same thing again in a similar situation. For example, when a teenager stays out past curfew and the teenager's parents ground the teenager, this punishment makes it less likely that the teenager will stay out past their curfew again. === Drugs === Low doses of stimulants, such as methylphenidate and amphetamine, improve inhibitory control and are used to treat ADHD. High amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of inhibitory control. Alcohol impairs self-control. === Operant conditioning === Operant conditioning, sometimes referred to as Skinnerian conditioning, is the process of strengthening a behavior by reinforcing it or weakening it by punishing it. By continually strengthening and reinforcing a behavior, or weakening and punishing a behavior, an association as well as a consequence develops. A behavior that is altered by its consequences is known as operant behavior. There are multiple components of operant conditioning. These include reinforcement such as positive reinforcers and negative reinforcers. A positive reinforcer is a stimulus which, when presented immediately following a behavior, causes the behavior to increase in frequency. Negative reinforcers are stimuli whose removal immediately after a response causes the response to be strengthened or to increase in frequency. Components of punishment are also incorporated, such as positive punishment and negative punishment. Examples of operant conditioning are commonplace.
When a student tells a joke to one of his peers and they all laugh at this joke, this student is more likely to continue this behavior of telling jokes because his joke was reinforced by the sound of their laughing. However, if a peer tells the student his joke is "silly" or "stupid", he will be punished for telling the joke and his likelihood of telling another joke is decreased. === Punishment === Self-punishment of responses would include the arranging of punishment contingent upon undesired responses. This might be seen in the behavior of whipping oneself, which some monks and religious persons do. This is different from aversive stimulation in that, for example, the alarm clock generates escape from the alarm, while self-punishment presents stimulation after the fact to reduce the probability of future behavior.: 237  Punishment is more like conformity than self-control because with self-control there needs to be an internal drive, not an external source of punishment, that makes the person want to do something. With a learning system of punishment the person does not make their decision based upon what they want; rather, they base it on external factors. When you use negative reinforcement you are more likely to influence their internal decisions and allow them to make the choice on their own, whereas with a punishment the person will make their decisions based upon the consequences rather than exerting self-control. The best way to learn self-control is with "free will", in which people perceive they are making their own choices. === "Doing something else" === Skinner noted that various philosophies and religions exemplified this principle by instructing believers to (for example) love their enemies. When we are filled with rage or hatred we might control ourselves by "doing something else" or, more specifically, something that is incompatible with our desired but inappropriate response.
== Brain regions involved == Functional imaging of the brain has shown that self-control correlates with activity in an area in the dorsolateral prefrontal cortex (dlPFC), a part of the frontal lobe. This area is distinct from those involved in generating intentional actions, attending to intentions, or selecting between alternatives. Self-control occurs through top-down inhibition of the premotor cortex, which essentially means using perception and mental effort to rein in behavior and action as opposed to allowing emotions or sensory experience (bottom-up) to control and drive behavior. There is some debate about the mechanism of self-control and how it emerges. Researchers once believed that the bottom-up approach, relying on sensory experience and immediate stimuli, guided self-control behavior. The more time a person spends thinking about a rewarding stimulus, the more likely he or she will experience a desire for it. Information that is most important gains control of working memory, and can then be processed through a top-down mechanism. Evidence suggests that top-down processing plays a strong role in self-control. Top-down processing can regulate bottom-up attentional mechanisms. To demonstrate this, researchers studied working memory and distraction by presenting participants with neutral or negative pictures and then a math problem or no task. They found that participants reported less negative moods after solving the math problem compared to the no-task group, which they attributed to an influence on working memory capacity. Many researchers work on identifying the brain areas involved in the exertion of self-control. Many different areas are known to be involved. In relation to self-control mechanisms, the reward centers in the brain compare external stimuli versus internal need states and a person's learning history. At the biological level, a loss of control is thought to be caused by a malfunctioning of a decision mechanism.
Much of the work on how the brain makes decisions is based on evidence from perceptual learning combined with neuroimaging, where it has been found that the pre-frontal cortex has a major impact on how people make choices. Subjects are often tested on tasks that are not typically associated with self-control, but are more general decision tasks. Nevertheless, the research on self-control is informed by such research. Sources for evidence on the neural mechanisms of self-control include fMRI studies on human subjects, neural recordings on animals, lesion studies on humans and animals, and clinical behavioral studies on humans with self-control disorders. There is broad agreement that the cortex is involved in self-control, specifically the pre-frontal cortex. A mechanistic account of self-control could have tremendous explanatory value and clinical application. What follows is a survey of some important literature on the brain regions involved in self-control. === Prefrontal cortex === The prefrontal cortex is located in the most anterior portion of the frontal lobe in the brain. It forms a larger portion of the cortex in humans, taking up about a third of the cortex, and is far more complex than in other animals. Neurons in the prefrontal cortex have dendrites with up to 16 times as many dendritic spines as neurons in other cortical areas. Because of this, the prefrontal cortex integrates a large amount of information.: 104  Cells in the orbitofrontal cortex are important in self-control. If an individual has the choice between an immediate reward or a more valuable reward they can receive later, they would most likely try to control the impulse of taking the inferior immediate reward. If that individual has a damaged orbitofrontal cortex, this impulse control will most likely not be as strong; they may be more likely to take the immediate reinforcement.
Lack of impulse control in children may be attributable to the fact that the prefrontal cortex develops slowly.: 406  Todd A. Hare et al. use functional MRI techniques to show that the ventromedial prefrontal cortex (vmPFC) and the dorsolateral prefrontal cortex (DLPFC) are crucial to the exertion of self-control. They found the vmPFC encoded the value placed on pleasurable, but ultimately self-defeating, behavior versus that placed on long-term goals. Another discovery was the fact that the exertion of self-control required the modulation of the vmPFC by the DLPFC. The study found that a lack of self-control was strongly correlated with reduced activity in the DLPFC. Hare's study is especially relevant to the self-control literature because it suggests that an important cause of poor self-control is a defective DLPFC. === Outcomes as determining whether a choice is made === Alexandra W. Logue studies how outcomes change the likelihood of a self-control choice being made. Logue identifies three possible outcome effects: outcome delays, outcome size, and outcome contingencies. Outcome delays: A delay in a positive outcome results in the perception that the outcome is less valuable than an outcome which is more readily achieved. The devaluing of the delayed outcome can cause less self-control. A way to increase self-control in situations of a delayed outcome is to pre-expose the outcome. Pre-exposure reduces the frustrations related to the delay of the outcome. An example of this is signing bonuses. Outcome size: There tends to be a relationship between the value of the incentive and the desired outcome: the larger the desired outcome, the larger the value. Some factors that decrease value include delay, effort/cost, and uncertainty. A decision tends to be based on the option with the highest value at the time of the decision.
Outcome contingencies: The relationship between responses and outcomes, or "outcome contingencies", impacts the degree of self-control that a person exercises. For instance, if a person is able to change his choice after the initial choice is made, the person is far more likely to take the impulsive, rather than self-controlled, choice. Additionally, it is possible for people to make a precommitment action—one meant to lead to a self-controlled action at a later period in time. When a person sets an alarm clock, for example, they are making a precommitted response to wake up early in the morning. Hence, that person is more likely to exercise the self-controlled decision to wake up, rather than to fall back in bed for a little more sleep. Cassandra B. Whyte studied locus of control, which is the degree to which people think that they, as opposed to external sources, have control over their outcomes. Results indicated that academic performance was higher among people who think their decisions meaningfully impact their outcomes. These outcomes may be due to the belief that they have options from which to choose, which facilitates more hopeful decision-making behavior when compared to dependence on externally determined outcomes that require less commitment, effort, or self-control. === Physiology of behavior === Many things affect one's ability to exert self-control; one of these is glucose levels in the brain. Exerting self-control depletes glucose. Reduced glucose, and poor glucose tolerance (reduced ability to transport glucose to the brain), are correlated with lower performance in tests of self-control, particularly in difficult new situations. Self-control demands that an individual work to overcome thoughts, emotions, and automatic responses/impulses. These efforts require higher blood glucose levels. Lower blood glucose levels can lead to unsuccessful self-control abilities.
Alcohol causes a decrease of glucose levels in both the brain and the body, and it also has an impairing effect on many forms of self-control. Furthermore, failure of self-control is most likely to occur during times of the day when glucose is used least effectively. Self-control thus appears highly susceptible to glucose. An alternative explanation is that these effects depend on the allocation of glucose, not on a limited supply of glucose. According to this theory, the brain has sufficient resources of glucose and also has the possibility of delivering the glucose, but the personal priorities and motivations of the individual cause the glucose to be allocated to other sites. As of 2024 this theory has not been tested. == The "marshmallow test" == In the 1960s, Walter Mischel tested four-year-old children for self-control via the "marshmallow test": the children were each given a marshmallow and told that they could eat it at any time, but that if they waited 15 minutes they would receive another marshmallow. Follow-up studies showed that the results correlated well with these children's success levels in later life in the form of greater academic achievement. A strategy used in the marshmallow test was to focus on "hot" or "cool" features of an object. The children were encouraged to think about the marshmallow's "cool features" such as its shape and texture, possibly comparing it to a cotton ball or a cloud. The "hot features" of the marshmallow would be its sweet, sticky tastiness. These hot features make it more difficult to delay gratification. By focusing on the cool features, the mind is diverted from the appealing aspects of the marshmallow, and self-control is more feasible. Years later Mischel reached out to the participants of his study, who were then in their 40s.
He found that those who showed less self-control by taking the single marshmallow in the initial study were more likely to develop problems with relationships, stress, and drug abuse later in life. Mischel carried out the experiment again with the same participants in order to see which parts of the brain were active during the process of self-control. The participants received MRI scans to show brain activity. The results showed that those who exhibited lower levels of self-control had higher brain activity in the ventral striatum, the area that deals with positive rewards. In more recent years, other studies have shown that income status was a much larger influence than any internal factors (e.g., if their family could afford to have breakfast every day, the child would be more likely to delay gratification). Another study showed cultural influences also play a role in delayed gratification in the context of the marshmallow test. Self-control is negatively correlated with sociotropy, which in turn is correlated with depression. == Ego depletion == Ego depletion is the theory that self-control requires energy and focus, and over an extended period of self-control demands, this energy and focus can fatigue. There are ways to mitigate this ego depletion. One way is through rest and relaxation from these high demands. Additionally, training self-control with certain behaviors such as practicing self-awareness may also help to strengthen an individual's self-control, as may motivational incentives and supplementation of glucose. Training on self-control tasks such as improving posture and monitoring eating habits might help boost one's ability to resist giving in to impulses. This may be particularly effective in those who would otherwise have difficulty controlling their impulses. However, there is conflicting evidence about whether ego depletion is a real effect; meta-analyses have mostly found no evidence that the effect exists.
For more details, see the main ego depletion page. == See also == == References == == Further reading == == External links == Discipline in our life (religious tract) Teaching Children the Art of Self-Control
Wikipedia/Self-control
Abstract object theory (AOT) is a branch of metaphysics regarding abstract objects. Originally devised by metaphysician Edward Zalta in 1981, the theory was an expansion of mathematical Platonism. == Overview == Abstract Objects: An Introduction to Axiomatic Metaphysics (1983) is the title of a publication by Edward Zalta that outlines abstract object theory. AOT is a dual predication approach (also known as "dual copula strategy") to abstract objects influenced by the contributions of Alexius Meinong and his student Ernst Mally. On Zalta's account, there are two modes of predication: some objects (the ordinary concrete ones around us, like tables and chairs) exemplify properties, while others (abstract objects like numbers, and what others would call "nonexistent objects", like the round square and the mountain made entirely of gold) merely encode them. While the objects that exemplify properties are discovered through traditional empirical means, a simple set of axioms allows us to know about objects that encode properties. For every set of properties, there is exactly one object that encodes exactly that set of properties and no others. This allows for a formalized ontology. A notable feature of AOT is that several notable paradoxes in naive predication theory (namely Romane Clark's paradox undermining the earliest version of Héctor-Neri Castañeda's guise theory, Alan McMichael's paradox, and Daniel Kirchner's paradox) do not arise within it. AOT employs restricted abstraction schemata to avoid such paradoxes. In 2007, Zalta and Branden Fitelson introduced the term computational metaphysics to describe the implementation and investigation of formal, axiomatic metaphysics in an automated reasoning environment. == See also == == Notes == == References == == Further reading ==
Wikipedia/Abstract_object_theory
Activity theory (AT; Russian: Теория деятельности) is an umbrella term for a line of eclectic social-sciences theories and research with its roots in the Soviet psychological activity theory pioneered by Sergei Rubinstein in the 1930s. It was later advocated for and popularized by Alexei Leont'ev. Some of the traces of the theory in its inception can also be found in a few works of Lev Vygotsky. These scholars sought to understand human activities as systemic and socially situated phenomena and to go beyond paradigms of reflexology (the teaching of Vladimir Bekhterev and his followers) and classical conditioning (the teaching of Ivan Pavlov and his school), psychoanalysis and behaviorism. It became one of the major psychological approaches in the former USSR, being widely used in both theoretical and applied psychology, and in education, professional training, ergonomics, social psychology and work psychology. Activity theory is more of a descriptive meta-theory or framework than a predictive theory. It considers an entire work/activity system (including teams, organizations, etc.) beyond just one actor or user. It accounts for environment, history of the person, culture, role of the artifact, motivations, and complexity of real-life activity. One of the strengths of AT is that it bridges the gap between the individual subject and the social reality—it studies both through the mediating activity. The unit of analysis in AT is the concept of object-oriented, collective and culturally mediated human activity, or activity system. This system includes the object (or objective), subject, mediating artifacts (signs and tools), rules, community and division of labor. The motive for the activity in AT is created through the tensions and contradictions within the elements of the system. 
According to ethnographer Bonnie Nardi, a leading theorist in AT, activity theory "focuses on practice, which obviates the need to distinguish 'applied' from 'pure' science—understanding everyday practice in the real world is the very objective of scientific practice. ... The object of activity theory is to understand the unity of consciousness and activity." Sometimes called "Cultural-Historical Activity Theory", this approach is particularly useful for studying a group that exists "largely in virtual form, its communications mediated largely through electronic and printed texts." Cultural-Historical Activity Theory has accordingly also been applied to genre theory within writing studies to consider how quasi-stabilized forms of communication regularize relations and work while forming communally shared knowledge and values in both educational and workplace settings. AT is particularly useful as a lens in qualitative research methodologies (e.g., ethnography, case study). AT provides a method of understanding and analyzing a phenomenon, finding patterns and making inferences across interactions, describing phenomena and presenting phenomena through a built-in language and rhetoric. A particular activity is a goal-directed or purposeful interaction of a subject with an object through the use of tools. These tools are exteriorized forms of mental processes manifested in constructs, whether physical or psychological. As a result, the notion of tools in AT is broad and can involve stationery, digital devices, library materials, or even physical meeting spaces. AT recognizes the internalization and externalization of cognitive processes involved in the use of tools, as well as the transformation or development that results from the interaction. == History == The origins of activity theory can be traced to several sources, which have subsequently given rise to various complementary and intertwined strands of development.
This account will focus on three of the most important of these strands. The first is associated with the Moscow Institute of Psychology and in particular the "troika" of young Russian researchers, Vygotsky, Leont'ev and Luria. Vygotsky founded cultural-historical psychology, a field that became the basis for modern AT; Leont'ev, one of the principal founders of activity theory, both developed and reacted against Vygotsky's work. Leont'ev's formulation of general activity theory is currently a strong influence in post-Soviet developments in AT, which have largely been in social-scientific, organizational, and writing-studies rather than psychological research and organization. The second major line of development within activity theory involves Russian scientists, such as P. K. Anokhin and Nikolai Bernstein, more directly concerned with the neurophysiological basis of activity; its foundation is associated with the Soviet philosopher of psychology Sergei Rubinstein. This work was subsequently developed by researchers such as Pushkin, Zinchenko & Gordeeva, Ponomarenko, Zarakovsky and others, and is currently most well-known through the work on systemic-structural activity theory being carried out by G. Z. Bedny and his associates, including a focus on the application of this theory as well as other related theories. Finally, in the Western world, discussions and use of AT are primarily framed within the Scandinavian activity theory strand, developed by Yrjö Engeström. === Russian === After Vygotsky's early death, Leont'ev became the leader of the research group nowadays known as the Kharkov School of Psychology and extended Vygotsky's research framework in significantly new ways. Leont'ev first examined the psychology of animals, looking at the different degrees to which animals can be said to have mental processes. 
He concluded that Pavlov's reflexionism was not a sufficient explanation of animal behaviour and that animals have an active relation to reality, which he called "activity". In particular, the behaviour of higher primates such as chimpanzees could only be explained by the ape's formation of multi-phase plans using tools. Leont'ev then progressed to humans and pointed out that people engage in "actions" that do not in themselves satisfy a need, but contribute towards the eventual satisfaction of a need. Often, these actions only make sense in a social context of a shared work activity. This led him to a distinction between "activities", which satisfy a need, and the "actions" that constitute the activities. Leont'ev also argued that the activity in which a person is involved is reflected in their mental activity, that is (as he puts it) material reality is "presented" to consciousness, but only in its vital meaning or significance. Activity theory also influenced the development of organizational-activity game as developed by Georgy Shchedrovitsky. === Scandinavian === AT remained virtually unknown outside the Soviet Union until the mid-1980s, when it was picked up by Scandinavian researchers. The first international conference on activity theory was not held until 1986. The earliest non-Soviet paper cited by Nardi is a 1987 paper by Yrjö Engeström: "Learning by expanding". This resulted in a reformulation of AT. Kuutti notes that the term "activity theory" "can be used in two senses: referring to the original Soviet tradition or referring to the international, multi-voiced community applying the original ideas and developing them further." The Scandinavian AT school of thought seeks to integrate and develop concepts from Vygotsky's Cultural-historical psychology and Leont'ev's activity theory with Western intellectual developments such as Cognitive Science, American Pragmatism, Constructivism, and Actor-Network Theory. It is known as Scandinavian activity theory. 
Work in the systems-structural theory of activity is also being carried on by researchers in the US and UK. Some of the changes are a systematisation of Leont'ev's work. Although Leont'ev's exposition is clear and well structured, it is not as well-structured as the formulation by Yrjö Engeström. Kaptelinin remarks that Engeström "proposed a scheme of activity different from that by Leont'ev; it contains three interacting entities—the individual, the object and the community—instead of the two components—the individual and the object—in Leont'ev's original scheme." Some changes were introduced, apparently by importing notions from human–computer interaction theory. For instance, the notion of rules, which is not found in Leont'ev, was introduced. Also, the notion of collective subject was introduced in the 1970s and 1980s (Leont'ev refers to "joint labour activity", but only has individuals, not groups, as activity subjects). == Theory == The goal of activity theory is understanding the mental capabilities of a single individual. However, it rejects the isolated individual as an insufficient unit of analysis, analyzing instead the cultural and technical aspects of human actions. Activity theory is most often used to describe actions in a socio-technical system through six related elements (Bryant et al., as defined by Leont'ev 1981 and redefined in Engeström 1987) of a conceptual system expanded by more nuanced theories: Object-orientedness – the objective of the activity system. Object refers to the objectiveness of the reality; items are considered objective according to natural sciences but also have social and cultural properties. Subject or internalization – actors engaged in the activities; the traditional notion of mental processes Community or externalization – social context; all actors involved in the activity system Tools or tool mediation – the artifacts (or concepts) used by actors in the system (both material and abstract artifacts).
Tools influence actor-structure interactions; they change with accumulating experience. In addition to their physical form, the knowledge they embody also evolves. Tools are influenced by culture, and their use is a way for the accumulation and transmission of social knowledge. Tools influence both the agents and the structure. Division of labor – social strata, hierarchical structure of activity, the division of activities among actors in the system Rules – conventions, guidelines and rules regulating activities in the system Activity theory helps explain how social artifacts and social organization mediate social action (Bryant et al.). == Information systems == The application of activity theory to information systems derives from the work of Bonnie Nardi and Kari Kuutti. Kuutti's work is addressed below. Nardi's approach is, briefly, as follows: Nardi (p. 6) described activity theory as "...a powerful and clarifying descriptive tool rather than a strongly predictive theory. The object of activity theory is to understand the unity of consciousness and activity...Activity theorists argue that consciousness is not a set of discrete disembodied cognitive acts (decision making, classification, remembering), and certainly it is not the brain; rather, consciousness is located in everyday practice: you are what you do." Nardi (p. 5) also argued that "activity theory proposes a strong notion of mediation—all human experience is shaped by the tools and sign systems we use." Nardi (p. 6) explained that "a basic tenet of activity theory is that a notion of consciousness is central to a depiction of activity. Vygotsky described consciousness as a phenomenon that unifies attention, intention, memory, reasoning, and speech..." and (p. 7) "Activity theory, with its emphasis on the importance of motive and consciousness—which belongs only to humans—sees people and things as fundamentally different.
People are not reduced to 'nodes' or 'agents' in a system; 'information processing' is not seen as something to be modelled in the same way for people and machines." In a later work, Nardi et al. in comparing activity theory with cognitive science, argue that "activity theory is above all a social theory of consciousness" and therefore "... activity theory wants to define consciousness, that is, all the mental functioning including remembering, deciding, classifying, generalising, abstracting and so forth, as a product of our social interactions with other people and of our use of tools." For Activity Theorists "consciousness" seems to refer to any mental functioning, whereas most other approaches to psychology distinguish conscious from unconscious functions. Over the last 15 years the use and exploration of activity theory in information systems has grown. One stream of research has focused on technology mediated change and the implementation of technologies and how they disrupt, change and improve organisational work activity. In these studies, activity systems are used to understand emergent contradictions in the work activity, which are temporarily resolved using information systems (tools) and/or arising from the introduction of information systems. Information science studies use a similar approach to activity theory in order to understand information behaviour "in context". In the field of Information and communications technology (ICT) and development (a field of study within information systems), activity theory has also been used to inform development of IT systems and to frame the study of ICT in development settings. In addition, Etengoff & Daiute have conducted recent work exploring how social media interfaces can be productively used to mediate conflicts. 
Their work has illustrated this perspective with analyses of online interactions between gay men and their religious family members and Sunni-Muslim emerging adults' efforts to maintain a positive ethnic identity via online religious forums in post 9/11 contexts. == Human–computer interaction == The rise of the personal computer challenged the focus in traditional systems developments on mainframe systems for automation of existing work routines. It furthermore brought forth a need to focus on how to work on materials and objects through the computer. In the search of theoretical and methodical perspectives suited to deal with issues of flexibility and more advanced mediation between the human being, material and outcomes through the interface, it seemed promising to turn to the still rather young HCI research tradition that had emerged primarily in the US (for further discussion see Bannon & Bødker, 1991). Specifically the cognitive science-based theories lacked means of addressing a number of issues that came out of the empirical projects (see Bannon & Bødker, 1991): 1. Many of the early advanced user interfaces assumed that the users were the designers themselves, and accordingly built on an assumption of a generic user, without concern for qualifications, work environment, division of work, etc. 2. In particular the role of the artifact as it stands between the user and her materials, objects and outcomes was ill understood. 3. In validating findings and designs there was a heavy focus on novice users whereas everyday use by experienced users and concerns for the development of expertise were hardly addressed. 4. Detailed task analysis and the idealized models created through task analysis failed to capture the complexity and contingency of real-life action.
5. From the point of view of complex work settings, it was striking how most HCI focused on one user – one computer, in contrast to the ever-ongoing cooperation and coordination of real work situations (this problem later led to the development of CSCW). 6. Users were mainly seen as objects of study. Because of these shortcomings, it was necessary to move outside cognitive science-based HCI to find or develop the necessary theoretical platform. European psychology had taken different paths than had American, with much inspiration from dialectical materialism (Hydén 1981, Engeström, 1987). Philosophers such as Heidegger and Wittgenstein came to play an important role, primarily through discussions of the limitations of AI (Winograd & Flores 1986, Dreyfus & Dreyfus 1986). Suchman (1987), with a similar focus, introduced ethnomethodology into the discussions, and Ehn (1988) based his treatise of design of computer artifacts on Marx, Heidegger and Wittgenstein. The development of the activity theoretical angle was primarily carried out by Bødker (1991, 1996) and by Kuutti (Bannon & Kuutti, 1993, Kuutti, 1991, 1996), both with strong inspiration from Scandinavian activity theory groups in psychology. Bannon (1990, 1991) and Grudin (1990a and b) made significant contributions to the furthering of the approach by making it available to the HCI audience. The work of Kaptelinin (1996) has been important to connect to the earlier development of activity theory in Russia. Nardi produced the most applicable collection of activity theoretical HCI literature to date (Nardi, 1996). === Systemic-structural activity theory (SSAT) === At the end of the 1990s, a group of Russian and American activity theorists working in the systems-cybernetic tradition of Bernshtein and Anokhin began to publish English-language articles and books dealing with topics in human factors and ergonomics and, latterly, human–computer interaction.
Under the rubric of systemic-structural activity theory (SSAT), this work represents a modern synthesis within activity theory which brings together the cultural-historical and systems-structural strands of the tradition (as well as other work within Soviet psychology such as the Psychology of Set) with findings and methods from Western human factors/ergonomics and cognitive psychology. The development of SSAT has been specifically oriented toward the analysis and design of the basic elements of human work activity: tasks, tools, methods, objects and results, and the skills, experience and abilities of involved subjects. SSAT has developed techniques for both the qualitative and quantitative description of work activity. Its design-oriented analyses specifically focus on the interrelationship between the structure and self-regulation of work activity and the configuration of its material components. == An explanation == This section presents a short introduction to activity theory, and some brief comments on human creativity in activity theory and the implications of activity theory for tacit knowledge and learning. === Activities === Activity theory begins with the notion of activity. An activity is seen as a system of human "doing" whereby a subject works on an object in order to obtain a desired outcome. In order to do this, the subject employs tools, which may be external (e.g. an axe, a computer) or internal (e.g. a plan). As an illustration, an activity might be the operation of an automated call centre. As we shall see later, many subjects may be involved in the activity and each subject may have one or more motives (e.g. improved supply management, career advancement or gaining control over a vital organisational power source). 
A simple example of an activity within a call centre might be a telephone operator (subject) who is modifying a customer's billing record (object) so that the billing data is correct (outcome) using a graphical front end to a database (tool). Kuutti formulates activity theory in terms of the structure of an activity. "An activity is a form of doing directed to an object, and activities are distinguished from each other according to their objects. Transforming the object into an outcome motivates the existence of an activity. An object can be a material thing, but it can also be less tangible." Kuutti then adds a third term, the tool, which 'mediates' between the activity and the object. "The tool is at the same time both enabling and limiting: it empowers the subject in the transformation process with the historically collected experience and skill 'crystallised' to it, but it also restricts the interaction to be from the perspective of that particular tool or instrument; other potential features of an object remain invisible to the subject...". As Verenikina remarks, tools are "social objects with certain modes of operation developed socially in the course of labour and are only possible because they correspond to the objectives of a practical action." === Levels === An activity is modelled as a three-level hierarchy. Kuutti schematises processes in activity theory as a three-level system. Verenikina paraphrases Leont'ev as explaining that "the non-coincidence of action and operations... appears in actions with tools, that is, material objects which are crystallised operations, not actions nor goals. If a person is confronted with a specific goal of, say, dismantling a machine, then they must make use of a variety of operations; it makes no difference how the individual operations were learned because the formulation of the operation proceeds differently to the formulation of the goal that initiated the action." 
The levels of activity are also characterised by their purposes: "Activities are oriented to motives, that is, the objects that are impelling by themselves. Each motive is an object, material or ideal, that satisfies a need. Actions are the processes functionally subordinated to activities; they are directed at specific conscious goals... Actions are realised through operations that are determined by the actual conditions of activity." Engeström developed an extended model of an activity, which adds another component, community ("those who share the same object"), and then adds rules to mediate between subject and community, and the division of labour to mediate between object and community. Kuutti asserts that "These three classes should be understood broadly. A tool can be anything used in the transformation process, including both material tools and tools for thinking. Rules cover both explicit and implicit norms, conventions, and social relations within a community. Division of labour refers to the explicit and implicit organisation of the community as related to the transformation process of the object into the outcome." Activity theory therefore includes the notion that an activity is carried out within a social context, or specifically in a community. The way in which the activity fits into the context is thus established by two resulting concepts:
rules: these are both explicit and implicit and define how subjects must fit into the community;
division of labour: this describes how the object of the activity relates to the community.
=== The internal plane of action === Activity theory provides a number of useful concepts that can be used to address the lack of expression for 'soft' factors which are inadequately represented by most process modelling frameworks. One such concept is the internal plane of action. Activity theory recognises that each activity takes place in two planes: the external plane and the internal plane.
The external plane represents the objective components of the action while the internal plane represents the subjective components of the action. Kaptelinin defines the internal plane of actions as "[...] a concept developed in activity theory that refers to the human ability to perform manipulations with an internal representation of external objects before starting actions with these objects in reality." The concepts of motives, goals and conditions discussed above also contribute to the modelling of soft factors. One principle of activity theory is that many activities have multiple motivations ('polymotivation'). For instance, a programmer in writing a program may address goals aligned towards multiple motives such as increasing his or her annual bonus, obtaining relevant career experience and contributing to organisational objectives. Activity theory further argues that subjects are grouped into communities, with rules mediating between subject and community and a division of labour mediating between object and community. A subject may be part of several communities and a community, itself, may be part of other communities. === Human creativity === Human creativity plays an important role in activity theory, which holds that "human beings... are essentially creative beings" with "the creative, non-predictable character". Tikhomirov also analyses the importance of creative activity, contrasting it to routine activity, and notes the important shift brought about by computerisation in the balance towards creative activity. Karl Marx, a sociological theorist, argued that humans are unique compared to other species in that humans create everything they need to survive. According to Marx, this is described as species-being. Marx believed we find our true identity in what we produce in our personal labour. === Learning and tacit knowledge === Activity theory has an interesting approach to the difficult problems of learning and, in particular, tacit knowledge.
Learning has been a favourite subject of management theorists, but it has often been presented in an abstract way separated from the work processes to which the learning should apply. Activity theory provides a potential corrective to this tendency. For instance, Engeström's review of Nonaka's work on knowledge creation suggests enhancements based on activity theory, in particular suggesting that the organisational learning process includes preliminary stages of goal and problem formation not found in Nonaka. Lompscher, rather than seeing learning as transmission, sees the formation of learning goals and the student's understanding of which things they need to acquire as the key to the formation of the learning activity. Of particular importance to the study of learning in organisations is the problem of tacit knowledge, which according to Nonaka, "is highly personal and hard to formalise, making it difficult to communicate to others or to share with others." Leont'ev's concept of operation provides an important insight into this problem. In addition, the key idea of internalisation was originally introduced by Vygotsky as "the internal reconstruction of an external operation." Internalisation has subsequently become a key term of the theory of tacit knowledge and has been defined as "a process of embodying explicit knowledge into tacit knowledge." Internalisation has been described by Engeström as the "key psychological mechanism" discovered by Vygotsky and is further discussed by Verenikina. == See also ==
Active learning
Activity-centered design
Anna Stetsenko
Critical psychology
Cultural-historical activity theory (CHAT)
Distributed cognition
Distributed leadership
Educational psychology
Enactivism
Interaction design
Leading activity
Organization Workshop
Post-rationalist cognitive therapy
Situated cognition
Social constructivism (learning theory)
== References == == Sources == Leont'ev, A. Problems of the development of mind.
English translation, Progress Press, 1981, Moscow. (Russian original 1947). Leont'ev, A. Activity, Consciousness, and Personality Engeström, Y. Learning by expanding Yasnitsky, A. (2011). Vygotsky Circle as a Personal Network of Scholars: Restoring Connections Between People and Ideas. Integrative Psychological and Behavioral Science, doi:10.1007/s12124-011-9168-5 Verenikina, I. & Gould, E. (1998) Cultural-historical Psychology & Activity Theory. In Hasan, H., Gould, E. & Hyland, P. (Eds.) Activity Theory and Information Systems (7–18), Vol. 1. Wollongong: UOW Press == Further reading == Bertelsen, O.W. and Bødker, S., 2003. Activity theory. In J. M. Carroll (Ed.) HCI models, theories, and frameworks: Toward a multidisciplinary science, Morgan Kaufmann, San Francisco. pp. 291–324. Bryant, Susan, Andrea Forte and Amy Bruckman, Becoming Wikipedian: Transformation of participation in a collaborative online encyclopedia, Proceedings of GROUP International Conference on Supporting Group Work, 2005. pp. 1–10. Kaptelinin, Victor, and Bonnie A. Nardi. (2006) Acting with Technology: Activity Theory and Interaction Design. MIT Press. Mazzoni, E. (2006). "Extending Web Sites' Usability: from a Cognitive Perspective to an Activity Theory Approach". In S. Zappala and C. Gray (Eds.) Impact of e-Commerce on Consumers and Small Firms. Aldershot, Hampshire (England), Ashgate. == External links == What is Activity Theory? The Future of Activity Theory Giorgos Kakarinos (2013): Methodological reflections on Leontiev's Activity Theory: Activity Theory and "The Logic of History" Archived 9 September 2019 at the Wayback Machine
Wikipedia/Activity_theory
Substance theory, or substance–attribute theory, is an ontological theory positing that objects are each constituted by a substance and by properties borne by the substance but distinct from it. In this role, a substance can be referred to as a substratum or a thing-in-itself. Substances are particulars that are ontologically independent: they are able to exist all by themselves. Another defining feature often attributed to substances is their ability to undergo changes. Changes involve something existing before, during and after the change. They can be described in terms of a persisting substance gaining or losing properties. Attributes or properties, on the other hand, are entities that can be exemplified by substances. Properties characterize their bearers; they express what their bearer is like. Substance is a key concept in ontology, itself a part of metaphysics, which may be classified into monist, dualist, or pluralist varieties according to how many substances or individuals are said to populate, furnish, or exist in the world. According to monistic views, there is only one substance. Stoicism and Spinoza, for example, hold monistic views, that pneuma or God, respectively, is the one substance in the world. These modes of thinking are sometimes associated with the idea of immanence. Dualism sees the world as being composed of two fundamental substances (for example, the Cartesian substance dualism of mind and matter). Pluralist philosophies include Plato's Theory of Forms and Aristotle's hylomorphic categories. == Ancient Greek philosophy == === Aristotle === Aristotle used the term "substance" (Greek: οὐσία ousia) in a secondary sense for genera and species understood as hylomorphic forms.
Primarily, however, he used it with regard to his category of substance, the specimen ("this person" or "this horse") or individual, qua individual, who survives accidental change and in whom the essential properties inhere that define those universals. A substance—that which is called a substance most strictly, primarily, and most of all—is that which is neither said of a subject nor in a subject, e.g. the individual man or the individual horse. The species in which the things primarily called substances are, are called secondary substances, as also are the genera of these species. For example, the individual man belongs in a species, man, and animal is a genus of the species; so these—both man and animal—are called secondary substances. In chapter 6 of book I of the Physics Aristotle argues that any change must be analysed in reference to the property of an invariant subject: as it was before the change and thereafter. Thus, in his hylomorphic account of change, matter serves as a relative substratum of transformation, i.e., of changing (substantial) form. In the Categories, properties are predicated only of substance, but in chapter 7 of book I of the Physics, Aristotle discusses substances coming to be and passing away in the "unqualified sense" wherein primary substances (πρῶται οὐσίαι; Categories 2a35) are generated from (or perish into) a material substratum by having gained (or lost) the essential property that formally defines substances of that kind (in the secondary sense). Examples of such a substantial change include not only conception and dying, but also metabolism, e.g., the bread a man eats becomes the man. On the other hand, in accidental change, because the essential property remains unchanged, by identifying the substance with its formal essence, substance may thereby serve as the relative subject matter or property-bearer of change in a qualified sense (i.e., barring matters of life or death).
An example of this sort of accidental change is a change of color or size: a tomato becomes red, or a juvenile horse grows. Aristotle thinks that in addition to primary substances (which are particulars), there are secondary substances (δεύτεραι οὐσίαι), which are universals (Categories 2a11–a18). However, according to Aristotle's theology, a form of invariant form exists without matter, beyond the cosmos, powerless and oblivious, in the eternal substance of the unmoved movers. === Pyrrhonism === Early Pyrrhonism rejected the idea that substances exist. Pyrrho put this as: "Whoever wants to live well (eudaimonia) must consider these three questions: First, how are pragmata (ethical matters, affairs, topics) by nature? Secondly, what attitude should we adopt towards them? Thirdly, what will be the outcome for those who have this attitude?" Pyrrho's answer is that "As for pragmata they are all adiaphora (undifferentiated by a logical differentia), astathmēta (unstable, unbalanced, not measurable), and anepikrita (unjudged, unfixed, undecidable). Therefore, neither our sense-perceptions nor our doxai (views, theories, beliefs) tell us the truth or lie; so we certainly should not rely on them. Rather, we should be adoxastoi (without views), aklineis (uninclined toward this side or that), and akradantoi (unwavering in our refusal to choose), saying about every single one that it no more is than it is not or it both is and is not or it neither is nor is not." === Stoicism === The Stoics rejected the idea that incorporeal beings inhere in matter, as taught by Plato. They believed that all being is corporeal infused with a creative fire called pneuma. Thus they developed a scheme of categories different from Aristotle's based on the ideas of Anaxagoras and Timaeus.
The fundamental basis of Stoicism in this context was a universally consistent ethical and moral code that should be maintained at all times; the physical belief in beings as matter is an important philosophical footnote, as it marked the start of thinking of beings as inherently linked to reality, instead of to some abstract heaven. === Neoplatonism === Neoplatonists argue that beneath the surface phenomena that present themselves to our senses are three higher spiritual principles or hypostases, each one more sublime than the preceding. For Plotinus, these are the soul or world-soul, being/intellect or divine mind (nous), and "the one". == Early modern philosophy == René Descartes means by a substance an entity which exists in such a way that it needs no other entity in order to exist. Therefore, only God is a substance in this strict sense. However, he extends the term to created things, which need only the concurrence of God to exist. He maintained that two of these are mind and body, each being distinct from the other in their attributes and therefore in their essence, and neither needing the other in order to exist. This is Descartes' substance dualism. Baruch Spinoza denied Descartes' "real distinction" between mind and matter. Substance, according to Spinoza, is one and indivisible, but has multiple "attributes". He regards an attribute, though, as "what we conceive as constituting the [single] essence of substance". The single essence of one substance can be conceived of as material and also, consistently, as mental. What is ordinarily called the natural world, together with all the individuals in it, is immanent in God: hence his famous phrase deus sive natura ("God or Nature"). John Locke views substance through a corpuscularian lens where it exhibits two types of qualities which both stem from a source. He believes that humans are born tabula rasa or "blank slate" – without innate knowledge.
In An Essay Concerning Human Understanding Locke writes that "first essence may be taken for the very being of anything, whereby it is, what it is." If humans are born without any knowledge, the way to receive knowledge is through perception of a certain object. But, according to Locke, an object exists in its primary qualities, no matter whether the human perceives it or not; it just exists. For example, an apple has qualities or properties that determine its existence apart from human perception of it, such as its mass or texture. The apple itself is also "pure substance in which is supposed to provide some sort of 'unknown support' to the observable qualities of things" that the human mind perceives. The foundational or support qualities are called primary essences which "in the case of physical substances, are the underlying physical causes of the object's observable qualities". But then what is an object except "the owner or support of other properties"? Locke rejects Aristotle's category of the forms, and develops mixed ideas about what substance or "first essence" means. Locke's solution to confusion about first essence is to argue that objects simply are what they are – made up of microscopic particles existing because they exist. According to Locke, the mind cannot completely grasp the idea of a substance as it "always falls beyond knowledge". There is a gap between what first essence truly means and the mind's perception of it that Locke believes the mind cannot bridge; objects in their primary qualities must exist apart from human perception. The molecular combination of atoms in first essence then forms the solid base that humans can perceive and add qualities to describe - the only way humans can possibly begin to perceive an object. The way to perceive the qualities of an apple is from the combination of the primary qualities to form the secondary qualities.
These qualities are then used to group the substances into different categories that "depend on the properties [humans] happen to be able to perceive". The taste of an apple or the feeling of its smoothness are not traits inherent to the fruit but are the power of the primary qualities to produce an idea about that object in the mind. The reason that humans cannot sense the actual primary qualities is the mental distance from the object; thus, Locke argues, objects remain nominal for humans. Therefore, the argument then returns to how "a philosopher has no other idea of those substances than what is framed by a collection of those simple ideas which are found in them." The mind's conception of substances "[is] complex rather than simple" and "has no (supposedly innate) clear and distinct idea of matter that can be revealed through intellectual abstraction away from sensory qualities". The last quality of substance is the way the perceived qualities seem to begin to change – such as a candle melting; this quality is called the tertiary quality. Tertiary qualities "of a body are those powers in it that, by virtue of its primary qualities, give it the power to produce observable changes in the primary qualities of other bodies"; "the power of the sun to melt wax is a tertiary quality of the sun". They are "mere powers; qualities such as flexibility, ductility; and the power of sun to melt wax". This goes along with "passive power: the capacity a thing has for being changed by another thing". In any object, at the core are the primary qualities (unknowable by the human mind), the secondary quality (how primary qualities are perceived), and tertiary qualities (the power of the combined qualities to make a change to the object itself or to other objects). Robert Boyle's corpuscularian hypothesis states that "all material bodies are composites of ultimately small particles of matter" that "have the same material qualities as the larger composite bodies do".
Using this basis, Locke defines his first group, primary qualities, as "the ones that a body doesn't lose, however much it alters." The materials retain their primary qualities even if they are broken down because of the unchanging nature of their atomic particles. If someone is curious about an object and they say it is solid and extended, these two descriptors are primary qualities. The second group consists of secondary qualities which are "really nothing but the powers to produce various sensations in us by their primary qualities." Locke argues that the impressions our senses perceive from the objects (i.e. taste, sounds, colors, etc.) are not natural properties of the object itself, but things they induce in us by means of the "size, shape, texture, and motion of their imperceptible parts." The bodies send insensible particles to our senses which let us perceive the object through different faculties; what we perceive is based on the object's composition. With these qualities, people can achieve the object through bringing "co-existing powers and sensible qualities to a common ground for explanation". Locke supposes that one wants to know what "binds these qualities" into an object, and argues that a "substratum" or "substance" has this effect, defining "substance" as follows: [T]he idea of ours to which we give the general name substance, being nothing but the supposed but unknown support of those qualities we find existing and which we imagine can't exist sine re substante — that is, without some thing to support them — we call that support substantia; which, according to the true meaning of the word, is in plain English standing under or upholding. This substratum is a construct of the mind in an attempt to bind all the qualities seen together; it is only "a supposition of an unknown support of qualities that are able to cause simple ideas in us." Without making a substratum, people would be at a loss as to how different qualities relate. 
Locke does, however, mention that this substratum is an unknown, relating it to the story of the world on the turtle's back and how the believers eventually had to concede that the turtle just rested on "something he knew not what". This is how the mind perceives all things and from which it can make ideas about them; it is entirely relative, but it does provide a "regularity and consistency to our ideas". Substance, overall, has two sets of qualities — those that define it, and those related to how we perceive it. These qualities rush to our minds, which must organize them. As a result, our mind creates a substratum (or substance) for these objects, into which it groups related qualities. == Criticism of soul as substance == Kant observed that the assertion of a spiritual soul as substance could be a synthetic proposition which, however, was unproved and completely arbitrary. Introspection does not reveal any diachronic substrate remaining unchanged throughout life. The temporal structure of consciousness is retentive-perceptive-prognostic. The selfhood arises as result of several informative flows: (1) signals from our own body; (2) retrieved memories and forecasts; (3) the affective load: dispositions and aversions; (4) reflections in other minds. Mental acts have the feature of appropriation: they are always attached to some pre-reflective consciousness. As visual perception is only possible from a definite point of view, so inner experience is given together with self-consciousness. The latter is not an autonomous mental act, but a formal way how the first person has their experience. From the pre-reflective consciousness, the person gains conviction of their existence. This conviction is immune to false reference. The concept of person is prior to the concepts of subject and body. The reflective self-consciousness is a conceptual and elaborate cognition. Selfhood is a self-constituting effigy, a task to be accomplished. 
Humans are incapable of comprising all their experience within the current state of consciousness; overlapping memories are critical for personal integrity. Appropriated experience can be recollected. At stage B, we remember the experience of stage A; at stage C, we may be aware of the mental acts of stage B. The idea of self-identity is enforced by the relatively slow changes of our body and social situation. Personal identity may be explained without accepting a spiritual agent as subject of mental activity. Associative connection between life episodes is necessary and sufficient for the maintenance of a united selfhood. Personal character and memories can persist after radical mutation of the body. == Irreducible concepts == Two irreducible concepts encountered in substance theory are the bare particular and inherence. === Bare particular === In substance theory, a bare particular of an object is the element without which the object would not exist, that is, its substance, which exists independently from its properties, even if it is impossible for it to lack properties entirely. It is "bare" because it is considered without its properties and "particular" because it is not abstract. The properties that the substance has are said to inhere in the substance. === Inherence === Another primitive concept in substance theory is the inherence of properties within a substance. For example, in the sentence, "The apple is red" substance theory says that red inheres in the apple. Substance theory takes the meaning of an apple having the property of redness to be understood, and likewise that of a property's inherence in substance, which is similar to, but not identical with, being part of the substance. The inverse relation is participation. Thus in the example above, just as red inheres in the apple, so the apple participates in red. 
== Arguments supporting the theory == Two common arguments supporting substance theory are the argument from grammar and the argument from conception. === Argument from grammar === The argument from grammar uses traditional grammar to support substance theory. For example, the sentence "Snow is white" contains a grammatical subject "snow" and the predicate "is white", thereby asserting snow is white. The argument holds that it makes no grammatical sense to speak of "whiteness" disembodied, without asserting that snow or something else is white. Meaningful assertions are formed by virtue of a grammatical subject, of which properties may be predicated, and in substance theory, such assertions are made with regard to a substance. Bundle theory rejects the argument from grammar on the basis that a grammatical subject does not necessarily refer to a metaphysical subject. Bundle theory, for example, maintains that the grammatical subject of a statement refers to its properties. For example, a bundle theorist understands the grammatical subject of the sentence, "Snow is white", to be a bundle of properties such as white. Accordingly, one can make meaningful statements about bodies without referring to substances. === Argument from conception === Another argument for the substance theory is the argument from conception. The argument claims that in order to conceive of an object's properties, like the redness of an apple, one must conceive of the object that has those properties. According to the argument, one cannot conceive of redness, or any other property, distinct from the substance that has that property. == Criticism == The idea of substance was famously critiqued by David Hume, who held that since substance cannot be perceived, it should not be assumed to independently exist. 
Friedrich Nietzsche, and after him Martin Heidegger, Michel Foucault and Gilles Deleuze also rejected the notion of "substance", and in the same movement the concept of subject - seeing both concepts as holdovers from Platonic idealism. For this reason, Althusser's "anti-humanism" and Foucault's statements were criticized by Jürgen Habermas and others as leading to a fatalist conception of social determinism. For Habermas, only a subjective form of liberty could be conceived, to the contrary of Deleuze, who talks about "a life" as an impersonal and immanent form of liberty. For Heidegger, Descartes means by "substance" that by which "we can understand nothing else than an entity which is in such a way that it need no other entity in order to be." Therefore, only God is a substance as Ens perfectissimus (most perfect being). Heidegger showed the inextricable relationship between the concept of substance and of subject, which explains why, instead of talking about "man" or "humankind", he speaks about the Dasein, which is not a simple subject, nor a substance. Alfred North Whitehead has argued that the concept of substance has only a limited applicability in everyday life and that metaphysics should rely upon the concept of process. Roman Catholic theologian Karl Rahner, as part of his critique of transubstantiation, rejected substance theory and instead proposed the doctrine of transfinalization, which he felt was more attuned to modern philosophy. However, this doctrine was rejected by Pope Paul VI in his encyclical Mysterium fidei. The 20th century Australian philosopher Colin Murray Turbayne also raised fundamental objections to the concepts of "substance" and "substratum", arguing that both have little if any meaning at best.
In Turbayne's view, such concepts are more properly described as linguistic metaphors which served as the foundation for the physicalist and mechanistic theories of the universe proposed by Isaac Newton and the mind-body dualism put forth by René Descartes. Turbayne contends mankind has fallen victim over the course of time to such metaphors by misinterpreting them as examples of literal truth and subsequently utilizing deductive reasoning to incorporate them into the development of modern scientific hypotheses. He concludes that mankind can successfully embrace more beneficial theoretic constructs of the universe only after first acknowledging the metaphorical nature of these two concepts and the central role which they have assumed in the guise of literal truth within the realm of epistemology and metaphysics. == Bundle theory == In direct opposition to substance theory is bundle theory, whose most basic premise is that all concrete particulars are merely constructions or 'bundles' of attributes or qualitative properties: necessarily, for any concrete entity a, if any entity b is a constituent of a, then b is an attribute. The bundle theorist's principal objections to substance theory concern the bare particulars of a substance, which substance theory considers independently of the substance's properties. The bundle theorist objects to the notion of a thing with no properties, claiming that such a thing is inconceivable and citing John Locke, who described a substance as "a something, I know not what." To the bundle theorist, as soon as one has any notion of a substance in mind, a property accompanies that notion. === Identity of indiscernibles counterargument === The indiscernibility argument from the substance theorist targets those bundle theorists who are also metaphysical realists.
Metaphysical realism uses the identity of universals to compare and identify particulars. Substance theorists say that bundle theory is incompatible with metaphysical realism due to the identity of indiscernibles: particulars may differ from one another only with respect to their attributes or relations. The substance theorist's indiscernibility argument against the metaphysically realistic bundle theorist states that numerically different concrete particulars are discernible from the self-same concrete particular only by virtue of qualitatively different attributes. Necessarily, for any complex objects, a and b, if for any entity, c, c is a constituent of a if and only if c is a constituent of b, then a is numerically identical with b. The indiscernibility argument points out that if bundle theory and discernible concrete particulars theory explain the relationship between attributes, then the identity of indiscernibles theory must also be true: Necessarily, for any concrete objects, a and b, if for any attribute, Φ, Φ is an attribute of a if and only if Φ is an attribute of b, then a is numerically identical with b. The indiscernibles argument then asserts that the identity of indiscernibles is violated, for example, by identical sheets of paper. All of their qualitative properties are the same (e.g. white, rectangular, 9 x 11 inches...) and thus, the argument claims, bundle theory and metaphysical realism cannot both be correct. However, bundle theory combined with trope theory (as opposed to metaphysical realism) avoids the indiscernibles argument, because each attribute is a trope, which can be held by only one concrete particular.
The argument does not consider whether "position" should be considered an attribute or relation. It is, after all, through their differing positions that we in practice differentiate between otherwise identical pieces of paper. == Religious philosophy == === Christianity === The Christian writers of antiquity adhered to the Aristotelian conception of substance, but put the idea to a distinctive use: the discernment of theological nuances. Clement of Alexandria considered both material and spiritual substances: blood and milk; mind and soul, respectively. Origen may have been the first theologian to express Christ's similarity with the Father as consubstantiality. Tertullian professed the same view in the West. The ecclesiastics of the Cappadocian group (Basil of Caesarea, Gregory of Nazianzus, Gregory of Nyssa) taught that the Trinity had a single substance in three hypostases individualized by the relations among them. In later ages, the meaning of "substance" became more important because of the dogma of the Eucharist. Hildebert of Lavardin, archbishop of Tours, introduced the term transubstantiation about 1080; its use spread after the Fourth Council of the Lateran in 1215. According to Thomas Aquinas, beings may possess substance in three different modes. Together with other medieval philosophers, he interpreted God's epithet "El Shaddai" (Genesis 17:1) as self-sufficient and concluded that God's essence was identical with existence. Aquinas also deemed the substance of spiritual creatures identical with their essence (or form); therefore he considered each angel to belong to its own distinct species. In Aquinas' view, composite substances consist of form and matter. Human substantial form, i.e. the soul, receives its individuality from the body. === Jainism === === Buddhism === Buddhism rejects the concept of substance. Complex structures are comprehended as aggregates of components without any essence.
Just as the junction of parts is called a cart, so collections of elements are called things. All formations are unstable (aniccā) and lack any constant core or "self" (anattā). Physical objects have no metaphysical substrate. Arising entities depend conditionally on previous ones: in the notable teaching on interdependent origination, effects arise not as caused by agents but as conditioned by former situations. Our senses, perception, feelings, wishes and consciousness are in flux; the view (satkāya-dṛṣṭi) that they have a permanent carrier is rejected as fallacious. The school of Madhyamaka, namely Nāgārjuna, introduced the idea of the ontological void (śūnyatā). The Buddhist metaphysics of the Abhidharma presumes particular forces which determine the origin, persistence, aging and decay of everything in the world. Vasubandhu added a special force that makes a human, called "aprāpti" or "pṛthagjanatvam". Because of the absence of a substantial soul, the belief in personal immortality loses its foundation. Instead of deceased beings, new ones emerge whose fate is destined by karmic law. The Buddha admitted the empirical identity of persons, testified by their birth, name, and age. He acknowledged the authorship of deeds and the responsibility of their performers. The disciplinary practice in the Sangha, including reproaches, confession and expiation of transgressions, requires continuing personalities as its justification. == See also == == References == == External links == Robinson, Howard. "Substance". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Robinson, Tad. "17th Century Theories of Substance". Internet Encyclopedia of Philosophy. Weir, Ralph. "Substance theory". Internet Encyclopedia of Philosophy. Friesian School on Substance and Essence
Wikipedia/Substance_theory
The belief–desire–intention (BDI) model of human practical reasoning was developed by Michael Bratman as a way of explaining future-directed intention. BDI is fundamentally reliant on folk psychology (the 'theory theory'), which is the notion that our mental models of the world are theories. It was used as a basis for developing the belief–desire–intention software model. == Applications == BDI was part of the inspiration behind the BDI software architecture, which Bratman was also involved in developing. Here, the notion of intention was seen as a way of limiting time spent deliberating about what to do, by eliminating choices inconsistent with current intentions. BDI has also aroused some interest in psychology, where it formed the basis for CRIBB, a computational model of childlike reasoning. == References == Bratman, M. E. (1999) [1987]. Intention, Plans, and Practical Reason. CSLI Publications. ISBN 1-57586-192-5.
Wikipedia/Belief–desire–intention_model
The neuroscience of free will, a part of neurophilosophy, is the study of topics related to free will (volition and sense of agency), using neuroscience and the analysis of how findings from such studies may impact the free will debate. As medical and scientific technology has advanced, neuroscientists have become able to study the brains of living humans, allowing them to observe the brain's decision-making processes and revealing insights into human agency, moral responsibility, and consciousness. One of the pioneering studies in this field was conducted by Benjamin Libet and his colleagues in 1983 and has been the foundation of many studies in the years since. Other studies have attempted to predict the actions of participants before they happen, to explore how we know we are responsible for voluntary movements as opposed to being moved by an external force, or to examine how the role of consciousness in decision-making may differ depending on the type of decision being made. Some philosophers, such as Alfred Mele and Daniel Dennett, have questioned the language used by researchers, suggesting that "free will" means different things to different people (e.g., some notions of "free will" posit that free will is compatible with determinism, while others do not). Dennett insisted that many important and common conceptions of "free will" are compatible with the emerging evidence from neuroscience. == Overview == The neuroscience of free will encompasses two main fields of study: volition and agency. Volition, the study of voluntary actions, is difficult to define. If human actions are considered as lying along a spectrum based on conscious involvement in initiating them, then reflexes would be at one end and fully voluntary actions at the other. How these actions are initiated, and what role consciousness plays in producing them, are major areas of study in volition. Agency is the capacity of an actor to act in a given environment.
Within the neuroscience of free will, the sense of agency—the subjective awareness of initiating, executing, and controlling one's volitional actions—is usually what is studied. One significant finding of modern studies is that a person's brain seems to commit to certain decisions before the person becomes aware of having made them. Researchers have found a delay of about half a second or more (discussed in sections below). With contemporary brain scanning technology, scientists in 2008 were able to predict with 60% accuracy whether 12 subjects would press a button with their left or right hand up to 10 seconds before the subject became aware of having made that choice. These and other findings have led some scientists, like Patrick Haggard, to reject some definitions of "free will". However, it is very unlikely that a single study could disprove all definitions of free will. Definitions of free will can vary greatly, and each must be considered separately in light of existing empirical evidence. There have also been a number of problems regarding studies of free will. Particularly in earlier studies, research relied on self-reported measures of conscious awareness, but introspective estimates of event timing were found to be biased or inaccurate in some cases. There is no agreed-upon measure of brain activity corresponding to conscious generation of intentions, choices, or decisions, making studying processes related to consciousness difficult. The existing conclusions drawn from measurements are also debatable, as they don't necessarily tell, for example, what a sudden dip in the readings represents. Such a dip might have nothing to do with unconscious decision because many other mental processes are going on while performing the task. Although early studies mainly used electroencephalography, more recent studies have used fMRI, single-neuron recordings, and other measures. 
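The 60% decoding accuracy cited above is only meaningful relative to the 50% chance level of a two-choice task. As a sanity check, the one-sided binomial tail probability tells us how likely such accuracy would be by pure guessing; this is an illustrative sketch, and the per-subject trial count (100 below) is an invented assumption, not a figure from the study.

```python
import math

def binom_p_value(n_trials, n_correct, p_chance=0.5):
    """One-sided probability of observing at least n_correct successes
    out of n_trials purely by chance."""
    return sum(
        math.comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# Hypothetical example: 100 two-choice trials decoded at 60% accuracy.
p = binom_p_value(100, 60)
print(f"P(>= 60/100 correct by chance) = {p:.4f}")  # ≈ 0.028
```

With these assumed numbers the result would be unlikely under pure guessing, though far from the near-certain prediction sometimes implied in popular accounts.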
Researcher Itzhak Fried says that available studies do at least suggest that consciousness comes in at a later stage of decision-making than previously expected – challenging any versions of "free will" where intention occurs at the beginning of the human decision process. === Free will as illusion === It may be possible that our intuitions about the role of our conscious "intentions" have led us astray; it may be the case that we have confused correlation with causation by believing that conscious awareness necessarily causes the body's movement. This possibility is bolstered by findings from neurostimulation and brain-damage studies, as well as by research into introspection illusions. Such illusions show that humans do not have full access to various internal processes. The discovery that humans possess a determined will would have implications for moral responsibility or the lack thereof. Neuroscientist, philosopher, and author Sam Harris believes that we are mistaken in the intuitive idea that intention initiates actions. Harris criticizes the idea that free will is "intuitive": he says that careful introspection will cast doubt on free will. Harris argues: "Thoughts simply arise in the brain. What else could they do? The truth about us is even stranger than we may suppose: The illusion of free will is itself an illusion". In contrast to this claim, neuroscientist Walter Jackson Freeman III discusses the power of unconscious systems and actions to change the world in accordance with human intention. Freeman writes: "our intentional actions continually flow into the world, changing the world and the relations of our bodies to it. This dynamic system is the self in each of us, it is the agency in charge, not our awareness, which is constantly trying to keep up with what we do." To Freeman, the power of intention and action can be independent of awareness. An important distinction to make is the difference between proximal and distal intentions.
Proximal intentions are immediate in the sense that they are about acting now. For instance, a decision to raise a hand now or press a button now, as in Libet-style experiments. Distal intentions are delayed in the sense that they are about acting at a later point in time. For instance, deciding to go to the store later. Research has mostly focused on proximal intentions; however, it is unclear to what degree findings will generalize from one sort of intention to the other. === Relevance of scientific research === Some thinkers like neuroscientist and philosopher Adina Roskies think that these studies can still only show, unsurprisingly, that physical factors in the brain are involved before decision-making. In contrast, Haggard believes that "We feel we choose, but we don't". Researcher John-Dylan Haynes adds: "How can I call a will 'mine' if I don't even know when it occurred and what it has decided to do?". Philosophers Walter Glannon and Alfred Mele think that some scientists are getting the science right, but misrepresenting modern philosophers. This is mainly because "free will" can mean many things: it is unclear what someone means when they say "free will does not exist". Mele and Glannon say that the available research is more evidence against any dualistic notions of free will – but that is an "easy target for neuroscientists to knock down". Mele says that most discussions of free will are now in materialistic terms. In these cases, "free will" means something more like "not coerced" or that "the person could have done otherwise at the last moment". The existence of these types of free will is debatable. Mele agrees, however, that science will continue to reveal critical details about what goes on in the brain during decision-making. This issue may be controversial for good reason: there is evidence to suggest that people normally associate a belief in free will with their ability to affect their lives. 
Philosopher Daniel Dennett, author of Elbow Room and a supporter of deterministic free will, believes that scientists risk making a serious mistake. He says that there are types of free will that are incompatible with modern science, but those kinds of free will are not worth wanting. Other types of "free will" are pivotal to people's sense of responsibility and purpose (see also: "believing in free will"), and many of these types are actually compatible with modern science. The other studies described below have only just begun to shed light on the role that consciousness plays in actions, and it is too early to draw very strong conclusions about certain kinds of "free will". It is worth noting that such experiments so far have dealt only with free-will decisions made in short time frames (seconds) and may not have direct bearing on free-will decisions made ("thoughtfully") by the subject over the course of many seconds, minutes, hours or longer. Scientists have also so far studied only extremely simple behaviors (e.g., moving a finger). Adina Roskies points out five areas of neuroscientific research: action initiation, intention, decision, inhibition and control, and the phenomenology of agency. For each of these areas, Roskies concludes that the science may be developing our understanding of volition or "will", but that it offers nothing yet for developing the "free" part of the "free will" discussion. There is also the question of the influence of such interpretations on people's behavior. In 2008, psychologists Kathleen Vohs and Jonathan Schooler published a study on how people behave when they are prompted to think that determinism is true. They asked their subjects to read one of two passages: one suggesting that behavior boils down to environmental or genetic factors not under personal control; the other neutral about what influences behavior. The participants then did a few math problems on a computer.
But just before the test started, they were informed that because of a glitch in the computer it occasionally displayed the answer by accident; if this happened, they were to click it away without looking. Those who had read the deterministic message were more likely to cheat on the test. "Perhaps, denying free will simply provides the ultimate excuse to behave as one likes", Vohs and Schooler suggested. However, although initial studies suggested that believing in free will is associated with more morally praiseworthy behavior, some recent studies have reported contradictory findings. == Notable experiments == === Libet experiment === A pioneering experiment in this field was conducted by Benjamin Libet in the 1980s, in which he asked each subject to choose a random moment to flick their wrist while he measured the associated activity in their brain (in particular, the build-up of an electrical signal called the Bereitschaftspotential (BP), which was discovered by Kornhuber & Deecke in 1965). Although it was well known that the "readiness potential" (German: Bereitschaftspotential) preceded the physical action, Libet asked how it corresponded to the felt intention to move. To determine when the subjects felt the intention to move, he asked them to watch the second hand of a clock and report its position at the moment they felt the conscious will to move. Libet found that the unconscious brain activity leading up to the conscious decision by the subject to flick their wrist began approximately half a second before the subject consciously felt that they had decided to move. Libet's findings suggest that decisions made by a subject are first made on an unconscious level and only afterward translated into a "conscious decision", and that the subject's belief that it occurred at the behest of their will is due only to their retrospective perspective on the event.
The interpretation of these findings has been criticized by Daniel Dennett, who argues that people have to shift their attention from their intention to the clock, and that this introduces temporal mismatches between the felt experience of will and the perceived position of the clock hand. Consistent with this argument, subsequent studies have shown that the exact numerical value varies depending on attention. Despite the differences in the exact numerical value, however, the main finding has held. Philosopher Alfred Mele criticizes this design for other reasons. Having attempted the experiment himself, Mele explains that "the awareness of the intention to move" is an ambiguous feeling at best. For this reason he remained skeptical of interpreting the subjects' reported times for comparison with their Bereitschaftspotential. ==== Criticisms ==== In a variation of this task, Haggard and Eimer (1999) asked subjects to decide not only when to move their hands, but also which hand to move. In this case, the felt intention correlated much more closely with the "lateralized readiness potential" (LRP), an event-related potential (ERP) component that measures the difference between left- and right-hemisphere brain activity. Haggard and Eimer argue that the feeling of conscious will must therefore follow the decision of which hand to move, since the LRP reflects the decision to lift a particular hand. A more direct test of the relationship between the Bereitschaftspotential and the "awareness of the intention to move" was conducted by Banks and Isham (2009). In their study, participants performed a variant of Libet's paradigm in which a delayed tone followed the button press. Subsequently, research participants reported the time of their intention to act (e.g., Libet's W). If W were time-locked to the Bereitschaftspotential, W would remain uninfluenced by any post-action information.
However, findings from this study show that W in fact shifts systematically with the time of the tone presentation, implying that W is, at least in part, retrospectively reconstructed rather than pre-determined by the Bereitschaftspotential. A study conducted by Jeff Miller and Judy Trevena (2010) suggests that the Bereitschaftspotential (BP) signal in Libet's experiments doesn't represent a decision to move, but is merely a sign that the brain is paying attention. In this experiment the classical Libet experiment was modified by playing an audio tone that cued volunteers to decide whether or not to tap a key. The researchers found the same BP signal in both cases, regardless of whether volunteers actually elected to tap, which suggests that the BP signal doesn't indicate that a decision has been made. In a second experiment, the researchers asked volunteers to decide on the spot whether to use the left hand or the right to tap the key while monitoring their brain signals, and they found no correlation between the signals and the chosen hand. This criticism has itself been criticized by free-will researcher Patrick Haggard, who mentions literature that distinguishes two different circuits in the brain that lead to action: a "stimulus-response" circuit and a "voluntary" circuit. According to Haggard, researchers applying external stimuli may not be testing the proposed voluntary circuit, nor Libet's hypothesis about internally triggered actions. Libet's interpretation of the ramping up of brain activity prior to the report of conscious "will" continues to draw heavy criticism. Studies have questioned participants' ability to report the timing of their "will". Authors have found that preSMA activity is modulated by attention (attention precedes the movement signal by 100 ms), and the prior activity reported could therefore have been a product of paying attention to the movement.
They also found that the perceived onset of intention depends on neural activity that takes place after the execution of the action. Transcranial magnetic stimulation (TMS) applied over the preSMA after a participant performed an action shifted the perceived onset of the motor intention backward in time, and the perceived time of action execution forward in time. Others have speculated that the preceding neural activity reported by Libet may be an artefact of averaging the time of "will", wherein neural activity does not always precede reported "will". In a similar replication they also reported no difference in electrophysiological signs before a decision not to move and before a decision to move. Benjamin Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will; he points out that although the tendency to press a button may be building up for 500 milliseconds, the conscious will retains the right to veto any action at the last moment. According to this model, unconscious impulses to perform a volitional act are open to suppression by the conscious efforts of the subject (sometimes referred to as "free won't"). A comparison is made with a golfer, who may swing a club several times before striking the ball. On this view, the action simply gets a rubber stamp of approval at the last millisecond. Some studies have replicated Libet's findings, whilst addressing some of the original criticisms. A 2011 study conducted by Itzhak Fried found, with greater than 80% accuracy, that individual neurons fire 700 ms before a reported "will" to act (long before EEG activity predicted such a response). This was accomplished with the help of volunteer epilepsy patients, who needed electrodes implanted deep in their brain for evaluation and treatment anyway. Now able to monitor awake and moving patients, the researchers replicated the timing anomalies that were discovered by Libet.
Similarly, in a 2013 study, Chun Siong Soon, Anna Hanxi He, Stefan Bode and John-Dylan Haynes claimed to be able to predict the choice to add or subtract up to 4 seconds before the subject reports it. William R. Klemm pointed out the inconclusiveness of these tests due to design limitations and data interpretations, and proposed less ambiguous experiments, while affirming a stand on the existence of free will, as do Roy F. Baumeister and Catholic neuroscientists such as Tadeusz Pacholczyk. Adrian G. Guggisberg and Annaïs Mottaz have also challenged Libet and Fried's findings, stating that "the instantaneous appearance of conscious intentions might be an artifact of the method used for assessing the contents of consciousness" and that "studies using alternatives to the Libet clock have suggested that intention consciousness is a multistage process just as the neural mechanisms of motor decisions", concluding that "the time of conscious intentions reported by the participants therefore might be only the culmination of preceding conscious deliberations, not a unique and instantaneous event" and "if this is true, the delay between the onset of neural predictors of motor decisions and conscious intentions reported with the Libet clock is not due to unconscious neural processes but due to conscious evaluations which are not final yet". Another criticism stems from the fact that, despite being treated as the same by Libet, an urge, a wish and a desire are not the same thing as an intention, a decision, and a choice. In an empirical study in 2019, researchers found that readiness potentials were absent for deliberate decisions, and preceded arbitrary decisions only. In a study published in 2012, Aaron Schurger, Jacobo D.
Sitt, and Stanislas Dehaene, writing in the Proceedings of the National Academy of Sciences (PNAS), proposed that the occurrence of the readiness potentials observed in Libet-type experiments is stochastically occasioned by ongoing spontaneous subthreshold fluctuations in neural activity, rather than by an unconscious goal-directed operation, and challenged assumptions about the causal nature of the Bereitschaftspotential itself (and the "pre-movement buildup" of neural activity in general when faced with a choice), thus denying the conclusions drawn from studies such as Libet's and Fried's. See The Information Philosopher, New Scientist, and The Atlantic for commentary on this study. === Unconscious actions === ==== Timing intentions compared to actions ==== A study by Masao Matsuhashi and Mark Hallett, published in 2008, claims to have replicated Libet's findings without relying on subjective report or clock memorization on the part of participants. The authors believe that their method can identify the time (T) at which a subject becomes aware of his own movement. Matsuhashi and Hallett argue that T not only varies, but often occurs after early phases of movement genesis have already begun (as measured by the readiness potential). They conclude that a person's awareness cannot be the cause of movement, and may instead only notice the movement. ==== The experiment ==== Matsuhashi and Hallett's study can be summarized thus. The researchers hypothesized that, if our conscious intentions are what causes movement genesis (i.e. the start of an action), then naturally, our conscious intentions should always occur before any movement has begun. Otherwise, if we ever become aware of a movement only after it has already been started, our awareness could not have been the cause of that particular movement. Simply put, conscious intention must precede action if it is its cause.
To test this hypothesis, Matsuhashi and Hallett had volunteers perform brisk finger movements at random intervals, while not counting or planning when to make such (future) movements, but rather immediately making a movement as soon as they thought about it. An externally controlled "stop-signal" sound was played at pseudo-random intervals, and the volunteers had to cancel their intent to move if they heard a signal while being aware of their own immediate intention to move. Whenever there was an action (finger movement), the authors documented (and graphed) any tones that occurred before that action. The graph of tones before actions therefore only shows tones that occurred (a) before the subject was even aware of his "movement genesis" (or else the movement would have been stopped, or "vetoed"), or (b) after it was too late to veto the action. This second set of graphed tones is of little importance here. In this work, "movement genesis" is defined as the brain process of making movement, of which physiological observations have been made (via electrodes) indicating that it may occur before conscious awareness of intent to move (see Benjamin Libet). By looking to see when tones started preventing actions, the researchers can estimate the length of time (in seconds) between when a subject holds a conscious intention to move and when he performs the movement. This moment of awareness is called T (the mean time of conscious intention to move). It can be found by looking at the border between tones and no tones. This enables the researchers to estimate the timing of the conscious intention to move without relying on the subject's knowledge or demanding that they focus on a clock. The last step of the experiment is to compare time T for each subject with their event-related potential (ERP) measures (e.g., seen in this page's lead image), which reveal when their finger movement genesis first begins.
The researchers found that the time of the conscious intention to move T normally occurred too late to be the cause of movement genesis. See the example of a subject's graph below on the right. Although it is not shown on the graph, the subject's readiness potential (ERP) tells us that his actions started at −2.8 seconds, substantially earlier than his conscious intention to move, time T (−1.8 seconds). Matsuhashi and Hallett concluded that the feeling of the conscious intention to move does not cause movement genesis; both the feeling of intention and the movement itself are the result of unconscious processing. ===== Analysis and interpretation ===== This study is similar to Libet's in some ways: volunteers were again asked to perform finger extensions in short, self-paced intervals. In this version of the experiment, researchers introduced randomly timed "stop tones" during the self-paced movements. If participants were not conscious of any intention to move, they simply ignored the tone. On the other hand, if they were aware of their intention to move at the time of the tone, they had to try to veto the action, then relax for a bit before continuing self-paced movements. This experimental design allowed Matsuhashi and Hallett to see when, relative to the subject's finger movement, any tones occurred. The goal was to identify their own equivalent of Libet's W, their own estimation of the timing of the conscious intention to move, which they would call T (time). Testing the hypothesis that "conscious intention occurs after movement genesis has already begun" required the researchers to analyse the distribution of responses to tones before actions. The idea is that, after time T, tones will lead to vetoing and thus a reduced representation in the data. There would also be a point of no return P where a tone was too close to the movement onset for the movement to be vetoed.
In other words, the researchers were expecting to see the following on the graph: many unsuppressed responses to tones while the subjects were not yet aware of their movement genesis, followed by a drop in the number of unsuppressed responses to tones during a certain period of time during which the subjects were conscious of their intentions and were stopping any movements, and finally a brief increase again in unsuppressed responses to tones when the subjects did not have time to process the tone and prevent an action – they had passed the action's "point of no return". That is exactly what the researchers found (see the graph on the right, below). The graph shows the times at which unsuppressed responses to tones occurred when the volunteer moved. He showed many unsuppressed responses to tones (called "tone events" on the graph) on average up until 1.8 seconds before movement onset, but a significant decrease in tone events immediately after that time. Presumably this is because the subject usually became aware of his intention to move at about −1.8 seconds, which is then labelled point T. Since most actions are vetoed if a tone occurs after point T, there are very few tone events represented during that range. Finally, there is a sudden increase in the number of tone events at 0.1 seconds, meaning that this subject has passed point P. Matsuhashi and Hallett were thus able to establish an average time T (−1.8 seconds) without subjective report. This they compared to ERP measurements of movement, which had detected movement genesis beginning at about −2.8 seconds on average for this participant. Since T, like Libet's original W, was often found after movement genesis had already begun, the authors concluded that the generation of awareness occurred afterwards or in parallel to action, but, most importantly, that it was probably not the cause of the movement.
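The tone-veto logic described above can be made concrete with a toy simulation. This is purely illustrative: the timing constants, the uniform tone distribution, and the trial count are invented assumptions, and the actual analysis fits the distribution of tone events rather than taking a simple maximum. It shows only the qualitative pattern, a gap in tone events between T and the point of no return P.

```python
import random

random.seed(0)

# Invented constants for illustration (not the paper's fitted values):
T_TRUE = -1.8   # time the subject becomes aware of the intention (s before movement)
P_TRUE = -0.1   # point of no return: tones after this can no longer veto the movement

def run_trial():
    """Return the tone time if a movement (and hence a recorded 'tone event')
    occurred on this trial, or None if the movement was vetoed."""
    tone = random.uniform(-3.0, 0.0)  # tone at a pseudo-random time before t = 0
    if tone < T_TRUE:       # subject not yet aware of the intention: tone ignored
        return tone
    if tone < P_TRUE:       # aware and still able to veto: no movement, no event
        return None
    return tone             # past the point of no return: movement happens anyway

events = [t for t in (run_trial() for _ in range(5000)) if t is not None]

# Estimate T as the latest tone event before the veto "gap" in the data.
t_hat = max(t for t in events if t < P_TRUE)
print(f"estimated T: {t_hat:.2f} s (true value {T_TRUE})")
```

Running this recovers an estimate close to the assumed T, illustrating how the border between tones and no tones localizes the moment of awareness without any subjective report.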
==== Criticisms ==== Haggard describes other studies at the neuronal level as providing "a reassuring confirmation of previous studies that recorded neural populations" such as the one just described. Note that these results were gathered using finger movements and may not necessarily generalize to other actions such as thinking, or even other motor actions in different situations. Indeed, the human act of planning has implications for free will, and so this ability must also be explained by any theories of unconscious decision-making. Philosopher Alfred Mele also doubts the conclusions of these studies. He explains that simply because a movement may have been initiated before our "conscious self" has become aware of it does not mean that our consciousness does not still get to approve, modify, and perhaps cancel (called vetoing) the action. === Unconsciously cancelling actions === ==== Retrospective judgement of free choice ==== Recent research by Simone Kühn and Marcel Brass suggests that consciousness may not be what causes some actions to be vetoed at the last moment. First of all, their experiment relies on the simple idea that we ought to know when we consciously cancel an action (i.e. we should have access to that information). Secondly, they suggest that access to this information means humans should find it easy to tell, just after completing an action, whether it was "impulsive" (there being no time to decide) and when there was time to "deliberate" (the participant decided to allow/not to veto the action). The study found evidence that subjects could not tell this important difference. This again leaves some conceptions of free will vulnerable to the introspection illusion. The researchers interpret their results to mean that the decision to "veto" an action is determined unconsciously, just as the initiation of the action may have been unconscious in the first place.
===== The experiment ===== The experiment involved asking volunteers to respond to a go-signal by pressing an electronic "go" button as quickly as possible. In this experiment the go-signal was represented as a visual stimulus shown on a monitor. The participants' reaction times (RT) were gathered at this stage, in what was described as the "primary response trials". The primary response trials were then modified, in which 25% of the go-signals were subsequently followed by an additional signal – either a "stop" or "decide" signal. The additional signals occurred after a "signal delay" (SD), a random amount of time up to 2 seconds after the initial go-signal. They also occurred equally, each representing 12.5% of experimental cases. These additional signals were represented by the initial stimulus changing colour (e.g., to either a red or orange light). The other 75% of go-signals were not followed by an additional signal, and therefore considered the "default" mode of the experiment. The participants' task of responding as quickly as possible to the initial signal (i.e. pressing the "go" button) remained. Upon seeing the initial go-signal, the participant would immediately intend to press the "go" button. The participant was instructed to cancel their immediate intention to press the "go" button if they saw a stop signal. The participant was instructed to select randomly (at their leisure) between either pressing the "go" button or not pressing it, if they saw a decide signal. Those trials in which the decide signal was shown after the initial go-signal ("decide trials"), for example, required that the participants prevent themselves from acting impulsively on the initial go-signal and then decide what to do. Due to the varying delays, this was sometimes impossible (e.g., some decide signals simply appeared too late in the process of them both intending to and pressing the go button for them to be obeyed). 
Those trials in which the subject reacted to the go-signal impulsively without seeing a subsequent signal show a quick RT of about 600 ms. Those trials in which the decide signal was shown too late, and the participant had already enacted their impulse to press the go-button (i.e. had not decided to do so), also show a quick RT of about 600 ms. Those trials in which a stop signal was shown and the participant successfully responded to it do not show a response time. Those trials in which a decide signal was shown, and the participant decided not to press the go-button, also do not show a response time. Those trials in which a decide signal was shown, and the participant had not already enacted their impulse to press the go-button, but (in which it was theorised that they) had had the opportunity to decide what to do, show a comparatively slow RT, in this case closer to 1400 ms. The participant was asked at the end of those "decide trials" in which they had actually pressed the go-button whether they had acted impulsively (without enough time to register the decide signal before enacting their intent to press the go-button in response to the initial go-signal stimulus) or based upon a conscious decision made after seeing the decide signal. Based upon the response time data, however, it appears that there was a discrepancy between the trials in which participants thought they had had the opportunity to decide to press the go-button (and had therefore not acted on their impulses) and those in which they thought they had acted impulsively on the initial go-signal, the decide signal having come too late to be obeyed. ===== The rationale ===== Kühn and Brass wanted to test participant self-knowledge. The first step was that after every decide trial, participants were next asked whether they actually had time to decide.
Specifically, the volunteers were asked to label each decide trial as either failed-to-decide (the action was the result of acting impulsively on the initial go-signal) or successful decide (the result of a deliberated decision). See the diagram on the right for this decide trial split: failed-to-decide and successful decide; the next split in this diagram (participant correct or incorrect) will be explained at the end of this experiment. Note also that the researchers sorted the participants’ successful decide trials into "decide go" and "decide no-go", but were not concerned with the no-go trials, since they did not yield any RT data (and are not featured anywhere in the diagram on the right). Note that successful stop trials did not yield RT data either. Kühn and Brass now knew what to expect: primary response trials, any failed stop trials, and the "failed-to-decide" trials were all instances where the participant obviously acted impulsively – they would show the same quick RT. In contrast, the "successful decide" trials (where the decision was a "go" and the subject moved) should show a slower RT. Presumably, if deciding whether to veto is a conscious process, volunteers should have no trouble distinguishing impulsivity from instances of true deliberate continuation of a movement. Again, this is important, since decide trials require that participants rely on self-knowledge. Note that stop trials cannot test self-knowledge because if the subject does act, it is obvious to them that they reacted impulsively. ===== Results and implications ===== Unsurprisingly, the recorded RTs for the primary response trials, failed stop trials, and "failed-to-decide" trials all showed similar RTs: 600 ms seems to indicate an impulsive action made without time to truly deliberate. 
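The comparison between participants' own labels and what their reaction times suggest can be sketched as follows. This is a toy illustration with made-up trials and an assumed single cutoff between the ~600 ms impulsive responses and the ~1400 ms deliberated ones; the actual study worked with the full RT distributions, not one threshold.

```python
# Toy illustration (made-up data, assumed cutoff): compare each decide
# trial's self-reported label against what its reaction time suggests.
RT_CUTOFF_MS = 1000  # assumed illustrative boundary, not from the study

def rt_based_label(rt_ms):
    """Label a trial from its RT alone: fast responses look impulsive."""
    return "failed-to-decide" if rt_ms < RT_CUTOFF_MS else "successful decide"

trials = [
    # (reaction time in ms, participant's own label)
    (580, "successful decide"),   # fast RT, yet labelled as deliberated
    (610, "failed-to-decide"),
    (1450, "successful decide"),
    (590, "successful decide"),   # another fast RT labelled as deliberated
    (1380, "successful decide"),
    (620, "failed-to-decide"),
]

mismatches = sum(1 for rt, label in trials if rt_based_label(rt) != label)
agreement = 1 - mismatches / len(trials)
```

In this toy set, two fast trials carry the "successful decide" label; disagreements of exactly that kind between self-report and RT are what the study's analysis turns on.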
What the two researchers found next was not as easy to explain: while some "successful decide" trials did show the tell-tale slow RT of deliberation (averaging around 1400 ms), participants had also labelled many impulsive actions as "successful decide". This result is startling because participants should have had no trouble identifying which actions were the results of a conscious "I will not veto", and which actions were un-deliberated, impulsive reactions to the initial go-signal. As the authors explain: [The results of the experiment] clearly argue against Libet's assumption that a veto process can be consciously initiated. He used the veto in order to reintroduce the possibility to control the unconsciously initiated actions. But since the subjects are not very accurate in observing when they have [acted impulsively instead of deliberately], the act of vetoing cannot be consciously initiated. In decide trials, the participants, it seems, were not able to reliably identify whether they had really had time to decide – at least, not based on internal signals. The authors explain that this result is difficult to reconcile with the idea of a conscious veto, but is simple to understand if the veto is considered an unconscious process. Thus it seems that the intention to move might not only arise from the unconscious mind, but it may only be inhibited if the unconscious mind says so. ===== Criticisms ===== After the above experiments, the authors concluded that subjects sometimes could not distinguish between "producing an action without stopping and stopping an action before voluntarily resuming", or in other words, they could not distinguish between actions that are immediate and impulsive as opposed to delayed by deliberation. To be clear, one assumption of the authors is that all the early (600 ms) actions are unconscious, and all the later actions are conscious.
These conclusions and assumptions have yet to be debated within the scientific literature or even replicated (it is a very early study). The "successful decide" trials with their longer measured RTs may have implications for our understanding of the role of consciousness as a modulator of a given action or response, and these implications cannot simply be dismissed without valid reasons, especially since the authors of the experiment themselves suggest that the late decide trials were actually deliberated. It is worth noting that Libet consistently referred to a veto of an action that was initiated endogenously. That is, a veto that occurs in the absence of external cues, instead relying on only internal cues (if any at all). This veto may be a different type of veto than the one explored by Kühn and Brass using their decide signal. Daniel Dennett also argues that no clear conclusion about volition can be derived from Benjamin Libet's experiments supposedly demonstrating the irrelevance of conscious volition. According to Dennett, ambiguities in the timings of the different events are involved. Libet tells when the readiness potential occurs objectively, using electrodes, but relies on the subject reporting the position of the hand of a clock to determine when the conscious decision was made. As Dennett points out, this is only a report of where it seems to the subject that various things come together, not of the objective time at which they actually occur: Suppose Libet knows that your readiness potential peaked at millisecond 6,810 of the experimental trial, and the clock dot was straight down (which is what you reported you saw) at millisecond 7,005. How many milliseconds should he have to add to this number to get the time you were conscious of it?
The light gets from your clock face to your eyeball almost instantaneously, but the path of the signals from retina through lateral geniculate nucleus to striate cortex takes 5 to 10 milliseconds — a paltry fraction of the 300 milliseconds offset, but how much longer does it take them to get to you? (Or are you located in the striate cortex?) The visual signals have to be processed before they arrive at wherever they need to arrive for you to make a conscious decision of simultaneity. Libet's method presupposes, in short, that we can locate the intersection of two trajectories: the rising-to-consciousness of signals representing the decision to flick and the rising-to-consciousness of signals representing successive clock-face orientations, so that these events occur side by side, as it were, in a place where their simultaneity can be noted. === The point of no return === In early 2016, Proceedings of the National Academy of Sciences of the United States of America (PNAS) published an article by researchers in Berlin, Germany, The point of no return in vetoing self-initiated movements, in which the authors set out to investigate whether human subjects had the ability to veto an action (in this study, a movement of the foot) after the detection of its Bereitschaftspotential (BP). The Bereitschaftspotential, which was discovered by Kornhuber & Deecke in 1965, is an instance of unconscious electrical activity within the motor cortex, quantified by the use of EEG, that occurs moments before a motion is performed by a person: it is considered a signal that the brain is "getting ready" to perform the motion. The study found evidence that these actions can be vetoed even after the BP is detected (i.e. after it can be seen that the brain has started preparing for the action).
The researchers maintain that this is evidence for the existence of at least some degree of free will in humans: previously, it had been argued that, given the unconscious nature of the BP and its usefulness in predicting a person's movement, these are movements that are initiated by the brain without the involvement of the conscious will of the person. The study showed that subjects were able to "override" these signals and stop short of performing the movement that was being anticipated by the BP. Furthermore, researchers identified what was termed a "point of no return": once the BP is detected for a movement, the person could refrain from performing the movement only if they attempted to cancel it at least 200 milliseconds before the onset of the movement. After this point, the person was unable to avoid performing the movement. Previously, Kornhuber and Deecke underlined that the absence of conscious will during the early Bereitschaftspotential (termed BP1) is not a proof of the non-existence of free will, since unconscious agendas may also be free and non-deterministic. According to their suggestion, man has relative freedom, i.e. freedom in degrees, that can be increased or decreased through deliberate choices that involve both conscious and unconscious (panencephalic) processes. === Neuronal prediction of free will === Despite criticisms, experimenters are still trying to gather data that may support the case that conscious "will" can be predicted from brain activity. fMRI machine learning of brain activity (multivariate pattern analysis) has been used to predict a participant's choice of button (left/right) up to 7 seconds before their reported will to do so. Brain regions successfully trained for prediction included the frontopolar cortex (anterior medial prefrontal cortex) and precuneus/posterior cingulate cortex (medial parietal cortex).
To record the timing of the conscious "will" to act, they showed the participant a series of frames with single letters (500 ms apart), and upon pressing the chosen button (left or right) they were required to indicate which letter they had seen at the moment of decision. This study reported a statistically significant 60% accuracy rate, which may have been limited by the experimental setup, machine-learning data limitations (time spent in the fMRI scanner), and instrument precision. Another version of the fMRI multivariate pattern analysis experiment was conducted using an abstract decision problem, in an attempt to rule out the possibility of the prediction capabilities being a product of capturing a built-up motor urge. Each frame contained a central letter like before, but also a central number, and 4 surrounding possible "answer numbers". The participant first chose in their mind whether they wished to perform an addition or subtraction operation, and noted the central letter on the screen at the time of this decision. The participant then performed the mathematical operation based on the central numbers shown in the next two frames. In the following frame the participant then chose the "answer number" corresponding to the result of the operation. They were further presented with a frame that allowed them to indicate the central letter appearing on the screen at the time of their original decision. This version of the experiment discovered a brain prediction capacity of up to 4 seconds before the conscious will to act. Multivariate pattern analysis using EEG has suggested that an evidence-based perceptual decision model may be applicable to free-will decisions. It was found that decisions could be predicted by neural activity immediately after stimulus perception. Furthermore, when the participant was unable to determine the nature of the stimulus, the recent decision history predicted the neural activity (decision).
The starting point of evidence accumulation was in effect shifted towards a previous choice (suggesting a priming bias). Another study has found that subliminally priming a participant for a particular decision outcome (showing a cue for 13 ms) could be used to influence free decision outcomes. Likewise, it has been found that decision history alone can be used to predict future decisions. The prediction capacities of the Chun Siong Soon et al. (2008) experiment were successfully replicated using a linear SVM model based on participant decision history alone (without any brain activity data). Despite this, a recent study has sought to confirm the applicability of a perceptual decision model to free will decisions. When shown a masked and therefore invisible stimulus, participants were asked either to guess the category or to make a free decision for a particular category. Multivariate pattern analysis using fMRI could be trained on "free-decision" data to successfully predict "guess decisions", and trained on "guess data" in order to predict "free decisions" (in the precuneus and cuneus region). ==== Criticisms ==== Contemporary voluntary decision prediction tasks have been criticised based on the possibility that the neuronal signatures for pre-conscious decisions could actually correspond to lower-conscious processing rather than unconscious processing. People may be aware of their decisions before making their report, yet need to wait several seconds to be certain. However, such a model does not explain what is left unconscious if everything can be conscious at some level (and the purpose of defining separate systems). Yet limitations remain in free-will prediction research to date. In particular, researchers have yet to predict considered judgements from brain activity – judgements involving thought processes that begin minutes rather than seconds before a conscious will to act, including the rejection of a conflicting desire.
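The earlier finding that decision history alone can predict upcoming choices can be illustrated with a toy predictor. A simple pattern counter stands in here for the linear SVM used in the replication; the choice sequence, context length, and tie-break rule are all made-up assumptions, not the published method or data.

```python
# Toy illustration: predict the next left/right choice from decision history
# alone, with no brain data. A pattern counter stands in for the linear SVM
# of the replication study; the sequence and parameters are made up.

def predict_from_history(history, k=2):
    """Predict the next choice as the one that most often followed the
    last k choices earlier in the sequence."""
    context = tuple(history[-k:])
    counts = {"L": 0, "R": 0}
    for i in range(len(history) - k):
        if tuple(history[i:i + k]) == context:
            counts[history[i + k]] += 1
    if counts["L"] == counts["R"]:
        return "L"  # arbitrary tie-break
    return "L" if counts["L"] > counts["R"] else "R"

# A chooser who tends to alternate hands, with occasional repeats.
sequence = list("LRLRLRLLRLRLRLRLLRLRLRLR")
hits = trials = 0
for i in range(6, len(sequence)):  # give the predictor some history first
    trials += 1
    hits += predict_from_history(sequence[:i]) == sequence[i]
accuracy = hits / trials
```

On this history-dependent toy sequence the predictor is correct well above the 50% chance level, which is the qualitative point: choice sequences themselves carry predictive structure, independent of any brain measurements.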
Such considered judgements are generally seen to be the product of sequences of evidence-accumulating judgements. == Other related phenomena == === Retrospective construction === It has been suggested that the sense of authorship is an illusion. Unconscious causes of thought and action might facilitate thought and action, while the agent experiences the thoughts and actions as being dependent on conscious will. The idea behind retrospective construction is that, while part of the "yes, I did it" feeling of agency seems to occur during action, there also seems to be processing performed after the fact – after the action is performed – to establish the full feeling of agency. However, to assign agency, one does not have to believe that agency is free. In the moment, unconscious agency processing can alter how we perceive the timing of sensations or actions. Kühn and Brass apply retrospective construction to explain the two peaks in "successful decide" RTs. They suggest that the late decide trials were actually deliberated, but that the impulsive early decide trials that should have been labelled "failed-to-decide" were mistaken during unconscious agency processing. They say that people "persist in believing that they have access to their own cognitive processes" when in fact we do a great deal of automatic unconscious processing before conscious perception occurs. ==== Criticisms ==== Criticism of Daniel Wegner's claims regarding the significance of the introspection illusion for the notion of free will has been published. === Manipulating choice === Some research suggests that TMS can be used to manipulate the perception of authorship of a specific choice. Experiments showed that neurostimulation could affect which hand people move, even though the subjective experience of will was intact. An early TMS study revealed that activation of one side of the neocortex could be used to bias the selection of one's opposite side hand in a forced-choice decision task. K. Ammon and S. C.
Gandevia found that it was possible to influence which hand people move by stimulating frontal regions that are involved in movement planning using transcranial magnetic stimulation in the left or right hemisphere of the brain. Right-handed people would normally choose to move their right hand 60% of the time, but when the right hemisphere was stimulated, they would instead choose their left hand 80% of the time (recall that the right hemisphere of the brain is responsible for the left side of the body, and the left hemisphere for the right). Despite the external influence on their decision-making, the subjects were apparently unaware of any influence, as when questioned they felt that their decisions appeared to be made in an entirely natural way. In a follow-up experiment, Alvaro Pascual-Leone and colleagues found similar results, but also noted that the transcranial magnetic stimulation must occur within the motor area and within 200 milliseconds, consistent with the time-course derived from the Libet experiments: with longer response times (between 200 and 1100 ms), magnetic stimulation had no effect on hand preference regardless of the site stimulated. In late 2015, following a previous 2010 study, both based on earlier investigations on both monkeys and humans, a team of researchers from the UK and the US published an article demonstrating similar findings. The researchers concluded that "motor responses and the choice of hand can be modulated using tDCS". However, a different attempt by Y. H. Sohn et al. failed to replicate such results. === Manipulating the perceived intention to move === Various studies indicate that the perceived intention to move (have moved) can be manipulated. Studies have focused on the pre-supplementary motor area (pre-SMA) of the brain, in which readiness potential indicating the beginning of a movement genesis has been recorded by EEG. 
In one study, directly stimulating the pre-SMA caused volunteers to report a feeling of intention, and sufficient stimulation of that same area caused physical movement. In a similar study, it was found that people with no visual awareness of their body can have their limbs made to move without any awareness of this movement, by stimulating premotor brain regions. When their parietal cortices were stimulated, they reported an urge (intention) to move a specific limb. Furthermore, stronger stimulation of the parietal cortex resulted in the illusion of having moved without having done so. This suggests that awareness of an intention to move may literally be the "sensation" of the body's early movement, but certainly not the cause. Other studies have at least suggested that "The greater activation of the SMA, SACC, and parietal areas during and after execution of internally generated actions suggests that an important feature of internal decisions is specific neural processing taking place during and after the corresponding action. Therefore, awareness of intention timing seems to be fully established only after execution of the corresponding action, in agreement with the time course of neural activity observed here." Another experiment involved an electronic ouija board where the device's movements were manipulated by the experimenter, while the participant was led to believe that they were entirely self-conducted. The experimenter stopped the device on occasions and asked the participant how much they themselves felt like they wanted to stop. The participant also listened to words in headphones, and it was found that if the experimenter stopped the device next to an object whose name had come through the headphones, the participant was more likely to say that they wanted to stop there. If the participant perceived having the thought at the time of the action, then it was assigned as intentional.
It was concluded that a strong illusion of perception of causality requires: priority (we assume the thought must precede the action), consistency (the thought is about the action), and exclusivity (no other apparent causes or alternative hypotheses). Hakwan C. Lau et al. set up an experiment where subjects would look at an analog-style clock, and a red dot would move around the screen. Subjects were told to click the mouse button whenever they felt the intention to do so. One group was given a transcranial magnetic stimulation (TMS) pulse, and the other was given a sham TMS. Subjects in the perceived intention condition were told to move the cursor to where it was when they felt the inclination to press the button. In the movement condition, subjects moved their cursor to where it was when they physically pressed the button. TMS applied over the pre-SMA after a participant performed an action shifted the perceived onset of the motor intention backward in time, and the perceived time of action execution forward in time. Results showed that the TMS shifted the perceived onset of intention backward by 16 ms, and the perceived time of movement forward by 14 ms. Perceived intention could be manipulated up to 200 ms after the execution of the spontaneous action, indicating that the perception of intention occurred after the executive motor movements. The results of three control studies suggest that this effect is time-limited, specific to modality, and also specific to the anatomical site of stimulation. The investigators conclude that the perceived onset of intention depends, at least in part, on neural activity that takes place after the execution of action. Often it is thought that if free will were to exist, it would require intention to be the causal source of behavior. These results show that intention may not be the causal source of all behavior.
=== Related models === The idea that intention co-occurs with (rather than causes) movement is reminiscent of "forward models of motor control" (FMMC), which have been used to try to explain inner speech. FMMCs describe parallel circuits: movement is processed in parallel with other predictions of movement; if the movement matches the prediction, the feeling of agency occurs. FMMCs have been applied in other related experiments. Janet Metcalfe and her colleagues used an FMMC to explain how volunteers determine whether they are in control of a computer game task. On the other hand, they acknowledge other factors as well. The authors attribute feelings of agency to desirability of the results (see self-serving biases) and top-down processing (reasoning and inferences about the situation). There is also a model, called epiphenomenalism, that argues that conscious will is an illusion, and that consciousness is a by-product of physical states of the world. Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience and a conscious decision occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with epiphenomenalism, which, according to Thomas Henry Huxley, is the broad claim that consciousness is "completely without any power… as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". === Related brain disorders === Various brain disorders implicate the role of unconscious brain processes in decision-making tasks. Auditory hallucinations produced by schizophrenia seem to suggest a divergence of will and behaviour. 
The left brain of people whose hemispheres have been disconnected has been observed to invent explanations for body movement initiated by the opposing (right) hemisphere, perhaps based on the assumption that their actions are consciously willed. Likewise, people with "alien hand syndrome" are known to conduct complex motor movements against their will. === Neural models of voluntary action === A neural model for voluntary action proposed by Haggard comprises two major circuits. The first involves early preparatory signals (the basal ganglia's substantia nigra and striatum), prior intention and deliberation (medial prefrontal cortex), motor preparation/readiness potential (preSMA and SMA), and motor execution (primary motor cortex, spinal cord and muscles). The second involves the parietal-pre-motor circuit for object-guided actions, for example grasping (premotor cortex, primary motor cortex, primary somatosensory cortex, parietal cortex, and back to the premotor cortex). He proposed that voluntary action involves external environment input ("when decision"), motivations/reasons for actions (early "whether decision"), task and action selection ("what decision"), a final predictive check (late "whether decision") and action execution. Another neural model for voluntary action also involves what, when, and whether (WWW) based decisions. The "what" component of decisions is considered a function of the anterior cingulate cortex, which is involved in conflict monitoring. The timing ("when") of the decisions is considered a function of the preSMA and SMA, which are involved in motor preparation. Finally, the "whether" component is considered a function of the dorsal medial prefrontal cortex. === Prospection === Martin Seligman and others criticize the classical approach in science that views animals and humans as "driven by the past" and suggest instead that people and animals draw on experience to evaluate prospects they face and act accordingly.
The claim is made that this purposive action includes evaluation of possibilities that have never occurred before and is experimentally verifiable. Seligman and others argue that free will and the role of subjectivity in consciousness can be better understood by taking such a "prospective" stance on cognition and that "accumulating evidence in a wide range of research suggests [this] shift in framework". == See also == Determined: A Science of Life Without Free Will Free Will (book) Adaptive unconscious Dick Swaab Neural decoding Neuroethics § Neuroscience and free will Problem of mental causation Self-agency Thought identification, through the use of technology Unconscious mind == References == == External links == Fate, Freedom and Neuroscience – a debate on whether neuroscience has proved that free will is an illusion, hosted by the Institute of Art and Ideas, featuring Oxford neuroscientist Nayef Al-Rodhan, East End psychiatrist and broadcaster Mark Salter, and LSE philosopher Kristina Musholt debating the limits of science. The Philosophy and Science of Self-Control – an international collaborative project led by Al Mele. The project fosters collaboration between scientists and philosophers with the overarching goal of improving our understanding of self-control.
Wikipedia/Neuroscience_of_free_will
What Is This Thing Called Science? (1976) is a best-selling textbook by Alan Chalmers. == Overview == The book is a guide to the philosophy of science which outlines the shortcomings of naive empiricist accounts of science, and describes and assesses modern attempts to replace them. The book is written with minimal use of technical terms. What Is This Thing Called Science? was first published in 1976, and has been translated into many languages. == Editions == What Is This Thing Called Science?, Queensland University Press and Open University Press, 1976, pp. 157 + xvii. (Translated into German, Dutch, Italian, Spanish and Chinese.) What Is This Thing Called Science?, Queensland University Press, Open University Press and Hackett, 2nd revised edition (6 new chapters), 1982, pp. 179 + xix. (Translated into German, Persian, French, Italian, Spanish, Dutch, Chinese, Japanese, Indonesian, Portuguese, Polish, Danish, Greek and Estonian.) What Is This Thing Called Science?, University of Queensland Press, Open University Press, 3rd revised edition, Hackett, 1999. (Translated into Korean.) What Is This Thing Called Science?, University of Queensland Press, Open University Press, 4th edition, 2013. == See also == The Structure of Scientific Revolutions, by Thomas Kuhn The Logic of Scientific Discovery, by Karl Popper == References == == External links == Review of What is this Thing Called Science? Deborah G. Mayo: Review of the third edition of What is this Thing Called Science? in the newsletter of the Australasian Society for the History, Philosophy and Social Studies of Science (AAHPSSS), 2000.
Wikipedia/What_Is_This_Thing_Called_Science?
The Next Generation Science Standards is a multi-state effort in the United States to create new education standards that are "rich in content and practice, arranged in a coherent manner across disciplines and grades to provide all students an internationally benchmarked science education." The standards were developed by a consortium of 26 states and by the National Science Teachers Association, the American Association for the Advancement of Science, the National Research Council, and Achieve, a nonprofit organization that was also involved in developing math and English standards. The public was also invited to review the standards, and organizations such as the California Science Teachers Association encouraged this feedback. The final draft of the standards was released in April 2013.

== Goal ==

The purposes of the standards include:

Creating science-literate citizens
Creating common standards for teaching in the U.S.
Making science and engineering relevant for and accessible to all students
Developing greater interest in science among students so that more of them choose to major in science and technology in college

Overall, the guidelines are intended to:

Help students deeply understand core scientific concepts
Develop proficiency in the scientific process of developing and testing ideas
Have a greater ability to evaluate scientific evidence

Curricula based on the standards may cover fewer topics, but will go more deeply into specific topics, possibly using a case-study method and emphasizing critical thinking and primary investigation. Possible approaches to implementing the standards may even include replacing traditionally isolated high school courses such as biology and chemistry with a case-study approach that uses a more holistic method of teaching science to consider both (or more) topics within a single classroom structure.
Many education supply companies have already started offering NGSS-aligned products and resources to help teachers implement these new principles. == Standards == The Next Generation Science Standards (NGSS) are based on "A Framework for K–12 Science Education" that was created by the National Research Council. They have three dimensions that are integrated in instruction at all levels. The first dimension is the Disciplinary Core Ideas (the DCIs), which consists of content and concepts specific to four disciplines: Life Science, Earth and Space Science, Physical Science, and Engineering, Technology, and Applications of Science. The second dimension is the Science and Engineering Practices (the SEPs), which describe how scientists, engineers, and science students engage in their work of making sense of real-world phenomena and designing solutions to real-world problems. The specific elements of the science and engineering practices from the Framework are identified and described in Appendix F of the Next Generation Science Standards. These practices are asking questions and defining problems; developing and using models; planning and carrying out investigations; analyzing and interpreting data; using mathematics and computational thinking; constructing explanations and designing solutions; engaging in argument from evidence; and obtaining, evaluating, and communicating information. The third dimension is the Crosscutting Concepts, which are thinking tools and ideas that span disciplines and are used to bring disciplinary ideas together to explain a phenomenon or to design a solution to a problem. The NGSS give equal emphasis to engineering design and to scientific inquiry. In addition, they are aligned with the Common Core State Standards by grade and level of difficulty. The standards describe "performance expectations" for students in the areas of science and engineering. They define what students must be able to do in order to show competency.
An important facet of the standards is that learning of content is integrated with doing the practices of scientists and engineers. This is a change from traditional teaching, which typically either dealt with these topics separately or did not attempt to teach practices. According to the NGSS, it is through the integration of content and practice "that science begins to make sense and allows students to apply the material." == Adoption == Over 40 states have shown interest in the standards, and as of March 2023, 20 states, along with the District of Columbia (D.C.), have adopted the standards: Arkansas, California, Connecticut, Delaware, Hawaii, Illinois, Iowa, Kansas, Kentucky, Maine, Maryland, Michigan, Nevada, New Hampshire, New Jersey, New Mexico, Oregon, Rhode Island, Vermont, and Washington. These represent over 36% of the students in the U.S. Unlike the earlier roll-out of the Common Core (CC) mathematics and English language arts standards, states have no financial incentives from federal grants to adopt the Next Generation Science Standards. Previously, adoption of the CC standards was incentivized through states accepting federal grants during the 2009 TARP bailouts. Once states accepted the grant, they accepted the responsibility to adopt "college and career readiness" standards, which didn't have to be CC, but most states chose CC anyway. The 26 states involved in developing the NGSS, called Lead State Partners, were Arizona, Arkansas, California, Delaware, Georgia, Illinois, Iowa, Kansas, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Jersey, New York, North Carolina, Ohio, Oregon, Rhode Island, South Dakota, Tennessee, Vermont, Washington, and West Virginia. When the standards were released in April 2013, many states were expected to adopt them within 1–2 years.
However, according to the New York Times, it will take several more years to actually develop curricula based on the new guidelines, to train teachers in implementing them, and to revise standardized tests. In addition, the pace of adoption is expected to be slower than was seen with the Common Core State Standards because, unlike Common Core, in which the states had financial incentives to adopt, there are no similar incentives for the NGSS. Many education supply companies have started offering NGSS-aligned products and resources to help teachers adopt NGSS. In 2018, Achieve partnered with Concentric Sky to offer digital badges for high-quality learning resources aligned to the NGSS. == Reception == News reports have suggested there will likely be resistance towards the Next Generation Science Standards from conservatives due to the inclusion of anthropogenic climate change and evolution. For example, the New Mexico Public Education Department initially attempted to make changes and deletions in the standards prior to adopting them. According to Skeptical Inquirer, the "proposed changes would have deleted key terms and concepts such as evolution and the 4.6-billion-year age of the Earth. Specifically, 'evolution' would be called 'biological diversity,' the specific age of the Earth would be changed to 'geologic history,' and a 'rise in global temperatures' would be changed to 'temperature fluctuations.'" Following significant protests by the New Mexico Academy of Science, New Mexicans for Science and Reason, the Coalition for Excellence in Science and Engineering as well as scientists, educators, and faith leaders, the department announced in October 2017 that it would adopt the standards in their entirety. == See also == Common Core State Standards Initiative == References == == External links == Next Generation Science Standards website
Wikipedia/Next_Generation_Science_Standards
Research is creative and systematic work undertaken to increase the stock of knowledge. It involves the collection, organization, and analysis of evidence to increase understanding of a topic, characterized by a particular attentiveness to controlling sources of bias and error. A research project may be an expansion of past work in the field. To test the validity of instruments, procedures, or experiments, research may replicate elements of prior projects or the project as a whole. The primary purposes of basic research (as opposed to applied research) are documentation, discovery, interpretation, and the research and development (R&D) of methods and systems for the advancement of human knowledge. Approaches to research depend on epistemologies, which vary considerably both within and between humanities and sciences. There are several forms of research: scientific, humanities, artistic, economic, social, business, marketing, practitioner research, life, technological, etc. The scientific study of research practices is known as meta-research. A researcher is a person who conducts research, especially in order to discover new information or to reach a new understanding. In order to be a social researcher or a social scientist, one should have extensive knowledge of the subjects related to social science in which they specialize. Similarly, in order to be a natural science researcher, the person should have knowledge of fields related to natural science (physics, chemistry, biology, astronomy, zoology and so on). Professional associations provide one pathway to mature in the research profession. == Etymology == The word research is derived from the Middle French "recherche", which means "to go about seeking", the term itself being derived from the Old French term "recerchier," a compound word from "re-" + "cerchier", or "sercher", meaning 'search'.
The earliest recorded use of the term was in 1577. == Definitions == Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition that is embraced by all who engage in it. Research, in its simplest terms, is searching for knowledge and searching for truth. In a formal sense, it is the systematic study of a problem using a deliberately chosen strategy: it begins with choosing an approach and preparing a blueprint (design), proceeds by designing research hypotheses, choosing methods and techniques, selecting or developing data collection tools, processing the data, and interpreting the results, and ends with presenting solution(s) to the problem. Another definition of research is given by John W. Creswell, who states that "research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question. The Merriam-Webster Online Dictionary defines research more generally to also include studying already existing knowledge: "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws". == Forms of research == === Original research === Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of the original research is to produce new knowledge rather than present the existing knowledge in a new form (e.g., summarized or classified). Original research can take various forms, depending on the discipline it pertains to.
In experimental work, it typically involves direct or indirect observation of the researched subject(s), e.g., in the laboratory or in the field, documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality is in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher. The degree of originality of the research is among the major criteria for articles to be published in academic journals and usually established by means of peer review. Graduate students are commonly required to perform original research as part of a dissertation.

=== Scientific research ===

Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world. It makes practical applications possible. Scientific research may be funded by public authorities, charitable organizations, and private organizations. Scientific research can be subdivided by discipline. Generally, research is understood to follow a certain structural process. Though the order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:

Observations and formation of the topic: Consists of the subject area of one's interest and following that subject area to conduct subject-related research. The subject area should not be randomly chosen since it requires reading a vast amount of literature on the topic to determine the gap in the literature the researcher intends to narrow.
A keen interest in the chosen subject area is advisable. The research will have to be justified by linking its importance to already existing knowledge about the topic.
Hypothesis: A testable prediction which designates the relationship between two or more variables.
Conceptual definition: Description of a concept by relating it to other concepts.
Operational definition: Details in regards to defining the variables and how they will be measured/assessed in the study.
Gathering of data: Consists of identifying a population and selecting samples, gathering information from or about these samples by using specific research instruments. The instruments used for data collection must be valid and reliable.
Analysis of data: Involves breaking down the individual pieces of data to draw conclusions about it.
Data Interpretation: This can be represented through tables, figures, and pictures, and then described in words.
Test, revising of hypothesis
Conclusion, reiteration if necessary

A common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. A useful hypothesis allows prediction and within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction.
In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states that there is no relationship or difference between the independent and dependent variables.

=== Research in the humanities ===

Research in the humanities involves different methods such as, for example, hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead, explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. Other studies aim to merely examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory.

=== Artistic research ===

Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debated body of thought which offers an alternative to purely scientific methods in the search for knowledge and truth. The controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art, as in the case of other disciplines. One of the characteristics of artistic research is that it must accept subjectivity, as opposed to the classical scientific methods.
As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis. Artistic research has been defined by the School of Dance and Circus (Dans och Cirkushögskolan, DOCH), Stockholm in the following manner – "Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context." Artistic research aims to enhance knowledge and understanding with presentation of the arts. A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception. For a survey of the central problematics of today's artistic research, see Giaco Schiesser. According to artist Hakan Topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities". Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research. The Society for Artistic Research (SAR) publishes the triannual Journal for Artistic Research (JAR), an international, online, open access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies, from all arts disciplines and it runs the Research Catalogue (RC), a searchable, documentary database of artistic research, to which anyone can contribute. Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art. 
In 2016, the European League of Institutes of the Arts launched The Florence Principles on the Doctorate in the Arts. The Florence Principles, relating to the Salzburg Principles and the Salzburg Recommendations of the European University Association, name seven points of attention to specify the Doctorate / PhD in the Arts compared to a scientific doctorate / PhD. The Florence Principles have been endorsed and are supported also by AEC, CILECT, CUMULUS and SAR.

=== Historical research ===

The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines that are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are part of most formal historical research:

Identification of origin date
Evidence of localization
Recognition of authorship
Analysis of data
Identification of integrity
Attribution of credibility

=== Documentary research ===

== Steps in conducting research ==

Research is often conducted using the hourglass model structure of research. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the method of the project (like the neck of the hourglass), then expands the research in the form of discussion and results.
The major steps in conducting research are:

Identification of research problem
Literature review
Specifying the purpose of research
Determining specific research questions
Specification of a conceptual framework, sometimes including a set of hypotheses
Choice of a methodology (for data collection)
Data collection
Verifying data
Analyzing and interpreting the data
Reporting and evaluating research
Communicating the research findings and, possibly, recommendations

The steps generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature review identifies flaws or holes in previous research which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research. However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving "up" to identification of a research problem that emerges in the findings and literature review.
The reverse approach is justified by the transactional nature of the research endeavor where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted. Rudolph Rummel says, "... no researcher should accept any one or two tests as definitive. It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results." Plato in Meno talks about an inherent difficulty, if not a paradox, of doing research that can be paraphrased in the following way, "If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!"

== Research methods ==

The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure):

Exploratory research, which helps to identify and define a problem or question.
Constructive research, which tests theories and proposes solutions to a problem or question.
Empirical research, which tests the feasibility of a solution using empirical evidence.

There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer:

Qualitative research
Qualitative research is more subjective and non-quantitative; it uses different methods of collecting data, analyzing data, and interpreting data for the meanings, definitions, characteristics, symbols, and metaphors of things.
Qualitative research is further classified into the following types:

Ethnography: This research mainly focuses on the culture of a group of people, which includes shared attributes, language, practices, structure, values, norms, and material things, and evaluates the human lifestyle. (Ethno: people; grapho: to write.) This discipline may include ethnic groups, ethnogenesis, composition, resettlement, and social welfare characteristics.
Phenomenology: A powerful strategy for demonstrating methodology to health professions education, and one well suited to exploring challenging problems in health professions education.

In addition, PMP researcher Mandy Sha argued that a project management approach is necessary to control the scope, schedule, and cost related to qualitative research design, participant recruitment, data collection, reporting, as well as stakeholder engagement.

Quantitative research
Quantitative research involves systematic empirical investigation of quantitative properties and phenomena and their relationships, by asking a narrow question and collecting numerical data to analyze it utilizing statistical methods. The quantitative research designs are experimental, correlational, and survey (or descriptive). Statistics derived from quantitative research can be used to establish the existence of associative or causal relationships between variables. Quantitative research is linked with the philosophical and theoretical stance of positivism. The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that can be summarized, compared, and generalized to larger populations if the data are collected using proper sampling and data collection strategies. Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest.
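The quantitative workflow just described — draw a probability sample, compute statistics, and test a hypothesis against the null hypothesis of no difference — can be sketched in a few lines of Python. This is a minimal, illustrative sketch only: the data, the function names, and the choice of a permutation test are assumptions made for the example, not part of any particular study design described here.

```python
import random
import statistics

def simple_random_sample(population, k, seed=1):
    """Probability sampling: every member of the population has an equal
    chance of selection, which is what licenses generalizing results."""
    return random.Random(seed).sample(population, k)

def permutation_test(a, b, n_permutations=10_000, seed=1):
    """Estimate a p-value for the null hypothesis that groups a and b do
    not differ, using the difference in means as the test statistic."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel the pooled data at random
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical measurements from two randomly assigned groups.
treatment = [5.1, 5.7, 6.0, 6.3, 5.9, 6.1]
control = [4.8, 5.0, 4.9, 5.2, 5.1, 4.7]
p_value = permutation_test(treatment, control)
# A small p-value rejects the null hypothesis; as noted above, the test
# never "proves" the hypothesis, it only supports or fails to support it.
```

A permutation test is used here because it makes no distributional assumptions; in practice researchers often reach for parametric methods such as t-tests when their assumptions hold.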
If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants. In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data. Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible. Mixed-method research, i.e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common. This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights. Big data has brought big impacts on research methods so that now many researchers do not put much effort into data collection; furthermore, methods to analyze easily available huge amounts of data have also been developed. Non-empirical research Non-empirical (theoretical) research is an approach that involves the development of theory as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source. This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. 
Non-empirical research is not an absolute alternative to empirical research because they may be used together to strengthen a research approach. Neither one is less effective than the other since they have their particular purpose in science. Typically empirical research produces observations that need to be explained; then theoretical research tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See Scientific method. A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects. == Research ethics == == Problems in research == === Metascience === Metascience is the study of research through the use of research methods. Also known as "research on research", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the findings of meta-research are low rates of reproducibility across a large number of fields. === Replication crisis === === Academic bias === === Funding bias === === Publication bias === === Non-western methods === In many disciplines, Western methods of conducting research are predominant. Researchers are overwhelmingly taught Western methods of data collection and study. The increasing participation of indigenous peoples as researchers has brought increased attention to the scientific lacuna in culturally sensitive methods of data collection.
Western methods of data collection may not be the most accurate or relevant for research on non-Western societies. For example, "Hua Oranga" was created as a criterion for psychological evaluation in Māori populations, and is based on dimensions of mental health important to the Māori people – "taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)". Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for "the need [for] a plural university world". Marginson argues that the East Asian Confucian model could take over the Western model. This could be due to changes in funding for research both in the East and the West. Focused on emphasizing educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged the increase of funding for research expansion. In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some say may lead to the future decline of Western dominance in research. === Language === Research is often biased in the languages that are preferred (linguicism) and the geographic locations where research occurs. Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals. Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference. 
Patterns of geographic bias also show a relationship with linguicism: countries whose official languages are French or Arabic are far less likely to be the focus of single-country studies than countries with different official languages. Within Africa, English-speaking countries are more represented than other countries. === Generalizability === Generalization is the process of more broadly applying the valid results of one study. Studies with a narrow scope can result in a lack of generalizability, meaning that the results may not be applicable to other populations or regions. In comparative politics, this can result from using a single-country study, rather than a study design that uses data from multiple countries. Despite the issue of generalizability, single-country studies have risen in prevalence since the late 2000s. For comparative politics, Western countries are over-represented in single-country studies, with heavy emphasis on Western Europe, Canada, Australia, and New Zealand. Since 2000, Latin American countries have become more popular in single-country studies. In contrast, countries in Oceania and the Caribbean are the focus of very few studies. === Publication peer review === Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Usually, the peer review process involves experts in the same field who are consulted by editors to give a review of the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer reviews being done for free has however brought many pitfalls which are also indicative of why most peer reviewers decline many invitations to review. 
It was observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe. === Open research === The open research, open science and open access movements assume that all information generally deemed useful should be free and belongs to a "public domain", that of "humanity". This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships. There is alleged to be a double standard in the Western knowledge system. On the one hand, "digital rights management" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while simultaneously when similar functions are used by cultural groups (i.e. indigenous communities) this is denounced as "access control" and condemned as censorship. == Professionalisation == In several national and private academic systems, the professionalisation of research has resulted in formal job titles. === In Russia === In present-day Russia, and some other countries of the former Soviet Union, the term researcher (Russian: Научный сотрудник, nauchny sotrudnik) has been used both as a generic term for a person who has been carrying out scientific research, and as a job position within the frameworks of the Academy of Sciences, universities, and in other research-oriented establishments. The following ranks are known: Junior Researcher (Junior Research Associate) Researcher (Research Associate) Senior Researcher (Senior Research Associate) Leading Researcher (Leading Research Associate) Chief Researcher (Chief Research Associate) == Publishing == Academic publishing is a system that is necessary for academic scholars to peer review the work and make it available for a wider audience. 
The system varies widely by field and is also always changing, if often slowly. Most academic work is published in journal article or book form. There is also a large body of research that exists in either a thesis or dissertation form. These forms of research can be found in databases explicitly for theses and dissertations. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine. Most established academic fields have their own scientific journals and other outlets for publication, though many academic journals are somewhat interdisciplinary, and publish work from several distinct fields or subfields. The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields, from the print to the electronic format. A study suggests that researchers should not give great consideration to findings that are not replicated frequently. It has also been suggested that all published studies should be subjected to some measure for assessing the validity or reliability of their procedures to prevent the publication of unproven findings. Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access. There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web. == Research statistics and funding == Most funding for scientific research comes from three major sources: corporate research and development departments; private foundations; and government research councils such as the National Institutes of Health in the US and the Medical Research Council in the UK. 
These are managed primarily through universities and in some cases through military contractors. Many senior researchers (such as group leaders) spend a significant amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research but also as a source of merit. The Social Psychology Network provides a comprehensive list of U.S. Government and private foundation funding sources. The total number of researchers (full-time equivalents) per million inhabitants for individual countries is shown in the following table. Research expenditure by type of research as a share of GDP for individual countries is shown in the following table. == See also == == Notes == == References == == Sources == Creswell, John W. (2008). Educational Research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson. ISBN 0-13-613550-1. Kara, Helen (2012). Research and Evaluation for Busy Practitioners: A Time-Saving Guide. Bristol: The Policy Press. ISBN 978-1-44730-115-8. == Further reading == Groh, Arnold (2018). Research Methods in Indigenous Contexts. New York: Springer. ISBN 978-3-319-72774-5. Cohen, N.; Arieli, T. (2011). "Field research in conflict environments: Methodological challenges and snowball sampling". Journal of Peace Research. 48 (4): 423–436. doi:10.1177/0022343311405698. S2CID 145328311. Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. 2014. Handbook of Research Methods in Military Studies New York: Routledge. Talja, Sanna and Pamela J. Mckenzie (2007). Editor's Introduction: Special Issue on Discursive Approaches to Information Seeking in Context, The University of Chicago Press. == External links == The dictionary definition of research at Wiktionary Quotations related to Research at Wikiquote Media related to Research at Wikimedia Commons
Wikipedia/Research_methods
In molecular biology, DNA replication is the biological process of producing two identical replicas of DNA from one original DNA molecule. DNA replication occurs in all living organisms, acting as the most essential part of biological inheritance. This is essential for cell division during growth and repair of damaged tissues, while it also ensures that each of the new cells receives its own copy of the DNA. The cell possesses the distinctive property of division, which makes replication of DNA essential. DNA is made up of a double helix of two complementary strands; the term "double helix" describes the appearance of double-stranded DNA, composed of two linear strands that run antiparallel to each other and twist together. During replication, these strands are separated. Each strand of the original DNA molecule then serves as a template for the production of its counterpart, a process referred to as semiconservative replication. As a result, the new helix will be composed of an original DNA strand as well as a newly synthesized strand. Cellular proofreading and error-checking mechanisms ensure near perfect fidelity for DNA replication. In a cell, DNA replication begins at specific locations (origins of replication) in the genome, which contains the genetic material of an organism. Unwinding of the DNA at the origin by an enzyme known as helicase, together with synthesis of new strands, results in replication forks growing bi-directionally from the origin. A number of proteins are associated with the replication fork to help in the initiation and continuation of DNA synthesis. Most prominently, DNA polymerase synthesizes the new strands by adding nucleotides that complement each (template) strand. DNA replication occurs during the S-stage of interphase. DNA replication (DNA amplification) can also be performed in vitro (artificially, outside a cell). 
DNA polymerases isolated from cells and artificial DNA primers can be used to start DNA synthesis at known sequences in a template DNA molecule. Polymerase chain reaction (PCR), ligase chain reaction (LCR), and transcription-mediated amplification (TMA) are examples. In March 2021, researchers reported evidence suggesting that a preliminary form of transfer RNA, a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code, could have been a replicator molecule itself in the very early development of life, or abiogenesis. == DNA structure == DNA is a double-stranded structure, with both strands coiled together to form the characteristic double helix. Each single strand of DNA is a chain of four types of nucleotides. Nucleotides in DNA contain a deoxyribose sugar, a phosphate, and a nucleobase. The four types of nucleotide correspond to the four nucleobases adenine, cytosine, guanine, and thymine, commonly abbreviated as A, C, G, and T. Adenine and guanine are purine bases, while cytosine and thymine are pyrimidines. These nucleotides form phosphodiester bonds, creating the phosphate-deoxyribose backbone of the DNA double helix with the nucleobases pointing inward (i.e., toward the opposing strand). Nucleobases are matched between strands through hydrogen bonds to form base pairs. Adenine pairs with thymine (two hydrogen bonds), and guanine pairs with cytosine (three hydrogen bonds). DNA strands have a directionality, and the different ends of a single strand are called the "3′ (three-prime) end" and the "5′ (five-prime) end". By convention, if the base sequence of a single strand of DNA is given, the left end of the sequence is the 5′ end, while the right end of the sequence is the 3′ end. The strands of the double helix are anti-parallel, with one being 5′ to 3′, and the opposite strand 3′ to 5′. These terms refer to the carbon atom in deoxyribose to which the next phosphate in the chain attaches. 
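The base-pairing and antiparallel-directionality rules above can be illustrated with a small sketch. This is a toy model, not anything from the article: the function and sequence names are invented, and the "synthesis" is just string manipulation, but it captures why the new strand is the reverse complement of its template.

```python
# Toy model of template-directed synthesis (illustrative names, not from the article).
# Pairing rules: A-T (two hydrogen bonds), G-C (three hydrogen bonds).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize_complement(template_5_to_3: str) -> str:
    """Return the new strand, written 5'->3', for a template given 5'->3'.

    The polymerase reads the template 3'->5' and builds the new strand
    5'->3', so the result is the reverse complement of the template.
    """
    return "".join(PAIR[base] for base in reversed(template_5_to_3))

template = "ATGCCGTA"               # 5'->3', an arbitrary example sequence
new_strand = synthesize_complement(template)
print(new_strand)                   # TACGGCAT, also written 5'->3'

# Semiconservative replication: each daughter duplex keeps one parental strand.
parent = (template, synthesize_complement(template))
daughters = [(parent[0], synthesize_complement(parent[0])),
             (parent[1], synthesize_complement(parent[1]))]
```

Applying the function twice returns the original sequence, mirroring the redundancy of the two strands.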
Directionality has consequences in DNA synthesis, because DNA polymerase can synthesize DNA in only one direction by adding nucleotides to the 3′ end of a DNA strand. The pairing of complementary bases in DNA (through hydrogen bonding) means that the information contained within each strand is redundant. Phosphodiester (intra-strand) bonds are stronger than hydrogen (inter-strand) bonds. Phosphodiester bonds connect the 5′ carbon atom of one nucleotide to the 3′ carbon atom of the next along a strand, while hydrogen bonds stabilize the double helix across the helix axis but not along it. This makes it possible to separate the strands from one another. The nucleotides on a single strand can therefore be used to reconstruct nucleotides on a newly synthesized partner strand. == DNA polymerase == DNA polymerases are a family of enzymes that carry out all forms of DNA replication. DNA polymerases in general cannot initiate synthesis of new strands but can only extend an existing DNA or RNA strand paired with a template strand. To begin synthesis, a short fragment of RNA, called a primer, must be created and paired with the template DNA strand. DNA polymerase adds a new strand of DNA by extending the 3′ end of an existing nucleotide chain, adding new nucleotides matched to the template strand, one at a time, via the creation of phosphodiester bonds. The energy for this process of DNA polymerization comes from hydrolysis of the high-energy phosphate (phosphoanhydride) bonds between the three phosphates attached to each unincorporated base. Free bases with their attached phosphate groups are called nucleotides; in particular, bases with three attached phosphate groups are called nucleoside triphosphates. 
When a nucleotide is being added to a growing DNA strand, the formation of a phosphodiester bond between the proximal phosphate of the nucleotide to the growing chain is accompanied by hydrolysis of a high-energy phosphate bond with release of the two distal phosphate groups as a pyrophosphate. Enzymatic hydrolysis of the resulting pyrophosphate into inorganic phosphate consumes a second high-energy phosphate bond and renders the reaction effectively irreversible. In general, DNA polymerases are highly accurate, with an intrinsic error rate of less than one mistake for every 10⁷ nucleotides added. Some DNA polymerases can also delete nucleotides from the end of a developing strand in order to fix mismatched bases. This is known as proofreading. Finally, post-replication mismatch repair mechanisms monitor the DNA for errors, being capable of distinguishing mismatches in the newly synthesized DNA strand from the original strand sequence. Together, these three discrimination steps enable replication fidelity of less than one mistake for every 10⁹ nucleotides added. The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 per 10⁸. == Replication process == DNA replication, like all biological polymerization processes, proceeds in three enzymatically catalyzed and coordinated steps: initiation, elongation and termination. === Initiation === For a cell to divide, it must first replicate its DNA. DNA replication is an all-or-none process; once replication begins, it proceeds to completion. Once replication is complete, it does not occur again in the same cell cycle. This is made possible by dividing initiation into two temporally separated steps: assembly of the pre-replication complex and its subsequent activation. 
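The fidelity and rate figures above lend themselves to back-of-envelope arithmetic. The sketch below uses the error rates and the 749 nucleotides-per-second elongation rate from the text; the E. coli genome size (~4.6 million base pairs) is an added assumption, not a figure from the article.

```python
# Back-of-envelope arithmetic using figures from the text; the E. coli
# genome size is an added assumption for illustration.
polymerase_error_rate = 1e-7    # per nucleotide, before proofreading and repair
overall_error_rate    = 1e-9    # per nucleotide, after all three discrimination steps
rate_nt_per_s         = 749     # phage T4 elongation rate in infected E. coli
genome_bp             = 4.6e6   # assumed E. coli genome size (base pairs)

errors_before = genome_bp * polymerase_error_rate   # expected mismatches per copy
errors_after  = genome_bp * overall_error_rate      # expected errors per copy
seconds_per_fork = genome_bp / rate_nt_per_s        # one fork, one direction

print(f"{errors_before:.2f} expected mismatches before repair")
print(f"{errors_after:.4f} expected errors after repair")
print(f"{seconds_per_fork / 60:.0f} minutes for one fork to copy the genome")
```

With these numbers, proofreading and mismatch repair reduce roughly half an error per genome copy to far less than one, which is why most replication cycles are error-free.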
=== Pre-replication complex === In late mitosis and early G1 phase, a large complex of initiator proteins assembles into the pre-replication complex at particular points in the DNA, known as "origins". In E. coli the primary initiator protein is DnaA; in yeast, this is the origin recognition complex. Sequences used by initiator proteins tend to be "AT-rich" (rich in adenine and thymine bases), because A-T base pairs have two hydrogen bonds (rather than the three formed in a C-G pair) and thus are easier to strand-separate. In eukaryotes, the origin recognition complex catalyzes the assembly of initiator proteins into the pre-replication complex. In addition, a recent report suggests that budding yeast ORC dimerizes in a cell cycle dependent manner to control licensing. In turn, the process of ORC dimerization is mediated by a cell cycle-dependent Noc3p dimerization cycle in vivo, and this role of Noc3p is separable from its role in ribosome biogenesis. ORC and Noc3p are continuously bound to the chromatin throughout the cell cycle. Cdc6 and Cdt1 then associate with the bound origin recognition complex at the origin in order to form a larger complex necessary to load the Mcm complex onto the DNA. In eukaryotes, the Mcm complex is the helicase that will split the DNA helix at the replication forks and origins. The Mcm complex is recruited at late G1 phase and loaded by the ORC-Cdc6-Cdt1 complex onto the DNA via ATP-dependent protein remodeling. The loading of the MCM complex onto the origin DNA marks the completion of pre-replication complex formation. If environmental conditions are right in late G1 phase, the G1 and G1/S cyclin-Cdk complexes are activated, which stimulate expression of genes that encode components of the DNA synthetic machinery. 
G1/S-Cdk activation also promotes the expression and activation of S-Cdk complexes, which may play a role in activating replication origins depending on species and cell type. Control of these Cdks varies depending on cell type and stage of development. This regulation is best understood in budding yeast, where the S cyclins Clb5 and Clb6 are primarily responsible for DNA replication. Clb5,6-Cdk1 complexes directly trigger the activation of replication origins and are therefore required throughout S phase to directly activate each origin. In a similar manner, Cdc7 is also required through S phase to activate replication origins. Cdc7 is not active throughout the cell cycle, and its activation is strictly timed to avoid premature initiation of DNA replication. In late G1, Cdc7 activity rises abruptly as a result of association with the regulatory subunit DBF4, which binds Cdc7 directly and promotes its protein kinase activity. Cdc7 has been found to be a rate-limiting regulator of origin activity. Together, the G1/S-Cdks and/or S-Cdks and Cdc7 collaborate to directly activate the replication origins, leading to initiation of DNA synthesis. === Preinitiation complex === In early S phase, S-Cdk and Cdc7 activation lead to the assembly of the preinitiation complex, a massive protein complex formed at the origin. Formation of the preinitiation complex displaces Cdc6 and Cdt1 from the origin replication complex, inactivating and disassembling the pre-replication complex. Loading the preinitiation complex onto the origin activates the Mcm helicase, causing unwinding of the DNA helix. The preinitiation complex also loads α-primase and other DNA polymerases onto the DNA. After α-primase synthesizes the first primers, the primer-template junctions interact with the clamp loader, which loads the sliding clamp onto the DNA to begin DNA synthesis. The components of the preinitiation complex remain associated with replication forks as they move out from the origin. 
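The tendency of origins to sit in AT-rich sequence, noted above, invites a simple computational caricature: scan a sequence for its most AT-rich window, since A-T pairs (two hydrogen bonds) separate more easily than G-C pairs (three). The sequence, window size, and function names below are invented for illustration; real origin identification is far more involved.

```python
def at_fraction(seq: str) -> float:
    """Fraction of bases in seq that are A or T."""
    return sum(base in "AT" for base in seq) / len(seq)

def most_at_rich_window(seq: str, window: int) -> tuple[int, float]:
    """Return (start index, AT fraction) of the most AT-rich window of the
    given length; a toy proxy for where strand separation is easiest."""
    best = max(range(len(seq) - window + 1),
               key=lambda i: at_fraction(seq[i:i + window]))
    return best, at_fraction(seq[best:best + window])

seq = "GCGCGCATATATATGCGC"    # illustrative sequence, not a real origin
print(most_at_rich_window(seq, 8))   # -> (6, 1.0): the ATATATAT stretch
```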
=== Elongation === DNA polymerase has 5′–3′ activity. All known DNA replication systems require a free 3′ hydroxyl group before synthesis can be initiated (note: the DNA template is read in 3′ to 5′ direction whereas a new strand is synthesized in the 5′ to 3′ direction—this is often confused). Four distinct mechanisms for DNA synthesis are recognized: All cellular life forms and many DNA viruses, phages and plasmids use a primase to synthesize a short RNA primer with a free 3′ OH group which is subsequently elongated by a DNA polymerase. The retroelements (including retroviruses) employ a transfer RNA that primes DNA replication by providing a free 3′ OH that is used for elongation by the reverse transcriptase. In the adenoviruses and the φ29 family of bacteriophages, the 3′ OH group is provided by the side chain of an amino acid of the genome-attached protein (the terminal protein) to which nucleotides are added by the DNA polymerase to form a new strand. In the single-stranded DNA viruses—a group that includes the circoviruses, the geminiviruses, the parvoviruses and others—and also the many phages and plasmids that use the rolling circle replication (RCR) mechanism, the RCR endonuclease creates a nick in the genome strand (single-stranded viruses) or one of the DNA strands (plasmids). The 5′ end of the nicked strand is transferred to a tyrosine residue on the nuclease and the free 3′ OH group is then used by the DNA polymerase to synthesize the new strand. The first of these pathways is the best characterized and is the one used by cellular organisms. In this mechanism, once the two strands are separated, primase adds RNA primers to the template strands. The leading strand receives one RNA primer while the lagging strand receives several. The leading strand is continuously extended from the primer by a DNA polymerase with high processivity, while the lagging strand is extended discontinuously from each primer, forming Okazaki fragments. 
RNase removes the primer RNA fragments, and a low processivity DNA polymerase distinct from the replicative polymerase enters to fill the gaps. When this is complete, a single nick on the leading strand and several nicks on the lagging strand can be found. Ligase works to fill these nicks in, thus completing the newly replicated DNA molecule. The primase used in this process differs significantly between bacteria and archaea/eukaryotes. Bacteria use a primase belonging to the DnaG protein superfamily which contains a catalytic domain of the TOPRIM fold type. The TOPRIM fold contains an α/β core with four conserved strands in a Rossmann-like topology. This structure is also found in the catalytic domains of topoisomerase Ia, topoisomerase II, the OLD-family nucleases and DNA repair proteins related to the RecR protein. The primase used by archaea and eukaryotes, in contrast, contains a highly derived version of the RNA recognition motif (RRM). This primase is structurally similar to many viral RNA-dependent RNA polymerases, reverse transcriptases, cyclic nucleotide generating cyclases and DNA polymerases of the A/B/Y families that are involved in DNA replication and repair. In eukaryotic replication, the primase forms a complex with Pol α. Multiple DNA polymerases take on different roles in the DNA replication process. In E. coli, DNA Pol III is the polymerase enzyme primarily responsible for DNA replication. It assembles into a replication complex at the replication fork that exhibits extremely high processivity, remaining intact for the entire replication cycle. In contrast, DNA Pol I is the enzyme responsible for replacing RNA primers with DNA. DNA Pol I has a 5′ to 3′ exonuclease activity in addition to its polymerase activity, and uses its exonuclease activity to degrade the RNA primers ahead of it as it extends the DNA strand behind it, in a process called nick translation. 
Pol I is much less processive than Pol III because its primary function in DNA replication is to create many short DNA regions rather than a few very long regions. In eukaryotes, the low-processivity enzyme, Pol α, helps to initiate replication because it forms a complex with primase. In eukaryotes, leading strand synthesis is thought to be conducted by Pol ε; however, this view has recently been challenged, suggesting a role for Pol δ. Primer removal is completed by Pol δ while repair of DNA during replication is completed by Pol ε. As DNA synthesis continues, the original DNA strands continue to unwind on each side of the bubble, forming a replication fork with two prongs. In bacteria, which have a single origin of replication on their circular chromosome, this process creates a "theta structure" (resembling the Greek letter theta: θ). In contrast, eukaryotes have longer linear chromosomes and initiate replication at multiple origins within these. === Replication fork === The replication fork is a structure that forms within the long helical DNA during DNA replication. It is produced by enzymes called helicases that break the hydrogen bonds that hold the DNA strands together in a helix. The resulting structure has two branching "prongs", each one made up of a single strand of DNA. These two strands serve as the template for the leading and lagging strands, which will be created as DNA polymerase matches complementary nucleotides to the templates; the templates may be properly referred to as the leading strand template and the lagging strand template. DNA is read by DNA polymerase in the 3′ to 5′ direction, meaning the new strand is synthesized in the 5′ to 3′ direction. Since the leading and lagging strand templates are oriented in opposite directions at the replication fork, a major issue is how to achieve synthesis of new lagging strand DNA, whose direction of synthesis is opposite to the direction of the growing replication fork. 
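The leading/lagging asymmetry just described can be sketched as string manipulation. This is a toy model with invented names: RNA primers are omitted, the fragment length is arbitrary, and "ligation" is simple concatenation, but it shows the leading strand copied in one continuous piece while the lagging-strand template is copied as short Okazaki-like fragments.

```python
# Toy sketch of leading vs. lagging strand synthesis (illustrative only).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    """Reverse complement: a new 5'->3' strand copied from a 5'->3' template."""
    return "".join(PAIR[b] for b in reversed(seq))

def replicate_fork(template_leading: str, fragment_len: int = 4):
    """Copy both parental strands of a duplex whose leading-strand template
    is given 5'->3'.  The leading strand is made continuously; the lagging
    strand is made as short fragments that are then "ligated".  Primer
    placement and fragment length are simplifications."""
    leading = revcomp(template_leading)           # one continuous new strand
    template_lagging = revcomp(template_leading)  # the other parental strand
    fragments = [revcomp(template_lagging[i:i + fragment_len])
                 for i in range(0, len(template_lagging), fragment_len)]
    # Ligated in genome order, the fragments reconstruct the full complement.
    lagging = "".join(reversed(fragments))
    return leading, fragments, lagging

leading, fragments, lagging = replicate_fork("ATGGCTAACG", 4)
print(leading)     # CGTTAGCCAT
print(fragments)   # ['AACG', 'GGCT', 'AT']
print(lagging)     # ATGGCTAACG
```

The ligated lagging strand has the same sequence as the leading-strand template, as it must: it is the complement of that template's complement.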
==== Leading strand ==== The leading strand is the strand of new DNA which is synthesized in the same direction as the growing replication fork. This sort of DNA replication is continuous. ==== Lagging strand ==== The lagging strand is the strand of new DNA whose direction of synthesis is opposite to the direction of the growing replication fork. Because of its orientation, replication of the lagging strand is more complicated as compared to that of the leading strand. As a consequence, the DNA polymerase on this strand is seen to "lag behind" the other strand. The lagging strand is synthesized in short, separated segments. On the lagging strand template, a primase "reads" the template DNA and initiates synthesis of a short complementary RNA primer. A DNA polymerase extends the primed segments, forming Okazaki fragments. The RNA primers are then removed and replaced with DNA, and the fragments of DNA are joined by DNA ligase. ==== Dynamics at the replication fork ==== In all cases the helicase is composed of six polypeptides that wrap around only one strand of the DNA being replicated. The two polymerases are bound to the helicase hexamer. In eukaryotes the helicase wraps around the leading strand, and in prokaryotes it wraps around the lagging strand. As helicase unwinds DNA at the replication fork, the DNA ahead is forced to rotate. This process results in a build-up of twists in the DNA ahead. This build-up creates a torsional load that would eventually stop the replication fork. Topoisomerases are enzymes that temporarily break the strands of DNA, relieving the tension caused by unwinding the two strands of the DNA helix; topoisomerases (including DNA gyrase) achieve this by adding negative supercoils to the DNA helix. Bare single-stranded DNA tends to fold back on itself forming secondary structures; these structures can interfere with the movement of DNA polymerase. 
To prevent this, single-strand binding proteins bind to the DNA until a second strand is synthesized, preventing secondary structure formation. Double-stranded DNA is coiled around histones that play an important role in regulating gene expression, so the replicated DNA must be coiled around histones at the same places as the original DNA. To ensure this, histone chaperones disassemble the chromatin before it is replicated and replace the histones in the correct place. Some steps in this reassembly are somewhat speculative. Clamp proteins act as a sliding clamp on DNA, allowing the DNA polymerase to bind to its template and aid in processivity. The inner face of the clamp enables DNA to be threaded through it. Once the polymerase reaches the end of the template or detects double-stranded DNA, the sliding clamp undergoes a conformational change that releases the DNA polymerase. Clamp-loading proteins are used to initially load the clamp, recognizing the junction between template and RNA primers. === DNA replication proteins === At the replication fork, many replication enzymes assemble on the DNA into a complex molecular machine called the replisome. A number of major DNA replication enzymes participate in the replisome. In vitro single-molecule experiments (using optical tweezers and magnetic tweezers) have found synergistic interactions between the replisome enzymes (helicase, polymerase, and single-strand DNA-binding protein) and with the DNA replication fork, enhancing DNA-unwinding and DNA-replication. These results led to the development of kinetic models accounting for the synergistic interactions and their stability. === Replication machinery === Replication machineries consist of factors involved in DNA replication that appear on template ssDNAs. Replication machineries include primosomes, which are replication enzymes (DNA polymerase, DNA helicases, DNA clamps and DNA topoisomerases), and replication proteins; e.g. 
single-stranded DNA binding proteins (SSB). In the replication machineries, these components coordinate with one another. In most bacteria, all of the factors involved in DNA replication are located on replication forks and the complexes stay on the forks during DNA replication. Replication machineries are also referred to as replisomes, or DNA replication systems. These terms are generic terms for proteins located on replication forks. In eukaryotic and some bacterial cells the replisomes are not formed. In an alternative model, DNA factories are similar to projectors and DNAs are like cinematic films passing constantly through the projectors. In the replication factory model, after both DNA helicases for leading strands and lagging strands are loaded on the template DNAs, the helicases run along the DNAs into each other. The helicases remain associated for the remainder of the replication process. Peter Meister et al. observed replication sites directly in budding yeast by monitoring green fluorescent protein (GFP)-tagged DNA polymerase α. They detected DNA replication of pairs of tagged loci spaced apart symmetrically from a replication origin and found that the distance between the pairs decreased markedly over time. This finding suggests that the mechanism of DNA replication goes with DNA factories. That is, couples of replication factories are loaded on replication origins and the factories associate with each other. Also, template DNAs move into the factories, which extrude the template ssDNAs and new DNAs. Meister's finding is the first direct evidence of the replication factory model. Subsequent research has shown that DNA helicases form dimers in many eukaryotic cells, and bacterial replication machineries stay in a single intracellular location during DNA synthesis. Replication factories also disentangle sister chromatids. The disentanglement is essential for distributing the chromatids into daughter cells after DNA replication. 
Because sister chromatids are held together by cohesin rings after DNA replication, replication itself provides the only opportunity for their disentanglement. Fixing of replication machineries as replication factories can improve the success rate of DNA replication. If replication forks move freely in chromosomes, catenation of nuclei is aggravated and impedes mitotic segregation. === Termination === Eukaryotes initiate DNA replication at multiple points in the chromosome, so replication forks meet and terminate at many points in the chromosome. Because eukaryotes have linear chromosomes, DNA replication is unable to reach the very end of the chromosomes. Due to this problem, DNA is lost in each replication cycle from the end of the chromosome. Telomeres are regions of repetitive DNA close to the ends and help prevent loss of genes due to this shortening. Shortening of the telomeres is a normal process in somatic cells: each round of replication shortens the telomeres of the daughter chromosomes. As a result, cells can only divide a certain number of times before the DNA loss prevents further division. (This is known as the Hayflick limit.) Within the germ cell line, which passes DNA to the next generation, telomerase extends the repetitive sequences of the telomere region to prevent degradation. Telomerase can become mistakenly active in somatic cells, sometimes leading to cancer formation. Increased telomerase activity is one of the hallmarks of cancer. Termination requires that the progress of the DNA replication fork must stop or be blocked. Termination at a specific locus, when it occurs, involves the interaction between two components: (1) a termination site sequence in the DNA, and (2) a protein which binds to this sequence to physically stop DNA replication. In various bacterial species, this is named the DNA replication terminus site-binding protein, or Ter protein. 
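The telomere shortening and Hayflick limit described above can be caricatured in a few lines of code. All the numbers below are illustrative assumptions, not values from the article: human telomeres are on the order of 10,000 bp, lose very roughly 50-100 bp per division, and division stops once they become critically short.

```python
# Toy model of replicative telomere shortening (illustrative parameters).
def divisions_until_senescence(telomere_bp: int = 10_000,
                               loss_per_division: int = 75,
                               critical_bp: int = 4_000) -> int:
    """Count divisions before the telomere would drop below the critical
    length; a caricature of the Hayflick limit, not a biological model."""
    divisions = 0
    while telomere_bp - loss_per_division >= critical_bp:
        telomere_bp -= loss_per_division
        divisions += 1
    return divisions

print(divisions_until_senescence())   # -> 80 with the defaults above
```

In germline cells, telomerase would correspond to resetting `telomere_bp` upward between divisions, removing the limit.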
Because bacteria have circular chromosomes, termination of replication occurs when the two replication forks meet each other on the opposite end of the parental chromosome. E. coli regulates this process through the use of termination sequences that, when bound by the Tus protein, allow replication forks to pass through in only one direction. As a result, the replication forks are constrained to always meet within the termination region of the chromosome. == Regulation == === Eukaryotes === Within eukaryotes, DNA replication is controlled within the context of the cell cycle. As the cell grows and divides, it progresses through stages in the cell cycle; DNA replication takes place during the S phase (synthesis phase). The progress of the eukaryotic cell through the cycle is controlled by cell cycle checkpoints. Progression through checkpoints is controlled through complex interactions between various proteins, including cyclins and cyclin-dependent kinases. Unlike bacteria, eukaryotic DNA replicates in the confines of the nucleus. The G1/S checkpoint (restriction checkpoint) regulates whether eukaryotic cells enter the process of DNA replication and subsequent division. Cells that do not proceed through this checkpoint remain in the G0 stage and do not replicate their DNA. Once the DNA has gone through the "G1/S" test, it can only be copied once in every cell cycle. When the Mcm complex moves away from the origin, the pre-replication complex is dismantled. Because a new Mcm complex cannot be loaded at an origin until the pre-replication subunits are reactivated, one origin of replication cannot be used twice in the same cell cycle. Activation of S-Cdks in early S phase promotes the destruction or inhibition of individual pre-replication complex components, preventing immediate reassembly.
S and M-Cdks continue to block pre-replication complex assembly even after S phase is complete, ensuring that assembly cannot occur again until all Cdk activity is reduced in late mitosis. In budding yeast, inhibition of assembly is caused by Cdk-dependent phosphorylation of pre-replication complex components. At the onset of S phase, phosphorylation of Cdc6 by Cdk1 causes the binding of Cdc6 to the SCF ubiquitin protein ligase, which causes proteolytic destruction of Cdc6. Cdk-dependent phosphorylation of Mcm proteins promotes their export out of the nucleus along with Cdt1 during S phase, preventing the loading of new Mcm complexes at origins during a single cell cycle. Cdk phosphorylation of the origin replication complex also inhibits pre-replication complex assembly. The individual presence of any of these three mechanisms is sufficient to inhibit pre-replication complex assembly. However, mutations of all three proteins in the same cell do trigger reinitiation at many origins of replication within one cell cycle. In animal cells, the protein geminin is a key inhibitor of pre-replication complex assembly. Geminin binds Cdt1, preventing its binding to the origin recognition complex. In G1, levels of geminin are kept low by the APC, which ubiquitinates geminin to target it for degradation. When geminin is destroyed, Cdt1 is released, allowing it to function in pre-replication complex assembly. At the end of G1, the APC is inactivated, allowing geminin to accumulate and bind Cdt1. Replication of chloroplast and mitochondrial genomes occurs independently of the cell cycle, through the process of D-loop replication. ==== Replication focus ==== In vertebrate cells, replication sites concentrate into positions called replication foci. Replication sites can be detected by immunostaining daughter strands and replication enzymes and monitoring GFP-tagged replication factors.
By these methods, it has been found that replication foci of varying size and position appear in S phase of cell division and that their number per nucleus is far smaller than the number of genomic replication forks. P. Heun et al. (2001) tracked GFP-tagged replication foci in budding yeast cells and revealed that replication origins move constantly in G1 and S phase and that these dynamics decrease significantly in S phase. Traditionally, replication sites were thought to be fixed on the spatial structure of chromosomes by the nuclear matrix or by lamins. Heun's results contradicted these traditional concepts (budding yeasts do not have lamins) and support the idea that replication origins self-assemble and form replication foci. The formation of replication foci is regulated by the firing of replication origins, which is controlled spatially and temporally. D. A. Jackson et al. (1998) revealed that neighboring origins fire simultaneously in mammalian cells. Spatial juxtaposition of replication sites brings clustering of replication forks. The clustering allows rescue of stalled replication forks and favors normal progress of replication forks. Progress of replication forks is inhibited by many factors: collision with proteins or with complexes binding strongly to DNA, deficiency of dNTPs, nicks on template DNAs, and so on. If replication forks stall and the remaining sequences downstream of the stalled forks are not copied, the daughter strands are left with nicked, un-replicated sites. The un-replicated sites on one parent's strand hold the other strand together, but not the daughter strands; therefore, the resulting sister chromatids cannot separate from each other and cannot divide into two daughter cells. When neighboring origins fire and a fork from one origin is stalled, a fork from the other origin approaches from the opposite direction and duplicates the un-replicated sites. Another rescue mechanism is the use of dormant replication origins: excess origins that do not fire during normal DNA replication.
=== Bacteria === Most bacteria do not go through a well-defined cell cycle but instead continuously copy their DNA; during rapid growth, this can result in the concurrent occurrence of multiple rounds of replication. In E. coli, the best-characterized bacterium, DNA replication is regulated through several mechanisms, including: the hemimethylation and sequestering of the origin sequence, the ratio of adenosine triphosphate (ATP) to adenosine diphosphate (ADP), and the levels of the protein DnaA. All these control the binding of initiator proteins to the origin sequences. Because E. coli methylates GATC DNA sequences, DNA synthesis results in hemimethylated sequences. This hemimethylated DNA is recognized by the protein SeqA, which binds and sequesters the origin sequence; in addition, DnaA (required for initiation of replication) binds less well to hemimethylated DNA. As a result, newly replicated origins are prevented from immediately initiating another round of DNA replication. ATP builds up when the cell is in a rich medium, triggering DNA replication once the cell has reached a specific size. ATP competes with ADP to bind to DnaA, and the DnaA-ATP complex is able to initiate replication. A certain number of DnaA proteins are also required for DNA replication: each time the origin is copied, the number of binding sites for DnaA doubles, requiring the synthesis of more DnaA to enable another initiation of replication. In fast-growing bacteria, such as E. coli, chromosome replication takes more time than cell division. Such bacteria solve this by initiating a new round of replication before the previous one has been terminated. The new round of replication will form the chromosome of the cell that is born two generations after the dividing cell. This mechanism creates overlapping replication cycles.
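The timing logic behind overlapping replication cycles can be sketched numerically. The values below (a replication "C period" of 40 minutes and a 20-minute doubling time) are illustrative textbook-style assumptions, not figures from this article:

```python
import math

# Overlapping replication rounds in fast-growing bacteria (illustrative).
# c_period_min: time needed to replicate the whole chromosome once.
# doubling_time_min: time between cell divisions.
# If replication takes longer than one doubling time, a new round must
# start before the previous one finishes, so rounds overlap.

def concurrent_rounds(c_period_min, doubling_time_min):
    """Number of replication rounds in progress at any given moment."""
    return math.ceil(c_period_min / doubling_time_min)

# Replication takes 40 min but the cell divides every 20 min:
print(concurrent_rounds(40, 20))  # two rounds of replication overlap
```

With these assumed numbers, the round initiated now completes in the granddaughter generation, matching the statement that the new round forms the chromosome of the cell born two generations later.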
== Problems with DNA replication == There are many events that contribute to replication stress, including: Misincorporation of ribonucleotides Unusual DNA structures Conflicts between replication and transcription Insufficiency of essential replication factors Common fragile sites Overexpression or constitutive activation of oncogenes Chromatin inaccessibility == Polymerase chain reaction == Researchers commonly replicate DNA in vitro using the polymerase chain reaction (PCR). PCR uses a pair of primers to span a target region in template DNA, and then polymerizes partner strands in each direction from these primers using a thermostable DNA polymerase. Repeating this process through multiple cycles amplifies the targeted DNA region. At the start of each cycle, the mixture of template and primers is heated, separating the newly synthesized molecule and template. Then, as the mixture cools, both of these become templates for annealing of new primers, and the polymerase extends from these. As a result, the number of copies of the target region doubles each round, increasing exponentially. == See also == Autopoiesis Cell (biology) Cell division Chromosome segregation Data storage device Gene Gene expression Epigenetics Genome Hachimoji DNA Life Replication (computing) Self-replication == Notes == == References ==
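The per-cycle doubling described in the polymerase chain reaction section above implies exponential growth in copy number. A minimal sketch follows; the `efficiency` parameter is a hypothetical addition for illustration, since real PCR cycles duplicate somewhat less than 100% of templates:

```python
# Idealized PCR amplification: in each cycle, a fraction `efficiency`
# of the existing templates is successfully duplicated, so the copy
# number is multiplied by (1 + efficiency) per cycle. With perfect
# efficiency this is exact doubling: n cycles give 2**n copies.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copy number after the given number of thermal cycles."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))        # perfect doubling: 2**30 copies
print(pcr_copies(1, 30, 0.9))   # the same 30 cycles at 90% efficiency
```

Even at the assumed 90% efficiency, 30 cycles amplify a single template by many orders of magnitude, which is why PCR can detect trace amounts of target DNA.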
Wikipedia/DNA_replication
In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the preservation of the form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment. Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants. == Examples == In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law. In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations. Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities. Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant.
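The link stated above between translation invariance and momentum conservation can be sketched for a single degree of freedom; this is the standard textbook derivation via the Euler–Lagrange equation, not a result specific to this article:

```latex
% Translation invariance implies momentum conservation (one degree of freedom).
% If the Lagrangian L(q, \dot{q}) is unchanged by q \to q + \epsilon, then
\frac{\partial L}{\partial q} = 0 ,
% so the Euler--Lagrange equation
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
% reduces to
\frac{d}{dt}\underbrace{\frac{\partial L}{\partial \dot{q}}}_{p} = 0 ,
% i.e. the conjugate momentum p is a constant of the motion.
```

Noether's theorem generalizes this pattern: every continuous symmetry of the action yields a corresponding conserved quantity.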
Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged in transformations so that no additional or different solutions are obtained. For example the rule describing Newton's force of gravity between two chunks of matter is the same whether they are in this galaxy or another (translational invariance in space). It is also the same today as it was a million years ago (translational invariance in time). The law does not work differently depending on whether one chunk is east or north of the other one (rotational invariance). Nor does the law have to be changed depending on whether you measure the force between the two chunks in a railroad station, or do the same experiment with the two chunks on a uniformly moving train (principle of relativity). Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity. == Informal usage == In the field of physics, the adjective covariant (as in covariance and contravariance of vectors) is often used informally as a synonym for "invariant". For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein–Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. Thus, a physicist might say that these equations are covariant. Despite this usage of "covariant", it is more accurate to say that the Klein–Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation by which the invariance is evaluated should be indicated. 
== See also == == References ==
Wikipedia/Invariance_(physics)
Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example, when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science. Many of the central problems concerned with the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself. While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. 
Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period. Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers such as Stephen Jay Gould seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, and Paul Feyerabend in particular, argue against the existence of the "scientific method", so all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience. Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. 
The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations. == Introduction == === Defining science === Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience. Early attempts by the logical positivists grounded science in observation while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability. That is, every genuinely scientific claim is capable of being proven false, at least in principle. An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated. === Scientific explanation === A closely related question is what counts as a good scientific explanation. 
In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power. One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomena in question from a scientific law. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism. === Justifying science === Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning? 
One approach is to acknowledge that induction cannot achieve certainty, but observing more instances of a general statement can at least make the general statement more probable. So the chicken would be right to conclude from all those mornings that it is likely the farmer will come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and correct reasoning is merely about how evidence should change one's subjective beliefs over time. Some argue that what scientists do is not inductive reasoning at all but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but rather about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Ockham's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise regarding theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. 
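The idea that evidence should incrementally shift a subjective degree of belief can be made concrete with Bayes' rule. Every number below is an illustrative assumption about the chicken example, not a claim from the text:

```python
# Subjective-probability reading of the chicken's induction problem.
# H = "the farmer will keep bringing food". Each fed morning is evidence.
# The prior (0.5) and likelihoods (1.0 under H, 0.99 under not-H, since a
# farmer fattening the chicken also feeds it almost every day) are
# hypothetical values chosen only to illustrate the update.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

belief = 0.5  # agnostic prior
for _ in range(100):  # one hundred fed mornings in a row
    belief = bayes_update(belief, 1.0, 0.99)

print(round(belief, 3))  # higher than 0.5, but still well short of certainty
```

The point of the sketch is that the chicken's confidence rises with each observation yet never reaches 1, which is exactly the concession that induction yields probability rather than certainty.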
In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor in influencing the choice of theory in science, persistent preference for unified theories in effect committing science to the acceptance of a metaphysical thesis concerning unity in nature. In order to improve this problematic thesis, it needs to be represented in the form of a hierarchy of theses, each thesis becoming more insubstantial as one goes up the hierarchy. === Observation inseparable from theory === When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer shows 37.9 degrees C. But, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. For example, before Albert Einstein's general theory of relativity, observers would have likely interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden. All observation involves both perception and cognition. 
That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden. === The purpose of science === Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should only be evaluated on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion. ==== Real patterns ==== The notion of real patterns has been propounded, notably by philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. 
This concept delves into the investigation of patterns observed in scientific phenomena to ascertain whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a unique ontological account concerning real patterns, examining the extent to which these recognized patterns have predictive utility and allow for efficient compression of information. The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates around the reality of chemical bonds as real patterns continue. Evaluation of real patterns also holds significance in broader scientific inquiries. Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role. === Values and science === Values intersect with science in different ways. There are epistemic values that mainly guide the scientific research. The scientific enterprise is embedded in particular culture and values through individual practitioners. Values emerge from science, both as product and process and can be distributed among several cultures in the society. 
When it comes to the justification of science in the sense of general public participation through individual practitioners, science plays the role of a mediator between the standards and policies of society and its participating individuals; in this mediating role, science can fall victim to vandalism and sabotage that adapt the means to the end. If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science. == History == === Pre-modern === The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh-century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time.
=== Modern === Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620)—an allusion to Aristotle's Organon—Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and gave a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation. === Logical positivism === Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades.
Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s. Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge. Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. 
Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth. In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with simply confirmation. With World War II's close in 1945, logical positivism gave way to a milder variant, logical empiricism, led largely by Carl Hempel in America, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy, and dominated Anglosphere philosophy, including philosophy of science, while influencing sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role. === Thomas Kuhn === In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a "paradigm", which he describes as "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm implicitly identifies the objects and relations under study and suggests what experiments, observations or theoretical improvements need to be carried out to produce a useful result.
He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Kuhn was a historian of science, and his ideas were inspired by the study of older paradigms that have been discarded, such as Aristotelian mechanics or aether theory. These had often been portrayed by historians as using "unscientific" methods or beliefs. But careful examination showed that they were no less "scientific" than modern paradigms: both were based on valid evidence, and both failed to answer every possible question. A paradigm shift occurred when a significant number of observational anomalies arose in the old paradigm and efforts to resolve them within the paradigm were unsuccessful. A new paradigm was available that handled the anomalies with less difficulty and yet still covered (most of) the previous results. Over a period of time, often as long as a generation, more practitioners began working within the new paradigm and eventually the old paradigm was abandoned. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism; he wrote "terms like 'subjective' and 'intuitive' cannot be applied to [paradigms]." Paradigms are grounded in objective, observable evidence, but our use of them is psychological and our acceptance of them is social. == Current approaches == === Naturalism's axiomatic assumptions === According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, that scientists must start with some assumptions as to the ultimate analysis of the facts with which it deals.
These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method: That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless, its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption, which adds meaning to our sensations and feelings, rather than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else."
That this objective reality is governed by natural laws; "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible." That reality can be discovered by means of systematic observation and experimentation. Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding." That Nature has uniformity of laws and most if not all things in nature must have at least a natural cause. Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)."
Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different." That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results. That experimenters won't be significantly biased by their presumptions. That random sampling is representative of the entire population. A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that every member of the population has an equal chance of being chosen, which helps ensure that the sample represents the population and supports statistically valid conclusions. === Coherentism === In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs.
An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system. According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the Solar System comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else. One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification. Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred. === Anything goes methodology === Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. 
He argued that "the only principle that does not inhibit progress is: anything goes". Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors. === Sociology of scientific knowledge methodology === According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct. Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted the methodologies used by their practitioners. In works like The Archaeology of Knowledge, he used the term human sciences. 
The human sciences do not comprise mainstream academic disciplines; rather, they form an interdisciplinary space for reflection on the human being, the subject of more mainstream scientific knowledge now taken as an object, sitting between the more conventional areas and associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advances of scientific disciplines, such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what were considered "normal" and "abnormal" stigmatized and ostracized groups of people, like the mentally ill and sexual and gender minorities. However, some (such as Quine) do maintain that scientific reality is a social construct: Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ... For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits. The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars.
A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies. Here the approach to the philosophy of science is to study how scientific communities actually operate. === Continental philosophy === Philosophers in the continental philosophical tradition are not traditionally categorized as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal. In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976). 
The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German: Die Einheit der Natur (1971)), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986). == Other topics == === Reductionism === Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations. 
=== Social accountability === A broad issue affecting the neutrality of science concerns the areas which science chooses to explore—that is, what part of the world and of humankind are studied by science. Philip Kitcher in his Science, Truth, and Democracy argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Thus such studies undermine the broad consensus required for good science by excluding certain people, and so proving themselves in the end to be unscientific. == Philosophy of particular sciences == As Daniel Dennett put it, "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of philosophy of a particular science. === Philosophy of statistics === The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value: the probability, assuming the null hypothesis is true, of obtaining evidence at least as extreme as that actually observed. If the p-value is below a chosen significance level, the null hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses.
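The contrast between the two approaches can be sketched with a coin-flip example. This is only an illustrative sketch: the data (9 heads in 10 flips) and the two candidate hypotheses are hypothetical, and the Bayesian prior is an arbitrary 50/50 choice.

```python
from math import comb

# Hypothetical data: 9 heads observed in 10 coin flips.
# Null hypothesis: the coin is fair (p = 0.5).
n, k = 10, 9

def binom_pmf(x, n, p):
    """Probability of exactly x heads in n flips of a coin with heads-probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Frequentist test: the two-sided p-value sums the probability, under the null,
# of every outcome at least as extreme (as improbable) as the observed one.
p_value = sum(binom_pmf(x, n, 0.5) for x in range(n + 1)
              if binom_pmf(x, n, 0.5) <= binom_pmf(k, n, 0.5))
# The null is rejected when p_value falls below a chosen significance level.

# Bayesian inference: assign prior probabilities to two candidate hypotheses
# (a fair coin vs. a coin biased to land heads 90% of the time) and update
# them with the same data via Bayes' theorem.
prior = {0.5: 0.5, 0.9: 0.5}
likelihood = {p: binom_pmf(k, n, p) for p in prior}
evidence = sum(prior[p] * likelihood[p] for p in prior)
posterior = {p: prior[p] * likelihood[p] / evidence for p in prior}
```

The difference in what is reported is the philosophical point: the frequentist test says only how surprising the data would be if the coin were fair, while the Bayesian calculation assigns a probability to each hypothesis given the data.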
Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation. === Philosophy of mathematics === Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what is the nature of mathematical propositions. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe. === Philosophy of physics === Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time). === Philosophy of chemistry === Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? 
For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds. === Philosophy of astronomy === The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations of space and astrophysical phenomena can be studied. Because astronomers rely on and use theories and formulas from other scientific disciplines, such as chemistry and physics, its central points of inquiry are how knowledge about the cosmos can be obtained, how facts about space can be scientifically analyzed and reconciled with other established knowledge, and how the Earth and the Solar System figure in views of humanity's place in the universe. === Philosophy of Earth sciences === The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system.
=== Philosophy of biology === Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. Other key ideas such as the reduction of all life processes to biochemical reactions as well as the incorporation of psychology into a broader neuroscience are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work), and the role of viruses as persistent symbionts in host genomes. As a consequence, the evolution of genetic content order is seen as the result of competent genome editors in contrast to former narratives in which error replication events (mutations) dominated. === Philosophy of medicine === Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine. Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. 
Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better. === Philosophy of psychiatry === Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology. === Philosophy of psychology === Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things. 
Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. === Philosophy of social science === The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. The French philosopher Auguste Comte (1798–1857) established the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842.
The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive. Comte's positivism established the initial philosophical foundations for formal sociology and social research. Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology. The positivist perspective has been associated with 'scientism': the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself.
=== Philosophy of technology === The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology. == External links == Philosophy of science at PhilPapers Philosophy of science at the Indiana Philosophy Ontology Project "Philosophy of science". Internet Encyclopedia of Philosophy.
Wikipedia/Philosophers_of_science
An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies. A child may carry out basic experiments to understand how things fall to the ground, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists that hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences. Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled (accounted for by the control measurements) and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variables. 
== Overview == In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them. An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity. An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment. In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon. In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. 
When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis. There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions. == History == One of the first methodical approaches to experiments in the modern sense is visible in the works of the Arab mathematician and scholar Ibn al-Haytham. He conducted his experiments in the field of optics—going back to optical and mathematical problems in the works of Ptolemy—by controlling his experiments due to factors such as self-criticality, reliance on visible results of the experiments as well as a criticality in terms of earlier results. He was one of the first scholars to use an inductive-experimental method for achieving results. In his Book of Optics he describes the fundamentally new approach to knowledge and research in an experimental sense: We should, that is, recommence the inquiry into its principles and premisses, beginning our investigation with an inspection of the things that exist and a survey of the conditions of visible objects. 
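The average treatment effect mentioned above is simply the difference in mean outcomes between the randomly assigned treatment and control groups. A minimal sketch in Python (all unit counts and outcome numbers here are hypothetical, invented purely for illustration):

```python
import random
import statistics

def average_treatment_effect(treated_outcomes, control_outcomes):
    """Difference in mean outcomes between the treatment and control groups."""
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# Randomly assign 10 experimental units to treatment or control.
random.seed(0)
units = list(range(10))
random.shuffle(units)
treated_ids, control_ids = units[:5], units[5:]

# Hypothetical outcomes: the treatment adds +2 to each unit's baseline response.
baseline = {u: 10 + u % 3 for u in range(10)}
outcome = {u: baseline[u] + (2 if u in treated_ids else 0) for u in range(10)}

ate = average_treatment_effect(
    [outcome[u] for u in treated_ids],
    [outcome[u] for u in control_ids],
)
print(round(ate, 2))
```

Because assignment is random, the estimate deviates from the true effect of +2 only through chance imbalance in the groups' baselines, which shrinks as the number of units grows.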
We should distinguish the properties of particulars, and gather by induction what pertains to the eye when vision takes place and what is found in the manner of sensation to be uniform, unchanging, manifest and not subject to doubt. After which we should ascend in our inquiry and reasonings, gradually and orderly, criticizing premisses and exercising caution in regard to conclusions—our aim in all that we make subject to inspection and review being to employ justice, not to follow prejudice, and to take care in all that we judge and criticize that we seek the truth and not to be swayed by opinion. We may in this way eventually come to the truth that gratifies the heart and gradually and carefully reach the end at which certainty appears; while through criticism and caution we may seize the truth that dispels disagreement and resolves doubtful matters. For all that, we are not free from that human turbidity which is in the nature of man; but we must do our best with what we possess of human power. From God we derive support in all things. According to his explanation, a strictly controlled test execution with a sensibility for the subjectivity and susceptibility of outcomes due to the nature of man is necessary. Furthermore, a critical view on the results and outcomes of earlier scholars is necessary: It is thus the duty of the man who studies the writings of scientists, if learning the truth is his goal, to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency. Thus, a comparison of earlier results with the experimental results is necessary for an objective experiment—the visible results being more important. 
In the end, this may mean that an experimental researcher must find enough courage to discard traditional opinions or results, especially if these results are not experimental but result from a logical/mental derivation. In this process of critical consideration, the man himself should not forget that he tends toward subjective opinions—through "prejudices" and "leniency"—and thus has to be critical about his own way of building hypotheses. Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an influential supporter of experimental science in the English renaissance. He disagreed with the method of answering scientific questions by deduction—similar to Ibn al-Haytham—and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession." Bacon wanted a method that relied on repeatable observations, or experiments. Notably, he was the first to order the scientific method as we understand it today. There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments. In the centuries that followed, people who applied the scientific method in different areas made important advances and discoveries. For example, Galileo Galilei (1564–1642) accurately measured time and experimented to make accurate measurements and conclusions about the speed of a falling body. 
Antoine Lavoisier (1743–1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry, and to develop the theory of conservation of mass (matter). Louis Pasteur (1822–1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible. A considerable amount of progress on the design and analysis of experiments occurred in the early 20th century, with contributions from statisticians such as Ronald Fisher (1890–1962), Jerzy Neyman (1894–1981), Oscar Kempthorne (1919–2000), Gertrude Mary Cox (1900–1978), and William Gemmell Cochran (1909–1980), among others. == Types == Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study. In some disciplines (e.g., psychology or political science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects to neutralize experimenter bias, and ensures, over a large number of iterations of the experiment, that it controls for all confounding factors. Depending on the discipline, experiments can be conducted to accomplish different but not mutually exclusive goals: test theories, search for and document phenomena, develop theories, or advise policymakers. These goals also relate differently to validity concerns. 
=== Controlled experiments === A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group); and the one receiving the placebo or regular treatment would be the control group. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure similar to the actual experimental test but is known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to subtract from the test sample results. Sometimes the positive control takes the form of a standard curve. An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. 
It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. In the illustration, the results for the diluted test samples can be compared to the results of the standard curve (the blue line in the illustration) to estimate the amount of protein in the unknown sample. Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups. Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. 
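The standard-curve step of the protein assay described above can be sketched numerically: fit a line to the background-corrected standards, then invert it to estimate the unknown. All concentrations and absorbance readings below are hypothetical; a real assay would use its own reagents and measured values.

```python
import statistics

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical standards: known protein concentrations (mg/mL) and the mean
# absorbance of duplicate readings; the negative control supplies the
# "background" value subtracted from every sample.
background = 0.05
concentrations = [0.0, 0.5, 1.0, 2.0]
absorbances = [0.05, 0.30, 0.55, 1.05]
corrected = [a - background for a in absorbances]

slope, intercept = fit_line(concentrations, corrected)

# Estimate the unknown sample by inverting the fitted standard curve.
unknown_corrected = 0.68 - background
estimate = (unknown_corrected - intercept) / slope
print(round(estimate, 2))  # about 1.26 mg/mL
```

Averaging duplicates and subtracting the negative control before fitting mirrors the replicate and background-correction practices described above.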
Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he is being treated. In human experiments, researchers may give a subject (person) a stimulus that the subject responds to. The goal of the experiment is to measure the response to the stimulus by a test method. In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations—or hypotheses. These hypotheses suggest reasons to explain a phenomenon or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out and the results analysed to confirm, refute, or define the accuracy of the hypotheses. Experiments can also be designed to estimate spillover effects onto nearby untreated units. 
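The comparison between a hypothesis and its null can be made concrete with a permutation test, which asks how often a group difference as large as the observed one would arise if the null hypothesis of no effect were true, i.e. if treatment labels were interchangeable. A small sketch with made-up outcome data:

```python
import itertools
import statistics

def permutation_p_value(treated, control):
    """Exact two-sided permutation test of the null hypothesis of no
    treatment effect: every relabelling of the pooled outcomes is treated
    as equally likely, and we count how often the group difference is at
    least as extreme as the one actually observed."""
    observed = abs(statistics.mean(treated) - statistics.mean(control))
    pooled = treated + control
    n_treated = len(treated)
    extreme = total = 0
    for combo in itertools.combinations(range(len(pooled)), n_treated):
        t = [pooled[i] for i in combo]
        c = [pooled[i] for i in range(len(pooled)) if i not in combo]
        diff = abs(statistics.mean(t) - statistics.mean(c))
        total += 1
        if diff >= observed - 1e-12:  # tolerance for float round-off
            extreme += 1
    return extreme / total

# Hypothetical outcomes from a tiny randomized experiment.
p = permutation_p_value([12.1, 11.8, 12.4], [10.2, 10.5, 9.9])
print(p)  # 0.1: only 2 of the 20 relabellings are this extreme
```

A small p-value counts as evidence against the null hypothesis, though, as noted above, it supports rather than proves the alternative.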
=== Natural experiments === The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult, impossible, unethical or illegal. In this case researchers resort to natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study. Much research in several science disciplines, including economics, human geography, archaeology, sociology, cultural anthropology, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "Stars are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a star. 
However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. Observations of the appearance of the moons of Jupiter were slightly delayed when Jupiter was farther from Earth, as opposed to when Jupiter was closer to Earth; and this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed. === Field experiments === Field experiments are so named to distinguish them from laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory. == Observational studies == An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. 
It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). For an observational science to be valid, the experimenter must know and account for confounding factors. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data. Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit. 
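Part of why randomized experiments avoid such failures is that random assignment tends to balance covariates across groups, whereas self-selection does not. A toy simulation with hypothetical ages (all numbers invented) illustrates the contrast:

```python
import random
import statistics

def covariate_gap(ages, in_treatment):
    """Absolute difference in mean age between the two groups."""
    treated = [a for a, t in zip(ages, in_treatment) if t]
    control = [a for a, t in zip(ages, in_treatment) if not t]
    return abs(statistics.mean(treated) - statistics.mean(control))

random.seed(1)
ages = [random.randint(20, 70) for _ in range(200)]

# Self-selection: the 100 oldest subjects all end up in the treatment group.
oldest = set(sorted(range(200), key=lambda i: ages[i])[100:])
biased = [i in oldest for i in range(200)]

# Random assignment of the same 100/100 split.
ids = list(range(200))
random.shuffle(ids)
chosen = set(ids[:100])
randomized = [i in chosen for i in range(200)]

# Randomization leaves the group age means close; self-selection does not.
print(covariate_gap(ages, biased) > covariate_gap(ages, randomized))  # True
```

In an observational study the "biased" assignment is what nature hands the researcher, which is why the covariates must be measured and modelled rather than assumed balanced.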
A particular problem with observational studies involving human subjects is the great difficulty attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and results are not meaningful if a covariate is neither randomized nor included in the model. To avoid conditions that render an experiment far less useful, physicians conducting medical trials—say for U.S. Food and Drug Administration approval—quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with matching methods such as propensity score matching, which require large populations of subjects and extensive information on covariates. However, propensity score matching is no longer recommended as a technique because it can increase, rather than decrease, bias. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) 
and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing. == Ethics == By placing the distribution of the independent variable(s) under the control of the researcher, an experiment—particularly when it involves human subjects—introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example, in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice. It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies to understand the effects of those factors. Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects. == See also == == Notes == == Further reading == Dunning, Thad (2012). Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge: Cambridge University Press. ISBN 978-1107698000. Shadish, William R.; Cook, Thomas D.; Campbell, Donald T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Nachdr. ed.). Boston: Houghton Mifflin. ISBN 0-395-61556-9. (Excerpts) Teigen, Jeremy (2014). "Experimental Methods in Military and Veteran Studies". 
In Soeters, Joseph; Shields, Patricia; Rietjens, Sebastiaan (eds.). Routledge Handbook of Research Methods in Military Studies. New York: Routledge. pp. 228–238. == External links == Media related to Experiments at Wikimedia Commons Lessons In Electric Circuits – Volume VI – Experiments Experiment in Physics from Stanford Encyclopedia of Philosophy
Wikipedia/Experimental_science