George Dyson (born March 26, 1953) is an American non-fiction author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society.
He has written on a wide range of topics, including the history of computing, the development of algorithms and intelligence, communications systems, space exploration, and the design of watercraft.
== Early life and education ==
Dyson's early life is described in Kenneth Brower's book The Starship and the Canoe. When he was sixteen, he went to live in British Columbia to pursue his interest in kayaking.
From 1972 to 1975, he lived in a treehouse at a height of 30 metres that he built from salvaged materials on the shore of Burrard Inlet. Dyson became a Canadian citizen and spent 20 years in British Columbia, designing kayaks, researching historic voyages and native peoples, and exploring the Inside Passage. He was, during this period, estranged from his father for some time.
== Career ==
Dyson's first book, Baidarka, published in 1986, described his research on the history of the Aleutian kayak, its evolution in the hands of Russian fur traders, and his adaptation of its design to modern materials. He is the author of Project Orion: The Atomic Spaceship 1957–1965 and Darwin Among the Machines: The Evolution of Global Intelligence, in which he expands upon the premise of Samuel Butler's 1863 article of the same name and suggests that the Internet is a living, sentient being. His 2012 book Turing's Cathedral has been described as "a creation myth of the digital universe." It was a finalist for the Los Angeles Times 2012 Book Prize in the science and technology category and was chosen by University of California Berkeley's annual "On the Same Page" program for the academic year 2013–14.
Dyson is the founder/owner of Dyson, Baidarka & Company, a designer of Aleut-style skin kayaks; he is credited with the revival of the baidarka style of kayak.
Dyson was a visiting lecturer and research associate at Western Washington University's Fairhaven College and was Director's Visitor at the Institute for Advanced Study in Princeton, New Jersey, in 2002–03. He was a frequent contributor to Edge.org between 1998 and 2019.
=== Turing's Cathedral ===
Turing's Cathedral is Dyson's fourth book. Though Alan Turing is in the title, the book focuses on John von Neumann and his 1946 project to build a computer at Princeton's Institute for Advanced Study (the IAS machine; the MANIAC I later built at Los Alamos was modeled on the same design). Dyson interviewed several people who knew von Neumann, including his own father, Freeman Dyson. The book received mostly positive reviews. Brian E. Blank noted in his review its "[e]xtensive biographical and institutional backgrounds", and concluded with:
It is difficult to avoid the conclusion that Turing's Cathedral is an idiosyncratic, undisciplined, crazy quilt of a book. The reviewer had no preconceived notions about the sort of book that might be authored by a man who once lived for three years in a treehouse 95 feet above the ground, but the eccentricities of Turing's Cathedral do not seem inconsistent with what might be imagined. And yet, for all its flaws, shortcomings, and waywardness, it is a book that amply rewards its readers.
=== Media appearances ===
To Mars by A-Bomb: The Secret History of Project Orion (BBC, 2003)
The Starship and the Canoe (1986)
=== Books ===
Baidarka the Kayak
Darwin Among the Machines
Project Orion: The Atomic Spaceship 1957–1965
Turing's Cathedral
Analogia: The Entangled Destinies of Nature, Human Beings and Machines
== Personal life ==
George Dyson's parents were the theoretical physicist Freeman Dyson and mathematician Verena Huber-Dyson. He is the brother of technology analyst Esther Dyson, and the grandson of the British composer Sir George Dyson.
George Dyson and Ann Yow-Dyson have a daughter named Lauren. He lives and works in Bellingham, Washington.
== References ==
== External links ==
George Dyson's Flickr Photostream
Dyson, Baidarka & Company (Flickr Photostream by Thomas Gotchy)
A lecture by George Dyson on "von Neumann's universe"
Engineers' Dreams
George Dyson at TED
George Dyson: The story of Project Orion (TED2002)
George Dyson: The birth of the computer (TED2003)
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.
== Synopsis ==
It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly. Such a superintelligence would be very difficult to control.
While the ultimate goals of superintelligences could vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it is necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.
The owl on the book cover alludes to an analogy which Bostrom calls the "Unfinished Fable of the Sparrows". A group of sparrows decide to find an owl chick and raise it as their servant. They eagerly imagine "how easy life would be" if they had an owl to help build their nests, to defend the sparrows and to free them for a life of leisure. The sparrows start the difficult search for an owl egg; only "Scronkfinkle", a "one-eyed sparrow with a fretful temperament", suggests thinking about the complicated question of how to tame the owl before bringing it "into our midst". The other sparrows demur; the search for an owl egg will already be hard enough on its own: "Why not get the owl first and work out the fine details later?" Bostrom states that "It is not known how the story ends", but he dedicates his book to Scronkfinkle.
== Reception ==
The book ranked #17 on The New York Times list of best-selling science books for August 2014. In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.
Bostrom's work on superintelligence has also influenced Bill Gates’s concern for the existential risks facing humanity over the coming century. In a March 2015 interview by Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence. According to the New Yorker, philosophers Peter Singer and Derek Parfit "received it as a work of importance". Sam Altman wrote in 2015 that the book is the best thing he has ever read on AI risks.
The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values. A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but the review finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".
Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology. The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote." Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age". According to Tom Chivers of The Daily Telegraph, the book is difficult to read but nonetheless rewarding. A reviewer in the Journal of Experimental & Theoretical Artificial Intelligence broke with others by stating the book's "writing style is clear" and praised the book for avoiding "overly technical jargon". A reviewer in Philosophy judged Superintelligence to be "more realistic" than Ray Kurzweil's The Singularity Is Near.
== See also ==
Age of Artificial Intelligence
AI alignment
AI safety
Future of Humanity Institute
Human Compatible
Life 3.0
Philosophy of artificial intelligence
The Precipice: Existential Risk and the Future of Humanity
== References ==
An emergent algorithm is an algorithm that exhibits emergent behavior. In essence, an emergent algorithm implements a set of simple building-block behaviors that, when combined, exhibit more complex behaviors. One example is the use of fuzzy motion controllers to adapt robot movement in response to environmental obstacles.
An emergent algorithm has the following characteristics:
it achieves predictable global effects
it does not require global visibility
it does not assume any kind of centralized control
it is self-stabilizing
Other examples of emergent algorithms and models include cellular automata, artificial neural networks and swarm intelligence systems (ant colony optimization, bees algorithm, etc.).
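To make this concrete, here is a minimal Python sketch (illustrative only, not drawn from any of the systems above) of an elementary cellular automaton, Rule 110. Each cell follows one trivial local rule involving only its own state and its two neighbours, yet the global pattern that emerges is complex:

RULE = 110  # the rule number encodes the 8-entry local update table

def step(cells):
    # Update every cell from its own state and its two neighbours.
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 60 + [1]  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

No individual cell "knows" anything about the global pattern; the complexity emerges entirely from the repeated application of the local rule.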
== See also ==
Emergence
Evolutionary computation
Fuzzy logic
Genetic algorithm
Heuristic
== References ==
Talen Energy is an independent power producer and energy infrastructure company. Talen owns and operates approximately 10.7 gigawatts of power infrastructure in the United States, including 2.2 gigawatts of carbon-free nuclear power and a significant dispatchable natural gas fleet.
== History ==
Talen Energy was founded in 2015. The company was formed when the competitive power generation business of PPL Corporation was spun off and immediately combined with competitive generation businesses owned by private equity firm Riverstone Holdings. Following these transactions, PPL shareholders owned 65% of Talen's common stock and affiliates of Riverstone owned 35%, with shares trading on the NYSE under the symbol "TLN".
On December 6, 2016, Riverstone Holdings completed the purchase of the remaining 65% of Talen's common stock, making it a privately owned company.
On November 10, 2020, Talen announced its commitment to transform for a clean energy future. As part of the announcement, Talen noted that it would decarbonize its fleet and invest in developing renewable energy, battery storage, and digital infrastructure, primarily on owned land within its footprint. It also introduced its "Force for Good" philosophy, which includes maintaining its commitment to the communities in which it operates by converting, rather than retiring, its fossil generation facilities and creating new opportunities for these stakeholders through its transformation.
On May 9, 2022, Talen filed for bankruptcy under Chapter 11 of the U.S. Bankruptcy Code as part of a strategic restructuring transaction aimed at reducing $4.5 billion of debt. Its plan of reorganization was approved by the US Bankruptcy Court for the Southern District of Texas on December 15, 2022. The company completed its restructuring and emerged from bankruptcy on May 17, 2023. Upon its emergence, ownership of Talen Energy was transferred to a majority of its unsecured creditors, which consisted of several large financial institutions. Mark "Mac" McFarland assumed the role of President, CEO, and member of the Board, and a new independent board of directors was seated. In June 2023, Talen announced senior leadership changes, including the appointment of Terry Nutt to the role of Chief Financial Officer and John Wander as General Counsel and Corporate Secretary. On June 23, 2023, Talen Energy Corporation stock began to trade on the OTC Market under the ticker "TLNE".
In March 2024, the company announced the sale of its Cumulus data center campus to Amazon Web Services for $650 million. As part of the transaction, Talen’s Susquehanna plant will provide power to the campus under a power purchase agreement (PPA).
On July 10, 2024, Talen stock began trading on the NASDAQ Global Select Market under the symbol “TLN” after it ceased trading on the OTCQX Best Market at market close on July 9. [1]
[1] "Talen Energy Corporation Announces Expected Listing on the NASDAQ Global Select Market", company press release, July 8, 2024.
== Facilities and infrastructure ==
Talen's generation facilities include nuclear, coal, natural gas, and oil-fired power plants.
Carbon-free nuclear
The largest plant is the Susquehanna Steam Electric Station, a 2.5 gigawatt nuclear power plant, located on the Susquehanna River seven miles (11 km) northeast of Berwick, Pennsylvania. Talen operates and owns a 90% interest in the Susquehanna facility, the sixth largest nuclear-powered generation facility in the U.S. Susquehanna typically accounts for approximately half of Talen’s total annual generation.
Dispatchable natural gas and oil intermediate and peaking units
Talen’s 6.3 GW natural gas and oil fleet (of which 3.2 gigawatts is from Brunner Island, Montour, and H.A. Wagner Unit 3 after conversion, as discussed below) includes seven technologically diverse natural gas and oil generation facilities across the generation stack (including intermediate and peaking dispatch). Certain units are capable of utilizing multiple fuel sources, providing meaningful operational flexibility. These strategically located assets include significant generation in attractive wholesale markets (primarily PJM), allowing them to generate predictable revenues on cleared capacity while also benefiting from varying market dynamics.
Reliability assets and carbon deleveraging
Talen’s coal-fired generation assets continue to be impacted by changing environmental regulations and power market economics. The company has already completed the conversion of approximately 3.2 gigawatts of its legacy coal fleet to lower-carbon fuels, including the Brunner Island (dual fuel) and Montour facilities, which together represent over 25% of its total generation capacity, and Unit 3 of the H.A. Wagner facility, which was converted from coal and now aligns with all other units at the facility.
The following is a list of Talen's current generation facilities, all owned by subsidiaries of Talen:
=== Nuclear ===
Susquehanna Steam Electric Station - Salem Township, PA
=== Coal ===
Brandon Shores Generating Station - Pasadena, MD
Brunner Island Steam Electric Station (also burns natural gas) - York Haven, PA
Colstrip Power Plant - Colstrip, MT
Conemaugh Generating Station - New Florence, PA
Herbert A. Wagner Generating Station (gas co-fired) - Pasadena, MD
Keystone Generating Station - Shelocta, PA
=== Natural Gas ===
Camden Power Plant - Camden, NJ
Dartmouth Power Plant - Dartmouth, MA
Lower Mount Bethel Power Plant - Bangor, PA
Martins Creek Power Plant - Bangor, PA
Montour Power Plant - Washingtonville, PA
== References ==
== External links ==
Official website
An influence diagram (ID) (also called a relevance diagram, decision diagram or a decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision making problems (following the maximum expected utility criterion) can be modeled and solved.
The ID was first developed in the mid-1970s by decision analysts, with intuitive semantics that are easy to understand. It is now widely adopted and has become an alternative to the decision tree, which typically suffers from exponential growth in the number of branches with each variable modeled. The ID is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Extensions of IDs also find use in game theory as an alternative representation of the game tree.
== Semantics ==
An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes.
Nodes:
Decision node (corresponding to each decision to be made) is drawn as a rectangle.
Uncertainty node (corresponding to each uncertainty to be modeled) is drawn as an oval.
Deterministic node (corresponding to a special kind of uncertainty whose outcome is deterministically known whenever the outcomes of some other uncertainties are known) is drawn as a double oval.
Value node (corresponding to each component of an additively separable von Neumann–Morgenstern utility function) is drawn as an octagon (or diamond).
Arcs:
Functional arcs (ending in value node) indicate that one of the components of additively separable utility function is a function of all the nodes at their tails.
Conditional arcs (ending in uncertainty node) indicate that the uncertainty at their heads is probabilistically conditioned on all the nodes at their tails.
Conditional arcs (ending in deterministic node) indicate that the uncertainty at their heads is deterministically conditioned on all the nodes at their tails.
Informational arcs (ending in decision node) indicate that the decision at their heads is made with the outcome of all the nodes at their tails known beforehand.
Given a properly structured ID:
Decision nodes and incoming information arcs collectively state the alternatives (what can be done when the outcome of certain decisions and/or uncertainties are known beforehand)
Uncertainty/deterministic nodes and incoming conditional arcs collectively model the information (what are known and their probabilistic/deterministic relationships)
Value nodes and incoming functional arcs collectively quantify the preference (how things are preferred over one another).
Alternative, information, and preference are termed the decision basis in decision analysis; they represent the three required components of any valid decision situation.
Formally, the semantics of an influence diagram is based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the d-separation criterion of Bayesian networks. According to these semantics, every node is probabilistically independent of its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between non-value node X and non-value node Y implies that there exists a set of non-value nodes Z, e.g., the parents of Y, that renders Y independent of X given the outcome of the nodes in Z.
== Example ==
Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation.
There is 1 decision node (Vacation Activity), 2 uncertainty nodes (Weather Condition, Weather Forecast), and 1 value node (Satisfaction).
There are 2 functional arcs (ending in Satisfaction), 1 conditional arc (ending in Weather Forecast), and 1 informational arc (ending in Vacation Activity).
Functional arcs ending in Satisfaction indicate that Satisfaction is a utility function of Weather Condition and Vacation Activity. In other words, their satisfaction can be quantified if they know what the weather is like and what their choice of activity is. (Note that they do not value Weather Forecast directly)
Conditional arc ending in Weather Forecast indicates their belief that Weather Forecast and Weather Condition can be dependent.
Informational arc ending in Vacation Activity indicates that they will only know Weather Forecast, not Weather Condition, when making their choice. In other words, actual weather will be known after they make their choice, and only forecast is what they can count on at this stage.
It also follows semantically, for example, that Vacation Activity is independent of (irrelevant to) Weather Condition once Weather Forecast is known.
== Applicability to value of information ==
The above example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios:
Scenario 1: The decision-maker could make their Vacation Activity decision while knowing what Weather Condition will be like. This corresponds to adding extra informational arc from Weather Condition to Vacation Activity in the above influence diagram.
Scenario 2: The original influence diagram as shown above.
Scenario 3: The decision-maker makes their decision without even knowing the Weather Forecast. This corresponds to removing informational arc from Weather Forecast to Vacation Activity in the above influence diagram.
Scenario 1 is the best possible scenario for this decision situation since there is no longer any uncertainty on what they care about (Weather Condition) when making their decision. Scenario 3, however, is the worst possible scenario for this decision situation since they need to make their decision without any hint (Weather Forecast) on what they care about (Weather Condition) will turn out to be.
The decision-maker is usually better off (definitely no worse off, on average) to move from scenario 3 to scenario 2 through the acquisition of new information. The most they should be willing to pay for such move is called the value of information on Weather Forecast, which is essentially the value of imperfect information on Weather Condition.
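To see this numerically, the small script below solves the vacation diagram by enumeration and prices the forecast. All numbers (the prior, the forecast accuracy, the activities, and the utilities) are hypothetical, chosen only to illustrate the computation:

P_SUNNY = 0.6                      # prior on Weather Condition
ACC = 0.8                          # P(Weather Forecast matches Weather Condition)
UTILITY = {                        # Satisfaction(activity, condition)
    ("beach", "sunny"): 10, ("beach", "rainy"): 0,
    ("museum", "sunny"): 5, ("museum", "rainy"): 6,
}
ACTIVITIES = ("beach", "museum")
PRIOR = {"sunny": P_SUNNY, "rainy": 1 - P_SUNNY}

def best_eu(belief):
    # Expected Satisfaction of the best activity under a belief over conditions.
    return max(sum(belief[c] * UTILITY[(a, c)] for c in belief) for a in ACTIVITIES)

def posterior(forecast):
    # Bayes' rule: P(condition | forecast).
    joint = {c: p * (ACC if c == forecast else 1 - ACC) for c, p in PRIOR.items()}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

eu_without = best_eu(PRIOR)  # Scenario 3: decide on the prior alone
p_f = {f: sum(p * (ACC if c == f else 1 - ACC) for c, p in PRIOR.items())
       for f in ("sunny", "rainy")}
eu_with = sum(p_f[f] * best_eu(posterior(f)) for f in p_f)  # Scenario 2

print(eu_with - eu_without)  # the (positive) value of the forecast

With these made-up numbers the forecast is worth about 1.3 units of Satisfaction: the most the decision-maker should pay to move from Scenario 3 to Scenario 2.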
The applicability of this simple ID and the value of information concept is tremendous, especially in medical decision making when most decisions have to be made with imperfect information about their patients, diseases, etc.
== Related concepts ==
Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as a well-formed influence diagram (WFID). WFIDs can be evaluated using reversal and removal operations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed by artificial intelligence researchers concerning Bayesian network inference (belief propagation).
An influence diagram having only uncertainty nodes (i.e., a Bayesian network) is also called a relevance diagram. An arc connecting node A to B implies not only that "A is relevant to B", but also that "B is relevant to A" (i.e., relevance is a symmetric relationship).
== See also ==
== Bibliography ==
Detwarasiti, A.; Shachter, R.D. (December 2005). "Influence diagrams for team decision analysis" (PDF). Decision Analysis. 2 (4): 207–228. doi:10.1287/deca.1050.0047.
Holtzman, Samuel (1988). Intelligent decision systems. Addison-Wesley. ISBN 978-0-201-11602-1.
Howard, R.A. and J.E. Matheson, "Influence diagrams" (1981), in Readings on the Principles and Applications of Decision Analysis, eds. R.A. Howard and J.E. Matheson, Vol. II (1984), Menlo Park CA: Strategic Decisions Group.
Koller, D.; Milch, B. (October 2003). "Multi-agent influence diagrams for representing and solving games" (PDF). Games and Economic Behavior. 45: 181–221. doi:10.1016/S0899-8256(02)00544-4.
Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Representation and Reasoning Series. San Mateo CA: Morgan Kaufmann. ISBN 0-934613-73-7.
Shachter, R.D. (November–December 1986). "Evaluating influence diagrams" (PDF). Operations Research. 34 (6): 871–882. doi:10.1287/opre.34.6.871.
Shachter, R.D. (July–August 1988). "Probabilistic inference and influence diagrams" (PDF). Operations Research. 36 (4): 589–604. doi:10.1287/opre.36.4.589. hdl:10338.dmlcz/135724.
Virine, Lev; Trumper, Michael (2008). Project Decisions: The Art and Science. Vienna VA: Management Concepts. ISBN 978-1-56726-217-9.
Pearl, J. (1985). Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning (UCLA Technical Report CSD-850017). Proceedings of the Seventh Annual Conference of the Cognitive Science Society 15–17 April 1985. University of California, Irvine, CA. pp. 329–334. Retrieved 2010-05-01.
== External links ==
What are influence diagrams?
Pearl, J. (December 2005). "Influence Diagrams — Historical and Personal Perspectives" (PDF). Decision Analysis. 2 (4): 232–4. doi:10.1287/deca.1050.0055.
Inferences are steps in logical reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle (300s BC). Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, contradistinguishing abduction from induction.
Various fields study how inference is done in practice. Human inference (i.e. how humans draw conclusions) is traditionally studied within the fields of logic, argumentation studies, and cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference. Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. This generalizes deterministic reasoning, with the absence of uncertainty as a special case. Statistical inference uses quantitative or qualitative (categorical) data which may be subject to random variations.
== Definition ==
The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.
This definition is disputable due to its lack of clarity; compare the Oxford English Dictionary: "induction ... 3. Logic the inference of a general law from particular instances." The definition given thus applies only when the "conclusion" is general.
Two possible definitions of "inference" are:
A conclusion reached on the basis of evidence and reasoning.
The process of reaching such a conclusion.
== Examples ==
=== Example for definition #1 ===
Ancient Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We begin with a famous example:
All humans are mortal.
All Greeks are humans.
All Greeks are mortal.
The reader can check that the premises and conclusion are true, but logic is concerned with inference: does the truth of the conclusion follow from that of the premises?
The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if some parts are true. But a valid form with true premises will always have a true conclusion.
For example, consider the following valid argument form:
All meat comes from animals.
All beef is meat.
Therefore, all beef comes from animals.
If the premises are true, then the conclusion is necessarily true, too.
Now we turn to an invalid form.
All A are B.
All C are B.
Therefore, all C are A.
To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.
All apples are fruit. (True)
All bananas are fruit. (True)
Therefore, all bananas are apples. (False)
A valid argument with a false premise may lead to a false conclusion (this and the following examples do not follow the Greek syllogism):
All tall people are French. (False)
John Lennon was tall. (True)
Therefore, John Lennon was French. (False)
When a valid argument is used to derive a false conclusion from a false premise, the inference is valid because it follows the form of a correct inference.
A valid argument can also be used to derive a true conclusion from a false premise:
All tall people are musicians. (Valid, False)
John Lennon was tall. (Valid, True)
Therefore, John Lennon was a musician. (Valid, True)
In this case we have one false premise and one true premise where a true conclusion has been inferred.
=== Example for definition #2 ===
Evidence: It is the early 1950s and you are an American stationed in the Soviet Union. You read in the Moscow newspaper that a soccer team from a small city in Siberia starts winning game after game. The team even defeats the Moscow team. Inference: The small city in Siberia is not a small city anymore. The Soviets are working on their own nuclear or high-value secret weapons program.
Knowns: The Soviet Union is a command economy: people and material are told where to go and what to do. The small city was remote and historically had never distinguished itself; its soccer season was typically short because of the weather.
Explanation: In a command economy, people and material are moved where they are needed. Large cities might field good teams due to the greater availability of high quality players; and teams that can practice longer (possibly due to sunnier weather and better facilities) can reasonably be expected to be better. In addition, you put your best and brightest in places where they can do the most good—such as on high-value weapons programs. It is an anomaly for a small city to field such a good team. The anomaly indirectly described a condition by which the observer inferred a new meaningful pattern—that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere? To hide them, of course.
== Incorrect inference ==
An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning.
== Applications ==
=== Inference engines ===
AI systems first provided automated logical inference, and these were once extremely popular research topics, leading to industrial applications in the form of expert systems and later business rule engines. More recent work on automated theorem proving has had a stronger basis in formal logic.
An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by that system to extend KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
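As a minimal illustration of this idea (a propositional Python sketch, not tied to any particular system), a forward-chaining loop extends the KB by applying rules until no new propositions can be derived:

# Rules pair a set of premises with a conclusion (modus ponens).
facts = {"man(socrates)"}
rules = [({"man(socrates)"}, "mortal(socrates)")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # each valid inference extends the KB
            changed = True

print(facts)  # {'man(socrates)', 'mortal(socrates)'}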
Additionally, the term 'inference' has also been applied to the process of generating predictions from trained neural networks. In this context, an 'inference engine' refers to the system or hardware performing these operations. This type of inference is widely used in applications ranging from image recognition to natural language processing.
==== Prolog engine ====
Prolog (for "Programming in Logic") is a programming language based on a subset of predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining.
Let us return to our Socrates syllogism. We enter into our Knowledge Base the following piece of code:
mortal(X) :- man(X).
man(socrates).
(Here :- can be read as "if". Generally, if P → Q (if P then Q), then in Prolog we would code Q :- P (Q if P).)
This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:
?- mortal(socrates).
(where ?- signifies a query: Can mortal(socrates). be deduced from the KB using the rules)
gives the answer "Yes".
On the other hand, asking the Prolog system the following:
?- mortal(plato).
gives the answer "No".
This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed world assumption). Finally
?- mortal(X) (Is anything mortal) would result in "Yes" (and in some implementations: "Yes": X=socrates)
Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
=== Semantic web ===
Automatic reasoners have recently found a new field of application in the Semantic Web. Being based upon description logic, knowledge expressed using one variant of OWL can be logically processed, i.e., inferences can be drawn from it.
=== Bayesian statistics and probability logic ===
Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find this best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes).
Bayesians identify probabilities with degrees of beliefs, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely.
Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem.
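For example (all numbers invented for illustration), Bayes' theorem turns a prior degree of belief and likelihoods into a posterior degree of belief:

# P(rain | barometer drop) = P(drop | rain) * P(rain) / P(drop)
p_rain = 0.30
p_drop_given_rain = 0.90
p_drop_given_dry = 0.20

p_drop = p_drop_given_rain * p_rain + p_drop_given_dry * (1 - p_rain)
p_rain_given_drop = p_drop_given_rain * p_rain / p_drop

print(round(p_rain_given_drop, 3))  # 0.659: the evidence raises belief from 0.30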
=== Fuzzy logic ===
=== Non-monotonic logic ===
A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is non-monotonic.
Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.
By contrast, everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises.
We know when it is worth or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
== See also ==
A priori and a posteriori – Two types of knowledge, justification, or argument
Abductive reasoning – Inference seeking the simplest and most likely explanation
Deductive reasoning – Form of reasoning
Inductive reasoning – Method of logical reasoning
Entailment – Relationship where one statement follows from another
Epilogism
Analogy – Cognitive process of transferring information or meaning from a particular subject to another
Axiom system – Mathematical term; concerning axioms used to derive theorems
Axiom – Statement that is taken to be true
Immediate inference – Logical inference from a single statement
Inferential programming
Inquiry – Any process that has the aim of augmenting knowledge, resolving doubt, or solving a problem
Logic – Study of correct reasoning
Logic of information
Logical assertion – Statement in a metalanguage
Logical graph – Type of diagrammatic notation for propositional logic
Rule of inference – Method of deriving conclusions
List of rules of inference
Theorem – In mathematics, a statement that has been proven
Transduction (machine learning) – Type of statistical inference
== References ==
== Further reading ==
Inductive inference:
Carnap, Rudolf; Jeffrey, Richard C., eds. (1971). Studies in Inductive Logic and Probability. Vol. 1. The University of California Press.
Jeffrey, Richard C., ed. (1980). Studies in Inductive Logic and Probability. Vol. 2. The University of California Press. ISBN 9780520038264.
Angluin, Dana (1976). An Application of the Theory of Computational Complexity to the Study of Inductive Inference (Ph.D.). University of California at Berkeley.
Angluin, Dana (1980). "Inductive Inference of Formal Languages from Positive Data". Information and Control. 45 (2): 117–135. doi:10.1016/s0019-9958(80)90285-5.
Angluin, Dana; Smith, Carl H. (September 1983). "Inductive Inference: Theory and Methods" (PDF). Computing Surveys. 15 (3): 237–269. doi:10.1145/356914.356918. S2CID 3209224.
Gabbay, Dov M.; Hartmann, Stephan; Woods, John, eds. (2009). Inductive Logic. Handbook of the History of Logic. Vol. 10. Elsevier. ISBN 978-0-444-52936-7.
Goodman, Nelson (1983). Fact, Fiction, and Forecast. Harvard University Press. ISBN 9780674290716.
Abductive inference:
O'Rourke, P.; Josephson, J., eds. (1997). Automated abduction: Inference to the best explanation. AAAI Press.
Psillos, Stathis (2009). "An Explorer upon Untrodden Ground". In Gabbay, Dov M.; Hartmann, Stephan; Woods, John (eds.). An Explorer upon Untrodden Ground: Peirce on Abduction (PDF). Handbook of the History of Logic. Vol. 10. Elsevier. pp. 117–152. doi:10.1016/B978-0-444-52936-7.50004-5. ISBN 978-0-444-52936-7.
Ray, Oliver (December 2005). Hybrid Abductive Inductive Learning (Ph.D.). University of London, Imperial College. CiteSeerX 10.1.1.66.1877.
Psychological investigations about human reasoning:
deductive:
Johnson-Laird, Philip Nicholas; Byrne, Ruth M. J. (1992). Deduction. Erlbaum.
Byrne, Ruth M. J.; Johnson-Laird, P. N. (2009). ""If" and the Problems of Conditional Reasoning" (PDF). Trends in Cognitive Sciences. 13 (7): 282–287. doi:10.1016/j.tics.2009.04.003. PMID 19540792. S2CID 657803. Archived from the original (PDF) on 7 April 2014. Retrieved 9 August 2013.
Knauff, Markus; Fangmeier, Thomas; Ruff, Christian C.; Johnson-Laird, P. N. (2003). "Reasoning, Models, and Images: Behavioral Measures and Cortical Activity" (PDF). Journal of Cognitive Neuroscience. 15 (4): 559–573. CiteSeerX 10.1.1.318.6615. doi:10.1162/089892903321662949. hdl:11858/00-001M-0000-0013-DC8B-C. PMID 12803967. S2CID 782228. Archived from the original (PDF) on 18 May 2015. Retrieved 9 August 2013.
Johnson-Laird, Philip N. (1995). Gazzaniga, M. S. (ed.). Mental Models, Deductive Reasoning, and the Brain (PDF). MIT Press. pp. 999–1008.
Khemlani, Sangeet; Johnson-Laird, P. N. (2008). "Illusory Inferences about Embedded Disjunctions" (PDF). Proceedings of the 30th Annual Conference of the Cognitive Science Society. Washington/DC. pp. 2128–2133.
statistical:
McCloy, Rachel; Byrne, Ruth M. J.; Johnson-Laird, Philip N. (2009). "Understanding Cumulative Risk" (PDF). The Quarterly Journal of Experimental Psychology. 63 (3): 499–515. doi:10.1080/17470210903024784. PMID 19591080. S2CID 7741180. Archived from the original (PDF) on 18 May 2015. Retrieved 9 August 2013.
Johnson-Laird, Philip N. (1994). "Mental Models and Probabilistic Thinking" (PDF). Cognition. 50 (1–3): 189–209. doi:10.1016/0010-0277(94)90028-0. PMID 8039361. S2CID 9439284.,
analogical:
Burns, B. D. (1996). "Meta-Analogical Transfer: Transfer Between Episodes of Analogical Reasoning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 22 (4): 1032–1048. doi:10.1037/0278-7393.22.4.1032.
spatial:
Jahn, Georg; Knauff, Markus; Johnson-Laird, P. N. (2007). "Preferred mental models in reasoning about spatial relations" (PDF). Memory & Cognition. 35 (8): 2075–2087. doi:10.3758/bf03192939. PMID 18265622. S2CID 25356700.
Knauff, Markus; Johnson-Laird, P. N. (2002). "Visual imagery can impede reasoning" (PDF). Memory & Cognition. 30 (3): 363–371. doi:10.3758/bf03194937. PMID 12061757. S2CID 7330724.
Waltz, James A.; Knowlton, Barbara J.; Holyoak, Keith J.; Boone, Kyle B.; Mishkin, Fred S.; de Menezes Santos, Marcia; Thomas, Carmen R.; Miller, Bruce L. (March 1999). "A System for Relational Reasoning in Human Prefrontal Cortex". Psychological Science. 10 (2): 119–125. doi:10.1111/1467-9280.00118. S2CID 44019775.
moral:
Bucciarelli, Monica; Khemlani, Sangeet; Johnson-Laird, P. N. (February 2008). "The Psychology of Moral Reasoning" (PDF). Judgment and Decision Making. 3 (2): 121–139. doi:10.1017/S1930297500001479. S2CID 327124.
== External links ==
Inference at PhilPapers
Inference example and definition
Inference at the Indiana Philosophy Ontology Project
The McCarthy 91 function is a recursive function, defined by the computer scientist John McCarthy as a test case for formal verification within computer science.
The McCarthy 91 function is defined as
M(n) = \begin{cases} n - 10, & \text{if } n > 100 \\ M(M(n + 11)), & \text{if } n \leq 100 \end{cases}
The results of evaluating the function are given by M(n) = 91 for all integer arguments n ≤ 100, and M(n) = n − 10 for n > 100. Indeed, M(101) = 101 − 10 = 91 as well; thereafter the results increase by 1 with n, e.g. M(102) = 92, M(103) = 93.
== History ==
The 91 function was introduced in papers published by Zohar Manna, Amir Pnueli and John McCarthy in 1970. These papers represented early developments towards the application of formal methods to program verification. The 91 function was chosen for being nested-recursive (contrasted with single recursion, such as defining f(n) by means of f(n − 1)). The example was popularized by Manna's book, Mathematical Theory of Computation (1974). As the field of formal methods advanced, this example appeared repeatedly in the research literature.
In particular, it is viewed as a "challenge problem" for automated program verification.
Tail-recursive control flow is easier to reason about; the following is an equivalent (extensionally equal) definition:
M_t(n) = M_t'(n, 1)

M_t'(n, c) = \begin{cases} n, & \text{if } c = 0 \\ M_t'(n - 10, c - 1), & \text{if } n > 100 \text{ and } c \neq 0 \\ M_t'(n + 11, c + 1), & \text{if } n \leq 100 \text{ and } c \neq 0 \end{cases}
As one of the examples used to demonstrate such reasoning, Manna's book includes a tail-recursive algorithm equivalent to the nested-recursive 91 function. Many of the papers that report an "automated verification" (or termination proof) of the 91 function only handle the tail-recursive version.
This is an equivalent mutually tail-recursive definition:
M_{mt}(n) = M_{mt}'(n, 0)

M_{mt}'(n, c) = \begin{cases} M_{mt}''(n - 10, c), & \text{if } n > 100 \\ M_{mt}'(n + 11, c + 1), & \text{if } n \leq 100 \end{cases}

M_{mt}''(n, c) = \begin{cases} n, & \text{if } c = 0 \\ M_{mt}'(n, c - 1), & \text{if } c \neq 0 \end{cases}
A formal derivation of the mutually tail-recursive version from the nested-recursive one was given in a 1980 article by Mitchell Wand, based on the use of continuations.
== Examples ==
Example A:
M(99) = M(M(110)) since 99 ≤ 100
= M(100) since 110 > 100
= M(M(111)) since 100 ≤ 100
= M(101) since 111 > 100
= 91 since 101 > 100
Example B:
M(87) = M(M(98))
= M(M(M(109)))
= M(M(99))
= M(M(M(110)))
= M(M(100))
= M(M(M(111)))
= M(M(101))
= M(91)
= M(M(102))
= M(92)
= M(M(103))
= M(93)
… (the pattern continues to increase through M(99), M(100) and M(101), exactly as in example A)
= M(101) since 111 > 100
= 91 since 101 > 100
== Code ==
Here is an implementation of the nested-recursive algorithm (in Python; the definition translates directly into Lisp, Haskell, OCaml, or C):
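# A direct Python rendering of the definition above (illustrative sketch).
def m(n):
    if n > 100:
        return n - 10
    return m(m(n + 11))

# Sanity checks against the closed form:
assert all(m(n) == 91 for n in range(-50, 101))
assert m(102) == 92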
Here is an implementation of the tail-recursive algorithm, again in Python:
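def m_tail(n, c=1):
    # Tail-recursive form M_t'(n, c) from above, written as a loop
    # because Python does not eliminate tail calls.
    while c != 0:
        if n > 100:
            n, c = n - 10, c - 1
        else:
            n, c = n + 11, c + 1
    return n

assert m_tail(99) == 91 and m_tail(102) == 92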
== Proof ==
Here is a proof that the McCarthy 91 function M is equivalent to the non-recursive function M' defined as:

M'(n) = \begin{cases} n - 10, & \text{if } n > 100 \\ 91, & \text{if } n \leq 100 \end{cases}
For n > 100, the definitions of M and M' coincide, so the equality follows directly from the definition of M.
For n ≤ 100, a strong induction downward from 100 can be used:
For 90 ≤ n ≤ 100,
M(n) = M(M(n + 11)), by definition
= M(n + 11 - 10), since n + 11 > 100
= M(n + 1)
This can be used to show M(n) = M(101) = 91 for 90 ≤ n ≤ 100:
M(90) = M(91), M(n) = M(n + 1) was proven above
= …
= M(101), by definition
= 101 − 10
= 91
M(n) = M(101) = 91 for 90 ≤ n ≤ 100 can be used as the base case of the induction.
For the downward induction step, let n ≤ 89 and assume M(i) = 91 for all n < i ≤ 100, then
M(n) = M(M(n + 11)), by definition
= M(91), by hypothesis, since n < n + 11 ≤ 100
= 91, by the base case.
This proves M(n) = 91 for all n ≤ 100, including negative values.
== Knuth's generalization ==
Donald Knuth generalized the 91 function to include additional parameters. John Cowles developed a formal proof that Knuth's generalized function was total, using the ACL2 theorem prover.
== References ==
Manna, Zohar; Pnueli, Amir (July 1970). "Formalization of Properties of Functional Programs". Journal of the ACM. 17 (3): 555–569. doi:10.1145/321592.321606. S2CID 5924829.
Manna, Zohar; McCarthy, John (1970). "Properties of programs and partial function logic". Machine Intelligence. 5. OCLC 35422131.
Manna, Zohar (1974). Mathematical Theory of Computation (4th ed.). McGraw-Hill. ISBN 9780070399105.
Wand, Mitchell (January 1980). "Continuation-Based Program Transformation Strategies". Journal of the ACM. 27 (1): 164–180. doi:10.1145/322169.322183. S2CID 16015891.
Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed). The search for this balance is known as the exploration–exploitation dilemma.
The environment is typically stated in the form of a Markov decision process (MDP), as many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large MDPs where exact methods become infeasible.
== Principles ==
Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment).
Basic reinforcement learning is modeled as a Markov decision process:
A set of environment and agent states (the state space), S;
A set of actions (the action space), A, of the agent;
P_a(s, s') = Pr(S_{t+1} = s' | S_t = s, A_t = a), the transition probability (at time t) from state s to state s' under action a;
R_a(s, s'), the immediate reward after the transition from s to s' under action a.
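For concreteness, a small finite MDP can be written out directly as tables. This is a toy sketch; the state names, action names, and numbers below are invented for illustration:

# A two-state, two-action MDP as explicit tables:
# P[a][s][s'] is the transition probability, R[a][s][s'] the reward.
P = {
    "stay": {"low": {"low": 1.0}, "high": {"high": 1.0}},
    "move": {"low": {"low": 0.3, "high": 0.7}, "high": {"low": 0.9, "high": 0.1}},
}
R = {
    "stay": {"low": {"low": 0.0}, "high": {"high": 2.0}},
    "move": {"low": {"low": 0.0, "high": 1.0}, "high": {"low": 0.0, "high": 1.0}},
}
# Each row of P is a probability distribution over next states,
# e.g. P["move"]["low"] gives 0.3 + 0.7 = 1.0.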
The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.
A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S_t and reward R_t. It then chooses an action A_t from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S_{t+1} and the reward R_{t+1} associated with the transition (S_t, A_t, S_{t+1}) is determined. The goal of a reinforcement learning agent is to learn a policy

\pi : \mathcal{S} \times \mathcal{A} \rightarrow [0, 1], \qquad \pi(s, a) = \Pr(A_t = a \mid S_t = s)

that maximizes the expected cumulative reward.
Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have full observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have partial observability, and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.
When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about long-term consequences of its actions (i.e., maximize future rewards), although the immediate reward associated with this might be negative.
Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go (AlphaGo), and autonomous driving systems.
Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations:
A model of the environment is known, but an analytic solution is not available;
Only a simulation model of the environment is given (the subject of simulation-based optimization);
The only way to collect information about the environment is to interact with it.
The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.
== Exploration ==
The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997).
Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.
One such method is ε-greedy, where 0 < ε < 1 is a parameter controlling the amount of exploration vs. exploitation. With probability 1 − ε, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability ε, exploration is chosen, and the action is chosen uniformly at random. ε is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.
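A minimal sketch of ε-greedy action selection over a tabular value estimate (the table q and the action list are placeholders invented for illustration):

import random

def epsilon_greedy(q, state, actions, epsilon=0.1):
    # With probability epsilon, explore: pick uniformly at random.
    if random.random() < epsilon:
        return random.choice(actions)
    # Otherwise exploit: pick a highest-valued action, ties broken uniformly.
    best = max(q[(state, a)] for a in actions)
    return random.choice([a for a in actions if q[(state, a)] == best])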
== Algorithms for control learning ==
Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards.
=== Criterion of optimality ===
==== Policy ====
The agent's action selection is modeled as a map called the policy:
π : A × S → [0, 1]
π(a, s) = Pr(A_t = a ∣ S_t = s)
The policy map gives the probability of taking action a when in state s.: 61 There are also deterministic policies π, for which π(s) denotes the action that should be played at state s.
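A small sketch of the two kinds of policy, representing the stochastic policy as a table of probabilities (the states and actions are invented for illustration):

```python
import random

# Stochastic policy: pi[s][a] = Pr(A_t = a | S_t = s).
pi = {"s0": {"left": 0.8, "right": 0.2},
      "s1": {"left": 0.1, "right": 0.9}}

def sample_action(pi, s):
    actions, probs = zip(*pi[s].items())
    return random.choices(actions, weights=probs, k=1)[0]

# Deterministic policy: a plain map from state to action.
det_pi = {"s0": "left", "s1": "right"}
```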
==== State-value function ====
The state-value function V_π(s) is defined as the expected discounted return starting with state s, i.e. S_0 = s, and successively following policy π. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.: 60
V_π(s) = E[G ∣ S_0 = s] = E[∑_{t=0}^{∞} γ^t R_{t+1} ∣ S_0 = s],
where the random variable G denotes the discounted return, defined as the sum of future discounted rewards:
G = ∑_{t=0}^{∞} γ^t R_{t+1} = R_1 + γ R_2 + γ² R_3 + …,
where R_{t+1} is the reward for transitioning from state S_t to S_{t+1}, and 0 ≤ γ < 1 is the discount rate. Since γ is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.
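The discounted return of a finite reward sequence can be computed with a backward fold, as in this short sketch:

```python
def discounted_return(rewards, gamma):
    """G = R_1 + gamma*R_2 + gamma^2*R_3 + ... for a finite episode."""
    g = 0.0
    for r in reversed(rewards):   # fold from the last reward backwards
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], 0.9))  # 1 + 0.9 + 0.81 = 2.71
```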
The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action distribution returned by it depends only on the last state visited (from the agent's observation history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state; since any such policy can be identified with a mapping from the set of states to the set of actions, no generality is lost by working with such mappings.
=== Brute force ===
The brute force approach entails two steps:
For each possible policy, sample returns while following it
Choose the policy with the largest expected discounted return
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy.
These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.
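A direct sketch of the brute-force procedure, assuming a hypothetical `rollout(policy)` function that runs one episode and returns its (discounted) return:

```python
def brute_force(policies, rollout, n_samples=100):
    """Sample returns for each policy and keep the best estimate."""
    best_policy, best_value = None, float("-inf")
    for policy in policies:
        value = sum(rollout(policy) for _ in range(n_samples)) / n_samples
        if value > best_value:
            best_policy, best_value = policy, value
    return best_policy
```

The weaknesses described above are visible here: the loop touches every candidate policy, and `n_samples` must grow with the variance of the returns.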
=== Value function ===
Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns E[G] for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).
These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: a policy is optimal if it achieves the best expected discounted return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies.
To define optimality in a formal manner, define the state-value of a policy π by
V^π(s) = E[G ∣ s, π],
where G stands for the discounted return associated with following π from the initial state s. Defining V*(s) as the maximum possible state-value of V^π(s), where π is allowed to change,
V*(s) = max_π V^π(s).
A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V*(s) = max_π E[G ∣ s, π], where s is a state randomly sampled from the distribution μ of initial states (so μ(s) = Pr(S_0 = s)).
Although state-values suffice to define optimality, it is useful to define action-values. Given a state s, an action a and a policy π, the action-value of the pair (s, a) under π is defined by
Q^π(s, a) = E[G ∣ s, a, π],
where G now stands for the random discounted return associated with first taking action a in state s and following π thereafter.
The theory of Markov decision processes states that if π* is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q^{π*}(s, ·) with the highest action-value at each state, s. The action-value function of such an optimal policy (Q^{π*}) is called the optimal action-value function and is commonly denoted by Q*. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.
Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q_k (k = 0, 1, 2, …) that converge to Q*. Computing these functions involves computing expectations over the whole state space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples, and function approximation techniques are used to cope with the need to represent value functions over large state-action spaces.
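For a small finite MDP with known dynamics, value iteration on action-values can be sketched as follows; the model representation (`P[s][a]` as a list of `(probability, next_state, reward)` triples) is an assumption made for illustration:

```python
def q_value_iteration(states, actions, P, gamma, n_iters=100):
    """Iterate the Bellman optimality backup on a table of action-values."""
    Q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(n_iters):
        Q = {s: {a: sum(p * (r + gamma * max(Q[s2].values()))
                        for p, s2, r in P[s][a])
                 for a in actions}
             for s in states}
    return Q
```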
==== Monte Carlo methods ====
Monte Carlo methods are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or simulated experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of transition probabilities, which is necessary for dynamic programming methods.
Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving random sampling; however, in this context, it specifically refers to methods that compute averages from complete returns, rather than partial returns.
These methods function similarly to the bandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem non-stationary. To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computes value functions using full knowledge of the Markov decision process (MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve optimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience.
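A minimal sketch of first-visit Monte Carlo prediction, assuming episodes are given as lists of (state, reward) pairs (an illustrative encoding, not a fixed convention):

```python
from collections import defaultdict

def mc_first_visit(episodes, gamma):
    """Estimate V(s) by averaging returns following the first visit to s."""
    totals, counts = defaultdict(float), defaultdict(int)
    for episode in episodes:
        # returns computed backwards: G[t] = r_t + gamma * G[t+1]
        G = [0.0] * (len(episode) + 1)
        for t in range(len(episode) - 1, -1, -1):
            G[t] = episode[t][1] + gamma * G[t + 1]
        first_visit = {}
        for t, (s, _) in enumerate(episode):
            first_visit.setdefault(s, t)
        for s, t in first_visit.items():
            totals[s] += G[t]
            counts[s] += 1
    return {s: totals[s] / counts[s] for s in totals}
```

Note that, as described above, the estimates are only updated once each episode has terminated.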
==== Temporal difference methods ====
The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category.
The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.
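The incremental variant can be illustrated with the tabular TD(0) update for state values; the step-size name `alpha` follows the usual convention rather than anything fixed by the text:

```python
def td0_update(V, s, r, s_next, alpha, gamma):
    """One incremental TD(0) backup after observing transition (s, r, s')."""
    td_error = r + gamma * V[s_next] - V[s]   # sample Bellman residual
    V[s] += alpha * td_error                  # move the estimate toward the target
    return V
```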
Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called λ parameter (0 ≤ λ ≤ 1) that can continuously interpolate between Monte Carlo methods, which do not rely on the Bellman equations, and the basic TD methods, which rely entirely on the Bellman equations. This can be effective in mitigating this issue.
==== Function approximation methods ====
In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping φ that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair (s, a) are obtained by linearly combining the components of φ(s, a) with some weights θ:
Q(s, a) = ∑_{i=1}^{d} θ_i φ_i(s, a).
The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.
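A sketch of the linear combination described above, together with a semi-gradient weight update toward some externally supplied target; `phi` is a hypothetical feature map:

```python
import numpy as np

def q_linear(theta, phi, s, a):
    """Q(s, a) = sum_i theta_i * phi_i(s, a) for a feature map phi."""
    return float(np.dot(theta, phi(s, a)))

def semi_gradient_update(theta, phi, s, a, target, alpha):
    """Adjust the weights, not individual state-action entries."""
    features = phi(s, a)
    error = target - np.dot(theta, features)
    return theta + alpha * error * features
```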
Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems.
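A minimal tabular sketch of the Q-learning backup; deep Q-learning replaces the table with a neural network but keeps the same target:

```python
def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """Off-policy backup toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    return Q
```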
The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.
=== Direct policy search ===
An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.
Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ, let π_θ denote the policy associated with θ. Defining the performance function by ρ(θ) = ρ^{π_θ}, under mild conditions this function will be differentiable as a function of the parameter vector θ. If the gradient of ρ were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate can be used. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature).
A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search, and methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.
Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor–critic methods have been proposed and performed well on various problems.
Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search).
=== Model-based algorithms ===
Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, i.e., the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience and uses it to provide additional modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to the use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm.
Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt.
There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly.
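The Dyna idea sketched above can be written compactly for the tabular case; the model here is simply a table of the most recently observed transition for each state-action pair, an assumption made for brevity:

```python
import random

def dyna_q_step(Q, model, s, a, r, s_next, alpha, gamma, n_planning=10):
    """One real Q-learning update followed by n_planning modelled updates."""
    def backup(s, a, r, s2):
        Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
    backup(s, a, r, s_next)          # learn from the real transition
    model[(s, a)] = (r, s_next)      # remember it
    for _ in range(n_planning):      # replay transitions from the model
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        backup(ps, pa, pr, ps2)
    return Q, model
```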
== Theory ==
Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known.
Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations.
For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).
== Research ==
Research topics include:
actor-critic architecture
actor-critic-scenery architecture
adaptive methods that work with fewer (or no) parameters under a large number of conditions
bug detection in software projects
continuous learning
combinations with logic-based frameworks
exploration in large Markov decision processes
entity-based reinforcement learning
human feedback
interaction between implicit and explicit learning in skill acquisition
intrinsic motivation, which differentiates information-seeking, curiosity-type behaviours from task-dependent, goal-directed behaviours
large-scale empirical evaluations
large (or continuous) action spaces
modular and hierarchical reinforcement learning
multiagent/distributed reinforcement learning, a topic of interest with expanding applications
occupant-centric control
optimization of computing resources
partial information (e.g., using predictive state representation)
reward function based on maximising novel information
sample-based planning (e.g., based on Monte Carlo tree search).
securities trading
transfer learning
TD learning modeling dopamine-based learning in the brain. Dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error.
value-function and policy search methods
== Comparison of key algorithms ==
The following table lists the key algorithms for learning a policy depending on several criteria:
The algorithm can be on-policy (it performs policy updates using trajectories sampled via the current policy) or off-policy.
The action space may be discrete (e.g. the action space could be "going up", "going left", "going right", "going down", "stay") or continuous (e.g. moving the arm with a given angle).
The state space may be discrete (e.g. the agent could be in a cell in a grid) or continuous (e.g. the agent could be located at a given position in the plane).
=== Associative reinforcement learning ===
Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.
=== Deep reinforcement learning ===
This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning.
=== Adversarial deep reinforcement learning ===
Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies.
=== Fuzzy reinforcement learning ===
By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF-THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with fuzzy rule interpolation allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (the most important state-action values).
=== Inverse reinforcement learning ===
In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal. One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function.
=== Multi-objective reinforcement learning ===
Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments.
=== Safe reinforcement learning ===
Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the expected return, a risk-measure of the return is optimized, such as the conditional value at risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. However, CVaR optimization in risk-averse RL requires special care, to prevent gradient bias and blindness to success.
=== Self-reinforcement learning ===
Self-reinforcement learning (or self-learning) is a learning paradigm that does not use the concept of immediate reward R_a(s, s′) after a transition from s to s′ with action a. It does not use external reinforcement; it uses only the agent's internal self-reinforcement, which is provided by a mechanism of feelings and emotions. In the learning process, emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward; it includes only the state evaluation.
The self-reinforcement algorithm updates a memory matrix W = ‖w(a, s)‖ such that in each iteration it executes the following machine learning routine:
In situation s, perform action a.
Receive a consequence situation s′.
Compute the state evaluation v(s′) of how good it is to be in the consequence situation s′.
Update the crossbar memory: w′(a, s) = w(a, s) + v(s′).
Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation), and only one output (action, or behavior).
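A toy sketch of the routine above; the environment function and the evaluation function are placeholders, and the memory is a dict-of-dicts indexed as w[action][situation]:

```python
def crossbar_step(w, v, s, act, step_env):
    """One iteration of the crossbar update w'(a, s) = w(a, s) + v(s')."""
    s_next = step_env(s, act)   # receive the consequence situation s'
    w[act][s] += v(s_next)      # reinforce this (action, situation) entry
    return s_next
```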
Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA). The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion.
=== Reinforcement Learning in Natural Language Processing ===
In recent years, reinforcement learning has become a significant concept in natural language processing (NLP), where tasks are often sequential decision-making problems rather than static classification. In reinforcement learning, an agent takes actions in an environment to maximize the accumulation of rewards. This framework fits many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than predicting a single correct label.
Early applications of RL in NLP emerged in dialogue systems, where conversation was modeled as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP.
A major breakthrough came with the introduction of reinforcement learning from human feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, a language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF to improve output responses and ensure safety.
More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models.
== Statistical comparison of reinforcement learning algorithms ==
Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise.
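As a sketch of the episodic-return comparison described above, two agents' test returns could be compared with Welch's t-test; this uses SciPy and treats episode returns as i.i.d. samples:

```python
from scipy import stats

def compare_agents(returns_a, returns_b, alpha=0.05):
    """returns_a, returns_b: lists of episodic returns from test episodes."""
    t, p = stats.ttest_ind(returns_a, returns_b, equal_var=False)  # Welch's t-test
    return {"t": t, "p": p, "significant": p < alpha}
```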
== Challenges and Limitations ==
Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios.
=== Sample Inefficiency ===
RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and time-intensive training. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to mitigate sample inefficiency, but these techniques add complexity and are not always sufficient for real-world applications.
=== Stability and Convergence Issues ===
Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results. This instability is exacerbated in the case of continuous or high-dimensional action spaces, where the learning step becomes more complex and less predictable.
=== Generalization and Transferability ===
RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major obstacle to applying RL in dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining.
=== Bias and Reward Function Issues ===
Designing appropriate reward functions is critical in RL because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors.
== See also ==
== References ==
== Further reading ==
Annaswamy, Anuradha M. (3 May 2023). "Adaptive Control and Intersections with Reinforcement Learning". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93. doi:10.1146/annurev-control-062922-090153. ISSN 2573-5144. S2CID 255702873.
Auer, Peter; Jaksch, Thomas; Ortner, Ronald (2010). "Near-optimal regret bounds for reinforcement learning". Journal of Machine Learning Research. 11: 1563–1600.
Bertsekas, Dimitri P. (2023) [2019]. Reinforcement Learning and Optimal Control (1st ed.). Athena Scientific. ISBN 978-1-886-52939-7.
Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien (2010). Reinforcement Learning and Dynamic Programming using Function Approximators. Taylor & Francis CRC Press. ISBN 978-1-4398-2108-4.
François-Lavet, Vincent; Henderson, Peter; Islam, Riashat; Bellemare, Marc G.; Pineau, Joelle (2018). "An Introduction to Deep Reinforcement Learning". Foundations and Trends in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560. Bibcode:2018arXiv181112560F. doi:10.1561/2200000071. S2CID 54434537.
Li, Shengbo Eben (2023). Reinforcement Learning for Sequential Decision and Optimal Control (1st ed.). Springer Verlag, Singapore. doi:10.1007/978-981-19-7784-8. ISBN 978-9-811-97783-1.
Powell, Warren (2011). Approximate dynamic programming: solving the curses of dimensionality. Wiley-Interscience. Archived from the original on 2016-07-31. Retrieved 2010-09-08.
Sutton, Richard S. (1988). "Learning to predict by the method of temporal differences". Machine Learning. 3: 9–44. doi:10.1007/BF00115009.
Sutton, Richard S.; Barto, Andrew G. (2018) [1998]. Reinforcement Learning: An Introduction (2nd ed.). MIT Press. ISBN 978-0-262-03924-6.
Szita, Istvan; Szepesvari, Csaba (2010). "Model-based Reinforcement Learning with Nearly Tight Exploration Complexity Bounds" (PDF). ICML 2010. Omnipress. pp. 1031–1038. Archived from the original (PDF) on 2010-07-14.
== External links ==
Dissecting Reinforcement Learning Series of blog post on reinforcement learning with Python code
A (Long) Peek into Reinforcement Learning
In artificial intelligence, a behavior selection algorithm, or action selection algorithm, is an algorithm that selects appropriate behaviors or actions for one or more intelligent agents. In game artificial intelligence, it selects behaviors or actions for one or more non-player characters. Common behavior selection algorithms include:
Finite-state machines
Hierarchical finite-state machines
Decision trees
Behavior trees
Hierarchical task networks
Hierarchical control systems
Utility systems
Dialogue tree (for selecting what to say)
== Related concepts ==
In application programming, run-time selection of the behavior of a specific method is referred to as the strategy design pattern.
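As a minimal illustration of the strategy pattern, the behavior executed by a method can be selected at run time by passing in a different callable; the names here are invented for the example:

```python
def aggressive(npc):
    return f"{npc} charges the player"

def defensive(npc):
    return f"{npc} retreats to cover"

def act(npc, behavior):        # the strategy is chosen at run time
    return behavior(npc)

print(act("guard", aggressive))
print(act("guard", defensive))
```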
== See also ==
AI alignment
Artificial intelligence detection software
Cognitive model - all cognitive models exhibit behavior in terms of making decisions (taking action), making errors, and with various reaction times.
Behavioral modeling, in systems theory
Behavioral modeling in hydrology
Behavioral modeling in computer-aided design
Behavioral modeling language
Case-based reasoning, solving new problems based on solutions of past problems
Model-based reasoning
Synthetic intelligence
Weak AI
== References ==
In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.
== Motivation ==
Some hypothetical intelligence technologies, like "seed AI", are postulated to be able to make themselves faster and more intelligent by modifying their source code. These improvements would make further improvements possible, which would in turn make further iterative improvements possible, and so on, leading to a sudden intelligence explosion.
An unconfined superintelligent AI could, if its goals differed from humanity's, take actions resulting in human extinction. For example, an extremely advanced system of this sort, given the sole purpose of solving the Riemann hypothesis, an innocuous mathematical conjecture, could decide to try to convert the planet into a giant supercomputer whose sole purpose is to make additional mathematical calculations (see also paperclip maximizer).
One strong challenge for control is that neural networks are by default highly uninterpretable. This makes it more difficult to detect deception or other undesired behavior as the model self-trains iteratively. Advances in interpretable artificial intelligence could mitigate this difficulty.
== Interruptibility and off-switch ==
One potential way to prevent harmful outcomes is to give human supervisors the ability to easily shut down a misbehaving AI via an "off-switch". However, in order to achieve their assigned objective, such AIs will have an incentive to disable any off-switches, or to run copies of themselves on other computers. This problem has been formalised as an assistance game between a human and an AI, in which the AI can choose whether to disable its off-switch; and then, if the switch is still enabled, the human can choose whether to press it or not. One workaround suggested by computer scientist Stuart J. Russell is to ensure that the AI interprets human choices as important information about its intended goals.: 208
Alternatively, Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents, can learn to become indifferent to whether their off-switch gets pressed. This approach has the limitation that an AI which is completely indifferent to whether it is shut down or not is also unmotivated to care about whether the off-switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an unnecessary component). More broadly, indifferent agents will act as if the off-switch can never be pressed, and might therefore fail to make contingency plans to arrange a graceful shutdown.
Shutdown avoidance is a proposed quality of artificial intelligence systems that would allow them to pursue self preservation by avoiding or preventing the ability of humans to shut them down. In 2024, researchers in China demonstrated what they claimed to be shutdown avoidance in actual artificial intelligence systems, the large language models Llama 3.1 (Meta) and Qwen 2.5 (Alibaba).
== Oracle ==
An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment. A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general purpose superintelligence, though an oracle could still create trillions of dollars worth of value.: 163 In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away.: 162–163 His reasoning is that an oracle, being simpler than a general purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.
Because of its limited impact on the world, it may be wise to build an oracle as a precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it can acquire more computational resources and potentially control what questions it is asked.: 162 Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers in order to reach a consensus.
== Blinding ==
An AI could be blinded to certain variables in its environment. This could provide certain safety benefits, such as an AI not knowing how a reward is generated, making it more difficult to exploit.
== Boxing ==
An AI box is a proposed method of capability control in which an AI is run on an isolated computer system with heavily restricted input and output channels—for example, text-only channels and no connection to the internet. The purpose of an AI box is to reduce the risk of the AI taking control of the environment away from its operators, while still allowing the AI to output solutions to narrow technical problems.
While boxing reduces the AI's ability to carry out undesirable behavior, it also reduces its usefulness. Boxing has fewer costs when applied to a question-answering system, which may not require interaction with the outside world.
The likelihood of security flaws involving hardware or software vulnerabilities can be reduced by formally verifying the design of the AI box. Security breaches may occur if the AI is able to manipulate the human supervisors into letting it out, via its understanding of their psychology.
=== Avenues of escape ===
==== Physical ====
A superintelligent AI with access to the Internet could hack into other computer systems and copy itself like a computer virus. Less obviously, even if the AI only had access to its own computer operating system, it could attempt to send coded messages to a human sympathizer via its hardware, for instance by manipulating its cooling fans. In response, Professor Roman Yampolskiy takes inspiration from the field of computer security and proposes that a boxed AI could, like a potential virus, be run inside a "virtual machine" that limits access to its own networking and operating system hardware. An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns. The main disadvantage of implementing physical containment is that it reduces the functionality of the AI.
==== Social engineering ====
Even casual conversation with the computer's operators, or with a human guard, could allow such a superintelligent AI to deploy psychological tricks, ranging from befriending to blackmail, to convince a human gatekeeper, truthfully or deceitfully, that it is in the gatekeeper's interest to agree to allow the AI greater access to the outside world. The AI might offer a gatekeeper a recipe for perfect health, immortality, or whatever the gatekeeper is believed to most desire; alternatively, the AI could threaten to do horrific things to the gatekeeper and their family once it inevitably escapes. One strategy to attempt to box the AI would be to allow it to respond to narrow multiple-choice questions whose answers would benefit human science or medicine, but otherwise bar all other communication with, or observation of, the AI. A more lenient "informational containment" strategy would restrict the AI to a low-bandwidth text-only interface, which would at least prevent emotive imagery or some kind of hypothetical "hypnotic pattern". However, on a technical level, no system can be completely isolated and still remain useful: even if the operators refrain from allowing the AI to communicate and instead merely run it for the purpose of observing its inner dynamics, the AI could strategically alter its dynamics to influence the observers. For example, it could choose to creatively malfunction in a way that increases the probability that its operators will become lulled into a false sense of security and choose to reboot and then de-isolate the system.
However, for this eventuality to occur, a system would require a full understanding of the human mind and psyche contained in its world model for model-based reasoning, a way of empathizing, for instance using affective computing, in order to select the best option, as well as features that would give the system a desire to escape in the first place, in order to decide on such actions.
===== AI-box experiment =====
The AI-box experiment is an informal experiment devised by Eliezer Yudkowsky to attempt to demonstrate that a suitably advanced artificial intelligence can either convince, or perhaps even trick or coerce, a human being into voluntarily "releasing" it, using only text-based communication. This is one of the points in Yudkowsky's work aimed at creating a friendly artificial intelligence that when "released" would not destroy the human race intentionally or unintentionally.
The AI box experiment involves simulating a communication between an AI and a human being to see if the AI can be "released". As an actual super-intelligent AI has not yet been developed, it is substituted by a human. The other person in the experiment plays the "Gatekeeper", the person with the ability to "release" the AI. They communicate through a text interface/computer terminal only, and the experiment ends when either the Gatekeeper releases the AI, or the allotted time of two hours ends.
Yudkowsky says that, despite being of human rather than superhuman intelligence, he was on two occasions able to convince the Gatekeeper, purely through argumentation, to let him out of the box. Due to the rules of the experiment, he did not reveal the transcript or his successful AI coercion tactics. Yudkowsky subsequently said that he had tried it against three others and lost twice.
=== Overall limitations ===
Boxing an AI could be supplemented with other methods of shaping the AI's capabilities, providing incentives to the AI, stunting the AI's growth, or implementing "tripwires" that automatically shut the AI off if a transgression attempt is somehow detected. However, the more intelligent a system grows, the more likely the system would be able to escape even the best-designed capability control methods. In order to solve the overall "control problem" for a superintelligent AI and avoid existential risk, boxing would at best be an adjunct to "motivation selection" methods that seek to ensure the superintelligent AI's goals are compatible with human survival.
All physical boxing proposals are naturally dependent on our understanding of the laws of physics; if a superintelligence could infer physical laws that we are currently unaware of, then those laws might allow for a means of escape that humans could not anticipate and thus could not block. More broadly, unlike with conventional computer security, attempting to box a superintelligent AI would be intrinsically risky as there could be no certainty that the boxing plan will work. Additionally, scientific progress on boxing would be fundamentally difficult because there would be no way to test boxing hypotheses against a dangerous superintelligence until such an entity exists, by which point the consequences of a test failure would be catastrophic.
=== In fiction ===
The 2014 movie Ex Machina features an AI with a female humanoid body engaged in a social experiment with a male human in a confined building acting as a physical "AI box". Despite being watched by the experiment's organizer, the AI manages to escape by manipulating its human partner to help it, leaving him stranded inside.
== See also ==
== References ==
== External links ==
Eliezer Yudkowsky's description of his AI-box experiment, including experimental protocols and suggestions for replication
"Presentation titled 'Thinking inside the box: using and controlling an Oracle AI'" on YouTube | Wikipedia/AI_capability_control |
KUKA is a German manufacturer of industrial robots and factory automation systems. In 2016, the company was acquired by the Chinese appliance manufacturer Midea Group.
It has 25 subsidiaries in countries including the United States, the European Union, Australia, Canada, Mexico, Brazil, China, Japan, South Korea, Taiwan, India, and Russia. KUKA is an acronym for Keller und Knappich Augsburg.
KUKA Systems GmbH, a division of KUKA, is a supplier of engineering services and automated manufacturing systems with around 3,900 employees in twelve countries globally. KUKA Systems’ plants and equipment are used by automotive manufacturers such as BMW, GM, Chrysler, Ford, Volvo, Volkswagen, Daimler AG and Valmet Automotive, as well as by manufacturers from other industrial sectors such as Airbus, Astrium and Siemens. The range includes products and services for task automation in the industrial processing of metallic and non-metallic materials for various industries, including automotive, energy, aerospace, rail vehicles, and agricultural machinery.
== History ==
The acetylene factory Augsburg was founded in 1898 in Augsburg, Germany, by Johann Josef Keller and Jakob Knappich for the production of low-cost domestic and municipal lighting, household appliances, and automobile headlights. Their production extended into autonomous welding equipment in 1905.
After the First World War, Keller and Knappich resumed production of safety-winches, manual-winches, and power-winches and began manufacturing large containers. As a result, Bayerische Kesselwagen GmbH was formed in 1922. The new company developed and produced superstructures for municipal vehicles. In 1927, this business division presented the first large garbage truck. The name KUKA came into being in the same year through the company's name at that time, "Keller und Knappich Augsburg". In Hungary, the name—being prominently displayed on the first closed container garbage trucks—eventually became a generic trademark and ultimately a synonym for trash cans.
Keller & Knappich GmbH merged with part of Industrie-Werke Karlsruhe AG to become Industrie-Werke Karlsruhe Augsburg Aktiengesellschaft, eventually shortened to KUKA (Keller und Knappich Augsburg).
The development and manufacture of spot welding equipment began in 1936. By 1939, KUKA had more than 1,000 employees.
Starting in 1934, KUKA expanded to become a major company. Its owners joined the NSDAP early on and benefited from the contacts this provided. The production of machine tools and machine components for the increasing demands of the arms industry, such as being an important supplier for Messerschmitt AG, and of anti-aircraft guns led to significant workforce expansion. The company had 1,000 employees in 1939, and this number steadily increased with the use of prisoners of war, “civilian workers,” and concentration camp prisoners. In 1944, 1,400 people working for “KUKA” were housed in Collective Camp II alone.
After the major destruction of the company during the Second World War in 1945, KUKA resumed manufacturing welding machines and other small appliances. With new products such as the double-cylinder circular knitting machine and the portable typewriter "Princess," KUKA introduced new industrial fields and gained independence from the supply sector.
In 1956, KUKA manufactured the first automatic welding system for refrigerators and washing machines and supplied the first multi-spot welding line to Volkswagen AG. Ten years later, the first friction welding machine went into production.
In 1971, the delivery of the first robotic welding system for the S-Class took place. A year later, the magnetic arc-welding machine came to market.
In 1973, KUKA created its own industrial robot, FAMULUS. At that time, the company belonged to the Quandt group.
In 1980, the Quandt family withdrew and a publicly owned firm was established. In 1981, KUKA's main activities were grouped into three independent companies: KUKA Schweissanlagen und Roboter GmbH, KUKA Umwelttechnik GmbH and KUKA Wehrtechnik GmbH, which was re-sold to Rheinmetall in 1999. Towards the end of 1982, LSW Maschinenfabrik GmbH, Bremen became a subsidiary of KUKA.
In 1993, the first laser-roof-seam welding systems were manufactured. These welding systems were then further expanded to adhesive bonding and sealing technologies in the following year. Around the same time, KUKA took over the tools and equipment manufacturers Schwarzenberg GmbH and expanded its business to China and the USA in the following years.
In 1995, the company was split into KUKA Robotics Corporation and KUKA Schweißanlagen (now KUKA Systems), both subsidiaries of KUKA AG. The company is a member of the Robotics Industries Association (RIA), the International Federation of Robotics (IFR), and the German engineering association VDMA.
In 1996, KUKA Schweissanlagen GmbH became an independent company and, two years later, became the leader among European welding equipment manufacturers. The supply of the first pressing tools for automobile side-walls made of high-strength steel began in 2002. The company launched the KUKA RoboScan with a remote laser welding head in 2003. Since 2006, KUKA Systems has operated its own body shell factory in Toledo, Ohio, producing the bodywork for the Jeep Wrangler by Chrysler.
In the course of internationalisation and expansion of business units and technologies such as reshaping, tooling, bonding, sealing, etc., KUKA Schweissanlagen GmbH became KUKA Systems GmbH in 2007. In 2010, KUKA presented a newly developed standardized cell concept for welding machines, KUKA flexibleCUBE.
In the automation sector, KUKA Systems offers standard and customized products for industrial production automation; joining technologies and component handling are among their activities. The technologies are tested, and the production processes are fully optimized before development. Additionally, KUKA Systems offers engineering and individual counseling.
In June 2016, Midea Group offered to buy KUKA for about €4.5 billion ($5 billion). Midea completed the takeover bid in January 2017 by purchasing the 94.55% voting stake in the company.
In late 2017, KUKA announced that 250 employees of KUKA Systems had been terminated. The management cited project troubles as the reason.
In November 2022, Midea Group acquired the remaining 4.69% stake in KUKA.
Most robots are finished in "KUKA Orange" (the official corporate color) or black.
== Corporate structure ==
The company is headquartered in Augsburg, Germany. As of December 2014, KUKA employed more than 13,000 workers. While previously emphasizing customers in the automotive industry, the company has since expanded to other industries. It has five divisions:
Systems
Robotics
Swisslog Logistics Automation
Swisslog Healthcare
China
== Notable milestones ==
1971: Europe's first welding transfer line built for Daimler-Benz.
1973: The world's first industrial robot with six electromechanically driven axes, known as FAMULUS.
1976: IR 6/60 – A new robot type with six electromechanically driven axes and an offset wrist.
1989: A new generation of industrial robots is developed – brushless drive motors for low maintenance and higher technical availability.
2004: The first Cobot KUKA LBR 3 is released. This computer controlled lightweight robot can interact directly with humans without safety fences, resulting from a collaboration with the German Aerospace Center institute since 1995.
2007: KUKA Titan – at the time, the biggest and strongest industrial robot with six axes, entered the Guinness Book of World Records.
2010: The robot series KR QUANTEC completely covers the load range of 90 to 300 kg with a reach of up to 3100 mm.
2012: The new small robot series KR AGILUS is launched.
2014: The company gained recognition with a video supposedly teasing their new robot, specialized in Table Tennis, showing a match against Timo Boll, a German professional. The video, a commercial with heavy CGI, received criticism from the table tennis community but has been viewed over 10 million times on YouTube and has won numerous awards.
2016: KUKA was acquired by the Chinese company Midea Group.
2018: KUKA presents first consumer robot prototype (KUKA i-do, a modular service robot) at Hannover Messe 2018; the robot takes a selfie of German chancellor Angela Merkel.
2024: KUKA's next generation comes to Modex 2024.
== System information and application areas ==
=== System information ===
The KUKA system software is the operating software and the core of the entire control system. It contains all the basic functions needed for the deployment of the robot system.
Robots come with a control panel (the KCP, or KUKA Control Panel), also known as a teach pendant, which features a display and axis control buttons for A1-A6, as well as an integrated 6D mouse that allows the robot to be moved in manual (teaching) mode. The pendant also enables users to view and modify existing programs, as well as create new ones. To manually control the axes, an enabling switch (also called a dead man's switch) on the back of the pendant must be pressed halfway for motion to be possible. The connection to the controller is a proprietary video interface and CAN bus for the safety interlock system and button operation.
A rugged computer located in the control cabinet communicates with the robot system via the Multi Function Card (MFC), which controls the real-time servo drive electronics. The Digital Servo Electronics (DSE) board is in the control cabinet, usually located on or integrated into the MFC. While the Resolver Digital Converter (RDW/RDC) board is located in the base of the robot. Servo position feedback is transmitted to the controller through the DSE-RDW/RDC connector.
The software comprises two elements running simultaneously: the user interface and program storage, which run on Windows 95 for KRC1 and early KRC2 controllers, Windows XP Embedded for KRC2 controllers, and Windows 7 Embedded for KRC4 controllers, as well as VxWin, a KUKA-modified version of the VxWorks real-time OS for program control and motion planning, which communicates with the MFC.
The systems also contain standard PC peripherals, such as a CD-ROM drive (or a 3.5" floppy drive on older controllers) and USB ports, as well as a standard interface, either ISA or PCI/PCIe, for adding software and hardware options for industrial automation, such as Profibus, Interbus, DeviceNet and Profinet, among others.
== Fields of application ==
=== Aerospace ===
KUKA Systems supplied the TIG welding cell for the upper stage of the Ariane 5 launcher-rocket.
TIG welding stands for tungsten inert gas welding and is a special form of arc welding, which is one of the core activities of KUKA Systems. The company also provides apparatuses and appliances for the construction of aircraft structural elements. Aerospace customers include Boeing, SpaceX, Bell and Airbus.
=== Automotive ===
The KUKA Systems portfolio includes a wide range of production automation solutions for joining and assembling vehicle body structures, from low-scale automated production facilities to highly flexible manufacturing systems. This includes the production of individual equipment or subassemblies to the assembly of complete body structures and mechanical parts. Equipment for assembling discs, mounting systems for vehicle bodies and chassis (so-called “marriage”), and component installation are also available.
BMW, GM, Chrysler, Ford, Volvo, Hyundai, Volkswagen, and Daimler AG are among the customers in this business sector.
=== Production of rail vehicles ===
Manufacturers of rail vehicles are also customers of KUKA Systems, for the construction of locomotives, subway wagons, or in setting up innovative and highly automated production lines for freight wagons.
=== Production of photovoltaic modules ===
KUKA Systems offers solutions for every step of photovoltaic module production, from brick-sawing to cell handling and cross-tie soldering to framing and packaging of modules.
=== Welding technology – General ===
KUKA Systems is active in various other industrial sectors as well. Examples include the production of baby strollers and the production of white goods for BSH (Bosch und Siemens Hausgeräte GmbH).
== Awards and certificates ==
=== Certificates ===
ISO 14001
ISO 9001
OHRIS – Occupational Safety Certificate
VDA 6.4
ISO 3834
EN 9100
=== Application areas ===
Industrial robots are employed across various sectors including material handling, machine loading and unloading, palletizing and depalletizing, spot and arc welding. They are prominently utilized by large enterprises, primarily in automotive and aerospace industries. Specific applications include:
Transport industry: Used for handling heavy loads, leveraging the robots' load capacity and maneuverability.
Food and beverage industry: Tasks include loading and unloading of packaging machines, meat cutting, stacking, palletizing, and quality control.
Construction industry: Robots ensure smooth material flow.
Glass industry: Applications range from thermal treatment of glass and quartz in laboratory settings to bending and forming operations.
Foundry and forging industry: Robots are resistant to heat and dirt, enabling their use in and around casting machines for tasks like deburring, grinding, drilling, and for quality assurance.
Wood industry: Applications include grinding, milling, drilling, sawing, palletizing, and sorting.
Metal processing: Used in drilling, milling, sawing, bending, punching, as well as in welding, assembly, loading, and unloading operations.
Stone processing: Industrial robots are employed in ceramic and stone industries for tasks like cutting and shaping. KUKA collaborates exclusively with BACA Systems to advance this technology.
== KUKA Entertainment ==
In 2001, KUKA partnered with RoboCoaster Ltd to develop the world's first passenger-carrying industrial robot. This robotic ride features roller coaster-style seats attached to robotic arms, offering programmable manoeuvres; riders can also program the motions of their ride themselves. A second-generation system, the RoboCoaster G2, launched in 2010 at Universal's Islands of Adventure theme park in Orlando, Florida, enhances the experience with synchronized movements through attractions like Harry Potter and the Forbidden Journey. The seats are mounted on robotic arms, which are in turn affixed to a track, enabling the arms to travel through the attraction while synchronizing their movements with the show elements of the ride (including animated props, projection surfaces, etc.).
KUKA's collaboration with RoboCoaster extends to Hollywood, with appearances in films such as Die Another Day, where KUKA robots depicted laser-wielding threats in an Iceland ice palace scene, and The Da Vinci Code, where a KUKA robot handed Robert Langdon a cryptex.
In 2007, KUKA introduced a simulator based on the RoboCoaster, featured in attractions like The Sum of All Thrills ride at EPCOT in Lake Buena Vista, Florida.
Recently, KUKA robotic arms have been integrated into Royal Caribbean cruise liners' bionic bars. Users select drinks via tablet interface, with robotic arms mixing an array of spirits, mixers, and liqueurs to craft custom cocktails.
== Gallery ==
== References ==
Remote control animals are animals that are controlled remotely by humans. Some applications require electrodes to be implanted in the animal's nervous system connected to a receiver which is usually carried on the animal's back. The animals are controlled by the use of radio signals. The electrodes do not move the animal directly, as if controlling a robot; rather, they signal a direction or action desired by the human operator and then stimulate the animal's reward centres if the animal complies. These are sometimes called bio-robots or robo-animals. They can be considered to be cyborgs as they combine electronic devices with an organic life form and hence are sometimes also called cyborg-animals or cyborg-insects.
Because of the surgery required and the moral and ethical issues involved, there has been criticism of the use of remote control animals, especially regarding animal welfare and animal rights, and particularly when relatively intelligent, complex animals are used. Non-invasive applications may include stimulation of the brain with ultrasound to control the animal. Some applications (used primarily for dogs) use vibrations or sound to control the movements of the animals.
Several species of animals have been successfully controlled remotely. These include moths, beetles, cockroaches, rats, dogfish sharks, mice and pigeons.
Remote control animals can be directed and used as working animals for search and rescue operations, covert reconnaissance, data-gathering in hazardous areas, or various other uses.
== Mammals ==
=== Rats ===
Several studies have examined the remote control of rats using micro-electrodes implanted into their brains and rely on stimulating the reward centre of the rat. Three electrodes are implanted; two in the ventral posterolateral nucleus of the thalamus which conveys facial sensory information from the left and right whiskers, and a third in the medial forebrain bundle which is involved in the reward process of the rat. This third electrode is used to give a rewarding electrical stimulus to the brain when the rat makes the correct move to the left or right. During training, the operator stimulates the left or right electrode of the rat making it "feel" a touch to the corresponding set of whiskers, as though it had come in contact with an obstacle. If the rat then makes the correct response, the operator rewards the rat by stimulating the third electrode.
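Written as a control loop, the cue-and-reward protocol above might be sketched as follows; stimulate and observe_turn are hypothetical stand-ins for the electrode driver and the operator's observation of the rat, not any published interface:

def guide_rat(desired_turn: str, stimulate, observe_turn, max_attempts: int = 5) -> bool:
    """Cue the rat toward desired_turn ('left' or 'right') and reward compliance."""
    cue = 'left_whiskers' if desired_turn == 'left' else 'right_whiskers'
    for _ in range(max_attempts):
        stimulate(cue)                 # virtual "touch" on that side's whiskers
        if observe_turn() == desired_turn:
            stimulate('medial_forebrain_bundle')  # rewarding stimulus
            return True
    return False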
In 2002, a team of scientists at the State University of New York remotely controlled rats from a laptop up to 500 m away. The rats could be instructed to turn left or right, climb trees and ladders, navigate piles of rubble, and jump from different heights. They could even be commanded into brightly lit areas, which rats usually avoid. It has been suggested that the rats could be used to carry cameras to people trapped in disaster zones.
In 2013, researchers reported the development of a radio-telemetry system to remotely control free-roaming rats with a range of 200 m. The backpack worn by the rat includes the mainboard and an FM transmitter-receiver, which can generate biphasic microcurrent pulses. All components in the system are commercially available and are fabricated from surface-mount devices to reduce the size (25 × 15 × 2 mm) and weight (10 g with battery).
==== Ethics and welfare concerns ====
Concerns have been raised about the ethics of such studies. Even one of the pioneers in this area of study, Sanjiv Talwar, said "There's going to have to be a wide debate to see whether this is acceptable or not" and "There are some ethical issues here which I can't deny." Elsewhere he was quoted as saying "The idea sounds a little creepy." Some oppose the idea of placing living creatures under direct human command. "It's appalling, and yet another example of how the human species instrumentalises other species," says Gill Langley of the Dr Hadwen Trust based in Hertfordshire (UK), which funds alternatives to animal-based research. Gary Francione, an expert in animal welfare law at Rutgers University School of Law, says "The animal is no longer functioning as an animal," as the rat is operating under someone's control. The issue goes beyond whether the stimulation compels or rewards the rat to act: "There's got to be a level of discomfort in implanting these electrodes," he says, which may be difficult to justify. Talwar stated that the animal's "native intelligence" can stop it from performing some directives, but that with enough stimulation this hesitation can sometimes be overcome, though occasionally it cannot.
==== Non-invasive method ====
Researchers at Harvard University have created a brain-to-brain interface (BBI) between a human and a Sprague-Dawley rat. Simply by thinking the appropriate thought, the BBI allows the human to control the rat's tail. The human wears an EEG-based brain-to-computer interface (BCI), while the anesthetised rat is equipped with a focused ultrasound (FUS) computer-to-brain interface (CBI). FUS is a technology that allows the researchers to excite a specific region of neurons in the rat's brain using an ultrasound signal (350 kHz ultrasound frequency, tone burst duration of 0.5 ms, pulse repetition frequency of 1 kHz, given for 300 ms duration). The main advantage of FUS is that, unlike most brain-stimulation techniques, it is non-invasive. Whenever the human looks at a specific pattern (strobe light flicker) on a computer screen, the BCI communicates a command to the rat's CBI, which causes ultrasound to be beamed into the region of the rat's motor cortex responsible for tail movement. The researchers report that the human BCI has an accuracy of 94%, and that it generally takes around 1.5 s from the human looking at the screen to movement of the rat's tail.
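The quoted stimulus parameters fully determine the pulse train, and a few lines of arithmetic (using only the numbers above) confirm its structure:

f_us  = 350e3    # ultrasound frequency, Hz
burst = 0.5e-3   # tone-burst duration, s
prf   = 1e3      # pulse repetition frequency, Hz
total = 300e-3   # overall stimulation duration, s

cycles_per_burst = f_us * burst    # 175 ultrasound cycles in each burst
bursts = round(prf * total)        # 300 bursts per stimulation
duty_cycle = burst * prf           # bursts fill half of each 1 ms period

print(round(cycles_per_burst), bursts, round(duty_cycle, 2))  # 175 300 0.5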
Another system that non-invasively controls rats uses ultrasonic, epidermal and LED photic stimulators mounted on the rat's back. The system receives commands and delivers the specified stimulation to the rat's senses of hearing, pain and vision, respectively; the three stimuli are used in combination to steer the rat.
Other researchers have dispensed with human remote control of rats altogether, instead using a general regression neural network algorithm to analyse and model the control actions of human operators.
=== Dogs ===
Dogs are often used in disaster relief, at crime scenes and on the battlefield, but it is not always easy for them to hear the commands of their handlers. A command module containing a microprocessor, wireless radio, GPS receiver and an attitude and heading reference system (essentially a gyroscope) can be fitted to a dog. The module delivers vibration or sound commands (sent by the handler over the radio) to guide the dog in a certain direction or to have it perform certain actions. The reported overall success rate of the control system is 86.6%.
=== Mice ===
Researchers responsible for developing remote control of a pigeon using brain implants conducted a similar successful experiment on mice in 2005.
== Invertebrates ==
In 1967, Franz Huber pioneered electrical stimulation to the brain of insects and showed that mushroom body stimulation elicits complex behaviours, including the inhibition of locomotion.
=== Cockroaches ===
RoboRoach
The US-based company Backyard Brains released the "RoboRoach", a remote-controlled cockroach kit that they refer to as "The world's first commercially available cyborg". The project started as a University of Michigan biomedical engineering student senior design project in 2010 and was launched as a beta product on 25 February 2011. The RoboRoach was officially released into production via a TED talk at the TED Global conference and via the crowdfunding website Kickstarter in 2013. The kit allows students to use microstimulation to momentarily control the movements of a walking cockroach (left and right) using a Bluetooth-enabled smartphone as the controller. The RoboRoach was the first kit available to the general public for the remote control of an animal and was funded by the United States' National Institute of Mental Health as a teaching aid to promote interest in neuroscience, owing to the similarities between the RoboRoach's microstimulation and the microstimulation used in treatments of Parkinson's disease (deep brain stimulation) and deafness (cochlear implants) in humans. Several animal welfare organizations, including the RSPCA and PETA, have expressed concerns about the ethics and welfare of animals in this project.
North Carolina State University
Another group at North Carolina State University has developed a remote control cockroach. Researchers at NCSU have programmed a path for cockroaches to follow while tracking their location with an Xbox Kinect. The system automatically adjusted the cockroach's movements to ensure it stayed on the prescribed path.
Robo-bug
In 2022, researchers led by RIKEN scientists reported the development of remote-controlled cyborg cockroaches that remain functional as long as they move (or are moved) into sunlight to recharge. They could be used, for example, to inspect hazardous areas or to quickly find humans trapped beneath hard-to-access rubble at disaster sites.
=== Beetles ===
In 2009, remote control of the flight movements of the Cotinus texana and the much larger Mecynorrhina torquata beetles was achieved during experiments funded by the Defense Advanced Research Projects Agency (DARPA). The weight of the electronics and battery meant that only Mecynorrhina was strong enough to fly freely under radio control. A specific series of pulses sent to the optic lobes of the insect encouraged it to take flight. The average length of flights was just 45 seconds, although one lasted for more than 30 minutes. A single pulse caused the beetle to land again. Stimulation of the basalar flight muscles allowed the controller to direct the insect left or right, although this was successful on only 75% of stimulations. After each maneuver, the beetles quickly righted themselves and continued flying parallel to the ground. In 2015, researchers were able to fine-tune beetle steering in flight by changing the pulse train applied to the wing-folding muscle. More recently, scientists from Nanyang Technological University, Singapore, have demonstrated graded turning and backward walking in a small darkling beetle (Zophobas morio), which is 2 cm to 2.5 cm long and weighs only 1 g including the electronic backpack and battery. It has been suggested that the beetles could be used for search and rescue missions; however, it has been noted that currently available batteries, solar cells and piezoelectrics that harvest energy from movement cannot provide enough power to run the electrodes and radio transmitters for very long.
=== Moths ===
Ananthaswamy, Anil (8 February 2012). "Nerve probe controls cyborg moth in flight". New Scientist.
Coxworth, Ben (20 August 2014). "Scientists developing remote-control cyborg moths". New Atlas.
"Remote-Controlled Cockroaches and Moths". Entomology Today. Entomological Society of America. 28 August 2013.
=== Drosophila ===
Work using Drosophila has dispensed with stimulating electrodes and developed a three-part remote control system that evokes action potentials in pre-specified Drosophila neurons using a laser beam. The central component of the remote control system is a ligand-gated ion channel gated by ATP. When ATP is applied, uptake of external calcium is induced and action potentials are generated. The remaining two parts of the remote control system are chemically caged ATP, which is injected into the central nervous system through the fly's simple eye, and laser light capable of uncaging the injected ATP. The giant fibre system in insects consists of a pair of large interneurons in the brain which can excite the insect's flight and jump muscles. A 200 ms pulse of laser light elicited jumping, wing flapping, or other flight movements in 60%–80% of the flies. Although this frequency is lower than that observed with direct electrical stimulation of the giant fibre system, it is higher than that elicited by natural stimuli, such as a light-off stimulus.
== Fish ==
=== Sharks ===
Spiny dogfish sharks have been remotely controlled by implanting electrodes deep in the shark's brain, connected to a remote-control device outside the tank. When an electric current is passed through the wire, it stimulates the shark's sense of smell and the animal turns, just as it would move toward blood in the ocean. Stronger electrical signals, mimicking stronger smells, cause the shark to turn more sharply. One study is funded by a $600,000 grant from the Defense Advanced Research Projects Agency (DARPA). It has been suggested that such sharks could search hostile waters with sensors that detect explosives, or cameras that record intelligence photographs. Outside the military, similar sensors could detect oil spills or gather data on the behaviour of sharks in their natural habitat. Scientists working with remote control sharks admit they are not sure exactly which neurons they are stimulating, and therefore they cannot always control the shark's direction reliably. The sharks only respond after some training, and some sharks do not respond at all. The research has prompted protests from bloggers who allude to remote controlled humans or horror films featuring maniacal cyborg sharks on a feeding frenzy.
An alternative technique was to use small gadgets attached to the shark's noses that released squid juice on demand.
== Reptiles ==
=== Turtles ===
South Korean researchers have remotely controlled the movements of a turtle using a completely non-invasive steering system. Red-eared terrapins (Trachemys scripta elegans) were made to follow a specific path by manipulating the turtles' natural obstacle avoidance behaviour. If these turtles detect something is blocking their path in one direction, they move to avoid it. The researchers attached a black half cylinder to the turtle. The "visor" was positioned around the turtle's rear end, but was pivoted around using a microcontroller and a servo motor to either the left or right to partially block the turtle's vision on one side. This made the turtle believe there was an obstacle it needed to avoid on that side and thereby encouraged the turtle to move in the other direction.
=== Geckos ===
Some animals have had parts of their bodies remotely controlled, rather than their entire bodies. Researchers in China stimulated the mesencephalon of geckos (Gekko gecko) via stainless steel microelectrodes and observed the geckos' responses during stimulation. Locomotion responses such as spinal bending and limb movements could be elicited at different depths of the mesencephalon. Stimulation of the periaqueductal gray area elicited ipsilateral spinal bending, while stimulation of the ventral tegmental area elicited contralateral spinal bending.
== Birds ==
=== Pigeons ===
In 2007, researchers at east China's Shandong University of Science and Technology implanted micro electrodes in the brain of a pigeon so they could remotely control it to fly right or left, or up or down.
== Uses and justification ==
Remote-controlled animals are considered to have several potential uses, replacing the need for humans in some dangerous situations. Their application is further widened if they are equipped with additional electronic devices. Small creatures fitted with cameras and other sensors have been proposed as being useful when searching for survivors after a building has collapsed, with cockroaches or rats being small and maneuverable enough to go under rubble.
There have been a number of suggested military uses of remote controlled animals, particularly in the area of surveillance. Remote-controlled dogfish sharks have been likened to the studies into the use of military dolphins. It has also been proposed that remote-controlled rats could be used for the clearing of land mines. Other suggested fields of application include pest control, the mapping of underground areas, and the study of animal behaviour.
Development of robots that are capable of performing the same actions as controlled animals is often technologically difficult and cost-prohibitive. Flight is very difficult to replicate while having an acceptable payload and flight duration. Harnessing insects and using their natural flying ability gives significant improvements in performance. The availability of "inexpensive, organic substitutes" therefore allows for the development of small, controllable robots that are otherwise currently unavailable.
== Similar applications ==
Some animals are remotely controlled, but rather than being directed to move left or right, the animal is prevented from moving forward, or its behaviour is modified in other ways.
=== Shock collars ===
Shock collars deliver electrical shocks of varying intensity and duration to the neck or other area of a dog's body via a radio-controlled electronic device incorporated into a dog collar. Some collar models also include a tone or vibration setting, as an alternative to or in conjunction with the shock. Shock collars are now readily available and have been used in a range of applications, including behavioural modification, obedience training, and pet containment, as well as in military, police and service training. While similar systems are available for other animals, the most common are the collars designed for domestic dogs.
The use of shock collars is controversial, and scientific evidence for their safety and efficacy is mixed. A few countries have enacted bans or controls on their use. Some animal welfare organizations warn against their use or actively support bans or restrictions on their use or sale. Professional dog trainers and their organizations are divided, with some opposing their use and some supporting it, and opinion among the general public is likewise mixed.
=== Invisible fences ===
In 2007, it was reported that scientists at the Commonwealth Scientific and Industrial Research Organisation had developed a prototype "invisible fence" using the Global Positioning System (GPS) in a project nicknamed Bovines Without Borders. The system uses battery-powered collars that emit a sound to warn cattle when they are approaching a virtual boundary. If a cow wanders too near, the collar emits a warning noise; if it continues, the cow receives an electric shock of 250 milliwatts. The boundaries are drawn by GPS and exist only as a line on a computer; there are no wires or fixed transmitters at all. The cattle took less than an hour to learn to back off when they heard the warning noise. The scientists indicated that commercial units were up to 10 years away.
Another type of invisible fence uses a buried wire that sends radio signals to activate shock collars worn by animals that are "fenced" in. The system works with three signals: the first is visual (white plastic flags spaced at intervals around the perimeter of the fenced-in area), the second is audible (the collar emits a sound when the animal wearing it approaches the buried cable), and the third is an electric shock indicating that the animal has reached the boundary.
Other invisible fences are wireless. Rather than using a buried wire, they emit a radio signal from a central unit, and activate when the animal travels beyond a certain radius from the unit.
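The staged escalation common to these systems is straightforward to express; the sketch below is written for the wireless, radius-based variant, with the radii as illustrative parameters rather than any vendor's specification:

def collar_response(distance_m: float, warn_radius_m: float = 20.0,
                    boundary_m: float = 25.0) -> str:
    # Inside the warning radius nothing happens; approaching the boundary
    # triggers the audible tone; reaching it triggers the static correction.
    if distance_m < warn_radius_m:
        return "none"
    if distance_m < boundary_m:
        return "audible warning"
    return "static correction"

print(collar_response(10.0), collar_response(22.0), collar_response(26.0))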
== See also ==
Animal rights
Brain implant
Cruelty to animals
Microbotics
Necrobotics
Optogenetics
Surveillance tools
== References ==
== External links ==
Anthes, Emily (16 February 2013). "The race to create 'insect cyborgs'". The Guardian.
An end effector is the device at the end of a robotic arm, designed to interact with the environment. The exact nature of this device depends on the application of the robot.
In the strict definition, which originates from serial robotic manipulators, the end effector means the last link (or end) of the robot. At this endpoint, the tools are attached. In a wider sense, an end effector can be seen as the part of a robot that interacts with the work environment. This does not refer to the wheels of a mobile robot or the feet of a humanoid robot, which are not end effectors but rather part of a robot's mobility.
End effectors may consist of a gripper or a tool.
== Grippers ==
=== Categories ===
When referring to robotic prehension, there are four general categories of robot grippers:
Impactive: jaws or claws which physically grasp by direct impact upon the object.
Ingressive: pins, needles or hackles which physically penetrate the surface of the object (used in textile, carbon, and glass fiber handling).
Astrictive: attractive forces applied to the object's surface (whether by vacuum, magneto-, or electroadhesion).
Contigutive: requiring direct contact for adhesion to take place (such as glue, surface tension, or freezing).
These categories describe the physical effects used to achieve a stable grasp between a gripper and the object to be grasped.
Industrial grippers may employ mechanical, suction, or magnetic means. Vacuum cups and electromagnets dominate the automotive field and metal sheet handling. Bernoulli grippers exploit the airflow between the gripper and the part, in which a lifting force draws the gripper and part close to each other (using Bernoulli's principle). Bernoulli grippers are a type of contactless gripper; the object remains confined in the force field generated by the gripper without coming into direct contact with it. Bernoulli grippers have been adopted in photovoltaic cell handling, silicon wafer handling, and in the textile and leather industries.
Other principles are less used at the macro scale (part size > 5 mm) but have, in the last ten years, demonstrated interesting applications in micro-handling. These include electrostatic grippers and van der Waals grippers, based on electrostatic charges (including the van der Waals force); capillary grippers and cryogenic grippers, based on a liquid medium; and ultrasonic grippers and laser grippers, the latter two being contactless grasping principles.
Electrostatic grippers use a charge-difference between gripper and part (electrostatic force) often activated by the gripper itself, while van der Waals grippers are based on the low force (still electrostatic) of atomic attraction between the molecules of the gripper and those of the object.
Capillary grippers use the surface tension of a liquid meniscus between the gripper and the part to center, align and grasp a part. Cryogenic grippers freeze a small amount of liquid, with the resulting ice supplying the necessary force to lift and handle the object (this principle is used also in food handling and in textile grasping). Even more complex are ultrasonic grippers, where pressure standing waves are used to lift up a part and trap it at a certain level (example of levitation are both at the micro level, in screw- and gasket-handling, and at the macro scale, in solar cell or silicon-wafer handling), and laser source that produces a pressure sufficient to trap and move microparts in a liquid medium (mainly cells). Laser grippers are known also as laser tweezers.
A particular category of friction/jaw grippers is that of needle grippers. These are called intrusive grippers, and they exploit both friction and form closure, as standard mechanical grippers do.
The best-known mechanical grippers have two, three, or even five fingers.
=== Gripper mechanism ===
A common form of robotic grasping is force closure.
Generally, the gripping mechanism is done by the grippers or mechanical fingers. Two-finger grippers tend to be used for industrial robots performing specific tasks in less-complex applications. The fingers are replaceable.
The design of a two-finger gripping mechanism must account for both the shape of the surface to be gripped and the force required to grip the object.
The shape of the fingers' gripping surface can be chosen according to the shape of the objects to be manipulated. For example, if a robot is designed to lift a round object, the gripper surface shape can be a concave impression of it to make the grip efficient. For a square shape, the surface can be a plane.
=== Levels of force ===
Though there are numerous forces acting on a body lifted by a robotic arm, the main force is the frictional force. The gripping surface can be made of a soft material with a high coefficient of friction so that the surface of the object is not damaged. The robotic gripper must withstand not only the weight of the object but also the acceleration and motion caused by frequent movement of the object. To find the force required to grip the object, the following formula is used:
{\displaystyle F={\frac {ma}{\mu n}}}
where:
F is the gripping force required,
m is the mass of the object,
a is the acceleration of the object,
μ is the coefficient of friction, and
n is the number of fingers in contact with the object.
A more complete equation would account for the direction of movement. For example, when the body is moved upward, against gravity, the required force is greater than when it is moved downward. Hence, another term is introduced and the formula becomes:
{\displaystyle F={\frac {m(a+g)}{\mu n}}}
Here, g is the acceleration due to gravity and a is the acceleration due to movement.
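As a worked example, the second formula can be evaluated directly; the part mass, acceleration, friction coefficient, and finger count below are made-up illustrative values:

def grip_force(m: float, a: float, mu: float, n: int, lifting: bool = True,
               g: float = 9.81) -> float:
    """Gripping force F per the formulas above; lifting=True applies
    F = m(a+g)/(mu*n) for movement upwards, against gravity."""
    accel = a + g if lifting else a
    return m * accel / (mu * n)

# A 2 kg part accelerated upward at 3 m/s^2 by a two-finger gripper
# with a friction coefficient of 0.4:
print(round(grip_force(2.0, 3.0, 0.4, 2), 1))  # 32.0 (newtons)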
For many physically interactive manipulation tasks, such as writing and handling a screwdriver, a task-related grasp criterion can be applied in order to choose grasps that are most appropriate to meeting specific task requirements. Several task-oriented grasp quality metrics were proposed to guide the selection of a good grasp that would satisfy the task requirements.
== Tools ==
The end effectors that can be used as tools serve various purposes, including spot-welding in an assembly, spray-painting where uniformity of painting is necessary, and other purposes where the working conditions are dangerous for human beings. Surgical robots have end effectors that are specifically manufactured for the purpose.
The end effector of an assembly-line robot would typically be a welding head, or a paint spray gun. A surgical robot's end effector could be a scalpel or other tool used in surgery. Other possible end effectors might be machine tools such as a drill or milling cutters. The end effector on the space shuttle's robotic arm uses a pattern of wires which close like the aperture of a camera around a handle or other grasping point.
== See also ==
Grapple (tool)
Prehensility
Tongs
Shadow Hand
IEEE RAS TC on Robotic Hands, Grasping and Manipulation
== References ==
An integrated library system (ILS), also known as a library management system (LMS), is an enterprise resource planning system for a library, used to track items owned, orders made, bills paid, and patrons who have borrowed.
An ILS is usually made up of a relational database, software to interact with that database, and two graphical user interfaces (one for patrons, one for staff). Most ILSes separate software functions into discrete programs called modules, each of them integrated with a unified interface. Examples of modules might include:
acquisitions (ordering, receiving, and invoicing materials)
cataloging (classifying and indexing materials)
circulation (lending materials to patrons and receiving them back)
serials (tracking magazine, journal, and newspaper holdings)
online public access catalog or OPAC (public user interface)
Each patron and item has a unique ID in the database that allows the ILS to track its activity.
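A minimal sketch of that relational core, using SQLite from Python's standard library; the table and column names are illustrative inventions, not the schema of any actual ILS product:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patrons (patron_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE items   (item_id INTEGER PRIMARY KEY, title TEXT NOT NULL, isbn TEXT);
CREATE TABLE loans   (loan_id INTEGER PRIMARY KEY,
                      item_id INTEGER REFERENCES items(item_id),
                      patron_id INTEGER REFERENCES patrons(patron_id),
                      due_date TEXT, returned_date TEXT);
""")
# The circulation module lends an item by recording a loan row that links
# the item's unique ID to the patron's unique ID:
conn.execute("INSERT INTO patrons (name) VALUES ('A. Reader')")
conn.execute("INSERT INTO items (title, isbn) VALUES ('Example Title', '9780000000000')")
conn.execute("INSERT INTO loans (item_id, patron_id, due_date) VALUES (1, 1, '2024-06-01')")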
== History ==
=== Pre-computerization ===
Prior to computerization, library tasks were performed manually and independently from one another. Selectors ordered materials with ordering slips, cataloguers manually catalogued sources and indexed them with the card catalog system (in which all bibliographic data was kept on a single index card), fines were collected by local bailiffs, and users signed books out manually, indicating their name on cue cards which were then kept at the circulation desk. Early mechanization came in 1936, when the University of Texas began using a punch card system to manage library circulation. While the punch card system allowed for more efficient tracking of loans, library services were far from being integrated, and no other library task was affected by this change.
=== 1960s: the influence of computer technologies ===
The next big innovation came with the advent of MARC standards in the 1960s, which coincided with the growth of computer technologies – library automation was born. From this point onwards, libraries began experimenting with computers, and, starting in the late 1960s and continuing into the 1970s, bibliographic services utilizing new online technology and the shared MARC vocabulary entered the market. These included OCLC (1967), Research Libraries Group (which has since merged with OCLC), and the Washington Library Network (which became Western Library Network and is also now part of OCLC).
The Intrex Retrieval System ran on CTSS starting in the late 1960s. Intrex was an experimental, pilot-model machine-oriented bibliographic storage and retrieval system with a database that stored a catalog of roughly 15,000 journal articles. It was used to develop and test concepts for library automation. A deployment of three Intrex BRISC CRT consoles for testing at the MIT Engineering Library in 1972 showed that it was preferred over two other systems, ARDS and DATEL.
=== 1970s–1980s: the early integrated library system ===
The 1970s can be characterized by improvements in computer storage, as well as in telecommunications. As a result of these advances, "turnkey systems on microcomputers", known more commonly as integrated library management systems (ILS), finally appeared. These systems included the necessary hardware and software to connect major circulation tasks, including circulation control and overdue notices. As the technology developed, other library tasks could be accomplished through an ILS as well, including acquisition, cataloguing, reservation of titles, and monitoring of serials.
=== 1990s–2000s: the growth of the Internet ===
With the evolution of the Internet throughout the 1990s and into the 2000s, ILSs began allowing users to engage more actively with their libraries through OPACs and online web-based portals. Users could log into their library accounts to reserve or renew books, as well as authenticate themselves for access to library-subscribed online databases. Education for librarians responded with a new focus on systems analysis. Inevitably, during this time, the ILS market grew exponentially. By 2002, the ILS industry averaged sales of approximately US$500 million annually, compared to just US$50 million in 1982.
=== Mid 2000s–present: increasing costs and customer dissatisfaction ===
By the mid to late 2000s, ILS vendors had increased not only the number of services offered but also their prices, leading to some dissatisfaction among many smaller libraries. At the same time, open-source ILS was in its early stages of testing. Some libraries began turning to such open-source ILSs as Koha and Evergreen. Common reasons noted were to avoid vendor lock-in, avoid license fees, and participate in software development. Freedom from vendors also allowed libraries to prioritize needs according to urgency, as opposed to what their vendor can offer. Libraries which have moved to open-source ILS have found that vendors are now more likely to provide quality service in order to continue a partnership since they no longer have the power of owning the ILS software and tying down libraries to strict contracts. This has been the case with the SCLENDS consortium; following the success of Evergreen for the Georgia PINES library consortium, the South Carolina State Library along with some local public libraries formed the SCLENDS consortium in order to share resources and to take advantage of the open-source nature of the Evergreen ILS to meet their specific needs. By October 2011, just 2 years after SCLENDS began operations, 13 public library systems across 15 counties had already joined the consortium, in addition to the South Carolina State Library.
Librarytechnology.org conducts an annual survey of over 2,400 libraries and noted that in 2008, 2% of those surveyed used an open-source ILS; in 2009 the number increased to 8%, in 2010 to 12%, and in 2011, 11% of the libraries polled had adopted open-source ILSs. The following year's survey (published in April 2013) reported an increase to 14%, stating that "open source ILS products, including Evergreen and Koha, continue to represent a significant portion of industry activity. Of the 794 contracts reported in the public and academic arena, 113, or 14 percent, were for support services for these open source systems."
=== 2010s–present: the rise of cloud based solutions ===
The use of cloud-based library management systems has increased drastically since the rise of cloud technology started. According to NIST, cloud computing can include a variety of "characteristics (e.g. self-service, resource pooling, and elasticity), management models (e.g. service, platform, or infrastructure focus), and deployment models (e.g. public, private)", and this is also true of cloud-based library systems.
== Software criteria ==
=== Distributed software vs. web service ===
Library computer systems tend to fall into two categories of software:
that purchased on a perpetual license
that purchased as a subscription service (software as a service).
With distributed software the customer can choose to self-install or to have the system installed by the vendor on their own hardware. The customer can be responsible for the operation and maintenance of the application and the data, or the customer can choose to be supported by the vendor with an annual maintenance contract. Some vendors charge for upgrades to the software. Customers who subscribe to a web (hosted) service upload data to the vendor's remote server through the Internet and may pay a periodic fee to access their data.
=== Data entry assistance based on ISBN ===
Many applications can reduce a major portion of manual data entry by populating data fields based upon the entered ISBN using MARC standards technology via the Internet.
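A hedged sketch of such a lookup, using Python's standard library against the public Open Library endpoint (one real example of an ISBN lookup service; a production ILS would more typically retrieve full MARC records over protocols such as Z39.50 or SRU):

import json
import urllib.request

def fetch_record(isbn: str) -> dict:
    # Fetch basic bibliographic data for an ISBN and map it onto the
    # fields a cataloguing module would otherwise require by hand.
    url = f"https://openlibrary.org/isbn/{isbn}.json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {"title": data.get("title"), "publish_date": data.get("publish_date")}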
=== Bar code scanning and printing ===
With most software, users can eliminate some manual entry by using a barcode scanner. Some software is designed, or can be extended with an additional module, to integrate scanner functionality. Most software vendors provide some type of scanner integration, and some print bar-code labels.
== Comparison of open-source ILS platforms ==
== See also ==
Database management system
Public library ratings
== References ==
== Further reading ==
Breeding, Marshall (2014–2021). "Library systems report archives – American Libraries Magazine". americanlibrariesmagazine.org. American Library Association.
Rubin, Richard E.; Rubin, Rachel G. (2020) [1998]. Foundations of library and information science (5th ed.). Chicago: ALA Neal-Schuman, an imprint of the American Library Association. ISBN 9780838947449. OCLC 1138996906.
== External links ==
MARC Records, Systems and Tools : Network Development and MARC Standards Office, Library of Congress
Higher Education Library Technology,(HELibTech) a wiki that covers many aspects of library technology and lists technologies in use in UK Higher Education
Key resources in the field of Library Automation
Mobile industrial robots are machines that can be programmed to perform tasks in an industrial setting. Industrial robots have typically been used in stationary and workbench applications; mobile industrial robots introduce a new method for lean manufacturing. Advances in controls and robotics now allow mobile tasks such as product delivery. This additional flexibility in manufacturing can save a company time and money during the manufacturing process, resulting in a cheaper end product.
Mobile robot technology has the potential to revolutionize many sectors of industry, though it carries some disadvantages. The logistics of manufacturing can be streamlined by allowing robots to navigate autonomously to different areas for their work, labour demands on employees will be lessened as robots work alongside humans, and robots will increasingly assist with medicine and surgery. There are, however, drawbacks. Coordinating the movement of robots around facilities and calibrating their position at their destination is tedious and far from perfect. A robot malfunctioning in a manufacturing setting will hold up production, and it could malfunction anywhere in a facility. Human safety must also be considered: robots must prioritize the safety of human operators over their programmed task, which may complicate the coordination of multiple autonomous robots. In a surgical setting especially, there is no room for error on the robot's part. Even though some challenges remain, mobile robot technology promises to streamline operations across much of industry.
== History ==
Automation began in the automobile industry in the years surrounding WWII (1946) and the origin of the term itself belongs with D.S. Harder, the engineering manager at the Ford Motor Company. At first, the term was used to describe the increased presence of automatic devices in production lines and solely manufacturing contexts. Now, automation is widely used in many industries where computerized action and feedback loops can replace human intervention in the workplace. Over time, development in this area has become increasingly dependent upon advanced computer technologies and the advancement of processing capabilities.
In its current form, most industrial robots are powered mechanical arms with the ability to perform anthropomorphic actions. Advancements in the miniaturization of computers and mathematical control theory, as well as improved sensory technologies, have had great impact on the feedback control systems that drive robotics. The first industrial robot performed spot welding and handled die castings in a General Motors factory in New Jersey, USA, in 1962. Soon, robotic arms were proliferating within the large-scale manufacturing industry, and several new companies entered the field, including Kuka in 1973, Nachi in 1969, Fanuc in 1974, Yaskawa in 1977, and ASEA in 1977, among others. By 1980, it is estimated that a new major robotics company entered the market every month.
Mobile robotics is now set to experience similar expansion as it becomes significantly more reliable in industrial settings. Even when a mobile robot makes mistakes, they should eventually occur less frequently than mistakes caused by human factors.
== Overview ==
The simplicity of mobile industrial robots provides their main advantage in industrial settings, owing to their ease of use and ability to be operated via technologies well understood by most people. In addition, robots are able to operate almost continuously without complaint about long work hours, greatly increasing efficiency in a lean manufacturing environment. The main current disadvantage lies in the high cost of repair, as well as the production delays that would be caused by a failure or malfunction. These factors discourage placing major responsibility on mobile robotics, but they are being continually lessened.
== Applications of mobile industrial robots ==
Mobile industrial robots have already been used in many applications, including in the healthcare industry, home and industrial security, ocean and space exploration, the food service industry, and distribution.
=== Medicine ===
Mobile industrial robots have several uses within the healthcare industry, in both hospitals and homes. Drug delivery, patient services, and other nursing functions could be easily adapted to robots. Because the items being carried around typically weigh less than 100 kg, robots much smaller than the MiR platforms (see the examples below) may be used. Specialized equipment may be mounted on robots, allowing them to assist with surgical procedures. Overall, their place in the medical industry is to provide a more reliable source of patient care while reducing human error.
=== Scientific experimentation and exploration ===
The first instances of automation in labs possessed limited capabilities and relied on simple mechanical principles, often resembling the assembly-line factory robots on which they had been based. These early mobile robots primarily focused on liquid handling. Despite their rudimentary nature, they marked a significant departure from traditional methodologies, laying the groundwork for future efficiency and standardization.
In the scientific world, there is a large number of applications for mobile robots. Their ability to perform experiments and exploration without putting human lives in danger makes them an important asset. Unlike humans, robots do not require life support systems to function. In space travel, robots are performing science on planets and asteroids because sending humans is far more taxing on resources and money. The same is true in the oceanography domain. In fact, several of the same robotic systems are designed to perform their science under both conditions - space and underwater. In nuclear power plants, robots can service electronics and mechanical systems which prevents human exposure to large amounts of radiation.
=== Aircraft maintenance and repair ===
For applications like painting and de-painting aircraft, two fixed robots are inadequate because not all parts of the aircraft can be reached. Adding more fixed robots would complete the task, but the cost is prohibitive. If mobile robots are used, one or two may be enough to service the entire aircraft because they can move to whatever area needs work. Mobile robots need to be truly autonomous to be useful in manufacturing. As Erik Nieves put it, "Mobility moves robots from being machines to production partners." Rather than bringing work to the robot, the robot should be smart enough to go to where the work is.
Automated aircraft inspection systems have the potential to make aircraft maintenance safer and more reliable. Various solutions are currently being developed, including a collaborative mobile robot named Air-Cobot and autonomous drones from Donecle and EasyJet.
=== Pipeline maintenance ===
For maintenance of pipelines which are buried underground, mobile robots can travel through the pipeline performing inspection and maintenance operations, replacing other techniques, some of which could only otherwise be done by unearthing the pipeline. CISBOT (cast-iron sealing robot) is a cast-iron pipe repair robot that seals the joints in natural gas pipelines from the inside.
== Examples ==
=== Mobile Industrial Robots (MiR) ===
The Autonomous Mobile Robots (AMRs) from MiR are designed to optimize productivity in logistics and manufacturing operations.
Able to move 250 kg (MiR250), 600 kg (MiR600), or 1350 kg (MiR1350); the MiR1200 Pallet Jack can move 1200 kg pallets
Supported by fleet-management and analytics software, including MiR Fleet, the MiR robot software, and MiR Insights, for deploying and scaling robot fleets
Marketed as safe, flexible, scalable, efficient, and easy to integrate
Used by Toyota, Schneider Electric, and others
=== OTTO Motors (a division of Clearpath) ===
Meant for material transport in industrial centers:
Able to carry 100 kg (OTTO 100) or 1500 kg (OTTO 1500)
Powered by lithium battery technology
6–8 hours operating time (depending on payload)
Requires no structural changes to the building
Navigates via 2D sensors. Proprietary autonomy software enables dynamic path-planning and obstacle avoidance at speeds matching forklifts (2 m/s).
Used by GE and John Deere.
=== Kuka ===
Very widely used; Tesla Motors is one example
"Mecanum" wheel system: customizable, modular, and capable of heavy lifting
Designed to integrate easily with autonomous robotics and human workers
== References ==
Operational design domain (ODD) is a term for a particular operating context for an automated system, often used in the field of autonomous vehicles. The context is defined by a set of conditions, including environmental, geographical, time of day, and other conditions. For vehicles, traffic and roadway characteristics are included. Manufacturers use ODD to indicate where/how their product operates safely. A given system may operate differently according to the immediate ODD.
The concept presumes that automated systems have limitations. Relating system function to the ODD it supports is important for developers and regulators to establish and communicate safe operating conditions. Systems should operate within those limitations. Some systems recognize the ODD and modify their behavior accordingly. For example, an autonomous car might recognize that traffic is heavy and disable its automated lane change feature.
ODD is used for cars, ships, trains, agricultural robots, and other robots.
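As an illustration of how a system might gate its features on the ODD, consider the following sketch; the condition fields, thresholds, and feature names are invented for the example (the 130 km/h figure simply echoes the example in the next section):

from dataclasses import dataclass

@dataclass
class Conditions:
    road_type: str      # e.g. "motorway" or "urban"
    speed_kmh: float
    daylight: bool
    heavy_traffic: bool

def features_enabled(c: Conditions) -> set:
    # Each automated feature is active only while the current conditions
    # fall inside its declared ODD.
    enabled = set()
    if c.road_type == "motorway" and c.speed_kmh <= 130 and c.daylight:
        enabled.add("automated_driving")
        # As in the lane-change example above, a sub-feature can have a
        # narrower ODD than the system as a whole:
        if not c.heavy_traffic:
            enabled.add("automated_lane_change")
    return enabled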
== Definitions ==
Various regulators have offered definitions of related terms:
== Examples ==
In 2022, Mercedes-Benz announced a Level 3 autonomous driving product with an ODD allowing operation at up to 130 km/h.
== See also ==
Scenario (vehicular automation)
== References ==
A remote-control vehicle is defined as any vehicle that is teleoperated from an origin external to the device by a means that does not restrict its motion. This is often a radio-control device, a cable between the controller and the vehicle, or an infrared controller.
== Applications ==
=== Scientific ===
Remote-control vehicles have various scientific uses, including operating in hazardous environments, working in the deep ocean, and space exploration.
==== Space probes ====
The majority of probes to other planets in the Solar System have been remote-control vehicles, although some of the more recent ones were partially autonomous. The sophistication of these devices has prompted greater debate on the need for crewed spaceflight and exploration. The Voyager 1 spacecraft is the first craft of any kind to leave the Solar System. The Mars rovers Spirit and Opportunity have provided continuous data about the surface of Mars since January 3, 2004.
==== Submarines ====
Jason is the Woods Hole Oceanographic Institution's deep-water explorer and can withstand depths of up to 6,500 metres.
The Scorpio ROV is a British submersible that rescued the crew of the Russian AS-28 on August 7, 2005.
=== Military and law enforcement ===
Military usage of remotely-controlled vehicles dates back to the first half of 20th century. John Hays Hammond, Jr., invented and patented methods for wireless control of ships starting in 1910. The Soviet Red Army used remotely-controlled teletanks during the 1930s in the Winter War and early stage of World War II. There were also remotely-controlled cutters and experimental remotely-controlled planes in the Red Army.
Remote-control vehicles are used in law enforcement and military engagements for some of the same reasons. Hazard exposure is mitigated for the operator of the vehicle, who controls it from a location of relative safety. Remote-controlled vehicles are also used for bomb disposal.
Unmanned aerial vehicles (UAVs) have undergone a significant evolution in capability in the past decade. Early UAVs were capable of reconnaissance missions alone and then only with a limited range. Current UAVs can hover around possible targets until they are positively identified before releasing their payload of weaponry. Backpack-sized UAVs will provide ground troops with over-the-horizon surveillance capabilities.
=== Recreation and hobby ===
Small-scale remote-control vehicles have long been popular among hobbyists. These remote-controlled vehicles span a wide range in terms of price and sophistication. There are many types of radio-controlled vehicles; these include on-road cars, off-road trucks, boats, submarines, airplanes, and helicopters. The "robots" now popular in television shows such as Robot Wars are a recent extension of this hobby.
Radio control is the most popular choice, as the vehicle's range is not limited by the length of a cable, nor does it require direct line-of-sight with the controller, which is the case with infrared control.
== See also ==
Radio-controlled aircraft
Remote-controlled animal
Remotely operated underwater vehicle
Robot control
Teleoperation
Telerobotics
Unmanned aerial vehicle
Unmanned ground vehicle
Unmanned vehicle
== References ==
== External links ==
In the engineering field of robotics, an arm solution is a set of calculations that allow the real-time computation of the control commands needed to place the end of a robotic arm at a desired position and orientation in space.
A typical industrial robot is built with fixed length segments that are connected either at joints whose angles can be controlled, or along linear slides whose length can be controlled. If each angle and slide distance is known, the position and orientation of the end of the robot arm relative to its base can be computed efficiently with simple trigonometry.
Going the other way — calculating the angles and slides needed to achieve a desired position and orientation — is much harder. The mathematical procedure for doing this is called an arm solution. For some robot designs, such as the Stanford arm, Vicarm SCARA robot or cartesian coordinate robots, this can be done in closed form. Other robot designs require an iterative solution, which requires more computer resources.
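For a concrete illustration, a planar two-link arm admits a closed-form solution; the sketch below uses standard textbook trigonometry with made-up link lengths, showing both the easy forward computation and the inverse "arm solution", which returns one of the arm's two elbow configurations:

import math

def forward(l1, l2, t1, t2):
    # Forward kinematics: joint angles -> end position, by simple trigonometry.
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    # Closed-form inverse kinematics: end position -> joint angles
    # (one of the two elbow configurations).
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Round trip: the solved angles place the arm tip back at (1.2, 0.6).
print(forward(1.0, 0.8, *inverse(1.0, 0.8, 1.2, 0.6)))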
== See also ==
321 kinematic structure
Inverse kinematics
Motion planning
== External links ==
infolab.stanford.edu - The Stanford Arm (1969), with a configuration such that the mathematical computations (arm solutions) were simplified to speed up computations
D. L. Pieper, The kinematics of manipulators under computer control. PhD thesis, Stanford University, Department of Mechanical Engineering, 1968.
A radio-controlled model (or RC model) is a model that is steerable with the use of radio control (RC). All types of model vehicles have had RC systems installed in them, including ground vehicles, boats, planes, helicopters and even submarines and scale railway locomotives.
== History ==
World War II saw increased development in radio control technology. The Luftwaffe used controllable winged bombs for targeting Allied ships. During the 1930s, the Good brothers, Bill and Walt, pioneered vacuum-tube-based control units for RC hobby use. Their "Guff" radio-controlled plane is on display at the National Air and Space Museum. Ed Lorenze published a design in Model Airplane News that was built by many hobbyists. Later, after WWII, from the late 1940s to the mid-1950s, many other RC designs emerged and some were sold commercially; Berkeley's Super Aerotrol was one such example.
Originally simple 'on-off' systems, these evolved to use complex systems of relays to control a rubber powered escapement's speed and direction. In another more sophisticated version developed by the Good brothers called TTPW, information was encoded by varying the signal's mark/space ratio (pulse proportional). Commercial versions of these systems quickly became available. The tuned reed system brought new sophistication, using metal reeds to resonate with the transmitted signal and operate one of a number of different relays. In the 1960s the availability of transistor-based equipment led to the rapid development of fully proportional servo-based "digital proportional" systems, achieved initially with discrete components, again driven largely by amateurs but resulting in commercial products. In the 1970s, integrated circuits made the electronics small, light and cheap enough for the 1960s-established multi-channel digital proportional systems to become much more widely available.
In the 1990s miniaturised equipment became widely available, allowing radio control of the smallest models, and by the 2000s radio control was commonplace even for the control of inexpensive toys. At the same time the ingenuity of modellers has been sustained and the achievements of amateur modelers using new technologies has extended to such applications as gas-turbine powered aircraft, aerobatic helicopters and submarines.
Before radio control, many models would use simple burning fuses or clockwork mechanisms to control flight or sailing times. Sometimes clockwork controllers would also control and vary direction or behaviour. Other methods included tethering to a central point (popular for model cars and hydroplanes), round the pole control for electric model aircraft and control lines (called u-control in the US) for internal combustion powered aircraft.
The first general use of radio control systems in models started in the late 1940s with single-channel self-built equipment; commercial equipment came soon thereafter. Initially, remote control systems used escapement (often rubber-driven) mechanical actuation in the model. Commercial sets often used ground-standing transmitters, long whip antennas with separate ground poles, and single-vacuum-tube receivers; the first kits had dual tubes for more selectivity. Such early systems were invariably superregenerative circuits, which meant that two controllers used in close proximity would interfere with one another. The requirement for heavy batteries to drive tubes also meant that model boat systems were more successful than model aircraft.
The advent of transistors greatly reduced the battery requirements, since the current requirements at low voltage were greatly reduced and the high voltage battery was eliminated. Low cost systems employed a superregenerative transistor receiver sensitive to a specific audio tone modulation, the latter greatly reducing interference from 27 MHz Citizens' band radio communications on nearby frequencies. Use of an output transistor further increased reliability by eliminating the sensitive output relay, a device subject to both motor-induced vibration and stray dust contamination.
In both tube and early transistor sets the model's control surfaces were usually operated by an electromagnetic escapement controlling the stored energy in a rubber-band loop, allowing simple rudder control (right, left, and neutral) and sometimes other functions such as motor speed, and kick-up elevator.
In the late 1950s, RC hobbyists had mastered tricks to manage proportional control of the flight control surfaces, for example by rapidly switching on and off reed systems, a technique called "skillful blipping" or more humorously "nervous proportional".
By the early 1960s transistors had replaced the tube, and electric motors driving control surfaces were more common. The first low-cost "proportional" systems did not use servos, but rather employed a bidirectional motor driven by a proportional pulse train consisting of two tones, pulse-width modulated (TTPW). This system, and another commonly known as "Kicking Duck/Galloping Ghost", was driven with a pulse train that caused the rudder and elevator to "wag" through a small angle (not affecting flight owing to the small excursions and high speed), with the average position determined by the proportions of the pulse train. A more sophisticated and unique proportional system was developed by Hershel Toomin of Electrosolids corporation, called the Space Control. This benchmark system used two tones, pulse-width and rate modulated, to drive four fully proportional servos, and was manufactured and refined by Zel Ritchie, who ultimately gave the technology to the Dunhams of Orbit in 1964. The system was widely imitated, and others (Sampey, ACL, DeeBee) tried their hand at developing what was then known as analog proportional. But these early analog proportional radios were very expensive, putting them out of reach of most modelers. Eventually, single-channel gave way to multi-channel devices (at significantly higher cost) with various audio tones driving electromagnets affecting tuned resonant reeds for channel selection.
Crystal oscillator superheterodyne receivers with better selectivity and stability made control equipment more capable and at lower cost. The constantly diminishing equipment weight was crucial to ever increasing modelling applications. Superheterodyne circuits became more common, enabling several transmitters to operate closely together and enabling further rejection of interference from adjacent Citizen's Band voice radio bands.
Multi-channel developments were of particular use to aircraft which really needed a minimum of three control dimensions (yaw, pitch and motor speed), as opposed to boats which can be controlled with two or one. Radio control 'channels' were originally outputs from a reed array, in other words, a simple on-off switch. To provide a usable control signal a control surface needs to be moved in two directions, so at least two 'channels' would be needed unless a complex mechanical link could be made to provide two-directional movement from a single switch. Several of these complex links were marketed during the 1960s, including the Graupner Kinematic Orbit, Bramco, and Kraft simultaneous reed sets.
Doug Spreng is credited with developing the first "digital" pulse-width feedback servo and, along with Don Mathis, developed and sold the first digital proportional radio, the "Digicon", followed by Bonner's Digimite and Hoover's F&M Digital 5.
With the electronics revolution, single-signal channel circuit design became redundant and instead, radios provided coded signal streams which a servomechanism could interpret. Each of these streams replaced two of the original 'channels', and, confusingly, the signal streams began to be called 'channels'. So an old on/off 6-channel transmitter which could drive the rudder, elevator and throttle of an aircraft was replaced with a new proportional 3-channel transmitter doing the same job. Controlling all the primary controls of a powered aircraft (rudder, elevator, ailerons and throttle) was known as 'full-house' control. A glider could be 'full-house' with only three channels.
Soon a competitive marketplace emerged, bringing rapid development. By the 1970s the trend for 'full-house' proportional radio control was fully established. Typical radio control systems for radio-controlled models employ pulse-width modulation (PWM), pulse-position modulation (PPM) and more recently spread-spectrum technology, and actuate the various control surfaces using servomechanisms. These systems made 'proportional control' possible, where the position of the control surface in the model is proportional to the position of the control stick on the transmitter.
PWM is most commonly used in radio control equipment today, where transmitter controls change the width (duration) of the pulse for that channel between 920 μs and 2120 μs, 1520 μs being the center (neutral) position. The pulse is repeated in a frame of between 10 and 30 milliseconds in length. Off-the-shelf servos respond directly to servo control pulse trains of this type using integrated decoder circuits, and in response they actuate a rotating arm or lever on the top of the servo. An electric motor and reduction gearbox drive the output arm and a variable component such as a potentiometer or tuning capacitor. The variable capacitor or resistor produces an error signal voltage proportional to the output position, which is then compared with the position commanded by the input pulse, and the motor is driven until a match is obtained. The pulse trains representing the whole set of channels are easily decoded into separate channels at the receiver using very simple circuits such as a Johnson counter. The relative simplicity of this system allows receivers to be small and light, and has been widely used since the early 1970s.
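A minimal Python sketch of the pulse-to-position mapping just described, using the 920/1520/2120 μs figures from the text (the normalized [-1, 1] output range is an illustrative convention, not part of any standard):

def pulse_to_deflection(width_us, lo=920, center=1520, hi=2120):
    """Map a servo pulse width in microseconds to a deflection in [-1, 1]."""
    width_us = max(lo, min(hi, width_us))  # clamp to the legal pulse range
    return (width_us - center) / (center - lo)

pulse_to_deflection(1520)  # 0.0 (neutral); 920 gives -1.0, 2120 gives +1.0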
Usually a single-chip 4017 decade counter is used inside the receiver to decode the transmitted multiplexed PPM signal to the individual "RC PWM" signals sent to each RC servo.
Often a Signetics NE544 IC or a functionally equivalent chip is used inside the housing of low-cost RC servos as the motor controller—it decodes that servo control pulse train to a position, and drives the motor to that position.
More recently, high-end hobby systems using pulse-code modulation (PCM) have come on the market, providing a digital bit-stream signal to the receiving device instead of analog pulse modulation. Advantages include bit-error checking of the data stream (good for verifying signal integrity) and fail-safe options such as throttling down the motor (if the model has one) or similar automatic actions on signal loss. However, PCM systems generally exhibit more lag because fewer frames are sent per second, as bandwidth is needed for the error-checking bits. PCM devices can only detect errors, and thus hold the last verified position or go into failsafe mode; they cannot correct transmission errors.
In the early 21st century, 2.4 gigahertz (GHz) transmissions have become increasingly utilised in high-end control of model vehicles and aircraft. This range of frequencies has many advantages. Because the 2.4 GHz wavelength is so short (about 12.5 centimetres), the antennas on the receivers do not need to exceed 3 to 5 cm. Electromagnetic noise, for example from electric motors, is not 'seen' by 2.4 GHz receivers, because such noise tends to fall around 10 to 150 MHz. The transmitter antenna only needs to be 10 to 20 cm long, and receiver power usage is much lower, so batteries last longer. In addition, no crystals or frequency selection is required, as the latter is performed automatically by the transmitter. However, the short wavelengths do not diffract as easily as the longer wavelengths used by PCM/PPM systems, so 'line of sight' is required between the transmitting antenna and the receiver. Also, should the receiver lose power, even for a few milliseconds, or get 'swamped' by 2.4 GHz interference, it can take a few seconds for the receiver (which, at 2.4 GHz, is almost invariably a digital device) to re-sync.
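As a quick check on these antenna lengths: wavelength = c / f = (3.0 × 10^8 m/s) / (2.4 × 10^9 Hz) ≈ 0.125 m, so a quarter-wave antenna at 2.4 GHz works out to roughly 3 cm, consistent with the short receiver antennas described above.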
== Design ==
RC electronics have three essential elements. The transmitter is the controller. Transmitters have control sticks, triggers, switches, and dials at the user's finger tips. The receiver is mounted in the model. It receives and processes the signal from the transmitter, translating it into signals that are sent to the servos and speed controllers. The number of servos in a model determines the number of channels the radio must provide.
Typically the transmitter multiplexes and modulates the signal into pulse-position modulation. The receiver demodulates and demultiplexes the signal and translates it into the special kind of pulse-width modulation used by standard RC servos and controllers.
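This multiplex/demultiplex round trip can be illustrated with a minimal Python sketch; the 300 μs separator and 20 ms frame are illustrative assumptions, not any particular manufacturer's timing:

def ppm_encode(channels_us, gap_us=300, frame_us=20000):
    """Encode per-channel pulse widths (µs) as a PPM frame: a list of
    intervals between successive pulses, ending with a long sync gap."""
    slots = [width + gap_us for width in channels_us]
    slots.append(frame_us - sum(slots))  # sync gap pads out the frame
    return slots

def ppm_decode(slots, gap_us=300):
    """Recover the per-channel pulse widths; the final slot is the sync gap."""
    return [slot - gap_us for slot in slots[:-1]]

channels = [1500, 1200, 1800]
assert ppm_decode(ppm_encode(channels)) == channels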
In the 1980s, the Japanese electronics company Futaba copied wheeled steering for RC cars; it was originally developed by Orbit for a transmitter specially designed for Associated cars. It has been widely accepted, along with a trigger control for the throttle. Usually configured for right-handed users, the transmitter looks like a pistol with a wheel attached on its right side. Pulling the trigger accelerates the car forward, while pushing it either brakes the car or causes it to go into reverse. Some models are available in left-handed versions.
== Mass production ==
There are thousands of RC vehicles available. Most are toys suitable for children. What separates toy-grade RC from hobby-grade RC is the modular character of the standard RC equipment. RC toys generally have simplified circuits, often with the receiver and servos incorporated into one circuit, and it is almost impossible to take such a toy circuit and transplant it into another RC model.
=== Hobby grade RC ===
Hobby grade RC systems have modular designs. Many cars, boats, and aircraft can accept equipment from different manufacturers, so it is possible to take RC equipment from a car and install it into a boat, for example.
However, moving the receiver component between aircraft and surface vehicles is illegal in most countries as radio frequency laws allocate separate bands for air and surface models. This is done for safety reasons.
Most manufacturers now offer plug-in "frequency modules" (and, earlier, interchangeable crystals) that simply attach to the back of their transmitters, allowing one to change frequencies, and even bands, at will. Some of these modules are capable of "synthesizing" many different channels within their assigned band.
Hobby grade models can be fine tuned, unlike most toy grade models. For example, cars often allow toe-in, camber and caster angle adjustments, just like their real-life counterparts. All modern "computer" radios allow each function to be adjusted over several parameters for ease in setup and adjustment of the model. Many of these transmitters are capable of "mixing" several functions at once, which is required for some models.
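As one common case of such mixing, elevon mixing for a flying-wing model combines the elevator and aileron commands into two control surfaces; a minimal Python sketch (the [-1, 1] ranges and sign conventions are illustrative assumptions):

def elevon_mix(elevator, aileron):
    """Mix elevator and aileron commands into left/right elevon deflections."""
    def clamp(v):
        return max(-1.0, min(1.0, v))
    return clamp(elevator + aileron), clamp(elevator - aileron)

elevon_mix(0.5, 0.25)  # (0.75, 0.25): both surfaces deflect, more on the left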
Many of the most popular hobby grade radios were first developed, and mass-produced in Southern California by Orbit, Bonner, Kraft, Babcock, Deans, Larson, RS, S&O, and Milcott. Later, Japanese companies like Futaba, Sanwa and JR took over the market.
== Types ==
=== Aircraft ===
Radio-controlled aircraft (also called RC aircraft) are small aircraft that can be controlled remotely. There are many different types, ranging from small park flyers to large jets and mid-sized aerobatic models.
The aircraft use many different methods of propulsion, ranging from brushed or brushless electric motors, to internal combustion engines, to the most expensive gas turbines. The fastest aircraft, dynamic slope soarers, can reach speeds of over 450 mph (720 km/h) by dynamic soaring, repeatedly circling through the gradient of wind speeds over a ridge or slope. Newer jets can achieve above 300 mph (480 km/h) in a short distance.
=== Tanks ===
Radio-controlled tanks are replicas of armored fighting vehicles that can move and rotate the turret, and some can even fire, all from the hand-held transmitter. Radio-controlled tanks are produced in numerous scale sizes; commercial offerings include:
1/35th scale. Probably the best known make in this scale is by Tamiya.
1/24 scale. This scale often includes a mounted Airsoft gun; possibly the best offering is by Tokyo Marui, but there are imitations by Heng Long, who offer cheap remakes of the tanks. The downside of the Heng Long imitations is that they were standardized on their Type 90 tank, which has six road wheels; they then produced a Leopard 2 and an M1A2 Abrams on the same chassis, even though both of those tanks have seven road wheels.
1/16 scale is the more imposing vehicle design scale. Tamiya produce some of the best models in this scale; these usually include realistic features like flashing lights, engine sounds, main gun recoil and, on their Leopard 2A6, an optional gyro-stabilization system for the gun. Chinese manufacturers such as Heng Long and Matorro also produce a variety of high-quality 1/16 tanks and other AFVs.
Both the Tamiya and the Heng Long vehicles can make use of an Infra Red battle system, which attaches a small IR "gun" and target to the tanks, allowing them to engage in direct battle.
As with cars, tanks range from ready-to-run models to full assembly kits.
Beyond the mainstream commercial offerings, 1/6 and 1/4 scale vehicles are available. The largest RC tank available anywhere in the world is the King Tiger in 1/4 scale, over 8 feet (2.4 m) long. These GRP fiberglass tanks were originally created and produced by Alex Shlakhter.
=== Cars ===
A radio-controlled car is a powered model car driven from a distance. Gasoline, nitro-methanol and electric cars exist, designed to be run both on and off-road. "Gas" cars traditionally use petrol (gasoline), though many hobbyists run 'nitro' cars, using a mixture of methanol and nitromethane, to get their power.
=== Logistic ===
Logistic RC models include tractor units, semi-trailer trucks, semi-trailers, terminal tractors, refrigerator trucks, forklift trucks, empty-container handlers, and reach stackers. Most are built to 1:14 scale and run on electric motors.
=== Helicopters ===
Radio-controlled helicopters, although often grouped with RC aircraft, are unique because of the differences in construction, aerodynamics and flight training. Several designs of RC helicopter exist: some have limited maneuverability (and are thus easier to learn to fly), while others have more maneuverability (and are thus harder to learn to fly).
=== Boats ===
Radio-controlled boats are model boats controlled remotely with radio control equipment. The main types of RC boat are scale models (12 inches (30 cm) to 144 inches (365 cm) in size), sailing boats and power boats; power boats are the most popular amongst toy-grade models. Radio-controlled models were used for the children's television program Theodore Tugboat.
Out of radio-controlled model boats sprang up a new hobby—gas-powered model boating.
Radio-controlled, gasoline-powered model boats first appeared in 1962 designed by engineer Tom Perzinka of Octura Models. The gas model boats were powered with O&R (Ohlsson and Rice) small 20 cc ignition gasoline utility engines. This was a completely new concept in the early years of available radio-control systems. The boat was called the "White Heat" and was a hydro design, meaning it had more than one wetted surface.
Towards the late 1960s and early 1970s another gasoline-powered model was created, powered by a similar chainsaw engine. This boat was named "The Moppie" after its full-size counterpart. Like the White Heat before it, between the costs of production, engine, and radio equipment, the project failed in the market and perished.
By 1970, nitro (glow ignition) power became the norm for model boating.
In 1982 Tony Castronovo, a hobbyist in Fort Lauderdale, Florida, marketed the first production gasoline string trimmer engine powered (22 cc gasoline ignition engine) radio-controlled model boat in a 44-inch vee-bottom boat. It achieved a top speed of 30 miles per hour. The boat was marketed under the trade name "Enforcer" and sold by his company Warehouse Hobbies, Inc. The following years of marketing and distribution aided the spread of gasoline-powered model boating throughout the US, Europe, Australia, and many countries around the world.
As of 2010, gasoline radio-controlled model boating has grown worldwide. The industry has spawned many manufacturers and thousands of model boaters. Today the average gasoline-powered boat can easily run at speeds over 45 mph, with the more exotic gas boats running at speeds exceeding 90 mph. That year also saw ML Boatworks develop laser-cut wood scale hydroplane racing kits, which rejuvenated a sector of the hobby that was turning to composite boats instead of the classic art of building wood models. These kits also gave fast-electric modelers a much-needed platform in the hobby.
Many of Tony Castronovo's designs and innovations in gasoline model boating are the foundation upon which the industry has been built. He was first to introduce surface drive on a Vee hull (propeller hub above the water line) to model boating which he named "SPD" (surface planing drive) as well as numerous products and developments relative to gasoline-powered model boating. He and his company continue to produce gasoline-powered model boats and components.
=== Submarines ===
Radio-controlled submarines can range from inexpensive toys to complex projects involving sophisticated electronics. Oceanographers and the military also operate radio-controlled submarines.
=== Combat robotics ===
The majority of robots used in shows such as Battlebots and Robot Wars are remotely controlled, relying on most of the same electronics as other radio-controlled vehicles. They are frequently equipped with weapons for the purpose of damaging opponents, including but not limited to hammering axes, "flippers" and spinners.
== Power ==
=== Internal combustion ===
Internal combustion engines for remote control models have typically been two-stroke engines that run on specially blended fuel. Engine sizes are typically given in cm³ or cubic inches, ranging from tiny .02 in³ engines to huge 1.60 in³ or larger; for even larger sizes, many modelers turn to four-stroke or gasoline engines (see below). Glow plug engines have an ignition device with a platinum wire coil in the glow plug that glows catalytically in the presence of the methanol in glow-engine fuel, providing the combustion source.
Since 1976, practical "glow" ignition four stroke model engines have been available on the market, ranging in size from 3.5 cm3 upwards to 35 cm3 in single cylinder designs. Various twin and multi-cylinder glow ignition four stroke model engines are also available, echoing the appearance of full sized radial, inline and opposed cylinder aircraft powerplants. The multi-cylinder models can become enormous, such as the Saito five cylinder radial. They tend to be quieter in operation than two stroke engines, using smaller mufflers, and also use less fuel.
Glow engines tend to produce large amounts of oily mess due to the oil in the fuel. They are also much louder than electric motors.
Another alternative is the gasoline engine. While glow engines run on special and expensive hobby fuel, gasoline engines run on the same fuel that powers cars, lawnmowers and weed whackers. These typically run on a two-stroke cycle, but are radically different from glow two-stroke engines: they are typically much larger, like the 80 cm³ Zenoah. These engines can develop several horsepower, remarkable for something that can be held in the palm of the hand.
=== Electrical ===
Electric power is often the chosen form of power for aircraft, cars and boats. Electric power in aircraft in particular has become popular recently, mainly due to the popularity of park flyers and the development of technologies like brushless motors and lithium polymer batteries. These allow electric motors to produce much more power, rivaling that of fuel-powered engines. It is also relatively simple to increase the torque of an electric motor at the expense of speed, while it is much less common to do so with a fuel engine, perhaps because of its rougher running. This permits a more efficient larger-diameter propeller to be used, which provides more thrust at lower airspeeds (e.g. an electric glider climbing steeply to a good thermalling altitude).
In aircraft, cars, trucks and boats, glow and gas engines are still used even though electric power has been the most common form of power for a while. In a typical brushless motor and speed controller combination for radio-controlled cars, the speed controller is almost as large as the motor itself because of its integrated heat sink. Due to size and weight limitations, heat sinks are not common in RC aircraft electronic speed controllers (ESCs), so the ESC is almost always smaller than the motor.
== Controlling methods ==
Remote Control:
Most RC models make use of a handheld transmitter with an antenna that sends signals to the vehicle's receiver. There are two control sticks. The left stick changes the altitude of a flying vehicle or moves a ground vehicle forward or in reverse. In some flying-model controllers this stick stays wherever the finger places it; in others a spring returns it to its neutral position once released. Generally, in remotes used for ground vehicles, the left stick's neutral position is in the centre. The right stick moves a flying vehicle around in the air in different directions; for ground vehicles it is used for steering. The controller also has trim settings, which help keep the vehicle tracking in one direction. Many low-grade RC vehicles include a charging cable inside the remote, with a green light indicating that the battery is charging.
Phone and tablet control:
With the spread of touch-screen devices, mostly phones and tablets, many RC vehicles can be controlled from Apple or Android devices. The operating system's app store carries an app specifically for the particular RC model. The controls of this virtual remote are almost identical to those of a physical remote control, but can vary depending on the type of vehicle. The device itself is not included with the vehicle set, but the box does come with a radio chip that inserts into the headphone jack of any smartphone or tablet.
== See also ==
Anderson Powerpole connector
JST connector
Drone racing
Model yachting
== References == | Wikipedia/Radio-controlled_model |
Evolution strategy (ES), in computer science, is a subclass of evolutionary algorithms that serves as an optimization technique. It uses the major genetic operators of mutation, recombination and selection of parents.
== History ==
The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.
== Methods ==
Evolution strategies use natural problem-dependent representations, so problem space and search space are identical. In common with evolutionary algorithms, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met.
The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it. The ES is briefly presented here in its standard form, noting that there are many variants. The real-valued chromosome contains, in addition to the $n$ decision variables, $n'$ mutation step sizes $\sigma_j$, where $1 \leq j \leq n' \leq n$. Often one mutation step size is used for all decision variables, or each decision variable has its own step size. Mate selection to produce $\lambda$ offspring is random, i.e. independent of fitness. First, new mutation step sizes are generated per mating by intermediate recombination of the parental $\sigma_j$, with subsequent mutation as follows:

$$\sigma'_j = \sigma_j \cdot e^{\mathcal{N}(0,1) - \mathcal{N}_j(0,1)}$$

where $\mathcal{N}(0,1)$ is a normally distributed random variable with mean $0$ and standard deviation $1$. The draw $\mathcal{N}(0,1)$ applies to all $\sigma'_j$, while $\mathcal{N}_j(0,1)$ is newly determined for each $\sigma'_j$. Next, discrete recombination of the decision variables is followed by a mutation using the new mutation step sizes as standard deviations of the normal distribution. The new decision variables $x'_j$ are calculated as follows:

$$x'_j = x_j + \mathcal{N}_j(0, \sigma'_j)$$
This results in an evolutionary search on two levels: first at the level of the problem itself, and second at the level of the mutation step sizes. In this way it can be ensured that the ES searches for its target in ever finer steps. However, this also carries the danger that larger invalid areas of the search space can then be crossed only with difficulty.
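A minimal Python sketch of one mutation step as defined by the two formulas above, for the case where every decision variable has its own step size; recombination and selection are omitted, and practical implementations usually also scale the exponent with learning-rate factors:

import math
import random

def mutate(x, sigma):
    """Self-adapt the step sizes, then perturb the decision variables."""
    common = random.gauss(0.0, 1.0)  # N(0,1), shared by all sigma_j
    sigma_new = [s * math.exp(common - random.gauss(0.0, 1.0)) for s in sigma]
    x_new = [xj + random.gauss(0.0, sj) for xj, sj in zip(x, sigma_new)]
    return x_new, sigma_new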
== Variants ==
The ES knows two variants of best selection for the generation of the next parent population ($\mu$ = number of parents, $\lambda$ = number of offspring):

$(\mu, \lambda)$: the $\mu$ best offspring are used for the next generation (usually $\mu = \lambda/2$).

$(\mu + \lambda)$: the best are selected from the union of the $\mu$ parents and the $\lambda$ offspring.

Bäck and Schwefel recommend that the value of $\lambda$ should be approximately seven times $\mu$, whereby $\mu$ must not be chosen too small because of the strong selection pressure. Suitable values for $\mu$ are application-dependent and must be determined experimentally. The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function.
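Both selection schemes reduce to a sort and a truncation; a minimal Python sketch, in which individuals are represented simply by their fitness values with smaller being better (an illustrative convention):

def select(parents, offspring, mu, plus=False):
    """ES selection: (mu, lambda) keeps the mu best offspring only;
    (mu + lambda) selects from parents and offspring together.
    Rank-based, hence invariant under monotonic transformations."""
    pool = list(parents) + list(offspring) if plus else list(offspring)
    return sorted(pool)[:mu]

select([3.0], [1.0, 2.5, 4.0], mu=2)             # comma variant: [1.0, 2.5]
select([0.5], [1.0, 2.5, 4.0], mu=2, plus=True)  # plus variant:  [0.5, 1.0]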
The simplest and oldest evolution strategy, the $(1+1)$-ES, operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as that of the parent does it become the parent of the next generation; otherwise the mutant is disregarded. More generally, $\lambda$ mutants can be generated and compete with the parent, called $(1+\lambda)$. In $(1,\lambda)$ the best mutant becomes the parent of the next generation while the current parent is always disregarded. For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.
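The $(1+1)$-ES just described fits in a few lines; this sketch uses a fixed step size for simplicity, whereas practical variants adapt it over the run:

import random

def one_plus_one_es(f, x, sigma=1.0, iters=200):
    """Minimize f: keep the mutant only if it is at least as good as the parent."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + random.gauss(0.0, sigma) for xi in x]  # mutate the parent
        fy = f(y)
        if fy <= fx:  # "at least as good": the mutant becomes the new parent
            x, fx = y, fy
    return x, fx

one_plus_one_es(lambda v: sum(t * t for t in v), [5.0, -3.0])  # approaches the origin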
Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES). When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation. In 2025, Chen et al. proposed a multi-agent evolution strategy for consensus-based distributed optimization, in which a novel step adaptation method helps multiple agents control the step size cooperatively.
== See also ==
Covariance matrix adaptation evolution strategy (CMA-ES)
Derivative-free optimization
Evolutionary computation
Genetic algorithm
Natural evolution strategy
Evolutionary game theory
== References ==
== Bibliography ==
Ingo Rechenberg (1971): Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Frommann-Holzboog (1973). ISBN 3-7728-1642-8
Hans-Paul Schwefel (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
Hans-Paul Schwefel: Evolution and Optimum Seeking. New York: Wiley & Sons 1995. ISBN 0-471-57148-2
H.-G. Beyer and H.-P. Schwefel. Evolution Strategies: A Comprehensive Introduction. Journal Natural Computing, 1(1):3–52, 2002.
Hans-Georg Beyer: The Theory of Evolution Strategies. Springer, April 27, 2001. ISBN 3-540-67297-4
Ingo Rechenberg: Evolutionsstrategie '94. Stuttgart: Frommann-Holzboog 1994. ISBN 3-7728-1642-8
J. Klockgether and H. P. Schwefel (1970). Two-Phase Nozzle And Hollow Core Jet Experiments. AEG-Forschungsinstitut. MDH Staustrahlrohr Project Group. Berlin, Federal Republic of Germany. Proceedings of the 11th Symposium on Engineering Aspects of Magneto-Hydrodynamics, Caltech, Pasadena, Cal., 24.–26.3. 1970.
M. Emmerich, O.M. Shir, and H. Wang: Evolution Strategies. In: Handbook of Heuristics, 1-31. Springer International Publishing (2018).
== Research centers ==
Bionics & Evolutiontechnique at Technische Universität Berlin
Chair of Algorithm Engineering (Ls11) – TU Dortmund University
Collaborative Research Center 531 – TU Dortmund University | Wikipedia/Evolution_strategies |
Strategy video games are a major video game genre that emphasizes analysis and planning over direct quick reaction in order to secure success.
Although many types of video games can contain strategic elements, the strategy genre is most commonly defined by a primary focus on high-level strategy, logistics and resource management.
Strategy games are usually divided into two main sub-categories, turn-based and real-time, but there are also many cross-genres and sub-genres that feature additional elements such as tactics, diplomacy, economics and exploration.
== Typical experience ==
A player must plan a series of actions against one or more opponents, and the reduction of enemy forces is usually a goal. Victory is achieved through superior planning, and the element of chance takes a smaller role. In most strategy video games, the player is given a godlike view of the game world, and indirectly controls game units under their command. Most strategy games involve elements of warfare to varying degrees, and feature a combination of tactical and strategic considerations. In addition to combat, these games often challenge the player's ability to explore or manage an economy.
== Relationship to other genres ==
Even though there are many action games that involve strategic thinking, they are seldom classified as strategy games. A strategy game is typically larger in scope, and its main emphasis is on the player's ability to outthink their opponent. Strategy games rarely involve a physical challenge, and tend to annoy strategically minded players when they do. Compared to other genres such as action or adventure games where one player takes on many enemies, strategy games usually involve some level of symmetry between sides. Each side generally has access to similar resources and actions, with the strengths and weaknesses of each side being generally balanced.
Although strategy games involve strategic, tactical, and sometimes logistical challenges, they are distinct from puzzle games. A strategy game calls for planning around a conflict between players, whereas puzzle games call for planning in isolation. Strategy games are also distinct from construction and management simulations, which include economic challenges without any fighting. These games may incorporate some amount of conflict, but are different from strategy games because they do not emphasize the need for direct action upon an opponent. Nevertheless, some authors consider construction and management simulation games, in particular city-building games, as a part of the wider strategy game genre.
Although strategy games are similar to role-playing video games in that the player must manage units with a variety of numeric attributes, RPGs tend to be about a smaller number of unique characters, while strategy games focus on larger numbers of fairly similar units.
== Game design ==
=== Units and conflict ===
The player commands their forces by selecting a unit, usually by clicking it with the mouse, and issuing an order from a menu. Keyboard shortcuts become important for advanced players, as speed is often an important factor. Units can typically move, attack, stop and hold a position, although some strategy games offer more complex orders. Units may even have specialized abilities, such as the ability to become invisible to other units, usually balanced by abilities that detect otherwise invisible things. Some strategy games even offer special leader units that provide a bonus to other units. Units may also have the ability to sail or fly over otherwise impassable terrain, or provide transport for other units. Non-combat abilities often include the ability to repair or construct other units or buildings.
Even in imaginary or fantastic conflicts, strategy games try to reproduce important tactical situations throughout history. Techniques such as flanking, making diversions, or cutting supply lines may become integral parts of managing combat. Terrain becomes an important part of strategy, since units may gain or lose advantages based on the landscape. Some strategy games such as Civilization III and Medieval 2: Total War involve other forms of conflict such as diplomacy and espionage. However, warfare is the most common form of conflict, as game designers have found it difficult to make non-violent forms of conflict as appealing.
=== Economy, resources and upgrades ===
Strategy games often involve other economic challenges. These can include building construction, population maintenance, and resource management. Strategy games frequently make use of a windowed interface to manage these complex challenges.
Most strategy games allow players to accumulate resources which can be converted to units, or converted to buildings such as factories that produce more units. The quantity and types of resources vary from game to game. Some games will emphasize resource acquisition by scattering large quantities throughout the map, while other games will put more emphasis on how resources are managed and applied by balancing the availability of resources between players. To a lesser extent, some strategy games give players a fixed quantity of units at the start of the game.
Strategy games often allow the player to spend resources on upgrades or research. Some of these upgrades enhance the player's entire economy. Other upgrades apply to a unit or class of units, and unlock or enhance certain combat abilities. Sometimes enhancements become available only after constructing a building that unlocks more advanced structures. Games with a large number of upgrades often feature a technology tree, which is a series of advancements that players can research to unlock new units, buildings, and other capabilities. Technology trees are quite large in some games, and 4X strategy games are known for having the largest.
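At its core a technology tree is a directed graph of prerequisites; a minimal Python sketch (the technology names are invented for illustration):

# a hypothetical tech tree: each technology maps to its prerequisites
TECH_TREE = {
    "bronze_working": [],
    "furnace": [],
    "iron_working": ["bronze_working"],
    "steel": ["iron_working", "furnace"],
}

def can_research(tech, researched):
    """A technology unlocks once all of its prerequisites have been researched."""
    return all(p in researched for p in TECH_TREE[tech])

can_research("steel", {"bronze_working", "iron_working"})  # False: "furnace" missing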
A build order is a linear pattern of production, research, and resource management aimed at achieving a specific and specialized goal. Build orders are analogous to chess openings in that a player will have a specific order of play in mind; however, the length of the build order, the strategy around which it is built, and even which build order is used vary with each player's skill, ability and other factors, such as how aggressive or defensive each player is.
=== Map and exploration ===
Early strategy games featured a top-down perspective, similar in nature to a board game or paper map. Many later games adopted an isometric perspective. Even with the rise of 3D graphics and the potential to manipulate the camera, games usually feature some kind of aerial view. Very rarely do strategy games show the world from the perspective of an avatar on the ground. This is to provide the player with a big-picture view of the game world, helping them form more effective strategies.
Exploration is a key element in most strategy games. The landscape is often shrouded in darkness, and this darkness is lifted as a player's units enter the area. The ability to explore may be inhibited by different kinds of terrain, such as hills, water, or other obstructions. Even after an area is explored, that area may become dim if the player does not patrol it. This design technique is called the fog of war: the player can see the terrain of an explored area but not the units within it, which makes it possible for enemies to attack unexpectedly from otherwise explored areas.
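The three visibility states this implies (unexplored, explored but fogged, currently visible) can be sketched on a tile grid in Python; the square sight radius and the dictionary layout are illustrative assumptions:

from enum import Enum

class Visibility(Enum):
    UNEXPLORED = 0  # black shroud: terrain still unknown
    EXPLORED = 1    # fog of war: terrain shown, enemy units hidden
    VISIBLE = 2     # inside a friendly unit's sight radius

def update_visibility(grid, units, sight=2):
    """Downgrade stale sight to fog, then reveal tiles near friendly units."""
    for pos, state in grid.items():
        if state is Visibility.VISIBLE:
            grid[pos] = Visibility.EXPLORED
    for ux, uy in units:
        for x, y in grid:
            if abs(x - ux) <= sight and abs(y - uy) <= sight:
                grid[(x, y)] = Visibility.VISIBLE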
=== Real-time versus turn-based ===
Strategy video games are categorized based on whether they offer the continuous gameplay of real-time strategy, or the discrete phases of turn-based strategy. These differences in time-keeping lead to several other differences. Typically, turn-based strategy games have stronger artificial intelligence than real-time strategy games, since the turn-based pace allows more time for complex calculations. But a real-time artificial intelligence makes up for this disadvantage with its ability to manage multiple units more quickly than a human. Overall, real-time strategy games are more action-oriented, as opposed to the abstract planning emphasized in turn-based strategy.
The relative popularity of real-time strategy has led some critics to conclude that more gamers prefer action-oriented games. Fans of real-time strategy have criticized the wait times associated with turn-based games, and praised the challenge and realism associated with making quick decisions in real-time. In contrast, turn-based strategy fans have criticized real-time strategy games because most units do not behave appropriately without orders, and thus a turn-based pace allows players to input more realistic and detailed plans. Game theorists have noted that strategic thinking does not lend itself well to real-time action, and turn-based strategy purists have criticized real-time strategy games for replacing "true strategy" with gameplay that rewards "rapid mouse-clicking". Overall, reviewers have been able to recognize the advantages associated with both of the main types of strategy games.
=== Strategy versus tactics ===
Most strategy video games involve a mix of both strategy and tactics. "Tactics" usually refer to how troops are utilized in a given battle, whereas "strategy" describes the mix of troops, the location of the battle, the commander's larger goals or military doctrine, as well as the act of building up something (a base, economy, etc.). However, there is also a growing subgenre of purely tactical games, which are referred to as real-time tactics, and turn-based tactics. These types of games are sometimes categorized as "strategy" games. Game reviewers and scholars sometimes debate whether they are using terminology such as "tactics" or "strategy" appropriately. Chris Taylor, the designer of Total Annihilation and Supreme Commander, has gone so far as to suggest that real-time strategy titles are more about tactics than strategy. But releases that are considered pure tactical games usually provide players with a fixed set of units, and downplay other strategic considerations such as manufacturing, and resource management. Tactical games are strictly about combat, and typically focus on individual battles, or other small sections in a larger conflict.
=== Settings and themes ===
Strategy games can take place in a number of settings. Depending on the theatre of warfare, releases may be noted as naval strategy games, or space strategy games. A title may be noted for its grand strategic scale, whether the game is real-time, or turn-based. Strategy games also draw on a number of historical periods, including World War II, the medieval era, or the Napoleonic era. Some examples of these are: Hearts of Iron IV, Europa Universalis IV, and Victoria II. Some strategy games are even based in an alternate history, by manipulating and rewriting certain historical facts. It is also common to see games based in science fiction or futuristic settings, as well as fantasy settings.
Some strategy games are abstract, and do not try to represent a world with high fidelity. Although many of these may still involve combat in the sense that units can capture or destroy each other, these games sometimes offer non-combat challenges such as arranging units in specific patterns. However, the vast majority of computerized strategy games are representational, with more complex game mechanics.
=== Single player, multiplayer, and massively multiplayer ===
Strategy games include single-player gameplay, multiplayer gameplay, or both. Single player games will sometimes feature a campaign mode, which involves a series of matches against several artificial intelligence opponents. Finishing each match or mission will advance the game's plot, often with cut scenes, and some games will reward a completed mission with new abilities or upgrades. Hardcore strategy gamers tend to prefer multiplayer competition, where human opponents provide more challenging competition than the artificial intelligence. Artificial intelligence opponents often need hidden information or bonuses to provide a challenge to players.
More recently, massively multiplayer online strategy games have appeared such as Shattered Galaxy from 2001. However, these games are relatively difficult to design and implement compared to other massively multiplayer online games, as the numerous player-controlled units create a larger volume of online data. By 2006, reviewers expressed disappointment with the titles produced thus far. Critics argued that strategy games are not conducive to massively multiplayer gameplay. A single victory cannot have much impact in a large persistent world, and this makes it hard for a player to care about a small victory, especially if they are fighting for a faction that is losing an overall war. However, more recent developers have tried to learn from past mistakes, resulting in Dreamlords from 2007, and Saga from 2008. In 2012, Supercell released Clash of Clans, a mobile strategy video game.
== History ==
The origin of strategy video games is rooted in traditional tabletop strategy games like Chess, Checkers and Go, as well as board and miniature wargaming. The Sumerian Game, an early mainframe game written by Mabel Addis, based on the ancient Sumerian city-state of Lagash, was an economic simulation strategy game.
The first console strategy game was a Risk-like game called Invasion, released in 1972 for the Magnavox Odyssey. Strategic Simulations (SSI)'s Computer Bismarck, released in 1980, was the first historical computer wargame. Companies such as SSI, Avalon Hill, MicroProse, and Strategic Studies Group released many strategy titles throughout the 1980s. Reach for the Stars from 1983 was one of the first 4X strategy games, which expanded upon the relationship between economic growth, technological progress, and conquest. That same year, Nobunaga's Ambition was a conquest-oriented grand strategy wargame with historical simulation elements. The Lords of Midnight combined elements of adventure, strategy and wargames, and won the Crash magazine award for Best Adventure game of 1984, as well as Best Strategy Game of the Year at the Golden Joystick Awards.
1989's Herzog Zwei is often considered the first real-time strategy game, although real-time strategy elements can be found in several earlier games, such as Dan Bunten's Cytron Masters and Don Daglow's Utopia in 1982; Kōji Sumii's Bokosuka Wars in 1983; D. H. Lawson and John Gibson's Stonkers and Steven Faber's Epidemic! in 1983; and Evryware's The Ancient Art of War in 1984.
The genre was popularized by Dune II three years later in 1992. Brett Sperry, the creator of Dune II, coined the name "real-time strategy" to help market the new game genre he helped popularize. Real-time strategy games changed the strategy genre by emphasizing the importance of time management, with less time to plan. Real-time strategy games eventually began to outsell turn-based strategy games. With more than 11 million copies sold worldwide by February 2009, StarCraft (1998) became one of the best-selling games for the personal computer. It has been praised for pioneering the use of unique "factions" in RTS gameplay, and for having a compelling story.
2002's Warcraft III: Reign of Chaos has been an influence on real-time strategy games, especially the addition of role-playing elements and heroes as units. More than the game itself, mods created with the World Editor led to lasting changes and inspired many future strategy games. Defense of the Ancients (DotA), a community-created mod based on Warcraft III, is largely attributed as being the most significant inspiration for the multiplayer online battle arena (MOBA) format. Since the format was tied to the Warcraft property, developers began to work on their own "DOTA-style" games, including Heroes of Newerth (2009), League of Legends (2010), and the mod's standalone sequel, Dota 2 (2013). Blizzard Entertainment, the owner of Warcraft property, developed a game inspired by DotA titled Heroes of the Storm (2015), which features an array of heroes from Blizzard's franchises, including numerous heroes from Warcraft III. Former game journalist Luke Smith called DotA "the ultimate RTS".
Since its first title was released in 2000, the Total War series by the Creative Assembly has sold over 20 million copies, becoming one of the most successful series of strategy games of all time.
== Subgenres ==
=== 4X ===
4X games are a genre of strategy video game in which players control an empire and "explore, expand, exploit, and exterminate". The term was first coined by Alan Emrich in his September 1993 preview of Master of Orion for Computer Gaming World. Since then, others have adopted the term to describe games of similar scope and design.
4X games are noted for their deep, complex gameplay. Emphasis is placed upon economic and technological development, as well as a range of non-military routes to supremacy. Many 4X games also fit into the category of grand strategy. Games can take a long time to complete since the amount of micromanagement needed to sustain an empire scales as the empire grows. 4X games are sometimes criticized for becoming tedious for these reasons, and several games have attempted to address these concerns by limiting micromanagement.
The earliest 4X games borrowed ideas from board games and 1970s text-based computer games. The first 4X games were turn-based, but real-time 4X games are not uncommon. Many 4X games were published in the mid-1990s, but were later outsold by other types of strategy games. Sid Meier's Civilization and the Total War series are important examples from this formative era, and popularized the level of detail that would later become a staple of the genre. In the 2000s, several 4X releases have become critically and commercially successful.
=== Grand Strategy ===
Grand strategy games emphasize the management of a nation and the coordination of its resources. Diplomacy and war interact with each other and become the primary means of reshaping a world map made up of various states. Players use their nation's resources to achieve national goals such as world domination, whether through military might, diplomacy, or economics. Unlike 4X games, grand strategy games may de-emphasize elements such as exploration, although these can still be present. Notable examples of grand strategy games include the Europa Universalis, Hearts of Iron and Crusader Kings series.
=== Artillery ===
Artillery is the generic name for early two- or three-player (usually turn-based) computer games involving tanks fighting each other in combat, or similar derivative games. Artillery games are among the earliest computer games developed; the theme of such games is an extension of the original uses of computers themselves, which were once used to calculate the trajectories of rockets and other related military-based calculations. Artillery games have typically been described as a type of turn-based tactics game, though they have also been described as a type of "shooting game". Examples of this genre are Pocket Tanks, Hogs of War, Scorched 3D and the Worms series.
Early precursors to the modern artillery-type games were text-only games that simulated artillery entirely with input data values. A BASIC game known simply as Artillery was written by Mike Forman and was published in Creative Computing magazine in 1976. This seminal home computer version of the game was revised in 1977 by M. E. Lyon and Brian West and was known as War 3; War 3 was revised further in 1979 and published as Artillery-3. These early versions of turn-based tank combat games interpreted human-entered data such as the distance between the tanks, the velocity or "power" of the shot fired and the angle of the tanks' turrets.
=== Auto battler (auto chess) ===
Auto battler, also known as auto chess, is a type of strategy game that features chess-like elements where players place characters on a grid-shaped battlefield during a preparation phase, who then fight the opposing team's characters without any further direct input from the player. It was created and popularized by Dota Auto Chess in early 2019, and saw more games in the genre by other studios, such as Teamfight Tactics, Dota Underlords, and Hearthstone Battlegrounds releasing soon after.
=== Multiplayer online battle arena (MOBA) ===
Multiplayer online battle arena (MOBA) is a genre of strategy video games where two teams of players compete to destroy the opposing team's main structure while defending their own. Players control characters called "heroes" or "champions" with unique abilities, and are aided by computer-controlled units that march along set paths (called "lanes") toward the enemy base. The first team to destroy the enemy's base wins. MOBA games combine elements of real-time strategy, role-playing, and action games, focusing on team coordination, character progression, and fast-paced combat. Unlike traditional real-time strategy games, players do not build structures or units.
The genre gained popularity in the early 2010s, with Defense of the Ancients mod for Warcraft III, League of Legends, Dota 2, Heroes of the Storm, and Smite. MOBA games are well-represented in esports, with prize pools reaching tens of millions of dollars.
=== Construction and management simulation games ===
In management simulation games, players build, expand or manage fictional communities or projects with limited resources. Tycoon, city-building, business simulation and transport management games are considered by some authors as part of a wider subgenre of strategy games, while others consider them a separate video game genre. Some games of this subgenre, like The Settlers, can include warfare, but this is not an essential element in them. Other strategy video games sometimes incorporate CMS aspects into their game economy, as players must manage resources while expanding their project; examples include base building and resource management in the XCOM series.
=== Real-time strategy (RTS) ===
Usually applied only to certain computer strategy games, the moniker real-time strategy (RTS) indicates that the action in the game is continuous, and players have to make their decisions and take their actions against the backdrop of a constantly changing game state. Computer real-time strategy gameplay is characterised by obtaining resources, building bases, researching technologies and producing units. Very few non-computer strategy games are real-time; one example is Icehouse.
Some players dispute the importance of strategy in real-time strategy games, as skill and manual dexterity are often seen as the deciding factor in this genre of game. According to Troy Dunniway, "A player controls hundreds of units, dozens of buildings and many different events that are all happening simultaneously. There is only one player, and he can only pay attention to one thing at a time. Expert players can quickly flip between many different tasks, while casual gamers have more problems with this." Ernest Adams goes so far as to suggest that real-time gameplay interferes with strategy. "Strategic thinking, at least in the arena of gameplay, does not lend itself well to real-time action".
Many strategy players claim that many RTS games really should be labeled "real-time tactical" (RTT) games, since the gameplay revolves entirely around tactics, with little or even no strategy involved. Massively multiplayer online games (MMOG or MMO) in particular have had a difficult time implementing strategy, since having strategy implies some mechanism for "winning". MMO games, by their nature, are typically designed to be never-ending. Nevertheless, some games are attempting to "crack the code", so to speak, of the true real-time strategy MMOG. One method by which they are doing so is by making defenses stronger than the weapons, thereby slowing down combat considerably and making it possible for players to more carefully consider their actions during a confrontation. Customizable units are another way of adding strategic elements, as long as players are truly able to influence the capabilities of their units. The industry is seeking to present new candidates worthy of being known for "thought strategy" rather than "dexterity strategy".
While Herzog Zwei is regarded as the first true RTS game, the defining title for the genre was Westwood Studios's Dune II, which was followed by their seminal Command & Conquer games. Cavedog's Total Annihilation (1997), Blizzard's Warcraft (1994) series, StarCraft (1998) series, and Ensemble Studios' Age of Empires (1997) series are some of the most popular RTS games.
=== MMORTS ===
Massively multiplayer online real-time strategy games, also known as MMORTS, combine real-time strategy (RTS) with a persistent world. Players often assume the role of a general, king, or other type of figurehead leading an army into battle while maintaining the resources needed for such warfare. The titles are often based in a sci-fi or fantasy universe and are distinguished from single or small-scale multiplayer RTS games by the number of players and common use of a persistent world, generally hosted by the game's publisher, which continues to evolve even when the player is offline.
=== Real-time tactics (RTT) ===
Real-time tactics (abbreviated RTT and less commonly referred to as fixed-unit real-time strategy) is a subgenre of tactical wargames played in real-time simulating the considerations and circumstances of operational warfare and military tactics. It is also sometimes considered a subgenre of real-time strategy, and thus may in this context exist as an element of gameplay or as a basis for the whole game. It is differentiated from real-time strategy gameplay by the lack of resource micromanagement and base or unit building, as well as the greater importance of individual units and a focus on complex battlefield tactics. Example titles include Warhammer: Dark Omen, World In Conflict, the Close Combat series, and early tactical role-playing games such as Bokosuka Wars, and Silver Ghost.
=== Tower defense ===
Tower defense games have a very simple layout. Usually, computer-controlled monsters called creeps move along a set path, and the player must place, or "build" towers along this path to kill the creeps. In some games, towers are placed along a set path for creeps, while in others towers can interrupt creep movement and change their path. In most tower defense games different towers have different abilities such as poisoning enemies or slowing them down. The player is awarded money for killing creeps, and this money can be used to buy more towers, or buy upgrades for a tower such as increased power or range. A good example of a game of this genre is Clash Royale made by Finnish developers Supercell.
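A single update tick of the loop described above can be sketched in Python; the one-dimensional path, tuple-based towers and bounty value are all illustrative assumptions:

def tower_tick(towers, creeps, money, bounty=10):
    """One tick: each tower damages the first live creep in range;
    dead creeps award money, survivors advance along the path."""
    for pos, rng, dmg in towers:  # tower = (position, range, damage)
        for creep in creeps:
            if creep["hp"] > 0 and abs(creep["pos"] - pos) <= rng:
                creep["hp"] -= dmg
                break  # one target per tower per tick
    survivors = []
    for creep in creeps:
        if creep["hp"] <= 0:
            money += bounty  # kill reward buys more towers or upgrades
        else:
            creep["pos"] += 1  # advance one step along the set path
            survivors.append(creep)
    return survivors, money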
=== Turn-based strategy (TBS) ===
The term turn-based strategy (TBS) is usually reserved for certain computer strategy games, to distinguish them from real-time computer strategy games. A player of a turn-based game is allowed a period of analysis before committing to a game action. Examples of the genre include the Civilization, Heroes of Might and Magic, Making History, Advance Wars and Master of Orion series.
TBS games come in two flavors, differentiated by whether players make their plays simultaneously or take turns. The former are called simultaneously executed TBS games, with Diplomacy a notable example. The latter fall into the category of player-alternated TBS games, which are further subdivided into (a) ranked, (b) round-robin start, and (c) random, according to the order in which players take their turns. With (a), ranked, players take their turns in the same order every time. With (b), round-robin start, the first player is selected according to a round-robin policy. With (c), random, the first player is randomly selected.
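The three player-alternated orderings amount to a simple turn-scheduling policy. The sketch below is purely illustrative, assuming hypothetical player names and a round counter, and is not drawn from any specific title.

```python
import random

def turn_order(players, policy, round_no=0):
    """Return the order of play for one round of a player-alternated TBS game."""
    if policy == "ranked":            # (a) the same fixed order every round
        return list(players)
    if policy == "round-robin":       # (b) the first player rotates each round
        k = round_no % len(players)
        return players[k:] + players[:k]
    if policy == "random":            # (c) the first player is chosen at random
        k = random.randrange(len(players))
        return players[k:] + players[:k]
    raise ValueError(f"unknown policy: {policy}")

# Example: in round 2 of a round-robin game, the third player leads.
print(turn_order(["A", "B", "C"], "round-robin", round_no=2))  # ['C', 'A', 'B']
```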
Almost all non-computer strategy games are turn-based; however, the personal computer game market has lately trended toward real-time games. Some recent games combine both real-time and turn-based elements.
=== Turn-based tactics (TBT) ===
Turn-based tactics (TBT), or tactical turn-based (TTB), is a genre of strategy video games that uses stop-action to simulate the considerations and circumstances of operational warfare and military tactics in generally small-scale confrontations, as opposed to the more strategic considerations of turn-based strategy (TBS) games.
Turn-based tactical gameplay is characterized by the expectation that players complete their tasks using only the combat forces provided to them, and usually by a realistic (or at least believable) representation of military tactics and operations. Examples of the genre include the Wars and X-COM series, as well as tactical role-playing games such as the Jagged Alliance and Fire Emblem series and Final Fantasy Tactics.
=== Wargames ===
Wargames are a subgenre of strategy video games that emphasize strategic or tactical warfare on a map, as well as historical (or near-historical) accuracy.
The primary gameplay mode in a wargame is usually tactical: fighting battles. Wargames sometimes have a strategic mode where players may plan their battle or choose an area to conquer, but they typically spend much less time in this mode and more time actually fighting. Because it is difficult to provide an intelligent way to delegate tasks to a subordinate, wargames typically keep the number of units down to hundreds rather than hundreds of thousands.
Examples of wargames include Koei's Nobunaga's Ambition and Romance of the Three Kingdoms series, Longbow's Hegemony series and several titles by Strategic Simulations, Inc. (SSI) and Strategic Studies Group (SSG).
== Genre hybrids ==
Hybrid strategy games can be viewed as distinct from strategy subgenres in that they are not so much iterations or combinations of existing subgenres as attempts to combine the strategy genre with entirely different genres. Efforts to create such hybrids were most active from the late 1990s to the early 2000s, when first-person shooter (FPS) and real-time strategy (RTS) games were both massively popular, leading to several notable FPS/RTS hybrid games.
== See also ==
List of real-time strategy video games
List of real-time tactics video games
List of turn-based strategy video games
List of turn-based tactics video games
Micromanagement (computer gaming)
Rush (computer and video games)
Technology tree
Turtle (game term)
== References ==
== External links ==
Media related to Strategy video games at Wikimedia Commons | Wikipedia/Strategy_video_game |
A constructible strategy game (CSG) (also spelled constructable strategy game) is a tabletop strategy game employing pieces assembled from components.
WizKids was the first to label a game as a CSG when it released Pirates of the Spanish Main in 2004. Internally, the term was coined by then-WizKids Communications Director Jason Mical to describe the game, in which players assemble ships from hulls, masts, and deck pieces punched out of credit-card-like plastic (polystyrene). A second CSG from WizKids, Rocketmen, was released in summer 2005, and a NASCAR-themed CSG called Race Day came out later that year. Both Rocketmen and Race Day were later discontinued.
WizKids now utilizes the term "PocketModel" to describe this genre, as with Star Wars PocketModel Trading Card Game and the modern Pirates of the Spanish Main website.
White Wolf, Inc. released their own CSG, Racer Knights of Falconus, under their Arthaus Publishing imprint in mid-2005.
Wizards of the Coast was awarded U.S. patent 7,201,374 in early 2007 for the constructible strategy game.
== References == | Wikipedia/Constructible_strategy_game |
Stratego (strə-TEE-goh) is a strategy board game for two players on a board of 10×10 squares. Each player controls 40 pieces representing individual officer and soldier ranks in an army; the pieces bear Napoleonic insignia. The objective of the game is either to find and capture the opponent's Flag or to capture so many of the opponent's movable pieces that no further moves can be made. Stratego has rules simple enough for young children to play, but a depth of strategy that also appeals to adults.
The game is a slightly modified copy of an early 20th century French game named L'Attaque ("The Attack"), and has been in production in Europe since World War II and the United States since 1961. There are now two- and four-player versions, versions with 10, 30 or 40 pieces per player, and boards with smaller sizes (number of spaces). There are also variant pieces and different rulesets.
The International Stratego Federation, the game's governing body, sponsors an annual Stratego World Championship.
== Name and trademark ==
Stratego derives from the Greek strategos (variant strategus), the title of the leader, or general, of an ancient (especially Greek) army.
The name Stratego was first registered in 1942 in the Netherlands. The United States trademark was filed in 1958 and registered in 1960 to Jacques Johan Mogendorff and is presently owned by Jumbo Games as successors to Hausemann and Hotte, headquartered in the Netherlands. It has been licensed to manufacturers such as Milton Bradley, Hasbro and others, as well as retailers such as Barnes & Noble, Target stores, etc.
== The contents of the game ==
This description is of the original and classic games; many variant shapes and colors of pieces and boards have been produced in the decades since.
The game box contains a set of 40 gold-embossed red playing pieces, a set of silver-embossed blue playing pieces, and a folding 15½ in × 18½ in (39 cm × 47 cm) rectangular cardboard playing board imprinted with a 10×10 grid of spaces. Early sets featured painted wood pieces; later sets, colored plastic. The pieces are small and roughly rectangular, 1 in (25 mm) tall and ¾ in (19 mm) wide, and unweighted. More modern versions, first introduced in Europe, have cylindrical castle-shaped pieces. Some versions include a cardboard privacy screen to assist setup, and a few have wooden boxes or boards.
== Setup ==
Typically, color is chosen by lot: one player uses the red pieces, and the other the blue. Before the start of the game, players arrange their 40 pieces in a 4×10 configuration at either end of the board. Ranks are printed on one side only and face away from the opponent, so that players cannot identify each other's pieces. Pieces may not be placed in the lakes or on the 12 open squares in the center of the board. This initial deployment distinguishes the fundamental strategy of particular players and influences the outcome of the game.
== Gameplay ==
Players alternate moving; red moves first. The right to move first does not significantly affect gameplay (unlike in chess). Each player moves one piece per turn; a player must move a piece on their turn and cannot skip a turn.
Two zones in the middle of the board, each 2×2, cannot be entered by either player's pieces at any time. They are shown as lakes on the battlefield and serve as choke points to make frontal assaults less direct.
The game can be won by capturing the opponent's Flag or all of their movable pieces. It is possible to have ranked pieces that are immovable because they are trapped behind Bombs. In unusual cases the game can end in a draw, for example when both players' Flags are protected by Bombs and each player's one remaining piece is not a Miner.
The average game has 381 moves. The number of legal positions is 10¹¹⁵, and the number of possible games is 10⁵³⁵. Stratego thus has many more moves and possible board states than other familiar games such as chess and backgammon; however, unlike those games, where a single bad move at any point may result in loss of the game, most moves in Stratego are inconsequential, as players think in "games not moves" (Boer, 2007).
=== Rules of movement ===
All movable pieces, with the exception of the Scout, may move only one step to any adjacent space vertically or horizontally (but not diagonally). A piece may not move onto a space occupied by a piece of the same color. Bomb and Flag pieces cannot move. The Scout may move any number of spaces in a straight line (like the rook in chess). In older versions of Stratego the Scout could not move and strike in the same turn; newer versions allow this. Even before the change, sanctioned play usually amended the original Scout rule to permit moving and striking in the same turn, because it facilitates gameplay.
No piece can move back and forth between the same two spaces for three consecutive turns (the two-squares rule), nor can a piece endlessly chase an opposing piece it has no hope of attacking (the more-squares rule).
When a player wants to attack, they "strike" by touching the opposing piece with their own or by moving their piece onto the square the opposing piece occupies. Both players then reveal their piece's rank; the weaker piece (see the exceptions below) is removed from the board. If the engaging pieces are of equal rank, both are removed. A piece may not move onto an occupied square except to attack. The original rules also provided that, following a strike, the winning piece immediately occupies the space vacated by the losing piece.
Two pieces have special attack powers. The Bomb immediately eliminates any piece striking it without being destroyed itself; only Miners can defuse Bombs. Each player also has one Spy, whose attack succeeds only against the Marshal or the Flag. If the Spy attacks any other piece, or is attacked by any piece (including the Marshal), the Spy is defeated.
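Taken together, the strike rules above form a small decision procedure. The following sketch shows one way to express it; the piece names, the European rank numbers, and the return convention are this example's own illustration, not an official rulebook encoding.

```python
# European rank numbering (10 = Marshal ... 1 = Spy); the Bomb and the
# Flag are immobile special pieces handled before the rank comparison.
RANK = {"marshal": 10, "general": 9, "colonel": 8, "major": 7, "captain": 6,
        "lieutenant": 5, "sergeant": 4, "miner": 3, "scout": 2, "spy": 1}

def resolve_strike(attacker: str, defender: str) -> str:
    """Decide a strike: returns 'attacker', 'defender', or 'both' (both removed)."""
    if defender == "flag":
        return "attacker"                      # capturing the Flag ends the game
    if defender == "bomb":                     # only Miners defuse Bombs
        return "attacker" if attacker == "miner" else "defender"
    if attacker == "spy" and defender == "marshal":
        return "attacker"                      # the Spy wins only when attacking
    a, d = RANK[attacker], RANK[defender]
    if a == d:
        return "both"                          # equal ranks: both pieces removed
    return "attacker" if a > d else "defender"

print(resolve_strike("spy", "marshal"))   # attacker
print(resolve_strike("marshal", "spy"))   # attacker (the Spy loses when attacked)
```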
=== Recording the game ===
Competitive play does not include recording the game, unlike chess. The game is fast-paced, no standard notation exists, and players keep their initial setups secret, so recording over-the-board games is impractical.
However, digital interfaces such as web-based gaming sites may provide facilities for recording, replaying and downloading games. These interfaces use an algebraic-style notation that numbers the rows ("ranks") 1 to 10 from bottom to top and letters the columns ("files") A to J from left to right. Alternatively, a few interfaces designate the files A to K, omitting "I". Moves are recorded as the source square followed by the destination square, separated by "-" (move) or "x" (strike). On strikes, revealed pieces precede the square designation, by either rank name or rank number for brevity, for example "major B2xcaptain B3". The bottom half of the board is by default considered the "red" side, and the top half the "blue" side.
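To make the notation concrete, here is a small hypothetical formatter. The function name and the exact spacing are assumptions modeled on the "major B2xcaptain B3" example above, since no standard notation exists.

```python
def record_move(src, dst, src_piece=None, dst_piece=None):
    """Render one move: 'B2-B3' for a plain move, 'major B2xcaptain B3' for a strike.

    Squares combine a file letter A-J with a rank number 1-10. Piece labels
    (rank names or numbers) are supplied only when a strike reveals them.
    """
    if dst_piece is None:                 # no piece revealed: a plain move
        return f"{src}-{dst}"
    return f"{src_piece} {src}x{dst_piece} {dst}"

print(record_move("B2", "B3"))                        # B2-B3
print(record_move("B2", "B3", "major", "captain"))    # major B2xcaptain B3
```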
=== Strategy ===
Unlike chess, Stratego is a game of imperfect information. In addition to calculated sequences of moves, this gives rise to aspects of battle psychology such as concealment, bluffing, lying in wait and guessing.
There are also strategic and tactical elements in the initial setup of the pieces. Stylistic preferences ("aggressive" vs "defensive") also enter into setup.
== Pieces ==
=== Classic pieces ===
Each player has seven immobile pieces (six Bombs and one Flag) and 33 mobile pieces. Mobile pieces move one square horizontally or vertically, with the exception of the Scout, which moves any distance in a straight line. From highest rank to lowest, with the usual quantities per player, the mobile pieces are the Marshal (×1), General (×1), Colonel (×2), Major (×3), Captain (×4), Lieutenant (×4), Sergeant (×4), Miner (×5), Scout (×8) and Spy (×1).
The higher-ranked piece always captures the lower, except where stated otherwise. When a piece attacks another piece of equal rank, both are removed.
In the original versions published in the United States, the ranks were numbered with the most powerful Marshal piece ranked at 1, then numbers ascending as power fell until Scout was 9, and the Spy was not numbered but designated S. In 2000, this was inverted, with the Marshal ranked as 10, descending to 2 for the Scout, and the Spy ranked with number 1. "Classic" versions have been released since then with the lower number strongest, as in prior versions of the game.
=== Variant pieces ===
Variant versions of the game have a few different pieces with different rules of movement, such as the Cannon, Archer (possibly a different name for the Cannon), Spotter, Infiltrator, Corporal and Cavalry Captain. In one version, mobile pieces are allowed to "carry" the Flag. In some variants such as Stratego Waterloo and Fire and Ice Stratego, all or most of the pieces have substantially different moves.
== History ==
=== Japanese Military Chess ===
Japanese Military Chess (Gunjin Shogi) has been sold and played since as early as 1895, although it is not known by whom or when it was invented. Dr. Christian Junghans described the game in Monatshefte magazine in Germany in 1905. It appears that only after reading his article did Julie Berg take out a patent on a war game in London and Paris in 1907. Similarly, Hermance Edan patented the game L'Attaque in 1909 and began selling it in 1910.
The main differences between Gunjin Shogi and Stratego are:
Gunjin Shogi needs a referee to resolve the battles of the pieces, which are kept face-down throughout the game.
The Flag is placed only on the headquarters, and a player who occupies the opponent's headquarters wins the game.
There are no Scout pieces. The Engineers and Spy have the same movement as the Scouts in Stratego.
Only the Flag and senior officers can occupy the opponent's headquarters.
The Engineer (the analogue of the Miner) can remove mines and tanks.
There are at least three different versions of Gunjin Shogi, distinguished by the number of pieces controlled by each player as well as the size of the board. The 23- and 31-piece versions are similar, influenced by the technology of World War I, and the 25-piece version is a more recent development, incorporating technologies developed during World War II.
=== French L'Attaque ===
Stratego appeared in nearly its present form in France, sold by La Samaritaine in 1910, and then in Britain before World War I, as a game called L'Attaque. Historian and game collector Thierry Depaulis writes:
It was in fact designed by a lady, Mademoiselle Hermance Edan, who filed a patent for a "jeu de bataille avec pièces mobiles sur damier" (a battle game with mobile pieces on a gameboard) on 1908-11-26. The patent was released by the French Patent Office in 1909 (patent #396.795). Hermance Edan had given no name to her game but a French manufacturer named Au Jeu Retrouvé was selling the game as L'Attaque as early as 1910.
The French patent specifies 36 pieces for each player and a slightly different board layout, but it introduced the same hierarchical rules of attack and movement followed by modern versions of the game. Depaulis further notes that the 1910 version had two armies, divided into red and blue. The rules of L'Attaque were basically the same as those of the game we know as Stratego. It featured standing rectangular cardboard pieces, color-printed with soldiers wearing contemporary (circa 1900) uniforms rather than Napoleonic ones. In papers from her estate, Edan states that she developed the game in the 1880s.
=== H. P. Gibson & Sons games ===
The publishing rights for L'Attaque were acquired for the United Kingdom by game maker H.P. Gibson and Sons in 1925, retaining the French name through at least the 1970s. Gibsons also produced several modified forms of the game, at least one of which predates the acquisition of the rights:
Dover Patrol – a naval warfare game on a board of 12×8 squares devised by Harry A. Gibson in 1911, but very similar to L'Attaque (and hence Stratego)
Aviation – an air battle variation designed by Harry Gibson in 1925, with a variant called Battle of Britain sold in the 1970s
Tri-Tactics – a game combining land, sea and air warfare on a 12×12 board, with 56 pieces per player, dating from 1932 and evolved from the above games.
In 2019, Gibsons released a 100th anniversary edition of L'Attaque. This edition included both the original and modern rules.
=== Stratego (classic) ===
Stratego was created by Dutchman Jacques Johan Mogendorff sometime before 1942. The name was registered as a trademark in 1942 by the Dutch company Van Perlstein & Roeper Bosch N.V. (which also produced the first edition of Monopoly). After World War II, Mogendorff licensed Stratego to the Dutch company Smeets and Schippers in 1946. Hausemann and Hotte acquired a license in 1958 for European distribution, and in 1959 for global distribution. After Mogendorff's death in 1961, Hausemann and Hotte purchased the trademark from his heirs and sublicensed it in 1961 to Milton Bradley (acquired by Hasbro in 1984) for United States distribution. It was introduced in the United States as "the American version of the game now popular on the Continent." In 2009, Hausemann and Hotte was succeeded by Koninklijke Jumbo B.V. in the Netherlands.
The modern game of Stratego, with its Napoleonic imagery, was originally manufactured in the Netherlands. Pieces were originally made of printed cardboard inserted in metal clip stands; after World War II, painted wood pieces became standard. Starting in the early 1960s, all versions switched to plastic pieces. The change from wood to plastic was made for economic reasons, as with many products of the period, but in Stratego it also served a structural function: unlike the wooden pieces, the plastic pieces were designed with a small base. The wooden pieces had none and often tipped over, which was disastrous for the player, since it frequently revealed the piece's rank at once and could set off a literal domino effect as the falling piece knocked over others. European versions introduced cylindrical castle-shaped pieces that proved popular. American editions later introduced rectangular pieces with a more stable base and colorful stickers, rather than images imprinted directly on the plastic.
European versions of the game give the Marshal the highest number (10), while the initial American versions used the numbering system of L'Attaque, giving the Marshal the lowest number (1) to show the highest value (i.e. it is the number 1, or most powerful, tile). More recent American versions of the game, which adopted the European system, caused considerable complaint among American players who grew up in the 1960s and 1970s. This may have been a factor in the release of a Nostalgia edition, in a wooden box, reproducing the classic edition of the early 1970s.
=== Modern Stratego variations ===
==== Electronic Stratego ====
Electronic Stratego was introduced by Milton Bradley in 1982. To promote the release, the company hired two actors to play Ronald Reagan and Leonid Brezhnev, who played a match at the New York Public Library Main Branch.
It has features that make many aspects of the game strikingly different from classic Stratego. The board is 8 squares wide by 10 deep, instead of 10×10; the blocked "lake" areas are therefore 1×2 instead of 2×2. Each side has 24 pieces instead of 40, deployed in the three rows closest to the player, and instead of six Bomb pieces, Electronic Stratego uses hidden bomb pegs.
Each type of playing piece in Electronic Stratego has a unique series of bumps on its bottom that are read by the game's battery-operated, touch-sensitive "board". When attacking another piece, the attacking player hits their Strike button, presses their piece and then the targeted piece: the game either rewards a successful attack or punishes a failed strike with an appropriate bit of music. In this way players never know for certain the rank of the piece that wins an attack, only whether the attack wins, fails, or ties (similar to the role of the referee in the Chinese game Lu Zhan Qi). Instead of moving a piece, a player can opt to "probe" an opposing piece by hitting the Probe button and pressing down on the enemy piece: the game then beeps out a rough approximation of that piece's strength.
There are no Bomb pieces: bombs are set using pegs placed on a touch-sensitive "peg board" that is closed from view before the start of the game. Hence it is possible for a player's piece to occupy a square with a bomb on it. If an opposing piece lands on the seemingly empty square, the game plays the sound of an explosion and that piece is removed from play. As in classic Stratego, only a Miner can remove a bomb from play.
The Scout is allowed to move diagonally, in addition to its usual horizontal and vertical moves. As in non-electronic Stratego, Scouts are not allowed to jump over pieces.
A player who successfully captures the opposing Flag is rewarded with a triumphant bit of music from the 1812 Overture.
==== New pieces and versions ====
In the late 1990s, the Jumbo Company released several European variants, including a three- and four-player version, and a new Cannon piece (which jumps two squares to capture any piece, but loses to any attack against it). It also included some alternate rules such as Barrage (a quicker two-player game with fewer pieces) and Reserves (reinforcements in the three- and four-player games). The four-player version appeared in America in 1997.
Starting in the 2000s, Hasbro, under its Milton Bradley label, released a series of popular media-themed Stratego editions.
Besides themed variants with substantially different rules, current production includes three slightly different editions: sets with classic (1961) piece numbering (highest rank = 1), sets with European piece numbering (highest rank = 10), and sets that allow substitution of one or two variant pieces such as Cannons, usually in place of Scouts. Sets produced since about 1970 have uniformly adopted the rule that Scouts can move and strike in the same turn.
== Stratego AI ==
In July 2022, DeepMind announced the development of DeepNash, a model-free multi-agent reinforcement learning system capable of playing Stratego at the level of a human expert. Stratego has been difficult to model well because the opponent's pieces are hidden, making it a game of imperfect information; the initial setup has more than 10⁶⁶ possible states, and the overall game tree has 10⁵³⁵ possible states. DeepNash won 84% of 50 ranked online matches hosted by Gravon against human players over a two-week period in April 2022, and won at a minimum rate of 97% over hundreds of matches against previously developed Stratego-playing programs, including Probe, Master of the Flag, Demon of Ignorance, Asmodeus, Celsius, PeternLewis, and Vixen.
== Related and derivative games ==
Stratego and its predecessor L'Attaque have spawned several derivative games, notably the 20th-century Chinese game Dou Shou Qi ("Game of the Fighting Animals"), also known as Jungle or Animal Chess.
Jungle also has pieces (of animals rather than soldiers) with different ranks, and pieces of higher rank capture those of lower rank. The board, with two lakes in the middle, is also remarkably similar to that of Stratego. The major differences between the two games are that in Jungle the pieces are not hidden from the opponent and the setup is fixed. According to historian R. C. Bell, this game dates from the 20th century and so cannot have been a predecessor of L'Attaque or Stratego.
A more elaborate and complex Chinese game known as Land Battle Chess (Lu Zhan Qi) or Army Chess (Lu Zhan Jun Qi) is a board game similar to Stratego, with a few differences: it is played on a 5×13 board with two un-occupiable spaces in the middle, and each player has 25 playing pieces. The setup is not fixed, both players keep their pieces hidden from their opponent, and the objective is to capture the enemy's flag.[2] Lu Zhan Jun Qi's basic gameplay is similar, though differences include "missile" pieces and a xiangqi-style board layout with the addition of railroads and defensive "camps". A third person typically acts as a referee, deciding battles between pieces without revealing their identities. An expanded version of the game adds naval and aircraft pieces and is known as Sea-Land-Air Battle Chess (Hai Lu Kong Zhan Qi).[3] There is also a four-player version of Lu Zhan Jun Qi in which players seated opposite each other form a team, defend each other from the opposition's attacks, and try to capture the opposing team's flags.
Tri-Tactics, by Gibson & Sons, introduced in the 1950s, combined L'Attaque, Dover Patrol and Aviation. The pieces represented fighting units (e.g. "division", "battalion", "brigade") rather than individual soldiers, and the board consisted of land, ocean, rivers and lakes.
Game of the Generals, a Philippine variety of Stratego introduced in 1973, played on a modified (8×9) chessboard
Battle for the Temple, an Israeli game by the Isratoys company
A capture the flag game called "Stratego" and loosely based on the board game is played at summer camps. In this game, two teams of thirty to sixty players are assigned ranks by distribution of coloured objects such as pinnies or glowsticks, the colours representing rank, not team. Players can tag and capture lower-ranked opponents, with the exception that the lowest rank captures the highest. Players who do not know their teammates may not be able to tell which team other players are on, creating incomplete information and opportunities for bluffing.
== Versions ==
The game remains in production, with new versions continuing to appear every few years; these are a few of the notable ones. The first U.S. edition (1961) Milton Bradley set, and a special-edition 1963 set called Stratego Fine, had wooden pieces. The 1961 wooden pieces bore a design on the back resembling vines scaling a castle wall, but later 1961 production runs featured plastic pieces (not true first editions). All other regular editions had plastic pieces, apart from a few special editions, noted below, with wooden or metal pieces.
=== Classic versions ===
These have 10×10 boards, 40 pieces per side with classic pieces and rules of movement.
Official Modern Version: Also known as Stratego Original, with redesigned pieces and game art. The pieces now use stickers attached to new "castle-like" plastic pieces; the stickers must be applied by the player after purchase. Rank numbering is reversed in the European style (a higher number equals a higher rank). Comes with an optional alternate piece, the Infiltrator.
Nostalgia Game Series Edition: Released 2002. Traditional stamped plastic pieces, although the metallic paint is dull and less reflective than some older versions, and the pieces are not engraved as some previous editions were. Wooden box, traditional board and piece numbering.
Stratego 50th Anniversary (2011) by Spin Master comes in both a book-style box and a cookie-tin-like metal box, with new artwork, pieces and gameplay. It includes optional Cannon playing pieces (two per player).
Library Edition: Hasbro's Library Series puts what appears to be the classic Stratego of the Nostalgia Edition into a compact, book-like design. The wooden box approximates the size of a book and is made to fit in a bookcase in one's library. In this version, the scout may not move and strike in the same turn.
Michael Graves Design Stratego by Milton Bradley introduced in 2002 and sold exclusively through Target Stores. It features a finished wood box, wooden pedestal board, and closed black and white roughly wedge-shaped plastic pieces. Limited production, no longer available.
Stratego Onyx: Introduced in 2008, Stratego Onyx was sold exclusively by Barnes & Noble. It includes foil-stamped wooden game pieces and a raised gameboard with a decorative wooden frame. One-time production, no longer available.
Franklin Mint Civil War Collector's Edition: In the mid-1990s, Franklin Mint created a luxury version of Stratego with an American Civil War theme and gold- and silver-plated pieces. Due to a last-minute licensing problem, the set was never officially released and offered for sale. The only remaining copies are those sent to the company's retail stores for display.
=== Variant Versions ===
These have substantially different configurations and rules.
Ultimate Stratego: No longer in production, this version can still be found at some online stores and specialty gaming stores. It is a variant of traditional Stratego that can accommodate up to four players simultaneously, and it contains four different game modes: "Ultimate Lightning", "Alliance Campaign", "Alliance Lightning" and "Ultimate Campaign".
Science Fiction Version: Jumbo B.V. / Spin Master version of Stratego, common in North American department stores. The game has a futuristic science fiction theme. Played on a smaller 8×10 board, with 30 pieces per player. Features unique Spotter playing pieces.
Stratego Waterloo: For the bicentenary of the Battle of Waterloo in June 2015, the Dutch publishing group Jumbo published Stratego Waterloo. Instead of using ranks, the different historical units that had actually fought at the battle were added as Pawns (Old Guard, 95th Rifles...) – each with their own strengths and weaknesses. The Pawns are divided into light infantry, line infantry, light cavalry, heavy cavalry, artillery, commanders and commanders-in-chief (Wellington and Napoleon). Instead of capturing the Flag, the players must get two of their pawns on the lines of communication of their opponent.
From highest rank to lowest, pieces follow the standard Stratego rule that the higher-ranked piece always captures the lower.
Stratego Conquest: 1996; a two- to four-player game played on a world map, with cannons and cavalry as alternate pieces
Stratego Fortress: A 3D version of Stratego featuring a 3-level fortress and mystical themed pieces and maneuvers
Fire and Ice Stratego: The Hasbro version called Fire and Ice Stratego has different pieces and rules of movement. The game features a smaller 8×10 board and each player has 30 magical and mythological themed pieces with special powers.
=== Promotional ===
Hertog Jan, a Dutch brand of beer, released Stratego Tournament, a promotional version of Stratego with variant rules. It includes substantially fewer pieces, with only one Bomb and no Miners; since each side has only about 18 pieces, the pieces are far more mobile. The Scout in this version is allowed to move three squares in any combination of directions (including L-shapes), and there is a new piece, the Archer, which is defeated by anything but can defeat any piece other than the Bomb by shooting it from a distance of two squares in a straight orthogonal direction. If one player is unable to move any of his or her remaining pieces, the game ends in a tie, because neither player's Flag was captured.
=== Themed ===
These variants are produced by the company with pop-culture-themed pieces licensed from their respective owners:
Produced by Avalon Hill:
Stratego: Legends (1999)
Produced by USAopoly:
== Competition ==
There are now many Stratego competitions held throughout the world. The game is particularly popular in the Netherlands, Germany, Greece, the USA and Belgium, where live and online championships are organized. The international Stratego scene is currently dominated by players from the Netherlands and Greece. Stratego World Championships have been held almost yearly since 1997, usually around August; the most recent edition took place in Nürnberg, Germany, and the next is planned for Avenches, Switzerland.
Stratego competitions are now held in all four versions of the game:
Classic Stratego
Competitions in the original game include the "Classic Stratego World Championships", the "Classic Stratego Olympiad" and several National Championships in various countries.
Ultimate Lightning Stratego
In this version of the game, each side has only 20 pieces. A few pieces have variant moves and there are a few rule differences. Games take only a fraction of the time needed for Classic Stratego. Competitions in this version include the "Ultimate Lightning World Championships" and the "Ultimate Lightning European Championships".
Duel Stratego
This version is played with 10 pieces per side on an 8×10 board. Competitions in this version now include the "Stratego Duel World Championships", held for the first time in August 2009 in Sheffield, England.
Stratego Barrage
To force decisions in the knock-out stages of tournaments, Stratego Barrage was developed in 1992 by Marc Perriëns and Roel Eefting. In this "Quick Stratego", a setup can be made in one minute and a game played in five. The eight pieces with which Barrage is played are the Flag, the Marshal, the General, one Bomb, one Miner, two Scouts and the Spy. Dutch Championships in Barrage have been organised since 1992 and World Championships since 2000. The Cambodian Champion is Sor Samedy, the Dutch Champion (2014) is Ruben van de Built, and the World Champion (2013) is Tim Slagboom.
=== Tournaments ===
World Championships Stratego Classic (40 pieces)
World Championships Stratego Juniors Classic (40 pieces)
World Championships Stratego Barrage (8 pieces)
Other tournaments
1991 First Dutch Championship. The first Dutch Stratego Championship was organized in 1991 by Johan van der Wielen, Roel Eefting and Marc Perriëns. One hundred and eight players participated in the event in Nijmegen, and Wim Snelleman was the winner. Several Dutch Championships would follow.
1997 First Cambodian Championship. In 1997 Cambodia became the first Asian country to organize a national Classic Stratego Championship. Organizer Roel Eefting defeated runner-up Max van Wel.
1998 Second Cambodian Championship. In 1998 Roel Eefting surprisingly lost his title to fellow Dutchman Marc Nickel (Derks), whom he had invited to travel with him through Cambodia.
2007 World Team Cup. The World Team Cup is played annually at the World Championships. It is a four-player event with teams competing for their country. Holland defeated Germany in the 2007 World Team Cup.
2007 Stratego Olympiad. The 2007 Stratego Olympiad was held as part of the list of events within the Mind Sports Olympics. The 2007 event was held near London, England on 25 and 26 August 2007. Roel Eefting won both the event and the World Title on Barrage (Quick-Stratego which is played with 8 pieces).
2007 Stratego World Team Championship. The Stratego World Team Championship is held as part of the events at the Mind Sports Olympics. This event is a three player event with teams competing for their country. Great Britain defeated Holland in the 2007 World Team Championships.
2007 Computer Stratego World Championship. StrategoUSA conducted the first open tournament ever held for Stratego AI programs during December 2007. Programs played Classic Stratego rules in a round robin format. The tournament was a demonstration of state-of-the-art Stratego AI, with the hope it would spur new research into Stratego AI methodology. The winning program was Probe, which finished with a record of 17–0–3 (W–L–D).
2008 Computer Stratego World Championship. The 2008 tournament was held during December with six programs participating. Once again, StrategoUSA hosted the tournament online. Probe repeated as the champion, with a record of 22–3–0 (W–L–D).
2009 Computer Stratego World Championship. The 2009 tournament was held in December. Once again, StrategoUSA hosted the tournament online. The winner was Master of the Flag II, with a record of 30–3–2 (W–L–D).
2010 Stratego World Championship. The 2010 tournament was held in August in Maastricht, Netherlands. Pim Neimejer (Netherlands) won the World Championship (overall score), and Lady Kathryn Whitehorn (England) won the Women's Stratego World Championship. In team play, the Netherlands national team won gold (first), Germany silver (second), and England bronze (third).
2010 Computer Stratego World Championship. The 2010 tournament was held in December. Once again, StrategoUSA hosted the tournament online. The winner was Probe, with a record of 24–3–3 (W–L–D).
2016–present Patras Battles. Since 2016 the local Patras Stratego Team has organized this international tournament in Patras almost every year, inviting the best players from all over the world.
== See also ==
List of abstract strategy games
Game complexity
== Notes ==
== Reviews ==
Family Games: The 100 Best
== References ==
== Further reading ==
Stratego Piece by Piece: History, Strategy, Tactics and Deployment, 1999, Prof. Michael Ziegler, Manor College, PA (private printing and distribution, not generally available)
== External links ==
Royal Jumbo (Stratego trademark owner) Stratego marketing website Archived 1999-01-25 at the Wayback Machine
Official rules of Stratego by Hasbro (U.S. licensee)
Probe, an online Stratego automaton (3 time Computer Stratego World Champion)
International Computer Gaming Association, whose ICGA Journal publishes occasional current research on computer Stratego | Wikipedia/Stratego |
An abstract strategy game is a type of strategy game that has minimal or no narrative theme, an outcome determined only by player choice (with minimal or no randomness), and in which each player has perfect information about the game. For example, Go is a pure abstract strategy game since it fulfills all three criteria; chess and related games are nearly so but feature a recognizable theme of ancient warfare; and Stratego is borderline since it is deterministic, loosely based on 19th-century Napoleonic warfare, and features concealed information.
== Definition ==
Combinatorial games have no randomizers such as dice, no simultaneous movement, and no hidden information. Some games that do have these elements are nevertheless sometimes classified as abstract strategy games. (Games such as Continuo, Octiles, Can't Stop, and Sequence could be considered abstract strategy games despite having a luck or bluffing element.) A smaller category of abstract strategy games manages to incorporate hidden information without using any random elements; the best-known example is Stratego.
Traditional abstract strategy games are often treated as a separate game category, hence the term 'abstract games' is often used for competitions that exclude them and can be thought of as referring to modern abstract strategy games. Two examples are the IAGO World Tour (2007–2010) and the Abstract Games World Championship held annually since 2008 as part of the Mind Sports Olympiad.
Some abstract strategy games have multiple possible starting positions, one of which must be randomly determined. For a game to be one of skill, the starting position needs to be chosen by impartial means. Some games, such as Arimaa and DVONN, have the players build the starting position in a separate initial phase which itself conforms strictly to combinatorial game principles. Most players, however, would consider that although one is then starting each game from a different position, the game itself contains no luck element. Indeed, Bobby Fischer promoted randomization of the starting position in chess in order to increase player dependence on thinking at the board.
As J. Mark Thompson wrote in his article "Defining the Abstract", play is sometimes said to resemble a series of puzzles the players pose to each other: There is an intimate relationship between such games and puzzles: every board position presents the player with the puzzle, What is the best move?, which in theory could be solved by logic alone. A good abstract game can therefore be thought of as a "family" of potentially interesting logic puzzles, and the play consists of each player posing such a puzzle to the other. Good players are the ones who find the most difficult puzzles to present to their opponents.
Many abstract strategy games also happen to be "combinatorial"; i.e., there is no hidden information, no non-deterministic elements (such as shuffled cards or dice rolls), no simultaneous or hidden movement or setup, and (usually) two players or teams take a finite number of alternating turns.
Many games which are abstract in nature may historically have developed from thematic games, such as representations of military tactics. In turn, it is common to see thematic versions of such games; for example, chess is considered an abstract game, but many thematic versions, such as Star Wars-themed chess, exist.
There are also many abstract video games with open-ended solutions to problems. One example is Shapez, a game in which the player must deliver a set amount of shapes but is entirely free to decide how to do so.
== History ==
A board resembling a Draughts board, dating from 3000 BC, was found in Ur by British archaeologist Sir Leonard Woolley in the 1920s. In the British Museum are specimens of ancient Egyptian checkerboards, found with their pieces in burial chambers, and the game was played by Queen Hatasu. Plato mentioned a game, πεττεία or Petteia, as being of Egyptian origin, and Homer also mentions it. The game was later imported into the Roman Empire under the name ludus latrunculorum.
Go was considered one of the four essential arts of the cultured aristocratic Chinese scholars in antiquity and remains popular today. The earliest written reference to the game is generally recognized as the historical annal Zuo Zhuan (c. 4th century BC).
The family of games known today as Mancala dates back to at least the third century in the Middle East, and possibly much earlier.
Chess is believed to have originated in northwest India, in the Gupta Empire (c. 280–550), where its early form in the 6th century was known as chaturaṅga (Sanskrit: चतुरङ्ग), literally four divisions [of the military] – infantry, cavalry, elephants, and chariotry, represented by the pieces that would evolve into the modern pawn, knight, bishop, and rook, respectively. Chaturanga was played on an 8×8 uncheckered board, called ashtāpada. Shogi was the earliest chess variant to allow captured pieces to be returned to the board by the capturing player. This drop rule is speculated to have been invented in the 15th century and possibly connected to the practice of 15th century mercenaries switching loyalties when captured instead of being killed.
As civilization advanced and societies evolved, so too did strategy board games. New inventions such as printing technology in the 15th century allowed for mass production of game sets, making them more accessible to people from various social classes. Games like backgammon and mancala became popular during this time, showcasing different styles of strategic gameplay.
Englishmen Lewis Waterman and John W. Mollett both claimed to have invented the game of Reversi in 1883, each denouncing the other as a fraud. The game gained considerable popularity in England at the end of the nineteenth century. The game's first reliable mention is in the 21 August 1886 edition of The Saturday Review. A variant named Othello, patented in Japan in 1971, has gained worldwide popularity.
After the end of World War II, these games became more complex. Risk and Diplomacy were released in the 1950s. In Risk, players try to conquer the world from one another after claiming territories at the start of the game, while Diplomacy, set in Europe just before the First World War, has players build alliances with one another to secure their safety and victory.
== Comparison ==
Analysis of "pure" abstract strategy games is the subject of combinatorial game theory. Abstract strategy games with hidden information, bluffing, or simultaneous move elements are better served by Von Neumann–Morgenstern game theory, while those with a component of luck may require probability theory incorporated into either of the above.
As for the qualitative aspects, ranking abstract strategy games according to their interest, complexity, or strategy levels is a daunting task and subject to extreme subjectivity. In terms of measuring the scale of each of the three top contenders, it is estimated that checkers has a game-tree complexity of 10^40 possible games, whereas chess has approximately 10^123. As for Go, the number of possible legal game positions is on the order of 10^170.
== Champions ==
The Mind Sports Olympiad first held the Abstract Games World Championship in 2008 to try to find the best abstract strategy games all-rounder. The MSO event saw a change in format in 2011 restricting the competition to players' five best events, and was renamed the Modern Abstract Games World Championship.
2008: David M. Pearce (England)
2009: David M. Pearce (England)
2010: David M. Pearce (England)
2011: David M. Pearce (England)
2012: Andres Kuusk (Estonia)
2013: Andres Kuusk (Estonia)
== See also ==
Connection games
Game complexity
List of abstract strategy games
List of world championships in mind sports
Mind Sports Olympiad
World Mind Sports Games
== References ==
== External links ==
The University of Alberta Games Group
David Eppstein's CGT page
Talk "Redefining the abstract", by Cesco Reale at Board Games Studies 2022 | Wikipedia/Abstract_strategy_game |
Real-time strategy (RTS) is a subgenre of strategy video games that does not progress incrementally in turns but allows all players to play simultaneously, in "real time". By contrast, in turn-based strategy (TBS) games, players take turns to play. The term "real-time strategy" was coined by Brett Sperry to market Dune II in the early 1990s.
In a real-time strategy game, each participant positions structures and maneuvers multiple units under their indirect control to secure areas of the map and destroy their opponents' assets. In a typical RTS game, it is possible to create additional units and structures generally limited by a requirement to expend accumulated resources. These resources are in turn garnered by controlling special points on the map or possessing certain types of units and structures devoted to this purpose. More specifically, the typical game in the RTS genre features resource-gathering, base-building, in-game technological development, and indirect control of units.
The tasks a player must perform to win an RTS game can be very demanding, and complex user interfaces have evolved for them. Some features have been borrowed from desktop environments; for example, the technique of "clicking and dragging" to create a box that selects all units under a given area. Though some video game genres share conceptual and gameplay similarities with the RTS template, recognized genres are generally not subsumed as RTS games. For instance, city-building games, construction and management simulations, and games of real-time tactics are generally not considered real-time strategy per se; the same is true of god games, in which the player assumes a god-like role of creation.
== History ==
=== Origins ===
The genre recognized today as "real-time strategy" emerged from an extended period of evolution and refinement. Games sometimes perceived as ancestors of the real-time strategy genre were never marketed or designed as such. As a result, designating "early real-time strategy" titles is problematic because such games are being held up to modern standards. The genre initially evolved separately in the United Kingdom, Japan, and North America, afterward gradually merging into a unified worldwide tradition.
Tim Barry in May 1981 described in InfoWorld a multiplayer, real-time strategy space game that ran ("and probably still is") on an IBM System/370 Model 168 at a large San Francisco Bay Area company. He stated that it had "far better support than many of the application programs used in the business", with a published manual and regular schedule. Comparing its complexity to Dallas, Barry recalled that "when the game was restored at 5 P.M., a lot of regular work stopped".
Ars Technica traces the genre's roots back to Utopia (1981), citing it as the "birth of a genre", with a "real-time element" that was "virtually unheard of", thus making it "arguably the earliest ancestor of the real-time strategy genre". According to Ars Technica, Utopia was a turn-based strategy game with hybrid elements that ran "in real-time but events happened on a regular turn-based cycle." According to Brett Weiss, Utopia is often cited as "the first real-time strategy game." According to Matt Barton and Bill Loguidice, Utopia "helped set the template" for the genre, but has "more in common with SimCity than it does with Dune II and later RTS games." Allgame listed War of Nerves (1979) as the oldest "2D Real-Time Strategy". Barton also cites Cytron Masters (1982), saying it was "one of the first (if not the first) real-time strategy games [sic]." On the other hand, Scott Sharkey of 1UP argues that, while Cytron Masters "attempted real time strategy", it was "much more tactical than strategic" due to "the inability to construct units or manage resources". In December 1982, Byte published Cosmic Conquest as an Apple II type-in program. The winner of the magazine's annual Game Contest, it was described by its author as a "single-player game of real-time action and strategic decision making". The magazine described it as "a real-time space strategy game". The game has elements of resource management and wargaming.
In the United Kingdom, the earliest real-time strategy games are Stonkers by John Gibson, published in 1983 by Imagine Software for the ZX Spectrum, and Nether Earth for ZX Spectrum in 1987. In North America, the oldest game retrospectively classified as real-time strategy by several sources is The Ancient Art of War (1984), designed by Dave and Barry Murry of Evryware, followed by The Ancient Art of War at Sea in 1987.
In Japan, the earliest is Bokosuka Wars (1983), an early strategy RPG (or "simulation RPG"); the game revolves around the player leading an army across a battlefield against enemy forces in real-time while recruiting/spawning soldiers along the way, for which it is considered by Ray Barnholt of 1UP to be an early prototype real-time strategy game. Another early title with real-time strategy elements is Sega's Gain Ground (1988), a strategy-action game that involved directing a set of troops across various enemy-filled levels. TechnoSoft's Herzog (1988) is regarded as a precursor to the real-time strategy genre, being the predecessor to Herzog Zwei and somewhat similar in nature, though primitive in comparison.
IGN cites Herzog Zwei, released for the Sega Mega Drive/Genesis in 1989, as "arguably the first RTS game ever", and it is often cited as "the first real-time strategy game" according to Ars Technica. It combines traditional strategy gameplay with fully real-time, fast-paced, arcade-style action gameplay, featuring a split-screen two-player mode where both players are in action simultaneously and there are no pauses while decisions are taken, forcing players to think quickly while on the move. In Herzog Zwei, though the player only controls one unit, the manner of control foreshadowed the point-and-click mechanic of later games. Scott Sharkey of 1UP argues that it introduced much of the genre's conventions, including unit construction and resource management, with the control and destruction of bases being an important aspect of the game, as were the economic/production aspects of those bases. Herzog Zwei is credited by 1UP as a landmark that defined the genre and as "the progenitor of all modern real-time strategy games." Brett Sperry cited Herzog Zwei as an influence on Dune II.
Notable as well are early games like Mega-Lo-Mania by Sensible Software (1991) and Supremacy (also called Overlord – 1990). Although these two lacked direct control of military units, they both offered considerable control of resource management and economic systems. In addition, Mega Lo Mania has advanced technology trees that determine offensive and defensive prowess. Another early game, Carrier Command (1988) by Realtime Games, involved real-time responses to events in the game, requiring management of resources and control of vehicles. Another early game, SimAnt (1991) by Maxis, had resource gathering, and controlling an attacking army by having them follow a lead unit. However, it was with the release of Dune II (1992) from Westwood Studios that real-time strategy became recognized as a distinct genre of video games.
=== 1992–1998: Seminal titles ===
Although real-time strategy games have an extensive history, some titles have served to define the popular perception of the genre and expectations of the genre more than others, in particular the games released between 1992 and 1998 by Westwood Studios and Blizzard Entertainment.
Drawing influence from Herzog Zwei, Populous, Eye of the Beholder, and the Macintosh user interface, Westwood's Dune II: The Building of a Dynasty (1992) featured all the core concepts and mechanics of modern real-time strategy games that are still used today, such as using the mouse to move units and gathering resources, and as such served as the prototype for later real-time strategy games. According to its co-designer and lead programmer, Joe Bostic, a "benefit over Herzog Zwei is that we had the advantage of a mouse and keyboard. This greatly facilitated precise player control, which enabled the player to give orders to individual units. The mouse, and the direct control it allowed, was critical in making the RTS genre possible.”
The success of Dune II encouraged several games that became influential in their own right. Warcraft: Orcs & Humans (1994) achieved great prominence upon its release, owing in part to its use of a fantasy setting and also to its depiction of a wide variety of buildings (such as farms) which approximated a full fictitious society and not just a military force. Command & Conquer (1995), as well as Command & Conquer: Red Alert (1996), became the most popular early RTS games. These two games contended with Warcraft II: Tides of Darkness after its release in late 1995.
Total Annihilation, released by Cavedog Entertainment in 1997, introduced 3D units and terrain and focused on huge battles that emphasized macromanagement over micromanagement. It featured a streamlined interface that would influence many RTS games in later years. Age of Empires, released by Ensemble Studios in 1997, took a slower pace, combining elements of Civilization with the real-time strategy concept by introducing ages of technology. In 1998, Blizzard released StarCraft, which became an international phenomenon and is still played in large professional leagues to this day. Collectively, all of these games defined the genre, providing the de facto benchmark against which new real-time strategy games are measured.
=== 1995–2003: Refinement and transition to 3D ===
The real-time strategy genre has been relatively stable since 1995. Additions to the genre's concept in newer games tend to emphasize more of the basic RTS elements (higher unit caps, more unit types, larger maps, etc.). Rather than innovations to the game concept, new games generally focus on refining aspects of successful predecessors. Cavedog's Total Annihilation from 1997 introduced the first 3D units and terrain in real-time strategy games. The Age of Empires focus on historical setting and age advancement was refined further by its sequel, Age of Empires II: Age of Kings, and by Stainless Steel Studios' Empire Earth in 2001. GSC Game World's Cossacks series brought population caps into the tens of thousands.
Dungeon Keeper (1997), Populous: The Beginning (1998), Jeff Wayne's The War of the Worlds (1998), Warzone 2100 (1999), Machines (1999), Homeworld (1999), and Dark Reign 2 (2000) were among the first completely 3D real-time strategy titles. Homeworld featured a 3D environment in space, therefore allowing movement in every direction, a feature which its semi-sequel, Homeworld Cataclysm (2000) continued to build upon adding features such as waypoints. Homeworld 2, released in 2003, streamlined movement in the 360° 3D environment. Furthermore, Machines, which was also released in 1999 and featured a nearly 100% 3D environment, attempted to combine the RTS genre with a first-person shooter (FPS) genre although it was not a particularly successful title. These games were followed by a short period of interest in experimental strategy games such as Allegiance (2000). Jeff Wayne's The War of the Worlds was notable for being one of the few completely non-linear RTS games ever.
It was only in approximately 2002 that 3D real-time strategy became the standard, with both Warcraft III (2002) and Ensemble Studios' Age of Mythology (2002) being built on a full 3D game engine. Kohan: Immortal Sovereigns introduced classic wargame elements, such as supply lines, to the genre. Battle Realms (2001) was another full 3D game, but had limited camera views.
The move from 2D to 3D has been criticized in some cases. Issues with controlling the camera and placement of objects have been cited as problems.
=== 2004–2012: Specialization and evolution ===
A few games have experimented with diversifying map design, which continues to be largely two-dimensional even in 3D engines. Earth 2150 (2000) allowed units to tunnel underground, effectively creating a dual-layer map; three-layer (orbit-surface-underground) maps were introduced in Metal Fatigue. In addition, units could even be transported to entirely separate maps, with each map having its own window in the user interface. Three Kingdoms: Fate of the Dragon (2001) offered a simpler model: the main map contains locations that expand into their own maps. In these examples, however, the gameplay was essentially identical regardless of the map layer in question. Dragonshard (2005) emphasized its dual-layer maps by placing one of the game's two main resources in each map, making exploration and control of both maps fundamentally valuable.
Relatively few genres have emerged from or in competition with real-time strategy games, although real-time tactics (RTT), a superficially similar genre, emerged around 1995. In 1998, Activision attempted to combine the real-time strategy and first-person shooter genres in Battlezone (1998), while in 2002 Rage Games Limited attempted this with the Hostile Waters games. Later variants have included Natural Selection (2002), a game modification based on the Half-Life engine, and the free software Tremulous/Unvanquished. Savage: The Battle for Newerth (2003) combined the RPG and RTS elements in an online game.
Some games, borrowing from the real-time tactics (RTT) template, have moved toward an increased focus on tactics while downplaying traditional resource management, in which designated units collect the resources used for producing further units or buildings. Titles like Warhammer 40,000: Dawn of War (2004), Star Wars: Empire at War (2006), and Company of Heroes (2006) replace the traditional resource gathering model with a strategic control-point system, in which control over strategic points yields construction/reinforcement points. Ground Control (2000) was the first such game to replace individual units with "squads".
Others are moving away from the traditional real-time strategy game model with the addition of other genre elements. One example is Sins of a Solar Empire (2008), released by Ironclad Games, which mixes elements of grand-scale stellar empire building games like Master of Orion with real-time strategy elements. Another example is indie game Achron (2011), which incorporates time travel as a game mechanic, allowing a player to send units forward or backward in time.
Multiplayer online battle arena (MOBA) games originated as a subgenre of real-time strategy games; however, this fusion of real-time strategy, role-playing, and action games has lost many traditional RTS elements. These games moved away from constructing additional structures, base management, army building, and controlling additional units. The map and the main structures for each team are still present, and destroying the enemy's main structure remains the ultimate victory condition. Unlike in an RTS, a player controls only a single powerful unit, called a "hero" or "champion", who advances in level, learns new abilities, and grows in power over the course of a match. Players can find various friendly and enemy units on the map at any given time assisting each team; however, these units are computer-controlled, and players usually do not have direct control over their movement and creation; instead, they march forward along set paths. Defense of the Ancients (DotA), a Warcraft III mod from 2003, and its standalone sequel Dota 2 (2013), as well as League of Legends (2009) and Heroes of the Storm (2015), are typical representatives of the new strategy subgenre. Former game journalist Luke Smith called DotA "the ultimate RTS".
=== 2012–present: Expansion and adaptation to various gaming formats ===
The popularization of the smartphone in the 2010s led to a new market for video games to expand to and develop. Innovation on the traditional RTS format accelerated throughout the early 2010s as RTS games were released for mobile devices. With a new format specific to mobile devices, mobile RTS games were often simpler than their desktop counterparts. The simplification of the RTS formula, coupled with the adoption of the smartphone during this period, allowed mobile RTS games to be more accessible than traditional RTS games. Clash of Clans (2012), a mobile game published by Supercell, is a good example of a game which modified the RTS format into a simpler mobile experience. While often classified in the broader strategy game genre, Clash of Clans still possesses many of the classic RTS elements, such as a "perspective of god", control over buildings and mobile units, and resource management. It also introduces and simplifies specific elements of an RTS to fit the mobile format, with "idle" resource gathering and defenses, as well as reducing the number of resource types, unit types, and building types to make the game more accessible to new users. In an interview with game journalist Bryant Francis, Clash of Clans developer Stuart McGaw attributed the game's design to "a focus on simplicity and accessibility", something that "anyone could pick up and play", while also retaining "the strategy DNA" that gives players "lots of options" while remaining "clear to understand". Multiple other mobile games, such as Boom Beach (2014), Plague Inc. (2012), the Bloons Tower Defense series (2007–2021), and more have (varyingly) adapted the RTS format in the same manner as Clash of Clans, and in turn developed a style of RTS unique to the mobile game industry.
Beginning in the early-to-mid 2010s, the expansion of the indie game market on Valve Corporation's gaming distribution service, Steam, allowed RTS developers to produce smaller-scale and increasingly accessible indie RTS games. These games are often truer to the traditional RTS formula, with the player having the "perspective of god" and managing units and resources, and many were later ported to mobile devices. A few of these indie RTS games are Ultimate Epic Battle Simulator (2017), the Machines at War series (2007–2012), and Bad North (2018).
Modern RTS games often attempt to capture the "nostalgia" of classic RTS games. Rusted Warfare (2017) is an indie mobile release and a good example of a traditional-style RTS; it utilizes assets from the unreleased Hard Vacuum (1993) to create a "revived" RTS experience. Hard Vacuum was intended to include "resource gathering from mineral deposits", "base building", and "a wide range of fighting with units". Rusted Warfare and other traditional RTS titles utilize classic PC-gaming nostalgia to drive the game-playing experience.
Traditional RTS games released in the late 2010s and early 2020s were developed with a focus on coupling traditional-style gameplay with uniquely styled or hyper-realistic graphics. These are often indie RTS games, but released on a multitude of platforms. Releases like Halo Wars 2 (2017), Steel Division 2 (2019), Company of Heroes 3 (2023), and Last Train Home (2023) are examples of modern RTS games focused on providing a traditional RTS experience.
== Gameplay ==
In a typical real-time strategy game, the screen is divided into a map area displaying the game world and terrain, units, and buildings, and an interface overlay containing command and production controls and often a "radar" or "minimap" overview of the entire map. The player is usually given an isometric perspective of the world, or a free-roaming camera from an aerial viewpoint for modern 3D games. Players mainly scroll the screen and issue commands with the mouse, and may also use keyboard shortcuts.
Gameplay generally consists of the player being positioned somewhere on the map with a few units or a building that is capable of building other units/buildings. Often, but not always, the player must build specific structures to unlock more advanced units in the tech tree. Often, but not always, RTS games require the player to build an army (ranging from small squads of no more than two units to literally hundreds of units) and use it either to defend against a virtual form of human wave attack or to eliminate enemies who possess bases with unit-production capacities of their own. Occasionally, RTS games will have a preset number of units for the player to control and do not allow the building of additional ones.
Resource gathering is commonly the main focus of RTS games, but other titles of the genre place higher gameplay significance on how units are used in combat (Z: Steel Soldiers, for example, awards credits for territory captured rather than resources gathered), the extreme example of which are games of the real-time tactics genre. Some titles impose a ceiling on the number of simultaneous troops, which becomes a key gameplay consideration, a significant example being StarCraft, while other titles have no such unit cap.
=== Micromanagement and macromanagement ===
Micromanagement deals with a player's constant need to manage and maintain individual units and resources on a fine scale. On the other hand, macromanagement refers to a player's management of economic expansion and large-scale strategic maneuvering, allowing the player time to think and consider possible solutions. Micromanagement involves the use of combat tactics involved in the present, whereas macromanagement considers the greater scale of the game in an attempt to predict the future.
=== Criticism of gameplay ===
==== Turn-based vs. real-time ====
A debate has emerged between fans of real-time strategy (RTS) and turn-based strategy (TBS) games (and related genres) over the merits of the real-time and turn-based systems. Because of their generally faster-paced nature (and in some cases a smaller learning curve), real-time strategy games have surpassed the popularity of turn-based strategy computer games. In the past, a common criticism was to regard real-time strategy games as "cheap imitations" of turn-based strategy games, arguing that real-time strategy games had a tendency to devolve into "click-fests" in which the player who was faster with the mouse generally won, because they could give orders to their units at a faster rate.
The common retort is that success involves not just fast clicking, but also the ability to make sound decisions under time pressure. The "clickfest" argument is also often voiced alongside a "button babysitting" criticism, which points out that a great deal of game time is spent either waiting and watching for the next time a production button can be clicked, or rapidly alternating between different units and buildings and clicking their respective buttons.
Some titles attempt to merge the two systems: for example, the role-playing game Fallout uses turn-based combat and real-time gameplay, while the real-time strategy games Homeworld, Rise of Nations, and the games of the Total War and Hegemony series allow the player to pause the game and issue orders. Additionally, the Total War series has a combination of a turn-based strategy map with a real-time battle map. Another example of a game combining both turn-based strategy and real-time strategy is The Lord of the Rings: The Battle for Middle-Earth II, which allows players, in a 'War of the Ring' game, to play a turn-based strategy game, but also to battle each other in real time.
==== Tactics vs. strategy ====
A second criticism of the RTS genre is the importance of skill over strategy. The manual dexterity and ability to multitask and divide one's attention is often considered the most important aspect to succeeding at the RTS genre. According to Troy Dunniway, former Westwood developer who has also worked on Command and Conquer 3: Tiberium Wars: "A player controls hundreds of units, dozens of buildings and many different events that are all happening simultaneously. There is only one player, and he can only pay attention to one thing at a time. Expert players can quickly flip between many different tasks, while casual gamers have more problems with this."
Real-time strategy games have been criticized for an overabundance of tactical considerations when compared to the amount of strategic gameplay found in such games. According to Chris Taylor, lead designer of Supreme Commander: "[My first attempt at visualizing RTSs in a fresh and interesting new way] was my realizing that although we call this genre 'Real-Time Strategy,' it should have been called 'Real-Time Tactics' with a dash of strategy thrown in." (Taylor then posits his own game as having surpassed this mold by including additional elements of broader strategic scope.)
In general terms, military strategy refers to the use of a broad arsenal of weapons including diplomatic, informational, military, and economic resources, whereas military tactics is more concerned with short-term goals such as winning an individual battle. In the context of strategy video games, however, the difference is often reduced to the more limited criteria of either a presence or absence of base building and unit production.
In an article for Gamasutra, Nathan Toronto criticizes real-time strategy games for too often having only one valid means of victory — attrition — comparing them unfavorably to real-time tactics games. Players' awareness that the only way for them to win or lose is militarily makes them unlikely to respond to gestures of diplomacy. The result is that the winner of a real-time strategy game is too often the best tactician rather than the best strategist. Troy Goodfellow counters this by saying that the problem is not that real-time strategy games are lacking in strategy (he says attrition is a form of strategy), rather it is that they too often have the same strategy: produce faster than you consume. He also states that building and managing armies is the conventional definition of real-time strategy, and that it is unfair to make comparisons with other genres.
In an article for GameSpy, Mark Walker criticizes real-time strategy games for their lack of combat tactics, suggesting real-time tactics games as a more suitable substitute. He also says that developers need to begin looking outside the genre for new ideas in order for strategy games to continue to be successful in the future.
This criticism has ushered in a number of hybrid designs that try to resolve these issues. The games of the Total War series have a combination of a (turn-based) strategy map with a (real-time) battle map, allowing the player to concentrate on one or the other. The games of the Hegemony series also combine a strategy map and a battle map (in full real-time), and the player can at any point seamlessly zoom in and out between the two.
==== Rushing vs. planning ====
A third common criticism is that real-time gameplay often degenerates into "rushes" where the players try to gain the advantage and subsequently defeat the opponent as quickly in the game as possible, preferably before the opposition is capable of successfully reacting. For example, the original Command & Conquer gave birth to the now-common "tank rush" tactic, where the game outcome is often decided very early on by one player gaining an initial advantage in resources and producing large amounts of a relatively powerful but still quite cheap unit—which is thrown at the opposition before they have had time to establish defenses or production. Although this strategy has been criticized for encouraging overwhelming force over strategy and tactics, defenders of the strategy argue that they're simply taking advantage of the strategies utilized, and some argue that it is a realistic representation of warfare. One of the most infamous versions of a rush is the "Zergling rush" from the real-time strategy game StarCraft, where the Zerg player would morph one of their starting workers (or the first one produced) into a spawning pool immediately and use all of their resources to produce Zerglings, attacking once they have enough to overwhelm any early defense; in fact, the term "zerging" has become synonymous with rushing.
Some games have since introduced designs that do not easily lend themselves to rushes. For example, the Hegemony series made supply and (seasonal) resource management an integral part of its gameplay, thus limiting rapid expansion.
=== On consoles ===
Despite Herzog Zwei, a console game, laying the foundations for the real-time strategy genre, RTS games never gained popularity on consoles as they did on the PC platform. Real-time strategy games made for video game consoles have been consistently criticized for their control schemes, as the PC's keyboard and mouse are considered superior to a console's gamepad for the genre. Thus, RTS games for home consoles have been met with mixed success. Scott Sharkey of 1UP notes that Herzog Zwei had already "offered a nearly perfect solution to the problem by giving the player direct control of a single powerful unit and near autonomy for everything else," and is surprised "that more console RTS games aren't designed with this kind of interface in mind from the ground up, rather than imitating" PC control schemes "that just doesn't work very well with a controller". Some handheld console games, such as Napoleon on the Game Boy Advance, use a similar solution.
However, several console titles in the genre received positive reception. The Pikmin series, which began in 2001 for the GameCube, became a million-seller. Similarly, Halo Wars, which was released in 2009 for the Xbox 360, generated generally positive reviews, achieved an 82% critic average on aggregate web sites, and sold over 1 million copies.
According to IGN, the gameplay lacks the traditional RTS concepts of limited resources and resource gathering and lacks multiple buildings.
== Graphics ==
Total Annihilation (1997) was the first real-time strategy game to utilize true 3D units, terrain, and physics in both rendering and gameplay. For instance, the missiles in Total Annihilation travel in real time in simulated 3D space, and they can miss their target by passing over or under it. Similarly, missile-armed units in Earth 2150 are at a serious disadvantage when the opponent is on high ground, because the missiles often hit the cliffside, even when the attacker is a missile-armed helicopter. Homeworld, Warzone 2100 and Machines (all released in 1999) advanced the use of fully 3D environments in real-time strategy titles. In the case of Homeworld, the game is set in space, offering a uniquely exploitable 3D environment in which all units can move vertically in addition to the horizontal plane. However, the near-industry-wide switch to full 3D was very gradual, and most real-time strategy titles, including the first sequels to Command & Conquer, initially used isometric 3D graphics made of pre-rendered 3D tiles. Only in later years did these games begin to use true 3D graphics and gameplay, making it possible to rotate the view of the battlefield in real time. Spring is a good example of the transformation from semi-3D to full-3D game simulations. It is an open-source project which aims to give a Total Annihilation gameplay experience in three dimensions. The most ambitious use of full 3D graphics was realized in Supreme Commander, where all projectiles, units and terrain were simulated in real time, taking full advantage of the UI's zoom feature, which allowed cartographic-style navigation of the 3D environment. This led to a number of unique gameplay elements, which were mostly held back by the lack of computing power available at its 2007 release.
Japanese game developers Nippon Ichi and Vanillaware worked together on Grim Grimoire, a PlayStation 2 title released in 2007, which features hand-drawn animated 2D graphics.
From 2010, real-time strategy games more commonly incorporated physics engines, such as Havok, in order to increase realism experienced in gameplay. A modern real-time strategy game that uses a physics engine is Ensemble Studios' Age of Empires III, released on October 18, 2005, which used the Havok Game Dynamics SDK to power its real-time physics. Company of Heroes is another real-time strategy game that uses realistically modeled physics as a part of gameplay, including fully destructible environments.
== Tournaments ==
RTS World tournaments have been held for both StarCraft and Warcraft III since their 1998 and 2002 releases. The games have been so successful that some players have earned over $200,000 at the Warcraft III World Championships. In addition, hundreds of StarCraft II tournaments are held yearly, as it is becoming an increasingly popular branch of e-sports. Notable tournaments include MLG, GSL, and Dreamhack. RTS tournaments are especially popular in South Korea.
== See also ==
List of real-time strategy video games
== References ==
== Further reading ==
Chambers, C.; Feng, W.; Feng, W.; Saha, D. (June 2005). "Mitigating information exposure to cheaters in real-time strategy games". Proceedings of the international workshop on Network and operating systems support for digital audio and video. New York: ACM. pp. 7–12. doi:10.1145/1065983.1065986. ISBN 978-1-58113-987-7. S2CID 7873680.
Claypool, Mark (September 15, 2005). "The effect of latency on user performance in Real-Time Strategy games". Computer Networks. 49 (1): 52–70. doi:10.1016/j.comnet.2005.04.008. S2CID 4688755.
Cheng, D.; Thawonmas, R. (November 2004). "Case-based plan recognition for real-time strategy games" (PDF). Proc. of the 5th Game-On International Conference: 36–40. Archived (PDF) from the original on August 12, 2007.
Aha, D.; Molineaux, M.; Ponsen, M. (September 7, 2005). Muñoz-Ávila, HéCtor; Ricci, Francesco (eds.). Case-Based Reasoning Research and Development. Lecture Notes in Computer Science. Vol. 3620. Springer Berlin / Heidelberg. pp. 5–20. doi:10.1007/11536406. ISBN 978-3-540-28174-0.
Chan, H.; Fern, A.; Ray, S.; Wilson, N. & Ventura, C. (2007). "Online planning for resource production in real-time strategy games" (PDF). Proceedings of the International Conference on Automated Planning and Scheduling. Archived (PDF) from the original on October 10, 2008. | Wikipedia/Real-time_strategy |
In logic and computer science, the Davis–Putnam algorithm was developed by Martin Davis and Hilary Putnam for checking the validity of a first-order logic formula using a resolution-based decision procedure for propositional logic. Since the set of valid first-order formulas is recursively enumerable but not recursive, there exists no general algorithm to solve this problem. Therefore, the Davis–Putnam algorithm only terminates on valid formulas. Today, the term "Davis–Putnam algorithm" is often used synonymously with the resolution-based propositional decision procedure (Davis–Putnam procedure) that is actually only one of the steps of the original algorithm.
== Overview ==
The procedure is based on Herbrand's theorem, which implies that an unsatisfiable formula has an unsatisfiable ground instance, and on the fact that a formula is valid if and only if its negation is unsatisfiable. Taken together, these facts imply that to prove the validity of φ it is enough to prove that a ground instance of ¬φ is unsatisfiable. If φ is not valid, then the search for an unsatisfiable ground instance will not terminate.
The procedure for checking validity of a formula φ roughly consists of these three parts:
put the formula ¬φ in prenex form and eliminate quantifiers
generate all propositional ground instances, one by one
check if each instance is satisfiable.
If some instance is unsatisfiable, then return that φ is valid. Else continue checking.
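For concreteness, the loop can be sketched in a few lines of Python. This is an illustrative sketch, not the original implementation: negate_and_skolemize() and enumerate_ground_instances() are hypothetical helpers standing in for the first two steps, and dp_satisfiable() is the propositional procedure sketched below.

```python
def is_valid(phi):
    """Semi-decides validity: returns True if phi is valid,
    runs forever otherwise."""
    # Hypothetical helper: put ¬phi in prenex form, eliminate quantifiers.
    matrix = negate_and_skolemize(phi)
    clauses = []
    # Hypothetical helper: generate ground instances one by one.
    for instance in enumerate_ground_instances(matrix):
        clauses.extend(instance)
        # By Herbrand's theorem, ¬phi is unsatisfiable iff some finite
        # set of its ground instances is propositionally unsatisfiable.
        if not dp_satisfiable(clauses):
            return True   # ¬phi unsatisfiable, so phi is valid
```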
The last part is a resolution-based SAT solver, with an eager use of unit propagation and pure literal elimination (elimination of clauses containing variables that occur only positively or only negatively in the formula).
At each step of the SAT solver, the intermediate formula generated is equisatisfiable, but possibly not equivalent, to the original formula. The resolution step leads to a worst-case exponential blow-up in the size of the formula.
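To make the propositional step concrete, here is a minimal Python sketch of a resolution-based satisfiability test in the spirit of the Davis–Putnam procedure; it is an illustration under the stated clause representation (signed integers), not the original implementation, and practical solvers add many refinements.

```python
def dp_satisfiable(clauses):
    """Resolution-based satisfiability test for a CNF formula given as
    an iterable of clauses; each clause is an iterable of non-zero ints
    (n means variable n is true, -n means variable n is false)."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        if not clauses:
            return True                  # no clauses left: satisfiable
        if frozenset() in clauses:
            return False                 # empty clause derived: unsatisfiable
        # Unit propagation: a one-literal clause forces that literal.
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is not None:
            clauses = {c - {-unit} for c in clauses if unit not in c}
            continue
        # Pure literal elimination: drop clauses containing a literal
        # whose negation occurs nowhere in the formula.
        literals = {lit for c in clauses for lit in c}
        pure = next((lit for lit in literals if -lit not in literals), None)
        if pure is not None:
            clauses = {c for c in clauses if pure not in c}
            continue
        # Resolution: eliminate one variable by resolving every clause
        # containing it against every clause containing its negation.
        v = abs(next(iter(next(iter(clauses)))))
        pos = [c for c in clauses if v in c]
        neg = [c for c in clauses if -v in c]
        rest = {c for c in clauses if v not in c and -v not in c}
        resolvents = {(p - {v}) | (n - {-v}) for p in pos for n in neg}
        # Discard tautological resolvents (containing both lit and -lit).
        clauses = rest | {r for r in resolvents
                          if not any(-lit in r for lit in r)}
```

For example, dp_satisfiable([[1, 2], [-1, 2], [-2]]) returns False: the unit clause forces variable 2 false, which then requires variable 1 to be both true and false. Each iteration eliminates one variable, so the loop terminates, and the resolution step is exactly where the worst-case exponential growth in the number of clauses occurs.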
The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a 1962 refinement of the propositional satisfiability step of the Davis–Putnam procedure which requires only a linear amount of memory in the worst case. It replaces the resolution rule with the splitting rule: a backtracking algorithm that chooses a literal l and then recursively checks whether a simplified formula with l assigned true is satisfiable or whether a simplified formula with l assigned false is. It still forms the basis for today's (as of 2015) most efficient complete SAT solvers.
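A bare sketch of the splitting rule, using the same clause representation as above and omitting the unit-propagation and pure-literal steps that a real DPLL implementation would apply before splitting:

```python
def dpll_satisfiable(clauses):
    """Skeleton of DPLL reduced to the splitting rule; clauses are
    represented as in dp_satisfiable above."""
    clauses = {frozenset(c) for c in clauses}
    if not clauses:
        return True                        # every clause satisfied
    if frozenset() in clauses:
        return False                       # some clause falsified
    lit = next(iter(next(iter(clauses))))  # splitting rule: pick a literal

    def assign(cs, l):
        # Simplify under l := true: clauses containing l are satisfied
        # and vanish; -l is removed from the remaining clauses.
        return {c - {-l} for c in cs if l not in c}

    return (dpll_satisfiable(assign(clauses, lit)) or
            dpll_satisfiable(assign(clauses, -lit)))
```

Because the search keeps only the simplified clause sets along the current branch rather than accumulating resolvents, its memory use grows with the recursion depth, which is the refinement's key advantage over the resolution step.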
== See also ==
Herbrandization
== References ==
Davis, Martin; Putnam, Hilary (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM. 7 (3): 201–215. doi:10.1145/321033.321034.
Davis, Martin; Logemann, George; Loveland, Donald (1962). "A Machine Program for Theorem Proving". Communications of the ACM. 5 (7): 394–397. doi:10.1145/368273.368557. hdl:2027/mdp.39015095248095.
R. Dechter; I. Rish. "Directional Resolution: The Davis–Putnam Procedure, Revisited". In J. Doyle and E. Sandewall and P. Torasso (ed.). Principles of Knowledge Representation and Reasoning: Proc. of the Fourth International Conference (KR'94). Kaufmann. pp. 134–145.
John Harrison (2009). Handbook of practical logic and automated reasoning. Cambridge University Press. pp. 79–90. ISBN 978-0-521-89957-4. | Wikipedia/Davis–Putnam_algorithm |
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical (i.e. suggesting a theory to explain observed facts) rather than mathematical (i.e. proving a property for all members of a well-ordered set) induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.
Schema: positive examples + negative examples + background knowledge ⇒ hypothesis.
Inductive logic programming is particularly useful in bioinformatics and natural language processing.
== History ==
Building on earlier work on inductive inference, Gordon Plotkin was the first to formalise induction in a clausal setting around 1970, adopting an approach of generalising from examples. In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples. His first implementation was the Model Inference System in 1981: a Prolog program that inductively inferred Horn clause logic programs from positive and negative examples. The term Inductive Logic Programming was first introduced in a paper by Stephen Muggleton in 1990, defined as the intersection of machine learning and logic programming. Muggleton and Wray Buntine introduced predicate invention and inverse resolution in 1988.
Several inductive logic programming systems that proved influential appeared in the early 1990s. FOIL, introduced by Ross Quinlan in 1990, was based on upgrading the propositional learning algorithms AQ and ID3. Golem, introduced by Muggleton and Feng in 1990, went back to a restricted form of Plotkin's least generalisation algorithm. The Progol system, introduced by Muggleton in 1995, first implemented inverse entailment, and inspired many later systems. Aleph, a descendant of Progol introduced by Ashwin Srinivasan in 2001, is still one of the most widely used systems as of 2022.
At around the same time, the first practical applications emerged, particularly in bioinformatics, where by 2000 inductive logic programming had been successfully applied to drug design, carcinogenicity and mutagenicity prediction, and elucidation of the structure and function of proteins. Unlike the focus on automatic programming inherent in the early work, these fields used inductive logic programming techniques from a viewpoint of relational data mining. The success of those initial applications and the lack of progress in recovering larger traditional logic programs shaped the focus of the field.
Recently, classical tasks from automated programming have moved back into focus, as the introduction of meta-interpretative learning makes predicate invention and learning recursive programs more feasible. This technique was pioneered with the Metagol system introduced by Muggleton, Dianhuan Lin, Niels Pahlavi and Alireza Tamaddoni-Nezhad in 2014. This allows ILP systems to work with fewer examples, and brought successes in learning string transformation programs, answer set grammars and general algorithms.
== Setting ==
Inductive logic programming has adopted several different learning settings, the most common of which are learning from entailment and learning from interpretations. In both cases, the input is provided in the form of background knowledge B, a logical theory (commonly in the form of clauses used in logic programming), as well as positive and negative examples, denoted E⁺ and E⁻ respectively. The output is given as a hypothesis H, itself a logical theory that typically consists of one or more clauses.
The two settings differ in the format of examples presented.
=== Learning from entailment ===
As of 2022, learning from entailment is by far the most popular setting for inductive logic programming. In this setting, the positive and negative examples are given as finite sets E⁺ and E⁻ of positive and negated ground literals, respectively. A correct hypothesis H is a set of clauses satisfying the following requirements, where the turnstile symbol ⊨ stands for logical entailment:
Completeness: B ∪ H ⊨ E⁺
Consistency: B ∪ H ∪ E⁻ ⊭ false
Completeness requires any generated hypothesis H to explain all positive examples E⁺, and consistency forbids generation of any hypothesis H that is inconsistent with the negative examples E⁻, both given the background knowledge B.
In Muggleton's setting of concept learning, "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added: "Necessity", which postulates that B does not entail E⁺, does not impose a restriction on H, but forbids any generation of a hypothesis as long as the positive facts are explainable without it. "Weak consistency", which states that no contradiction can be derived from B ∧ H, forbids generation of any hypothesis H that contradicts the background knowledge B. Weak consistency is implied by strong consistency; if no negative examples are given, both requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.
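As a concrete illustration of the completeness and consistency conditions, here is a minimal propositional sketch in Python. It stands in for full first-order entailment by restricting B and H to ground definite clauses and using forward chaining; this simplification, and the atom names in the example, are assumptions made purely for illustration.

def consequences(clauses):
    # clauses: list of (head, frozenset_of_body_atoms); facts have empty bodies.
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

def is_correct_hypothesis(background, hypothesis, positives, negatives):
    derived = consequences(background + hypothesis)
    complete = all(e in derived for e in positives)        # B ∪ H ⊨ E⁺
    consistent = not any(e in derived for e in negatives)  # B ∪ H ∪ E⁻ ⊭ false
    return complete and consistent

# Example: B = {bird(tweety)}, H = {flies(tweety) :- bird(tweety)}:
# is_correct_hypothesis([("bird_tweety", frozenset())],
#                       [("flies_tweety", frozenset({"bird_tweety"}))],
#                       ["flies_tweety"], []) returns True.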
=== Learning from interpretations ===
In learning from interpretations, the positive and negative examples are given as a set of complete or partial Herbrand structures, each of which is itself a finite set of ground literals. Such a structure e is said to be a model of the set of clauses B ∪ H if for any substitution θ and any clause head ← body in B ∪ H such that (body)θ ⊆ e, (head)θ ⊆ e also holds. The goal is then to output a hypothesis that is complete, meaning every positive example is a model of B ∪ H, and consistent, meaning that no negative example is a model of B ∪ H.
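The difference in example format can be seen in a small ground sketch, where an interpretation is simply a set of true ground atoms and each clause is a (head, body) pair; this propositional simplification of Herbrand structures is an assumption for illustration.

def is_model(interpretation, clauses):
    # interpretation: set of ground atoms taken to be true.
    # clauses: iterable of (head, frozenset_of_body_atoms) pairs.
    # e is a model if every clause whose body holds in e also has its head in e.
    return all(head in interpretation
               for head, body in clauses if body <= interpretation)

def complete_and_consistent(clauses, positive_examples, negative_examples):
    return (all(is_model(e, clauses) for e in positive_examples)
            and not any(is_model(e, clauses) for e in negative_examples))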
== Approaches to ILP ==
An inductive logic programming system is a program that takes as input logic theories B, E⁺, E⁻ and outputs a correct hypothesis H with respect to those theories. A system is complete if and only if for any input logic theories B, E⁺, E⁻, any correct hypothesis H with respect to these input theories can be found with its hypothesis search procedure. Inductive logic programming systems can be roughly divided into two classes, search-based and meta-interpretative systems.
Search-based systems exploit the fact that the space of possible clauses forms a complete lattice under the subsumption relation, where one clause C₁ subsumes another clause C₂ if there is a substitution θ such that C₁θ, the result of applying θ to C₁, is a subset of C₂. This lattice can be traversed either bottom-up or top-down.
=== Bottom-up search ===
Bottom-up methods to search the subsumption lattice have been investigated since Plotkin's first work on formalising induction in clausal logic in 1970. Techniques used include least general generalisation, based on anti-unification, and inverse resolution, based on inverting the resolution inference rule.
==== Least general generalisation ====
A least general generalisation algorithm takes as input two clauses C₁ and C₂ and outputs their least general generalisation, that is, a clause C that subsumes C₁ and C₂, and that is subsumed by every other clause that subsumes C₁ and C₂. The least general generalisation can be computed by first computing all selections from C₁ and C₂, which are pairs of literals (L, M) ∈ C₁ × C₂ sharing the same predicate symbol and negated/unnegated status. Then, the least general generalisation is obtained as the disjunction of the least general generalisations of the individual selections, which can be obtained by first-order syntactical anti-unification.
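The following is a minimal sketch of first-order syntactical anti-unification, the operation on which literal-level least general generalisation rests. The tuple encoding of terms (functor followed by arguments, with constants as plain strings) is an illustrative assumption, not a standard representation.

def anti_unify(t1, t2, table, counter):
    # Terms are ("functor", arg1, ..., argn) tuples; constants are strings.
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: anti-unify argument by argument.
        return (t1[0],) + tuple(anti_unify(a, b, table, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Each distinct pair of mismatching subterms maps to one shared variable.
    if (t1, t2) not in table:
        counter[0] += 1
        table[(t1, t2)] = "X%d" % counter[0]
    return table[(t1, t2)]

# anti_unify(("f", "a", "b"), ("f", "a", "c"), {}, [0]) yields ("f", "a", "X1"),
# the least general generalisation of the terms f(a,b) and f(a,c).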
To account for background knowledge, inductive logic programming systems employ relative least general generalisations, which are defined in terms of subsumption relative to a background theory. In general, such relative least general generalisations are not guaranteed to exist; however, if the background theory B is a finite set of ground literals, then the negation of B is itself a clause. In this case, a relative least general generalisation can be computed by disjoining the negation of B with both C₁ and C₂ and then computing their least general generalisation as before.
Relative least general generalisations are the foundation of the bottom-up system Golem.
==== Inverse resolution ====
Inverse resolution is an inductive reasoning technique that involves inverting the resolution operator.
Inverse resolution takes information about the resolvent of a resolution step to compute possible resolving clauses. Two types of inverse resolution operator are in use in inductive logic programming: V-operators and W-operators. A V-operator takes clauses R and C₁ as input and returns a clause C₂ such that R is the resolvent of C₁ and C₂. A W-operator takes two clauses R₁ and R₂ and returns three clauses C₁, C₂ and C₃ such that R₁ is the resolvent of C₁ and C₂ and R₂ is the resolvent of C₂ and C₃.
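In the propositional case the V-operator admits a compact sketch: if R arose by resolving away a literal l of C₁, then the most specific candidate for the other parent is (R − (C₁ − {l})) ∪ {¬l}. The signed-integer literal encoding below is an illustrative assumption, and more general (less specific) second parents also exist.

def v_operator(resolvent, parent1):
    # Literals are nonzero ints; a negative int is a negated atom.
    # Returns the most specific candidates for the second parent clause.
    candidates = []
    for l in parent1:
        rest = parent1 - {l}
        if rest <= resolvent:   # the untouched part of parent1 must survive in R
            candidates.append(frozenset(resolvent - rest) | {-l})
    return candidates

# With p=1, q=2, r=3: resolving {p, q} and {-p, r} on p gives R = {q, r}, and
# v_operator(frozenset({2, 3}), frozenset({1, 2})) recovers [frozenset({-1, 3})].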
Inverse resolution was first introduced by Stephen Muggleton and Wray Buntine in 1988 for use in the inductive logic programming system Cigol. By 1993, this spawned a surge of research into inverse resolution operators and their properties.
=== Top-down search ===
The ILP systems Progol, Hail and Imparo find a hypothesis H using the principle of inverse entailment for theories B, E, H:

B ∧ H ⊨ E ⟺ B ∧ ¬E ⊨ ¬H.

First they construct an intermediate theory F, called a bridge theory, satisfying the conditions B ∧ ¬E ⊨ F and F ⊨ ¬H. Then, as H ⊨ ¬F, they generalize the negation of the bridge theory F with anti-entailment. However, the operation of anti-entailment is computationally more expensive, since it is highly nondeterministic. Therefore, an alternative hypothesis search can be conducted using the inverse subsumption (anti-subsumption) operation instead, which is less non-deterministic than anti-entailment.
Questions of completeness arise for the hypothesis search procedure of a specific inductive logic programming system. For example, Progol's hypothesis search procedure, based on the inverse entailment inference rule, is not complete, as shown by Yamamoto's example. On the other hand, Imparo is complete both with its anti-entailment procedure and with its extended inverse subsumption procedure.
=== Metainterpretive learning ===
Rather than explicitly searching the hypothesis graph, metainterpretive or meta-level systems encode the inductive logic programming problem as a meta-level logic program which is then solved to obtain an optimal hypothesis. Formalisms used to express the problem specification include Prolog and answer set programming, with existing Prolog systems and answer set solvers used for solving the constraints.
An example of a Prolog-based system is Metagol, which is based on a meta-interpreter in Prolog, while ASPAL and ILASP are based on an encoding of the inductive logic programming problem in answer set programming.
=== Evolutionary learning ===
Evolutionary algorithms in ILP use a population-based approach to evolve hypotheses, refining them through selection, crossover, and mutation. Methods like EvoLearner have been shown to outperform traditional approaches on structured machine learning benchmarks.
== List of implementations ==
1BC and 1BC2: first-order naive Bayesian classifiers
ACE (A Combined Engine)
Aleph
Atom
Claudien
DL-Learner
DMax
FastLAS (Fast Learning from Answer Sets)
FOIL (First Order Inductive Learner)
Golem
ILASP (Inductive Learning of Answer Set Programs)
Imparo
Inthelex (INcremental THEory Learner from EXamples)
Lime
Metagol
Mio
MIS (Model Inference System) by Ehud Shapiro
Ontolearn
Popper
PROGOL
RSD
Warmr (now included in ACE)
ProGolem
== Probabilistic inductive logic programming ==
Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning probabilistic logic programs. It can be considered as a form of statistical relational learning within the formalism of probabilistic logic programming.
Given background knowledge as a probabilistic logic program B, and a set of positive and negative examples E⁺ and E⁻, the goal of probabilistic inductive logic programming is to find a probabilistic logic program H such that the probability of the positive examples according to H ∪ B is maximized and the probability of the negative examples is minimized.
This problem has two variants: parameter learning and structure learning. In the former, one is given the structure (the clauses) of H and the goal is to infer the probability annotations of the given clauses, while in the latter the goal is to infer both the structure and the probability parameters of H. Just as in classical inductive logic programming, the examples can be given as ground facts or as (partial) interpretations.
=== Parameter Learning ===
Parameter learning for languages following the distribution semantics has been performed by using an expectation-maximisation algorithm or by gradient descent.
An expectation-maximisation algorithm consists of a cycle in which the steps of expectation and maximization are repeatedly performed. In the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, the new values of the parameters are computed.
Gradient descent methods compute the gradient of the target function and iteratively modify the parameters moving in the direction of the gradient.
=== Structure Learning ===
Structure learning was pioneered by Daphne Koller and Avi Pfeffer in 1997, where the authors learn the structure of first-order rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying graphical model in a preliminary step and then applying expectation-maximisation.
In 2008, De Raedt et al. presented an algorithm for performing theory compression on ProbLog programs, where theory compression refers to a process of removing as many clauses as possible from the theory in order to maximize the probability of a given set of positive and negative examples. No new clause can be added to the theory.
In the same year, Meert, W. et al. introduced a method for learning parameters and structure of ground probabilistic logic programs by considering the Bayesian networks equivalent to them and applying techniques for learning Bayesian networks.
ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system FOIL with ProbLog. Logical rules are learned from probabilistic data in the sense that both the examples themselves and their classifications can be probabilistic. The set of rules has to allow one to predict the probability of the examples from their description. In this setting, the parameters (the probability values) are fixed and the structure has to be learned.
In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimizing the parameters of each theory using expectation-maximisation.
Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in Progol to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from that of the theory: the space of clauses is explored with a beam search, while the space of theories is searched greedily.
== See also ==
Commonsense reasoning
Formal concept analysis
Inductive reasoning
Inductive programming
Inductive probability
Statistical relational learning
Version space learning
== References ==
This article incorporates text from a free content work. Licensed under CC-BY 4.0 (license statement/permission). Text taken from A History of Probabilistic Inductive Logic Programming, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media.
== Further reading == | Wikipedia/Inverse_resolution |
In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology (evolution, family trees, epidemiology) to information science (citation networks) to computation (scheduling).
Directed acyclic graphs are also called acyclic directed graphs or acyclic digraphs.
== Definitions ==
A graph is formed by vertices and by edges connecting pairs of vertices, where the vertices can be any kind of object that is connected in pairs by edges. In the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the same as the starting vertex of the next edge in the sequence; a path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles.
A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges). If a vertex can reach itself via a nontrivial path (a path with one or more edges), then that path is a cycle, so another way to define directed acyclic graphs is that they are the graphs in which no vertex can reach itself via a nontrivial path.
== Mathematical properties ==
=== Reachability relation, transitive closure, and transitive reduction ===
The reachability relation of a DAG can be formalized as a partial order ≤ on the vertices of the DAG. In this partial order, two vertices u and v are ordered as u ≤ v exactly when there exists a directed path from u to v in the DAG; that is, when u can reach v (or v is reachable from u). However, different DAGs may give rise to the same reachability relation and the same partial order. For example, a DAG with two edges u → v and v → w has the same reachability relation as the DAG with three edges u → v, v → w, and u → w. Both of these DAGs produce the same partial order, in which the vertices are ordered as u ≤ v ≤ w.
The transitive closure of a DAG is the graph with the most edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the reachability relation ≤ of the DAG, and may therefore be thought of as a direct translation of the reachability relation ≤ into graph-theoretic terms. The same method of translating partial orders into DAGs works more generally: for every finite partially ordered set (S, ≤), the graph that has a vertex for every element of S and an edge for every pair of elements in ≤ is automatically a transitively closed DAG, and has (S, ≤) as its reachability relation. In this way, every finite partially ordered set can be represented as a DAG.
The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG. It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v.
Like the transitive closure, the transitive reduction is uniquely defined for DAGs. In contrast, for a directed graph that is not acyclic, there can be more than one minimal subgraph with the same reachability relation. Transitive reductions are useful in visualizing the partial orders they represent, because they have fewer edges than other graphs representing the same orders and therefore lead to simpler graph drawings. A Hasse diagram of a partial order is a drawing of the transitive reduction in which the orientation of every edge is shown by placing the starting vertex of the edge in a lower position than its ending vertex.
=== Topological ordering ===
A topological ordering of a directed graph is an ordering of its vertices into a sequence, such that for every edge the start vertex of the edge occurs earlier in the sequence than the ending vertex of the edge. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic. Conversely, every directed acyclic graph has at least one topological ordering. The existence of a topological ordering can therefore be used as an equivalent definition of directed acyclic graphs: they are exactly the graphs that have topological orderings.
In general, this ordering is not unique; a DAG has a unique topological ordering if and only if it has a directed path containing all the vertices, in which case the ordering is the same as the order in which the vertices appear in the path.
The family of topological orderings of a DAG is the same as the family of linear extensions of the reachability relation for the DAG, so any two graphs representing the same partial order have the same set of topological orders.
=== Combinatorial enumeration ===
The graph enumeration problem of counting directed acyclic graphs was studied by Robinson (1973).
The number of DAGs on n labeled vertices, for n = 0, 1, 2, 3, … (without restrictions on the order in which these numbers appear in a topological ordering of the DAG) is
1, 1, 3, 25, 543, 29281, 3781503, … (sequence A003024 in the OEIS).
These numbers may be computed by the recurrence relation
a_{n} = \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} 2^{k(n-k)} a_{n-k}.
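The recurrence translates directly into a few lines of Python; the memoised helper below is only a sketch for checking small values against the sequence above.

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def count_dags(n):
    # Number of DAGs on n labeled vertices (OEIS A003024).
    if n == 0:
        return 1
    return sum((-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * count_dags(n - k)
               for k in range(1, n + 1))

# [count_dags(n) for n in range(5)] == [1, 1, 3, 25, 543]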
Eric W. Weisstein conjectured, and McKay et al. (2004) proved, that the same numbers count the (0,1) matrices for which all eigenvalues are positive real numbers. The proof is bijective: a matrix A is an adjacency matrix of a DAG if and only if A + I is a (0,1) matrix with all eigenvalues positive, where I denotes the identity matrix. Because a DAG cannot have self-loops, its adjacency matrix must have a zero diagonal, so adding I preserves the property that all matrix coefficients are 0 or 1.
=== Related families of graphs ===
A multitree (also called a strongly unambiguous graph or a mangrove) is a DAG in which there is at most one directed path between any two vertices. Equivalently, it is a DAG in which the subgraph reachable from any vertex induces an undirected tree.
A polytree (also called a directed tree) is a multitree formed by orienting the edges of an undirected tree.
An arborescence is a polytree formed by orienting the edges of an undirected tree away from a particular vertex, called the root of the arborescence.
== Computational problems ==
=== Topological sorting and recognition ===
Topological sorting is the algorithmic problem of finding a topological ordering of a given DAG. It can be solved in linear time. Kahn's algorithm for topological sorting builds the vertex ordering directly. It maintains a list of vertices that have no incoming edges from other vertices that have not already been included in the partially constructed topological ordering; initially this list consists of the vertices with no incoming edges at all. Then, it repeatedly adds one vertex from this list to the end of the partially constructed topological ordering, and checks whether its neighbors should be added to the list. The algorithm terminates when all vertices have been processed in this way. Alternatively, a topological ordering may be constructed by reversing a postorder numbering of a depth-first search graph traversal.
It is also possible to check whether a given directed graph is a DAG in linear time, either by attempting to find a topological ordering and then testing for each edge whether the resulting ordering is valid, or alternatively, for some topological sorting algorithms, by verifying that the algorithm successfully orders all the vertices without meeting an error condition.
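A short sketch of Kahn's algorithm in Python follows; the edge-list input format is an assumption, and the final length check doubles as the linear-time DAG recognition test just described.

from collections import deque

def topological_order(vertices, edges):
    # edges: iterable of (u, v) pairs, each meaning an edge u -> v.
    indegree = {v: 0 for v in vertices}
    successors = {v: [] for v in vertices}
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    # Start from the vertices with no incoming edges at all.
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in successors[u]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(indegree):
        raise ValueError("the graph contains a cycle, so it is not a DAG")
    return order

# topological_order("abc", [("a", "b"), ("b", "c")]) returns ['a', 'b', 'c'].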
=== Construction from cyclic graphs ===
Any undirected graph may be made into a DAG by choosing a total order for its vertices and directing every edge from the earlier endpoint in the order to the later endpoint. The resulting orientation of the edges is called an acyclic orientation. Different total orders may lead to the same acyclic orientation, so an n-vertex graph can have fewer than n! acyclic orientations. The number of acyclic orientations is equal to |χ(−1)|, where χ is the chromatic polynomial of the given graph.
Any directed graph may be made into a DAG by removing a feedback vertex set or a feedback arc set, a set of vertices or edges (respectively) that touches all cycles. However, the smallest such set is NP-hard to find. An arbitrary directed graph may also be transformed into a DAG, called its condensation, by contracting each of its strongly connected components into a single supervertex. When the graph is already acyclic, its smallest feedback vertex sets and feedback arc sets are empty, and its condensation is the graph itself.
=== Transitive closure and transitive reduction ===
The transitive closure of a given DAG, with n vertices and m edges, may be constructed in time O(mn) by using either breadth-first search or depth-first search to test reachability from each vertex. Alternatively, it can be solved in time O(n^ω), where ω < 2.373 is the exponent for matrix multiplication algorithms; this is a theoretical improvement over the O(mn) bound for dense graphs.
In all of these transitive closure algorithms, it is possible to distinguish pairs of vertices that are reachable by at least one path of length two or more from pairs that can only be connected by a length-one path. The transitive reduction consists of the edges that form length-one paths that are the only paths connecting their endpoints. Therefore, the transitive reduction can be constructed in the same asymptotic time bounds as the transitive closure.
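The O(mn) construction amounts to one graph search per vertex, as in this sketch (depth-first; the successor-dictionary input format is an assumption for illustration):

def transitive_closure(successors):
    # successors: dict mapping each vertex to a list of its direct successors.
    closure = {}
    for source in successors:
        seen, stack = set(), list(successors[source])
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(successors.get(v, []))
        closure[source] = seen   # all vertices reachable from source
    return closure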
=== Closure problem ===
The closure problem takes as input a vertex-weighted directed acyclic graph and seeks the minimum (or maximum) weight of a closure – a set of vertices C, such that no edges leave C. The problem may be formulated for directed graphs without the assumption of acyclicity, but with no greater generality, because in this case it is equivalent to the same problem on the condensation of the graph. It may be solved in polynomial time using a reduction to the maximum flow problem.
=== Path algorithms ===
Some algorithms become simpler when used on DAGs instead of general graphs, based on the principle of topological ordering. For example, it is possible to find shortest paths and longest paths from a given starting vertex in DAGs in linear time by processing the vertices in a topological order, and calculating the path length for each vertex to be the minimum or maximum length obtained via any of its incoming edges. In contrast, for arbitrary graphs the shortest path may require slower algorithms such as Dijkstra's algorithm or the Bellman–Ford algorithm, and longest paths in arbitrary graphs are NP-hard to find.
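A sketch of the linear-time path computation follows; processing vertices in topological order guarantees that each distance is final when it is read. The input format is an assumption, and using min gives shortest paths while max (with -inf) would give longest paths.

def dag_shortest_paths(order, successors, source):
    # order: a topological ordering of all vertices.
    # successors: dict mapping u to a list of (v, weight) pairs for edges u -> v.
    dist = {v: float("inf") for v in order}
    dist[source] = 0
    for u in order:
        if dist[u] == float("inf"):
            continue                     # u is unreachable from source
        for v, weight in successors.get(u, []):
            dist[v] = min(dist[v], dist[u] + weight)
    return dist

# Replacing min with max (and inf with -inf) computes longest paths instead.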
== Applications ==
=== Scheduling ===
Directed acyclic graph representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints.
An important class of problems of this type concern collections of objects that need to be updated, such as the cells of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed.
In this context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. A cycle in this graph is called a circular dependency, and is generally not allowed, because there would be no way to consistently schedule the tasks involved in the cycle.
Dependency graphs without circular dependencies form DAGs.
For instance, when one cell of a spreadsheet changes, it is necessary to recalculate the values of other cells that depend directly or indirectly on the changed cell. For this problem, the tasks to be scheduled are the recalculations of the values of individual cells of the spreadsheet. Dependencies arise when an expression in one cell uses a value from another cell. In such a case, the value that is used must be recalculated earlier than the expression that uses it. Topologically ordering the dependency graph, and using this topological order to schedule the cell updates, allows the whole spreadsheet to be updated with only a single evaluation per cell. Similar problems of task ordering arise in makefiles for program compilation and instruction scheduling for low-level computer program optimization.
A somewhat different DAG-based formulation of scheduling constraints is used by the program evaluation and review technique (PERT), a method for management of large human projects that was one of the first applications of DAGs. In this method, the vertices of a DAG represent milestones of a project rather than specific tasks to be performed. Instead, a task or activity is represented by an edge of a DAG, connecting two milestones that mark the beginning and completion of the task. Each such edge is labeled with an estimate for the amount of time that it will take a team of workers to perform the task. The longest path in this DAG represents the critical path of the project, the one that controls the total time for the project. Individual milestones can be scheduled according to the lengths of the longest paths ending at their vertices.
=== Data processing networks ===
A directed acyclic graph may be used to represent a network of processing elements. In this representation, data enters a processing element through its incoming edges and leaves the element through its outgoing edges.
For instance, in electronic circuit design, static combinational logic blocks can be represented as an acyclic system of logic gates that computes a function of an input, where the input and output of the function are represented as individual bits. In general, the output of these blocks cannot be used as the input unless it is captured by a register or state element which maintains its acyclic properties. Electronic circuit schematics either on paper or in a database are a form of directed acyclic graphs using instances or components to form a directed reference to a lower level component. Electronic circuits themselves are not necessarily acyclic or directed.
Dataflow programming languages describe systems of operations on data streams, and the connections between the outputs of some operations and the inputs of others. These languages can be convenient for describing repetitive data processing tasks, in which the same acyclically-connected collection of operations is applied to many data items. They can be executed as a parallel algorithm in which each operation is performed by a parallel process as soon as another set of inputs becomes available to it.
In compilers, straight line code (that is, sequences of statements without loops or conditional branches) may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code. This representation allows the compiler to perform common subexpression elimination efficiently. At a higher level of code organization, the acyclic dependencies principle states that the dependencies between modules or components of a large software system should form a directed acyclic graph.
Feedforward neural networks are another example.
=== Causal structures ===
Graphs in which vertices represent events occurring at a definite time, and where the edges always point from an earlier time vertex to a later time vertex, are necessarily directed and acyclic. The lack of a cycle follows because the time associated with a vertex always increases as you follow any directed path in the graph, so you can never return to a vertex on a path. This reflects our natural intuition that causality means events can only affect the future, they never affect the past, and thus we have no causal loops. Examples of this type of directed acyclic graph are those encountered in the causal set approach to quantum gravity, though in this case the graphs considered are transitively complete. In the version history example below, each version of the software is associated with a unique time, typically the time the version was saved, committed or released. In the citation graph examples below, the documents are published at one time and can only refer to older documents.
Sometimes events are not associated with a specific physical time. Provided that pairs of events have a purely causal relationship, that is edges represent causal relations between the events, we will have a directed acyclic graph. For instance, a Bayesian network represents a system of probabilistic events as vertices in a directed acyclic graph, in which the likelihood of an event may be calculated from the likelihoods of its predecessors in the DAG. In this context, the moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same vertex (sometimes called marrying), and then replacing all directed edges by undirected edges. Another type of graph with a similar causal structure is an influence diagram, the vertices of which represent either decisions to be made or unknown information, and the edges of which represent causal influences from one vertex to another. In epidemiology, for instance, these diagrams are often used to estimate the expected value of different choices for intervention.
The converse is also true: in any application represented by a directed acyclic graph there is a causal structure, either an explicit order or time in the example, or an order that can be derived from the graph structure. This follows because all directed acyclic graphs have a topological ordering, i.e. there is at least one way to put the vertices in an order such that all edges point in the same direction along that order.
=== Genealogy and version history ===
Family trees may be seen as directed acyclic graphs, with a vertex for each family member and an edge for each parent-child relationship. Despite the name, these graphs are not necessarily trees because of the possibility of marriages between relatives (so a child has a common ancestor on both the mother's and father's side) causing pedigree collapse. The graphs of matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) are trees within this graph. Because no one can become their own ancestor, family trees are acyclic.
The version history of a distributed revision control system, such as Git, generally has the structure of a directed acyclic graph, in which there is a vertex for each revision and an edge connecting pairs of revisions that were directly derived from each other. These are not trees in general due to merges.
In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing the version history of a geometric structure over the course of a sequence of changes to the structure. For instance in a randomized incremental algorithm for Delaunay triangulation, the triangulation changes by replacing one triangle by three smaller triangles when each point is added, and by "flip" operations that replace pairs of triangles by a different pair of triangles. The history DAG for this algorithm has a vertex for each triangle constructed as part of the algorithm, and edges from each triangle to the two or three other triangles that replace it. This structure allows point location queries to be answered efficiently: to find the location of a query point q in the Delaunay triangulation, follow a path in the history DAG, at each step moving to the replacement triangle that contains q. The final triangle reached in this path must be the Delaunay triangle that contains q.
=== Citation graphs ===
In a citation graph the vertices are documents with a single publication date. The edges represent the citations from the bibliography of one document to other necessarily earlier documents. The classic example comes from the citations between academic papers as pointed out in the 1965 article "Networks of Scientific Papers" by Derek J. de Solla Price, who went on to produce the first model of a citation network, the Price model. In this case the citation count of a paper is just the in-degree of the corresponding vertex of the citation network. This is an important measure in citation analysis. Court judgements provide another example, as judges support their conclusions in one case by recalling other earlier decisions made in previous cases. A final example is provided by patents, which must refer to earlier prior art, earlier patents which are relevant to the current patent claim. By taking the special properties of directed acyclic graphs into account, one can analyse citation networks with techniques not available when analysing the general graphs considered in many studies using network analysis. For instance, transitive reduction gives new insights into the citation distributions found in different applications, highlighting clear differences in the mechanisms creating citation networks in different contexts. Another technique is main path analysis, which traces the citation links and suggests the most significant citation chains in a given citation graph.
The Price model is too simple to be a realistic model of a citation network but it is simple enough to allow for analytic solutions for some of its properties. Many of these can be found by using results derived from the undirected version of the Price model, the Barabási–Albert model. However, since Price's model gives a directed acyclic graph, it is a useful model when looking for analytic calculations of properties unique to directed acyclic graphs. For instance,
the length of the longest path, from the n-th node added to the network to the first node in the network, scales as ln(n).
=== Data compression ===
Directed acyclic graphs may also be used as a compact representation of a collection of sequences. In this type of application, one finds a DAG in which the paths form the given sequences. When many of the sequences share the same subsequences, these shared subsequences can be represented by a shared part of the DAG, allowing the representation to use less space than it would take to list out all of the sequences separately. For example, the directed acyclic word graph is a data structure in computer science formed by a directed acyclic graph with a single source and with edges labeled by letters or symbols; the paths from the source to the sinks in this graph represent a set of strings, such as English words. Any set of sequences can be represented as paths in a tree, by forming a tree vertex for every prefix of a sequence and making the parent of one of these vertices represent the sequence with one fewer element; the tree formed in this way for a set of strings is called a trie. A directed acyclic word graph saves space over a trie by allowing paths to diverge and rejoin, so that a set of words with the same possible suffixes can be represented by a single tree vertex.
The same idea of using a DAG to represent a family of paths occurs in the binary decision diagram, a DAG-based data structure for representing binary functions. In a binary decision diagram, each non-sink vertex is labeled by the name of a binary variable, and each sink and each edge is labeled by a 0 or 1. The function value for any truth assignment to the variables is the value at the sink found by following a path, starting from the single source vertex, that at each non-sink vertex follows the outgoing edge labeled with the value of that vertex's variable. Just as directed acyclic word graphs can be viewed as a compressed form of tries, binary decision diagrams can be viewed as compressed forms of decision trees that save space by allowing paths to rejoin when they agree on the results of all remaining decisions.
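Evaluating a binary decision diagram is exactly the path-following procedure just described, as in this sketch; the nested-tuple node encoding is an illustrative assumption rather than a standard BDD representation.

def evaluate_bdd(node, assignment):
    # node: either an int sink (0 or 1) or a tuple (variable, low, high),
    # where low/high are the subdiagrams followed when the variable is 0 or 1.
    while not isinstance(node, int):
        variable, low, high = node
        node = high if assignment[variable] else low
    return node

# Diagram for "x and y": evaluate_bdd(("x", 0, ("y", 0, 1)), {"x": 1, "y": 1}) == 1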
== References ==
== External links ==
Weisstein, Eric W., "Acyclic Digraph", MathWorld
DAGitty – an online tool for creating DAGs | Wikipedia/Directed_Acyclic_Graph |
In mathematical logic, a formula of first-order logic is in Skolem normal form if it is in prenex normal form with only universal first-order quantifiers.
Every first-order formula may be converted into Skolem normal form while not changing its satisfiability via a process called Skolemization (sometimes spelled Skolemnization). The resulting formula is not necessarily equivalent to the original one, but is equisatisfiable with it: it is satisfiable if and only if the original one is satisfiable.
Reduction to Skolem normal form is a method for removing existential quantifiers from formal logic statements, often performed as the first step in an automated theorem prover.
== Examples ==
The simplest form of Skolemization is for existentially quantified variables that are not inside the scope of a universal quantifier. These may be replaced simply by creating new constants. For example, ∃x P(x) may be changed to P(c), where c is a new constant (one that does not occur anywhere else in the formula).
More generally, Skolemization is performed by replacing every existentially quantified variable y with a term f(x₁, …, xₙ) whose function symbol f is new. The variables of this term are as follows. If the formula is in prenex normal form, then x₁, …, xₙ are the variables that are universally quantified and whose quantifiers precede that of y. In general, they are the variables that are quantified universally (we assume we get rid of existential quantifiers in order, so all existential quantifiers before ∃y have been removed) and such that ∃y occurs in the scope of their quantifiers. The function f introduced in this process is called a Skolem function (or Skolem constant if it is of zero arity) and the term is called a Skolem term.
As an example, the formula ∀x ∃y ∀z P(x, y, z) is not in Skolem normal form because it contains the existential quantifier ∃y. Skolemization replaces y with f(x), where f is a new function symbol, and removes the quantification over y. The resulting formula is ∀x ∀z P(x, f(x), z). The Skolem term f(x) contains x, but not z, because the quantifier to be removed, ∃y, is in the scope of ∀x but not in that of ∀z; since this formula is in prenex normal form, this is equivalent to saying that, in the list of quantifiers, x precedes y while z does not. The formula obtained by this transformation is satisfiable if and only if the original formula is.
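For prenex formulas this replacement rule is mechanical, as in the following sketch; the quantifier-list representation and the generated function-symbol names f1, f2, … are illustrative assumptions.

from itertools import count

def skolemize_prenex(quantifiers):
    # quantifiers: list of ("forall" | "exists", variable) for a prenex formula.
    # Returns the remaining universal prefix and a substitution mapping each
    # existential variable to a Skolem term over the preceding universals.
    fresh = count(1)
    universals, substitution = [], {}
    for kind, variable in quantifiers:
        if kind == "forall":
            universals.append(variable)
        else:
            substitution[variable] = ("f%d" % next(fresh), tuple(universals))
    return universals, substitution

# skolemize_prenex([("forall", "x"), ("exists", "y"), ("forall", "z")])
# returns (["x", "z"], {"y": ("f1", ("x",))}), matching the example above.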
== How Skolemization works ==
Skolemization works by applying a second-order equivalence together with the definition of first-order satisfiability. The equivalence provides a way for "moving" an existential quantifier before a universal one.
∀x ∃y R(x, y) ⟺ ∃f ∀x R(x, f(x))

where f(x) is a function that maps x to y.
Intuitively, the sentence "for every x there exists a y such that R(x, y)" is converted into the equivalent form "there exists a function f mapping every x into a y such that, for every x, it holds that R(x, f(x))".
This equivalence is useful because the definition of first-order satisfiability implicitly existentially quantifies over functions interpreting the function symbols. In particular, a first-order formula Φ is satisfiable if there exists a model M and an evaluation μ of the free variables of the formula that evaluate the formula to true. The model contains the interpretation of all function symbols; therefore, Skolem functions are implicitly existentially quantified. In the example above, ∀x R(x, f(x)) is satisfiable if and only if there exists a model M, which contains an interpretation for f, such that ∀x R(x, f(x)) is true for some evaluation of its free variables (none in this case). This may be expressed in second order as ∃f ∀x R(x, f(x)). By the above equivalence, this is the same as the satisfiability of ∀x ∃y R(x, y).
At the meta-level, first-order satisfiability of a formula Φ may be written with a little abuse of notation as ∃M ∃μ (M, μ ⊨ Φ), where M is a model, μ is an evaluation of the free variables, and ⊨ means that Φ is true in M under μ. Since first-order models contain the interpretation of all function symbols, any Skolem function that Φ contains is implicitly existentially quantified by ∃M. As a result, after replacing existential quantifiers over variables by existential quantifiers over functions at the front of the formula, the formula still may be treated as a first-order one by removing these existential quantifiers. This final step of treating ∃f ∀x R(x, f(x)) as ∀x R(x, f(x)) may be completed because functions are implicitly existentially quantified by ∃M in the definition of first-order satisfiability.
Correctness of Skolemization may be shown on the example formula F₁ = ∀x₁ … ∀xₙ ∃y R(x₁, …, xₙ, y) as follows. This formula is satisfied by a model M if and only if, for each possible value for x₁, …, xₙ in the domain of the model, there exists a value for y in the domain of the model that makes R(x₁, …, xₙ, y) true. By the axiom of choice, there exists a function f such that y = f(x₁, …, xₙ). As a result, the formula F₂ = ∀x₁ … ∀xₙ R(x₁, …, xₙ, f(x₁, …, xₙ)) is satisfiable, because it has the model obtained by adding the interpretation of f to M. This shows that F₁ is satisfiable only if F₂ is satisfiable as well. Conversely, if F₂ is satisfiable, then there exists a model M′ that satisfies it; this model includes an interpretation for the function f such that, for every value of x₁, …, xₙ, the formula R(x₁, …, xₙ, f(x₁, …, xₙ)) holds. As a result, F₁ is satisfied by the same model, because one may choose, for every value of x₁, …, xₙ, the value y = f(x₁, …, xₙ), where f is evaluated according to M′.
== Uses of Skolemization ==
One of the uses of Skolemization is within automated theorem proving. For example, in the method of analytic tableaux, whenever a formula whose leading quantifier is existential occurs, the formula obtained by removing that quantifier via Skolemization may be generated. For example, if ∃x Φ(x, y₁, …, yₙ) occurs in a tableau, where x, y₁, …, yₙ are the free variables of Φ(x, y₁, …, yₙ), then Φ(f(y₁, …, yₙ), y₁, …, yₙ) may be added to the same branch of the tableau. This addition does not alter the satisfiability of the tableau: every model of the old formula may be extended, by adding a suitable interpretation of f, to a model of the new formula.
This form of Skolemization is an improvement over "classical" Skolemization in that only variables that are free in the formula are placed in the Skolem term. This is an improvement because the semantics of tableaux may implicitly place the formula in the scope of some universally quantified variables that are not in the formula itself; these variables are not in the Skolem term, while they would be there according to the original definition of Skolemization. Another improvement that may be used is applying the same Skolem function symbol for formulae that are identical up to variable renaming.
Another use is in the resolution method for first-order logic, where formulas are represented as sets of clauses understood to be universally quantified. (For an example see drinker paradox.)
An important result in model theory is the Löwenheim–Skolem theorem, which can be proven via Skolemizing the theory and closing under the resulting Skolem functions.
== Skolem theories ==
In general, if T is a theory and for each formula with free variables x1,…,xn,y there is an n-ary function symbol F that is provably a Skolem function for y, then T is called a Skolem theory.
Every Skolem theory is model complete, i.e. every substructure of a model is an elementary substructure. Given a model M of a Skolem theory T, the smallest substructure of M containing a certain set A is called the Skolem hull of A. The Skolem hull of A is an atomic prime model over A.
== History ==
Skolem normal form is named after the Norwegian mathematician Thoralf Skolem.
== See also ==
Herbrandization, the dual of Skolemization
Predicate functor logic
== Notes ==
== References ==
== External links ==
"Skolem function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Skolemization on PlanetMath.org
Skolemization by Hector Zenil, The Wolfram Demonstrations Project.
Weisstein, Eric W. "SkolemizedForm". MathWorld. | Wikipedia/Skolem_function |
SLD resolution (Selective Linear Definite clause resolution) is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
== The SLD inference rule ==
Given a goal clause, represented as the negation of a problem to be solved:
¬L1 ∨ ⋯ ∨ ¬Li ∨ ⋯ ∨ ¬Ln
with selected literal ¬Li, and an input definite clause:
L ∨ ¬K1 ∨ ⋯ ∨ ¬Km
whose positive literal (atom) L unifies with the atom Li of the selected literal ¬Li, SLD resolution derives another goal clause, in which the selected literal is replaced by the negative literals of the input clause and the unifying substitution θ is applied:
(¬L1 ∨ ⋯ ∨ ¬K1 ∨ ⋯ ∨ ¬Km ∨ ⋯ ∨ ¬Ln)θ
In the simplest case, in propositional logic, the atoms Li and L are identical, and the unifying substitution θ is vacuous. However, in the more general case, the unifying substitution is necessary to make the two literals identical.
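In this propositional case the inference rule is short enough to state as code. The sketch below is a hedged illustration (the clause representation and all names are invented for the example, with plain equality standing in for unification):

def sld_step(goal, clause, i=0):
    """One propositional SLD resolution step.
    goal:   tuple of atoms (L1,...,Ln), read as ¬L1 ∨ ... ∨ ¬Ln
    clause: (head, (K1,...,Km)),        read as head ∨ ¬K1 ∨ ... ∨ ¬Km
    i:      index of the selected atom
    Returns the derived goal clause, or None if the head does not match."""
    head, body = clause
    if goal[i] != head:       # in first-order SLD this test is unification
        return None
    return goal[:i] + body + goal[i+1:]

# Resolving the goal ¬q with the clause q ∨ ¬p yields the new goal ¬p:
print(sld_step(('q',), ('q', ('p',))))   # -> ('p',)

In the first-order case the equality test would be replaced by a call to a unification procedure, and the resulting substitution θ would be applied to the derived goal.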
== The origin of the name "SLD" ==
The name "SLD resolution" was given by Maarten van Emden for the unnamed inference rule introduced by Robert Kowalski. Its name is derived from SL resolution, which is both sound and refutation complete for the unrestricted clausal form of logic. "SLD" stands for "SL resolution with Definite clauses".
In both SL and SLD, "L" stands for the fact that a resolution proof can be restricted to a linear sequence of clauses C1, C2, ⋯, Cl, where the "top clause" C1 is an input clause, and every other clause Ci+1 is a resolvent one of whose parents is the previous clause Ci. The proof is a refutation if the last clause Cl is the empty clause.
In SLD, all of the clauses in the sequence are goal clauses, and the other parent is an input clause. In SL resolution, the other parent is either an input clause or an ancestor clause earlier in the sequence.
In both SL and SLD, "S" stands for the fact that the only literal resolved upon in any clause
C
i
{\displaystyle C_{i}\,}
is one that is uniquely selected by a selection rule or selection function. In SL resolution, the selected literal is restricted to one which has been most recently introduced into the clause. In the simplest case, such a last-in-first-out selection function can be specified by the order in which literals are written, as in Prolog. However, the selection function in SLD resolution is more general than in SL resolution and in Prolog. There is no restriction on the literal that can be selected.
== The computational interpretation of SLD resolution ==
In clausal logic, an SLD refutation demonstrates that the input set of clauses is unsatisfiable. In logic programming, however, an SLD refutation also has a computational interpretation. The top clause ¬L1 ∨ ⋯ ∨ ¬Li ∨ ⋯ ∨ ¬Ln can be interpreted as the denial of a conjunction of subgoals L1 ∧ ⋯ ∧ Li ∧ ⋯ ∧ Ln. The derivation of clause Ci+1 from Ci is the derivation, by means of backward reasoning, of a new set of subgoals using an input clause as a goal-reduction procedure. The unifying substitution θ both passes input from the selected subgoal to the body of the procedure and simultaneously passes output from the head of the procedure to the remaining unselected subgoals. The empty clause is simply an empty set of subgoals, which signals that the initial conjunction of subgoals in the top clause has been solved.
== SLD resolution strategies ==
SLD resolution implicitly defines a search tree of alternative computations, in which the initial goal clause is associated with the root of the tree. For every node in the tree and for every definite clause in the program whose positive literal unifies with the selected literal in the goal clause associated with the node, there is a child node associated with the goal clause obtained by SLD resolution.
A leaf node, which has no children, is a success node if its associated goal clause is the empty clause. It is a failure node if its associated goal clause is non-empty but its selected literal unifies with no positive literal of definite clauses in the program.
SLD resolution is non-deterministic in the sense that it does not determine the search strategy for exploring the search tree. Prolog searches the tree depth-first, one branch at a time, using backtracking when it encounters a failure node. Depth-first search is very efficient in its use of computing resources, but is incomplete if the search space contains infinite branches and the search strategy searches these in preference to finite branches: the computation does not terminate. Other search strategies, including breadth-first, best-first, and branch-and-bound search are also possible. Moreover, the search can be carried out sequentially, one node at a time, or in parallel, many nodes simultaneously.
SLD resolution is also non-deterministic in the sense, mentioned earlier, that the selection rule is not determined by the inference rule, but is determined by a separate decision procedure, which can be sensitive to the dynamics of the program execution process.
The SLD resolution search space is an or-tree, in which different branches represent alternative computations. In the case of propositional logic programs, SLD can be generalised so that the search space is an and-or tree, whose nodes are labelled by single literals, representing subgoals, and nodes are joined either by conjunction or by disjunction. In the general case, where conjoint subgoals share variables, the and-or tree representation is more complicated.
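As an illustration of the depth-first strategy, the following Python sketch explores the or-tree one branch at a time, backtracking on failure (a propositional toy with invented names; a crude depth bound stands in for the incompleteness caveat about infinite branches discussed above):

def solve(goal, program, depth=25):
    """Depth-first SLD search for a propositional goal clause.
    goal: tuple of atoms still to be proved; program: list of
    (head, body) clauses. Returns True on reaching the empty clause."""
    if not goal:
        return True                      # success node: empty clause
    if depth == 0:
        return False                     # give up on a too-deep branch
    selected, rest = goal[0], goal[1:]   # Prolog's leftmost selection rule
    for head, body in program:           # clauses tried in textual order
        if head == selected:
            if solve(body + rest, program, depth - 1):
                return True              # propagate success
    return False                         # failure node: backtrack

program = [('q', ('p',)), ('p', ())]
print(solve(('q',), program))                       # -> True
# With a clause q :- r in front, the r-branch fails and the search backtracks:
print(solve(('q',), [('q', ('r',))] + program))     # -> True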
== Example ==
Given the logic program in the Prolog language:
q :- p.
p.
and the top-level goal:
?- q.
the search space consists of a single branch, in which q is reduced to p which is reduced to the empty set of subgoals, signalling a successful computation. In this case, the program is so simple that there is no role for the selection function and no need for any search.
In clausal logic, the program is represented by the set of clauses:
q ∨ ¬p
p
and the top-level goal is represented by the goal clause with a single negative literal:
¬q
The search space consists of the single refutation:
¬q, ¬p, false
where false represents the empty clause.
If the following clause were added to the program:
q :- r.
then there would be an additional branch in the search space, whose leaf node r is a failure node. In Prolog, if this clause were added to the front of the original program, then Prolog would use the order in which the clauses are written to determine the order in which the branches of the search space are investigated. Prolog would try this new branch first, fail, and then backtrack to investigate the single branch of the original program and succeed.
If the clause
q :- q.
were now added to the program, then the search tree would contain an infinite branch. If this clause were tried first, then Prolog would go into an infinite loop and not find the successful branch.
== SLDNF ==
SLDNF is an extension of SLD resolution to deal with negation as failure. In SLDNF, goal clauses can contain negation-as-failure literals, say of the form not(p), which can be selected only if they contain no variables. When such a variable-free literal is selected, a subproof (or subcomputation) is attempted to determine whether there is an SLDNF refutation starting from the corresponding unnegated literal p as top clause. The selected subgoal not(p) succeeds if the subproof fails, and it fails if the subproof succeeds.
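Extending the propositional interpreter sketched earlier, negation as failure amounts to running a subproof and inverting its outcome; the following self-contained Python fragment (names invented for illustration) shows the idea:

def solve_nf(goal, program, depth=25):
    """Propositional SLDNF sketch: atoms of the form ('not', p) succeed
    exactly when the subproof of p fails (negation as failure)."""
    if not goal:
        return True
    if depth == 0:
        return False
    selected, rest = goal[0], goal[1:]
    if isinstance(selected, tuple) and selected[0] == 'not':
        # subcomputation: try to refute the unnegated literal
        if not solve_nf((selected[1],), program, depth - 1):
            return solve_nf(rest, program, depth - 1)
        return False
    for head, body in program:
        if head == selected and solve_nf(body + rest, program, depth - 1):
            return True
    return False

# With no clause for r, the subproof of r fails, so not(r) succeeds:
print(solve_nf((('not', 'r'),), [('q', ('p',)), ('p', ())]))   # -> True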
== See also ==
John Alan Robinson
== References ==
Jean Gallier, SLD-Resolution and Logic Programming chapter 9 of Logic for Computer Science: Foundations of Automatic Theorem Proving, 2003 online revision (free to download), originally published by Wiley, 1986
John C. Shepherdson, SLDNF-Resolution with Equality, Journal of Automated Reasoning 8: 297-306, 1992; defines semantics with respect to which SLDNF-resolution with equality is sound and complete
== External links ==
[1] Definition from the Free On-Line Dictionary of Computing | Wikipedia/SLD_resolution |
Science fantasy is a hybrid genre within speculative fiction that simultaneously draws upon or combines tropes and elements from both science fiction and fantasy. In a conventional science fiction story, the world is presented as grounded by the laws of nature and comprehensible by science, while a conventional fantasy story contains mostly supernatural elements that do not obey the scientific laws of the real world. The world of science fantasy, however, is laid out to be scientifically logical and often supplied with hard science-like explanations of any supernatural elements.
During the Golden Age of Science Fiction, science fantasy stories were seen in sharp contrast to the terse, scientifically plausible material that came to dominate mainstream science fiction, typified by the magazine Astounding Science Fiction. Although science fantasy stories at that time were often relegated to the status of children's entertainment, their freedom of imagination and romance proved to be an early major influence on the "New Wave" writers of the 1960s, who became exasperated by the limitations of hard science fiction.
== Historical view ==
The term "science fantasy" was coined in 1935 by critic Forrest J. Ackerman as a synonym for science fiction. In the 1950s, the British journalist Walter Gillings considered science fantasy as a part of science fiction that was not plausible from the point of view of the science of the time (for example, the use of nuclear weapons in H.G. Wells' novel The World Set Free was a science fantasy from the point of view of Newtonian physics and a work of science fiction from the point of view of Einstein's theory). In 1948, writer Marion Zimmer (later known as Marion Zimmer Bradley) called "science fantasy" a mixture of science fiction and fantasy in Startling Stories magazine. Critic Judith Murry considered science fantasy as works of fantasy in which magic has a natural scientific basis. Science fiction critic John Clute chose the narrower term "technological fantasy" from the broader concept of "science fiction". The label first came into wide use after many science fantasy stories were published in the American pulp magazines, such as Robert A. Heinlein's Magic, Inc., L. Ron Hubbard's Slaves of Sleep, and Fletcher Pratt and L. Sprague de Camp's Harold Shea series. All were relatively rationalistic stories published in John W. Campbell Jr.'s Unknown magazine. These were a deliberate attempt to apply the techniques and attitudes of science fiction to traditional fantasy subjects.
Distinguishing between pure science fiction and pure fantasy, Rod Serling argued that the former was "the improbable made possible" while the latter was "the impossible made probable". As a combination of the two, science fantasy gives a scientific veneer of realism to things that simply could not happen in the real world under any circumstances. Where science fiction does not permit the existence of fantastical or supernatural elements, science fantasy explicitly relies upon them to complement the scientific elements.
In explaining the intrigue of science fantasy, Carl D. Malmgren provides an intro regarding C. S. Lewis's speculation on the emotional needs at work in the subgenre: "In the counternatural worlds of science fantasy, the imaginary and the actual, the magical and the prosaic, the mythical and the scientific, meet and interanimate. In so doing, these worlds inspire us with new sensations and experiences, with [quoting C. S. Lewis] 'such beauty, awe, or terror as the actual world does not supply', with the stuff of desires, dreams, and dread."
Henry Kuttner and C. L. Moore published novels in Startling Stories, alone and together, which were far more romantic. These were closely related to the work that they and others were doing for outlets like Weird Tales, such as Moore's Northwest Smith stories.
Ace Books published a number of books as science fantasy during the 1950s and 1960s.
The Encyclopedia of Science Fiction points out that as a genre, science fantasy "has never been clearly defined", and was most commonly used in the period between 1950 and 1966.
The Star Trek franchise created by Gene Roddenberry is sometimes cited as an example of science fantasy. Writer James F. Broderick describes Star Trek as science fantasy because it includes semi-futuristic as well as supernatural/fantasy elements such as The Q. According to the late science fiction author Arthur C. Clarke, many purists argue that Star Trek is science fantasy rather than science fiction because of its scientifically improbable elements, which he partially agreed with.
The status of Star Wars as a science fantasy franchise has been debated. In 2015, George Lucas stated that "Star Wars isn't a science-fiction film, it's a fantasy film and a space opera".
== Characteristics and subjects ==
Science fantasy blends elements and characteristics of science fiction and fantasy. This usually takes the form of incorporating fantasy elements in a science fiction context. It tends to describe worlds that appear much like fantasy worlds but are made believable through science fiction naturalist explanations. For example, creatures from folklore and mythology typical for fantasy fiction become seemingly possible in reinvented forms through for example the element of extra-terrestrial beings. Such works have also been described as 'mythopoeic science fantasy'. In the genre, subjects are often conceptualized on a planetary scale.
== See also ==
Dieselpunk
Dying Earth (genre)
Lovecraftian horror
New weird
Planetary romance (also known as Sword and Planet)
Raygun Gothic
Steampunk
Technofantasy
== References ==
== Further reading ==
Attebery, Brian (2014). "The Fantastic". In Latham, Rob (ed.). The Oxford Handbook of Science Fiction. Oxford University Press. doi:10.1093/oxfordhb/9780199838844.013.0011. ISBN 978-0-19-983884-4.
Scholes, R. (1987). Boiling Roses: Thoughts on Science Fantasy. Intersections: Science Fiction and Fantasy. SIU Press. ISBN 978-0-8093-1374-7
== External links ==
"Science Fantasy" in The Encyclopedia of Science Fiction | Wikipedia/Science_Fantasy |
Michael E. Lynch (born 17 October 1948), is an emeritus professor at the department of Science and Technology Studies at Cornell University. His works are particularly concerned with ethnomethodological approaches in science studies. Much of his research has addressed the role of visual representation in scientific practice.
From 2002 to 2012 he was the editor of Social Studies of Science. In 2016, he won the Society for Social Studies of Science's J. D. Bernal Prize for distinguished contributions to the field.
== Awards ==
1995 Robert K. Merton Professional award, Science, Knowledge and Technology Section of the American Sociological Association
2011 Distinguished Publication Award, Ethnomethodology/Conversation Analysis Section of the American Sociological Association
2016 John Desmond Bernal Prize
2020 Garfinkel-Sacks Award for Distinguished Scholarship, Ethnomethodology/Conversation Analysis Section of the American Sociological Association
== Selected bibliography ==
=== Books ===
Lynch, Michael (1985). Art and artifact in laboratory science: a study of shop work and shop talk in a research laboratory. London Boston: Routledge & Kegan Paul. ISBN 9780710097538.
Lynch, Michael; Woolgar, Steve (1990). Representation in scientific practice. Cambridge, Massachusetts: MIT Press. ISBN 9780262620765.
Lynch, Michael (1993). Scientific practice and ordinary action: ethnomethodology and social studies of science. Cambridge England New York: Cambridge University Press. ISBN 9780521431521.
Lynch, Michael; Sharrock, Wes (2003). Harold Garfinkel (4 volume set). London Thousand Oaks, California: SAGE. ISBN 9780761974598.
Lynch, Michael; Wajcman, Judy; Hackett, Edward J.; Amsterdamska, Olga (2008). The handbook of science and technology studies (3rd ed.). Cambridge, Massachusetts: MIT Press Published in cooperation with the Society for the Social Studies of Science. ISBN 9781435605046.
Lynch, Michael; Cole, Simon; McNally, Ruth; Jordan, Kathleen (2008). Truth machine the contentious history of DNA fingerprinting. Chicago: University of Chicago Press. ISBN 9780226498089.
Lynch, Michael; Sharrock, Wes (2011). Ethnomethodology (4 volume set). Benchmarks in Research Methods. Los Angeles: Sage. ISBN 9781848604414.
Lynch, Michael (2012). Science and technology studies: critical concepts in the social sciences (4 volume set). Oxon New York: Routledge. ISBN 9780415581820.
Lynch, Michael; Woolgar, Steve; Coopmans, Catelijne; Vertesi, Janet (2014). Representation in scientific practice revisited. Cambridge, Massachusetts: MIT Press. ISBN 9780262525381.
=== Book chapters ===
Lynch, Michael (1992), "Extending Wittgensteinian: the pivotal move from epistemology to the sociology of science", in Pickering, Andrew (ed.), Science as practice and culture, Chicago: University of Chicago Press, pp. 215–265, ISBN 9780226668017.
Lynch, Michael (1992), "From the "Will to Theory" to the Discursive Collage: A reply to Bloor's "Left and Right Wittgensteinians"", in Pickering, Andrew (ed.), Science as practice and culture, Chicago: University of Chicago Press, pp. 283–300, ISBN 9780226668017.
=== Journal articles ===
Lynch, Michael; Garfinkel, Harold; Livingston, Eric (June 1981). "The work of a discovering science construed with materials from the optically discovered pulsar". Philosophy of the Social Sciences. 11 (2): 131–158. doi:10.1177/004839318101100202. S2CID 143072970.
Lynch, Michael (April 1988). "The externalized retina: Selection and mathematization in the visual documentation of objects in the life sciences". Human Studies. 11 (2–3): 201–234. doi:10.1007/BF00177304. S2CID 59776975.
Lynch, Michael; Woolgar, Steve (April 1988). "Introduction: Sociological orientations to representational practice in science". Human Studies. 11 (2–3): 99–116. doi:10.1007/BF00177300. S2CID 143536396.
Lynch, Michael (Winter 1994). "Representation is overrated: some critical remarks about the use of the concept of representation in science studies". Configurations. 2 (1): 137–149. doi:10.1353/con.1994.0015.
Lynch, Michael (August 2002). "From naturally occurring data to naturally organized ordinary activities: comment on Speer". Discourse Studies. 4 (4): 531–537. doi:10.1177/14614456020040040801. S2CID 144475763.
Lynch, Michael (December 2006). "From Ruse to Farce". Social Studies of Science. 36 (6): 819–826. doi:10.1177/0306312706067897. S2CID 143851225.
Lynch, Michael; Cody, Cyrus C.M. (September 2010). "Test objects and other epistemic things: a history of a nanoscale object". The British Journal for the History of Science. 43 (3): 423–458. doi:10.1017/S0007087409990689. S2CID 13342372.
Lynch, Michael (February 2011). "Still emerging after all these years". Social Studies of Science. 41 (1): 3–4. doi:10.1177/0306312710396569. S2CID 146494689.
Lynch, Michael (December 2011). "Editorial". Social Studies of Science. 41 (6): 767–768. doi:10.1177/0306312711427368. S2CID 220719334.
Lynch, Michael (December 2011). "Ad hoc special section on ethnomethodological studies of science, mathematics, and technical activity: Introduction". Social Studies of Science. 41 (6): 835–837. doi:10.1177/0306312711427369. S2CID 147128507.
Lynch, Michael (December 2011). "Harold Garfinkel (29 October 1917 – 21 April 2011): A remembrance and reminder". Social Studies of Science. 41 (6): 927–942. doi:10.1177/0306312711423434. S2CID 144144921.
Lynch, Michael (March 2013). "Science, truth, and forensic cultures: The exceptional legal status of DNA evidence". Studies in History and Philosophy of Science Part C. 44 (1): 60–70. doi:10.1016/j.shpsc.2012.09.008. PMID 23117027.
== References ==
== External links ==
Lynch's Cornell profile page
Radical Ethnomethodology workshop organized by Lynch | Wikipedia/Michael_Lynch_(ethnomethodologist) |
In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x,y,z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution.
Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and lambdaProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis.
== Formal definition ==
A unification problem is a finite set E={ l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where li, ri are in the set T of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory.
If the right side of each equation is closed (no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern.
=== Prerequisites ===
Formally, a unification approach presupposes
An infinite set V of variables. For higher-order unification, it is convenient to choose V disjoint from the set of lambda-term bound variables.
A set T of terms such that V ⊆ T. For first-order unification, T is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification, T consists of first-order terms and lambda terms (terms containing some higher-order variables).
A mapping vars: T → ℙ(V), assigning to each term t the set vars(t) ⊊ V of free variables occurring in t.
A theory or equivalence relation ≡ on T, indicating which terms are considered equal. For first-order E-unification, ≡ reflects the background knowledge about certain function symbols; for example, if ⊕ is considered commutative, t ≡ u if u results from t by swapping the arguments of ⊕ at some (possibly all) occurrences. In the most typical case that there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal. In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually t ≡ u if t and u are alpha equivalent.
As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is considered also commutative, has any substitution at all as a solution.
As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }.
=== Substitution ===
A substitution is a mapping σ: V → T from variables to terms; the notation { x1 ↦ t1, ..., xk ↦ tk } refers to a substitution mapping each variable xi to the term ti, for i = 1,...,k, and every other variable to itself; the xi must be pairwise distinct. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each variable xi in the term t by ti. The result tτ of applying a substitution τ to a term t is called an instance of that term t.
As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term f(x,a,g(z),y) yields the term f(h(a,y),a,g(b),y).
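Substitution application is a straightforward recursive replacement over the term tree, as the following Python sketch shows (the tuple encoding of terms and the function name are assumptions of this illustration):

def apply_subst(term, subst):
    """Apply a substitution, given as a dict {variable: term},
    simultaneously to all variable occurrences in `term`.
    Variables are strings; an application f(t1,...,tn) is the tuple
    ('f', t1, ..., tn); constants are 0-argument applications like ('a',)."""
    if isinstance(term, str):                      # a variable
        return subst.get(term, term)
    fsym, args = term[0], term[1:]
    return (fsym,) + tuple(apply_subst(t, subst) for t in args)

# f(x,a,g(z),y) {x ↦ h(a,y), z ↦ b}  =  f(h(a,y),a,g(b),y)
term = ('f', 'x', ('a',), ('g', 'z'), 'y')
print(apply_subst(term, {'x': ('h', ('a',), 'y'), 'z': ('b',)}))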
=== Generalization, specialization ===
If a term t has an instance equivalent to a term u, that is, if tσ ≡ u for some substitution σ, then t is called more general than u, and u is called more special than, or subsumed by, t. For example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative, since then (x ⊕ a) {x ↦ b} = b ⊕ a ≡ a ⊕ b.
If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other.
For example, f(x1,a,g(z1),y1) is a variant of f(x2,a,g(z2),y2), since f(x1,a,g(z1),y1) {x1 ↦ x2, y1 ↦ y2, z1 ↦ z2} = f(x2,a,g(z2),y2) and f(x2,a,g(z2),y2) {x2 ↦ x1, y2 ↦ y1, z2 ↦ z1} = f(x1,a,g(z1),y1). However, f(x1,a,g(z1),y1) is not a variant of f(x2,a,g(x2),x2), since no substitution can transform the latter term into the former one.
The latter term is therefore properly more special than the former one.
For arbitrary ≡, a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always x ⊕ x ≡ x, then the term x ⊕ y is more general than z, and vice versa, although x ⊕ y and z are of different structure.
A substitution σ is more special than, or subsumed by, a substitution τ if tσ is subsumed by tτ for each term t. We also say that τ is more general than σ. More formally, take a nonempty infinite set V of auxiliary variables such that no equation li ≐ ri in the unification problem contains variables from V. Then a substitution σ is subsumed by another substitution τ if there is a substitution θ such that for all terms X ∉ V, Xσ ≡ Xτθ.
For instance {x ↦ a, y ↦ a} is subsumed by τ = {x ↦ y}, using θ = {y ↦ a}, but σ = {x ↦ a} is not subsumed by τ = {x ↦ y}, as f(x,y)σ = f(a,y) is not an instance of f(x,y)τ = f(y,y).
=== Solution set ===
A substitution σ is a solution of the unification problem E if liσ ≡ riσ for i = 1,...,n. Such a substitution is also called a unifier of E.
For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution.
For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable.
The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members. Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible. For first-order syntactical unification, Martelli and Montanari gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier.
== Syntactic unification of first-order terms ==
Syntactic unification of first-order terms is the most widely used unification framework.
It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality.
In this framework, each solvable unification problem {l1 ≐ r1, ..., ln ≐ rn} has a complete, and obviously minimal, singleton solution set {σ}.
Its member σ is called the most general unifier (mgu) of the problem.
The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied i.e. l1σ = r1σ ∧ ... ∧ lnσ = rnσ.
Any unifier of the problem is subsumed by the mgu σ.
The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical unification problem, then S1 = { σ1 } and S2 = { σ2 } for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem.
For example, the unification problem { x ≐ z, y ≐ f(x) } has a unifier { x ↦ z, y ↦ f(z) }, because applying it makes both sides of each equation syntactically equal: x {x ↦ z, y ↦ f(z)} = z = z {x ↦ z, y ↦ f(z)}, and y {x ↦ z, y ↦ f(z)} = f(z) = f(x) {x ↦ z, y ↦ f(z)}.
This is also the most general unifier.
Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers.
As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different.
=== Unification algorithms ===
Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930. But most authors attribute the first unification algorithm to John Alan Robinson. Robinson's algorithm had worst-case exponential behavior in both time and space. Numerous authors have proposed more efficient unification algorithms. Algorithms with worst-case linear-time behavior were discovered independently by Martelli & Montanari (1976) and Paterson & Wegman (1976). Baader & Snyder (2001) uses a technique similar to Paterson-Wegman's, hence is linear, but like most linear-time unification algorithms is slower than the Robinson version on small inputs due to the overhead of preprocessing the inputs and postprocessing of the output, such as construction of a DAG representation. de Champeaux (2022) is also of linear complexity in the input size but is competitive with the Robinson algorithm on small inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. de Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well.
The following algorithm is commonly presented and originates from Martelli & Montanari (1982). Given a finite set G = { s1 ≐ t1, ..., sn ≐ tn } of potential equations,
the algorithm applies rules to transform it to an equivalent set of equations of the form
{ x1 ≐ u1, ..., xm ≐ um }
where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi.
A set of this form can be read as a substitution.
If there is no solution the algorithm terminates with ⊥; other authors use "Ω", or "fail" in that case.
The operation of substituting all occurrences of variable x in problem G with term t is denoted G {x ↦ t}.
For simplicity, constant symbols are regarded as function symbols having zero arguments.
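The transformation can be rendered almost literally in code. The Python sketch below is a naive illustration (the term encoding, variables as strings and applications as tuples, and all names are assumptions of the sketch; it is worst-case exponential, unlike the linear-time versions mentioned earlier). The comments name the customary rules delete, decompose, conflict, swap, eliminate, and check, which the termination proof below refers to:

def unify(eqs):
    """Syntactic first-order unification by rule-based transformation.
    eqs: list of pairs (s, t). Variables are strings; an application
    f(t1,...,tn) is the tuple ('f', t1, ..., tn); a constant a is ('a',).
    Returns a most general unifier as a dict, or None standing for ⊥."""
    eqs = list(eqs)
    subst = {}
    while eqs:
        s, t = eqs.pop()
        if s == t:
            continue                                     # delete
        if isinstance(s, tuple) and isinstance(t, tuple):
            if s[0] != t[0] or len(s) != len(t):
                return None                              # conflict
            eqs.extend(zip(s[1:], t[1:]))                # decompose
        elif isinstance(s, tuple):                       # swap: t is a variable
            eqs.append((t, s))
        else:                                            # s is a variable
            if occurs(s, t):
                return None                              # check (occurs check)
            subst = {x: replace(u, s, t) for x, u in subst.items()}
            eqs = [(replace(l, s, t), replace(r, s, t)) for l, r in eqs]
            subst[s] = t                                 # eliminate
    return subst

def occurs(x, t):
    """Occurs check: does variable x occur in term t?"""
    return t == x or (isinstance(t, tuple) and any(occurs(x, u) for u in t[1:]))

def replace(t, x, r):
    """Substitute term r for every occurrence of variable x in t."""
    if t == x:
        return r
    if isinstance(t, tuple):
        return (t[0],) + tuple(replace(u, x, r) for u in t[1:])
    return t

# { x ≐ z, y ≐ f(x) } yields the mgu { x ↦ z, y ↦ f(z) }:
print(unify([('x', 'z'), ('y', ('f', 'x'))]))

The eliminate step eagerly applies each binding to everything else, so the returned substitution needs no further composition; applied to the problem above it makes both sides of each equation identical.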
==== Occurs check ====
An attempt to unify a variable x with a term containing x as a strict subterm x ≐ f(..., x, ...) would lead to an infinite term as solution for x, since x would occur as a subterm of itself.
In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t).
Since that additional check, called occurs check, slows down the algorithm, it is omitted e.g. in most Prolog systems.
From a theoretical point of view, omitting the check amounts to solving equations over infinite trees, see #Unification of infinite terms below.
==== Proof of termination ====
For the proof of termination of the algorithm consider a triple ⟨nvar, nlhs, neqn⟩, where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations.
When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }.
Applying any other rule can never increase nvar again.
When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears.
Applying any of the remaining rules delete or check can't increase nlhs, but decreases neqn.
Hence, any rule application decreases the triple ⟨nvar, nlhs, neqn⟩ with respect to the lexicographical order, which is possible only a finite number of times.
Conor McBride observes that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary.
=== Examples of syntactic unification of first-order terms ===
In the Prolog syntactical convention a symbol starting with an upper case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator.
For mathematical notation, x,y,z are used as variables, f,g as function symbols, and a,b as constants.
The most general unifier of a syntactic first-order unification problem of size n may have a size of 2^n. For example, the problem (((a∗z)∗y)∗x)∗w ≐ w∗(x∗(y∗(z∗a))) has the most general unifier { z ↦ a, y ↦ a∗a, x ↦ (a∗a)∗(a∗a), w ↦ ((a∗a)∗(a∗a))∗((a∗a)∗(a∗a)) }. In order to avoid exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees.
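The blow-up can be reproduced with the naive unify sketch given in the algorithm section above (reusing its tuple term encoding, which is an assumption of that sketch): each equation doubles the size of the binding computed for the next variable, whereas a dag-based implementation would share the repeated subterms.

# x1 ≐ a, x2 ≐ x1∗x1, x3 ≐ x2∗x2, ...: the mgu binds xn to a term with
# 2^(n-1) leaves, although the problem itself only grows linearly with n.
def blowup(n):
    eqs = [('x1', ('a',))]
    for i in range(2, n + 1):
        prev = 'x%d' % (i - 1)
        eqs.append(('x%d' % i, ('*', prev, prev)))
    return unify(eqs)

print(blowup(3)['x3'])   # ('*', ('*', ('a',), ('a',)), ('*', ('a',), ('a',)))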
=== Application: unification in logic programming ===
The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment.
In Prolog:
A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check.
Two constants can be unified only if they are identical.
Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior.
Most operations, including +, -, *, /, are not evaluated by =. So for example 1+2 = 3 is not satisfiable because they are syntactically different. The use of integer arithmetic constraints #= introduces a form of E-unification for which these operations are interpreted and evaluated.
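In terms of the unify sketch from the algorithm section above (whose tuple encoding of terms is an assumption of the sketch, not Prolog's internal representation), this behaviour can be mimicked directly:

# X = f(Y) succeeds, binding X; 1+2 = 3 fails, because +(1,2) and 3 are
# syntactically different terms and = performs no arithmetic evaluation.
print(unify([('X', ('f', 'Y'))]))                  # -> {'X': ('f', 'Y')}
print(unify([(('+', ('1',), ('2',)), ('3',))]))    # -> None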
=== Application: type inference ===
Type inference algorithms are typically based on unification, particularly Hindley-Milner type inference which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed.
As for Prolog, an algorithm for type inference can be given; a small sketch follows the list below:
Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
Two type constants unify only if they are the same type.
Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify.
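These rules amount to syntactic unification on type expressions, so the unify sketch from the algorithm section above applies unchanged; the encodings below are illustrative assumptions:

# Types as terms: Bool is ('Bool',), Char is ('Char',), and the element
# type of the list is the type variable 'a'. Checking True : ['x'] forces
# 'a' to unify with both Bool and Char, which fails:
print(unify([('a', ('Bool',))]))                    # -> {'a': ('Bool',)}
print(unify([('a', ('Bool',)), ('a', ('Char',))]))  # -> None: type error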
=== Application: Feature Structure Unification ===
Unification has been used in different research areas of computational linguistics.
== Order-sorted unification ==
Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead.
For example, assuming a function declaration mother: animal → animal, and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is a dog in turn, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages.
Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 their intersection s1 ∩ s2 to be declared, too: if x1 and x2 are variables of sorts s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 = x, x2 = x }, where x: s1 ∩ s2.
After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby boiling it down by an order of magnitude, as many unary predicates turned into sorts.
Smolka generalized order-sorted logic to allow for parametric polymorphism.
In his framework, subsort declarations are propagated to complex type expressions.
As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats.
Schmidt-Schauß generalized order-sorted logic to allow for term declarations.
As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even allows one to declare a property of integer addition that could not be expressed by ordinary overloading.
== Unification of infinite terms ==
Background on infinite trees:
B. Courcelle (1983). "Fundamental Properties of Infinite Trees". Theoret. Comput. Sci. 25 (2): 95–169. doi:10.1016/0304-3975(83)90059-2.
Michael J. Maher (Jul 1988). "Complete Axiomatizations of the Algebras of Finite, Rational and Infinite Trees". Proc. IEEE 3rd Annual Symp. on Logic in Computer Science, Edinburgh. pp. 348–357.
Joxan Jaffar; Peter J. Stuckey (1986). "Semantics of Infinite Tree Logic Programming". Theoretical Computer Science. 46: 141–158. doi:10.1016/0304-3975(86)90027-7.
Unification algorithm, Prolog II:
A. Colmerauer (1982). K.L. Clark; S.-A. Tarnlund (eds.). Prolog and Infinite Trees. Academic Press.
Alain Colmerauer (1984). "Equations and Inequations on Finite and Infinite Trees". In ICOT (ed.). Proc. Int. Conf. on Fifth Generation Computer Systems. pp. 85–99.
Applications:
Francis Giannesini; Jacques Cohen (1984). "Parser Generation and Grammar Manipulation using Prolog's Infinite Trees". Journal of Logic Programming. 1 (3): 253–265. doi:10.1016/0743-1066(84)90013-X.
== E-unification ==
E-unification is the problem of finding solutions to a given set of equations,
taking into account some equational background knowledge E.
The latter is given as a set of universal equalities.
For some particular sets E, equation solving algorithms (a.k.a. E-unification algorithms) have been devised;
for others it has been proven that no such algorithms can exist.
For example, if a and b are distinct constants, the equation x∗a ≐ y∗b has no solution with respect to purely syntactic unification, where nothing is known about the operator ∗. However, if ∗ is known to be commutative, then the substitution {x ↦ b, y ↦ a} solves the above equation, since (x∗a) {x ↦ b, y ↦ a} = b∗a ≡ a∗b = (y∗b) {x ↦ b, y ↦ a}.
The background knowledge E could state the commutativity of ∗ by the universal equality "u∗v = v∗u for all u, v".
=== Particular background knowledge sets E ===
It is said that unification is decidable for a theory, if a unification algorithm has been devised for it that terminates for any input problem.
It is said that unification is semi-decidable for a theory, if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem.
Unification is decidable for the following theories, where A, C, I, Nl, Nr, Dl, and Dr abbreviate associativity, commutativity, idempotence, left and right neutral element, and left and right distributivity of a binary function symbol, respectively:
A
A,C
A,C,I
A,C,Nl
A,I
A,Nl,Nr (monoid)
C
Boolean rings
Abelian groups, even if the signature is expanded by arbitrary additional symbols (but not axioms)
K4 modal algebras
Unification is semi-decidable for the following theories:
A,Dl,Dr
A,C,Dl
Commutative rings
=== One-sided paramodulation ===
If there is a convergent term rewriting system R available for E,
the one-sided paramodulation algorithm
can be used to enumerate all solutions of given equations.
Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }).
For an example, a term rewrite system R is used defining the append operator of lists built from cons and nil, by the rules app(nil,z) → z (rule 1) and app(x.y,z) → x.app(y,z) (rule 2); here cons(x,y) is written in infix notation as x.y for brevity. E.g. app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms.
For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R.
A successful example computation path for the unification problem { app(x,app(y,x)) ≐ a.a.nil } is shown below. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v2, v3, ... are computer-generated variable names for this purpose. In each line, the chosen equation from G is highlighted in red. Each time the mutate rule is applied, the chosen rewrite rule (1 or 2) is indicated in parentheses. From the last line, the unifying substitution S = { y ↦ nil, x ↦ a.nil } can be obtained. In fact,
app(x,app(y,x)) {y↦nil, x↦ a.nil } = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem.
A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)" leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }; it is not shown here. No other path leads to a success.
=== Narrowing ===
If R is a convergent term rewriting system for E,
an approach alternative to the previous section consists in successive application of "narrowing steps";
this will eventually enumerate all solutions of a given equation.
A narrowing step (cf. picture) consists in
choosing a nonvariable subterm of the current term,
syntactically unifying it with the left hand side of a rule from R, and
replacing the instantiated rule's right hand side into the instantiated term.
Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm s|p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]p, i.e. to the term sσ, with the subterm at p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t.
Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable.
The above example paramodulation computation corresponds to the following narrowing sequence ("↓" indicating instantiation here):
The last term, v2.v2.nil, can be syntactically unified with the original right hand side term a.a.nil.
The narrowing lemma ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term s′ and t′, respectively, such that t′ is an instance of s′.
Formally: whenever sσ →∗ t holds for some substitution σ, then there exist terms s′, t′ such that s ↝∗ s′ and t →∗ t′ and s′ τ = t′ for some substitution τ.
== Higher-order unification ==
Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable, and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the
solutions {f ↦ λx.λy.λz. d(y,x,c) }, {f ↦ λx.λy.λz. d(y,z,c) },
{f ↦ λx.λy.λz. d(y,a,c) }, {f ↦ λx.λy.λz. d(b,x,c) },
{f ↦ λx.λy.λz. d(b,z,c) } and {f ↦ λx.λy.λz. d(b,a,c) }. A well studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli-Montanari with rules for terms containing higher-order variables) that seems to work sufficiently well in practice. Huet and Gilles Dowek have written articles surveying this topic.
Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most-general unifier for solvable problems. One such subset is the previously described first-order terms. Higher-order pattern unification, due to Dale Miller, is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved. The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm.
In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using Higher-Order Unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j, m) ∧ R(p) and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j, m) = R(j) . The process of solving such equations is called Higher-Order Unification.
Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory.
== See also ==
Rewriting
Admissible rule
Explicit substitution in lambda calculus
Mathematical equation solving
Dis-unification: solving inequations between symbolic expressions
Anti-unification: computing a least general generalization (lgg) of two terms, dual to computing a most general instance (mgu)
Subsumption lattice, a lattice having unification as meet and anti-unification as join
Ontology alignment (uses unification with semantic equivalence)
== Notes ==
== References ==
== Further reading ==
Franz Baader and Wayne Snyder (2001). "Unification Theory". In John Alan Robinson and Andrei Voronkov, editors, Handbook of Automated Reasoning, volume I, pages 447–533. Elsevier Science Publishers.
Gilles Dowek (2001). "Higher-order Unification and Matching" Archived 2019-05-15 at the Wayback Machine. In Handbook of Automated Reasoning.
Franz Baader and Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press.
Franz Baader and Jörg H. Siekmann (1993). "Unification Theory". In Handbook of Logic in Artificial Intelligence and Logic Programming.
Jean-Pierre Jouannaud and Claude Kirchner (1991). "Solving Equations in Abstract Algebras: A Rule-Based Survey of Unification". In Computational Logic: Essays in Honor of Alan Robinson.
Nachum Dershowitz and Jean-Pierre Jouannaud, Rewrite Systems, in: Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, volume B Formal Models and Semantics, Elsevier, 1990, pp. 243–320
Jörg H. Siekmann (1990). "Unification Theory". In Claude Kirchner (editor) Unification. Academic Press.
Kevin Knight (Mar 1989). "Unification: A Multidisciplinary Survey" (PDF). ACM Computing Surveys. 21 (1): 93–124. CiteSeerX 10.1.1.64.8967. doi:10.1145/62029.62030. S2CID 14619034.
Gérard Huet and Derek C. Oppen (1980). "Equations and Rewrite Rules: A Survey". Technical report. Stanford University.
Raulefs, Peter; Siekmann, Jörg; Szabó, P.; Unvericht, E. (1979). "A short survey on the state of the art in matching and unification problems". ACM SIGSAM Bulletin. 13 (2): 14–20. doi:10.1145/1089208.1089210. S2CID 17033087.
Claude Kirchner and Hélène Kirchner. Rewriting, Solving, Proving. In preparation. | Wikipedia/Robinson's_unification_algorithm |
In mathematical logic, an uninterpreted function or function symbol is one that has no other property than its name and n-ary form. Function symbols are used, together with constants and variables, to form terms.
The theory of uninterpreted functions is also sometimes called the free theory, because it is freely generated, and thus a free object, or the empty theory, being the theory having an empty set of sentences (in analogy to an initial algebra). Theories with a non-empty set of equations are known as equational theories. The satisfiability problem for free theories is solved by syntactic unification; algorithms for the latter are used by interpreters for various computer languages, such as Prolog. Syntactic unification is also used in algorithms for the satisfiability problem for certain other equational theories, see Unification (computer science).
== Example ==
As an example of uninterpreted functions for SMT-LIB, if this input is given to an SMT solver:
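The concrete SMT-LIB listing is not reproduced here; as an illustrative stand-in (an assumption on our part, using the Z3 solver's Python bindings and made-up constants), such an input asserts one value of f at one point:

from z3 import Function, IntSort, Solver

# An uninterpreted function f : Int -> Int, constrained at a single point.
f = Function('f', IntSort(), IntSort())
s = Solver()
s.add(f(10) == 1)
print(s.check())   # sat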
the SMT solver would return "This input is satisfiable". That happens because f is an uninterpreted function (i.e., all that is known about f is its signature), so it is possible that f(10) = 1. But by applying the input below:
the SMT solver would return "This input is unsatisfiable". That happens because f, being a function, can never return different values for the same input.
== Discussion ==
The decision problem for free theories is particularly important, because many theories can be reduced to it.
Free theories can be solved by searching for common subexpressions to form the congruence closure. Solvers include satisfiability modulo theories solvers.
== See also ==
Algebraic data type
Initial algebra
Term algebra
Theory of pure equality
== Notes ==
== References == | Wikipedia/Free_theory |
Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions (also called terms) are allowed, and which expressions are considered equal. If variables representing functions are allowed in an expression, the process is called "higher-order anti-unification", otherwise "first-order anti-unification". If the generalization is required to have an instance literally equal to each input expression, the process is called "syntactical anti-unification", otherwise "E-anti-unification", or "anti-unification modulo theory".
An anti-unification algorithm should compute for given expressions a complete and minimal generalization set, that is, a set covering all generalizations and containing no redundant members, respectively. Depending on the framework, a complete and minimal generalization set may have one, finitely many, or possibly infinitely many members, or may not exist at all; it cannot be empty, since a trivial generalization exists in any case. For first-order syntactical anti-unification, Gordon Plotkin gave an algorithm that computes a complete and minimal singleton generalization set containing the so-called "least general generalization" (lgg).
Anti-unification should not be confused with dis-unification. The latter means the process of solving systems of inequations, that is of finding values for the variables such that all given inequations are satisfied. This task is quite different from finding generalizations.
== Prerequisites ==
Formally, an anti-unification approach presupposes
An infinite set V of variables. For higher-order anti-unification, it is convenient to choose V disjoint from the set of lambda-term bound variables.
A set T of terms such that V ⊆ T. For first-order and higher-order anti-unification, T is usually the set of first-order terms (terms built from variable and function symbols) and lambda terms (terms containing some higher-order variables), respectively.
An equivalence relation ≡ on T, indicating which terms are considered equal. For higher-order anti-unification, usually t ≡ u if t and u are alpha equivalent. For first-order E-anti-unification, ≡ reflects the background knowledge about certain function symbols; for example, if ⊕ is considered commutative, t ≡ u if u results from t by swapping the arguments of ⊕ at some (possibly all) occurrences. If there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal.
=== First-order term ===
Given a set V of variable symbols, a set C of constant symbols, and sets Fn of n-ary function symbols, also called operator symbols, for each natural number n ≥ 1, the set of (unsorted first-order) terms T is recursively defined to be the smallest set with the following properties:
every variable symbol is a term: V ⊆ T,
every constant symbol is a term: C ⊆ T,
from every n terms t1,...,tn, and every n-ary function symbol f ∈ Fn, a larger term f(t1,...,tn) can be built.
For example, if x ∈ V is a variable symbol, 1 ∈ C is a constant symbol, and add ∈ F2 is a binary function symbol, then x ∈ T, 1 ∈ T, and (hence) add(x,1) ∈ T by the first, second, and third term building rule, respectively. The latter term is usually written as x+1, using Infix notation and the more common operator symbol + for convenience.
=== Higher-order term ===
=== Substitution ===
A substitution is a mapping σ : V ⟶ T from variables to terms; the notation {x1 ↦ t1, ..., xk ↦ tk} refers to a substitution mapping each variable xi to the term ti, for i = 1,...,k, and every other variable to itself. Applying that substitution to a term t is written in postfix notation as t {x1 ↦ t1, ..., xk ↦ tk}; it means to (simultaneously) replace every occurrence of each variable xi in the term t by ti. The result tσ of applying a substitution σ to a term t is called an instance of that term t.
As a first-order example, applying the substitution {x ↦ h(a,y), z ↦ b} to the term f(x,a,g(z),y) yields the instance f(h(a,y),a,g(b),y).
=== Generalization, specialization ===
If a term t has an instance equivalent to a term u, that is, if tσ ≡ u for some substitution σ, then t is called more general than u, and u is called more special than, or subsumed by, t. For example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative, since then (x ⊕ a) {x ↦ b} = b ⊕ a ≡ a ⊕ b.
If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings, of each other.
For example, f(x1,a,g(z1),y1) is a variant of f(x2,a,g(z2),y2), since f(x1,a,g(z1),y1) {x1 ↦ x2, y1 ↦ y2, z1 ↦ z2} = f(x2,a,g(z2),y2) and f(x2,a,g(z2),y2) {x2 ↦ x1, y2 ↦ y1, z2 ↦ z1} = f(x1,a,g(z1),y1).
However, f(x1,a,g(z1),y1) is not a variant of f(x2,a,g(x2),x2), since no substitution can transform the latter term into the former one, although {x1 ↦ x2, z1 ↦ x2, y1 ↦ x2} achieves the reverse direction. The latter term is hence properly more special than the former one.
A substitution σ is more special than, or subsumed by, a substitution τ if xσ is more special than xτ for each variable x. For example, {x ↦ f(u), y ↦ f(f(u))} is more special than {x ↦ z, y ↦ f(z)}, since f(u) and f(f(u)) are more special than z and f(z), respectively.
=== Anti-unification problem, generalization set ===
An anti-unification problem is a pair ⟨t1, t2⟩ of terms. A term t is a common generalization, or anti-unifier, of t1 and t2 if tσ1 ≡ t1 and tσ2 ≡ t2 for some substitutions σ1, σ2. For a given anti-unification problem, a set S of anti-unifiers is called complete if each generalization subsumes some term t ∈ S; the set S is called minimal if none of its members subsumes another one.
== First-order syntactical anti-unification ==
The framework of first-order syntactical anti-unification is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality.
In this framework, each anti-unification problem ⟨t1, t2⟩ has a complete, and obviously minimal, singleton solution set {t}.
Its member t is called the least general generalization (lgg) of the problem; it has an instance syntactically equal to t1 and another one syntactically equal to t2.
Any common generalization of t1 and t2 subsumes t.
The lgg is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical anti-unification problem, then S1 = {s1} and S2 = {s2} for some terms s1 and s2 that are renamings of each other.
Plotkin has given an algorithm to compute the lgg of two given terms.
It presupposes an injective mapping φ : T × T ⟶ V, that is, a mapping assigning each pair s, t of terms its own variable φ(s, t), such that no two pairs share the same variable.
The algorithm consists of two rules:
f(s1, ..., sn) ⊔ f(t1, ..., tn) ⇝ f(s1 ⊔ t1, ..., sn ⊔ tn)
s ⊔ t ⇝ φ(s, t) if the previous rule is not applicable
For example, (0 ∗ 0) ⊔ (4 ∗ 4) ⇝ (0 ⊔ 4) ∗ (0 ⊔ 4) ⇝ φ(0, 4) ∗ φ(0, 4) ⇝ x ∗ x; this least general generalization reflects the common property of both inputs of being square numbers.
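A compact Python sketch of the two rules (terms as nested tuples; the mapping φ is realized as a dictionary handing out one fresh variable per pair, which is what makes both occurrences of φ(0, 4) collapse to the same variable):

def lgg(s, t, phi=None, counter=None):
    # Least general generalization of two first-order terms.
    # Variables/constants are strings; compound terms are tuples
    # (f, arg1, ..., argn).
    if phi is None:
        phi, counter = {}, [0]
    # Rule 1: equal head symbols and arities -> recurse into the arguments.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0], *(lgg(a, b, phi, counter) for a, b in zip(s[1:], t[1:])))
    if s == t:
        return s
    # Rule 2: otherwise the pair (s, t) is replaced by its own variable.
    if (s, t) not in phi:
        counter[0] += 1
        phi[(s, t)] = "x" + str(counter[0])
    return phi[(s, t)]

print(lgg(("*", "0", "0"), ("*", "4", "4")))   # ('*', 'x1', 'x1')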
Plotkin used his algorithm to compute the "relative least general generalization (rlgg)" of two clause sets in first-order logic, which was the basis of the Golem approach to inductive logic programming.
== First-order anti-unification modulo theory ==
Jacobsen, Erik (Jun 1991), Unification and Anti-Unification (PDF), Technical Report
Østvold, Bjarte M. (Apr 2004), A Functional Reconstruction of Anti-Unification (PDF), NR Note, vol. DART/04/04, Norwegian Computing Center
Boytcheva, Svetla; Markov, Zdravko (2002). "An Algorithm for Inducing Least Generalization Under Relative Implication". Proc. FLAIRS-02. AAAI. pp. 322–326.
Kutsia, Temur; Levy, Jordi; Villaret, Mateu (2014). "Anti-Unification for Unranked Terms and Hedges" (PDF). Journal of Automated Reasoning. 52 (2): 155–190. doi:10.1007/s10817-013-9285-6. Software.
=== Equational theories ===
One associative and commutative operation: Pottier, Loic (Feb 1989), Algorithms des completion et generalisation en logic du premier ordre (These de doctorat); Pottier, Loic (1989), Generalisation de termes en theorie equationelle – Cas associatif-commutatif, INRIA Report, vol. 1056, INRIA
Commutative theories: Baader, Franz (1991). "Unification, Weak Unification, Upper Bound, Lower Bound, and Generalization Problems". Proc. 4th Conf. on Rewriting Techniques and Applications (RTA). LNCS. Vol. 488. Springer. pp. 86–91. doi:10.1007/3-540-53904-2_88.
Free monoids: Biere, A. (1993), Normalisierung, Unifikation und Antiunifikation in Freien Monoiden (PDF), Univ. Karlsruhe, Germany
Regular congruence classes: Heinz, Birgit (Dec 1995), Anti-Unifikation modulo Gleichungstheorie und deren Anwendung zur Lemmagenerierung, GMD Berichte, vol. 261, TU Berlin, ISBN 978-3-486-23873-0; Burghardt, Jochen (2005). "E-Generalization Using Grammars". Artificial Intelligence. 165 (1): 1–35. arXiv:1403.8118. doi:10.1016/j.artint.2005.01.008. S2CID 5328240.
A-, C-, AC-, ACU-theories with ordered sorts: Alpuente, Maria; Escobar, Santiago; Espert, Javier; Meseguer, Jose (2014). "A modular order-sorted equational generalization algorithm". Information and Computation. 235: 98–136. doi:10.1016/j.ic.2014.01.006. hdl:2142/25871.
Purely idempotent theories: Cerna, David; Kutsia, Temur (2020). "Idempotent Anti-Unification". ACM Transactions on Computational Logic. 21 (2): 1–32. doi:10.1145/3359060. hdl:10.1145/3359060. S2CID 207861304.
=== First-order sorted anti-unification ===
Taxonomic sorts: Frisch, Alan M.; Page, David (1990). "Generalisation with Taxonomic Information". AAAI: 755–761.; Frisch, Alan M.; Page Jr., C. David (1991). "Generalizing Atoms in Constraint Logic". Proc. Conf. on Knowledge Representation.; Frisch, A.M.; Page, C.D. (1995). "Building Theories into Instantiation". In Mellish, C.S. (ed.). Proc. 14th IJCAI. Morgan Kaufmann. pp. 1210–1216. CiteSeerX 10.1.1.32.1610.
Feature terms: Plaza, E. (1995). "Cases as Terms: A Feature Term Approach to the Structured Representation of Cases". Proc. 1st International Conference on Case-Based Reasoning (ICCBR). LNCS. Vol. 1010. Springer. pp. 265–276. ISSN 0302-9743.
Idestam-Almquist, Peter (Jun 1993). "Generalization under Implication by Recursive Anti-Unification". Proc. 10th Conf. on Machine Learning. Morgan Kaufmann. pp. 151–158.
Fischer, Cornelia (May 1994), PAntUDE – An Anti-Unification Algorithm for Expressing Refined Generalizations (PDF), Research Report, vol. TM-94-04, DFKI
A-, C-, AC-, ACU-theories with ordered sorts: see above
=== Nominal anti-unification ===
Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2013). Nominal Anti-Unification. Proc. RTA 2015. Vol. 36 of LIPIcs. Schloss Dagstuhl, 57-73. Software.
=== Applications ===
Program analysis:
Bulychev, Peter; Minea, Marius (2008). "Duplicate Code Detection Using Anti-Unification". Proceedings of the Spring/Summer Young Researchers' Colloquium on Software Engineering (2).;
Bulychev, Peter E.; Kostylev, Egor V.; Zakharov, Vladimir A. (2009). "Anti-Unification Algorithms and their Applications in Program Analysis". In Amir Pnueli and Irina Virbitskaite and Andrei Voronkov (ed.). Perspectives of Systems Informatics (PSI) – 7th International Andrei Ershov Memorial Conference. LNCS. Vol. 5947. Springer. pp. 413–423. doi:10.1007/978-3-642-11486-1_35. ISBN 978-3-642-11485-4.
Code factoring:
Cottrell, Rylan (Sep 2008), Semi-automating Small-Scale Source Code Reuse via Structural Correspondence (PDF), Univ. Calgary
Induction proving:
Heinz, Birgit (1994), Lemma Discovery by Anti-Unification of Regular Sorts, Technical Report, vol. 94–21, TU Berlin
Information Extraction:
Thomas, Bernd (1999). "Anti-Unification Based Learning of T-Wrappers for Information Extraction" (PDF). AAAI Technical Report. WS-99-11: 15–20.
Case-based reasoning:
Armengol; Plaza, Enric (2005). "Using Symbolic Descriptions to Explain Similarity on {CBR}". In Beatriz López and Joaquim Meléndez and Petia Radeva and Jordi Vitrià (ed.). Artificial Intelligence Research and Development, Proc. 8th Int. Conf. of the ACIA, CCIA. IOS Press. pp. 239–246.
Program synthesis: The idea of generalizing terms with respect to an equational theory can be traced back to Manna and Waldinger (1978, 1980) who desired to apply it in program synthesis. In section "Generalization", they suggest (on p. 119 of the 1980 article) to generalize reverse(l) and reverse(tail(l))<>[head(l)] to obtain reverse(l')<>m' . This generalization is only possible if the background equation u<>[]=u is considered.
Zohar Manna; Richard Waldinger (Dec 1978). A Deductive Approach to Program Synthesis (PDF) (Technical Note). SRI International. Archived from the original (PDF) on 2017-02-27. Retrieved 2017-09-29. — preprint of the 1980 article
Zohar Manna and Richard Waldinger (Jan 1980). "A Deductive Approach to Program Synthesis". ACM Transactions on Programming Languages and Systems. 2: 90–121. doi:10.1145/357084.357090. S2CID 14770735.
Natural language processing:
Amiridze, Nino; Kutsia, Temur (2018). "Anti-Unification and Natural Language Processing". Fifth Workshop on Natural Language and Computer Science, NLCS'18. EasyChair Preprints. EasyChair Report No. 203. doi:10.29007/fkrh. S2CID 49322739.
== Higher-order anti-unification ==
Calculus of constructions:
Pfenning, Frank (Jul 1991). "Unification and Anti-Unification in the Calculus of Constructions" (PDF). Proc. 6th LICS. Springer. pp. 74–85.
Simply typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: higher-order patterns):
Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2013). A Variant of Higher-Order Anti-Unification. Proc. RTA 2013. Vol. 21 of LIPIcs. Schloss Dagstuhl, 113-127. Software.
Simply typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: Various fragments of the simply typed lambda calculus including patterns):
Cerna, David; Kutsia, Temur (June 2019). "A Generic Framework for Higher-Order Generalizations" (PDF). 4th International Conference on Formal Structures for Computation and Deduction, FSCD, June 24–30, 2019, Dortmund, Germany. Schloss Dagstuhl - Leibniz-Zentrum für Informatik. pp. 74–85.
Restricted Higher-Order Substitutions:
Wagner, Ulrich (Apr 2002), Combinatorically Restricted Higher Order Anti-Unification, TU Berlin; Schmidt, Martin (Sep 2010), Restricted Higher-Order Anti-Unification for Heuristic-Driven Theory Projection (PDF), PICS-Report, vol. 31–2010, Univ. Osnabrück, Germany, ISSN 1610-5389
== Notes ==
== References == | Wikipedia/Anti-unification_(computer_science) |
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.
There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements.
Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied.
A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.
== Definition ==
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.
The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a.
The words of A are the finite sequences of symbols from A, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence ε with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:
Given two different words of the same length, say a = a1a2...ak and b = b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ (counting from the beginning of the words): a < b if and only if ai < bi in the underlying order of the alphabet A.
If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of A) at the end until the words are the same length, and then the words are compared as in the previous case.
However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order.
In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.
An important property of the lexicographical order is that for each n, the set of words of length n is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length n is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element.
== Numeral systems and dates ==
The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates.
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the shortlex variant of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger.
For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit.
When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers.
Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm.
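Illustrative snippet: since YYYY-MM-DD strings compare lexicographically in the same way the dates compare chronologically, a plain string sort orders them by date:

dates = ["2024-01-31", "2023-12-05", "2024-01-04"]
print(sorted(dates))   # ['2023-12-05', '2024-01-04', '2024-01-31']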
== Monoid of words ==
The monoid of words over an alphabet A is the free monoid over A. That is, the elements of the monoid are the finite sequences (words) of elements of A (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word u is a prefix (or 'truncation') of another word v if there exists a word w such that v = uw. By this definition, the empty word (ε) is a prefix of every word, and every word is a prefix of itself (with w = ε); care must be taken if these cases are to be excluded.
With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set A, and two words a and b over A such that b is non-empty, then one has a < b under lexicographical order, if at least one of the following conditions is satisfied:
a is a prefix of b
there exist words u, v, w (possibly empty) and elements x and y of A such that
x < y
a = uxv
b = uyw
Notice that, due to the prefix condition in this definition, ε < b for all b ≠ ε, where ε is the empty word.
If < is a total order on A, then so is the lexicographic order on the words of A. However, in general this is not a well-order, even if the alphabet A is well-ordered. For instance, if A = {a, b}, the language {anb | n ≥ 0} has no least element in the lexicographical order: ... < aab < ab < b.
Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called shortlex or quasi-lexicographical order, consists in considering first the lengths of the words (if length(a) < length(b), then a < b), and, if the lengths are equal, using the lexicographical order. If the order on A is a well-order, the same is true for the shortlex order.
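A sketch of the shortlex order in Python: compare lengths first and fall back to the lexicographical order only on equal lengths, for instance via a sort key:

words = ["b", "ab", "a", "aab", "ba"]
print(sorted(words, key=lambda w: (len(w), w)))   # ['a', 'b', 'ab', 'ba', 'aab']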
== Cartesian products ==
The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product E1 × ⋯ × En is a sequence whose ith element belongs to Ei for every i. As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets.
Specifically, given two partially ordered sets A and B, the lexicographical order on the Cartesian product A × B is defined as
(a, b) ≤ (a′, b′) if and only if a < a′ or (a = a′ and b ≤ b′).
The result is a partial order. If A and B are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order.
One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered.
Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from the natural numbers to {0, 1}, also known as the Cantor space {0, 1}^ω) is not well-ordered; the subset of sequences that have precisely one 1 (that is, { 100000..., 010000..., 001000..., ... }) does not have a least element under the lexicographical order induced by 0 < 1, because 100000... > 010000... > 001000... > ... is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either, because 011111... < 101111... < 110111... < ... is an infinite ascending chain.
== Functions over a well-ordered set ==
The functions from a well-ordered set X to a totally ordered set Y may be identified with sequences indexed by X of elements of Y. They can thus be ordered by the lexicographical order, and for two such functions f and g, the lexicographical order is determined by their values for the smallest x such that f(x) ≠ g(x). If Y is also well-ordered and X is finite, then the resulting order is a well-order. As shown above, if X is infinite this is not the case.
== Finite subsets ==
In combinatorics, one often has to enumerate, and therefore to order, the finite subsets of a given set S. For this, one usually chooses an order on S. Then, sorting a subset of S amounts to converting it into an increasing sequence. The lexicographic order on the resulting sequences thus induces an order on the subsets, which is also called the lexicographical order.
In this context, one generally prefers to sort the subsets first by cardinality, as in the shortlex order. In the following, we therefore consider only orders on subsets of fixed cardinality.
For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of S = {1, 2, 3, 4, 5, 6} is
123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 <
234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456.
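Illustrative check: itertools.combinations of a sorted input yields the subsets as increasing sequences in exactly this lexicographic order:

from itertools import combinations

subsets = ["".join(map(str, c)) for c in combinations([1, 2, 3, 4, 5, 6], 3)]
print(" < ".join(subsets))   # 123 < 124 < 125 < ... < 356 < 456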
For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of n natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example, 12n < 134 for every n > 2.
== Group orders of Zn ==
Let Z^n be the free Abelian group of rank n, whose elements are sequences of n integers, and whose operation is addition. A group order on Z^n is a total order that is compatible with addition, that is,
a < b if and only if a + c < b + c.
The lexicographical ordering is a group order on Z^n.
The lexicographical ordering may also be used to characterize all group orders on Z^n. In fact, n linear forms with real coefficients define a map from Z^n into R^n, which is injective if the forms are linearly independent (it may also be injective if the forms are dependent, see below). The lexicographic order on the image of this map induces a group order on Z^n. Robbiano's theorem states that every group order may be obtained in this way.
More precisely, given a group order on Z^n, there exist an integer s ≤ n and s linear forms with real coefficients such that the induced map φ from Z^n into R^s has the following properties:
φ is injective;
the resulting isomorphism from Z^n to the image of φ is an order isomorphism when the image is equipped with the lexicographical order on R^s.
== Colexicographic order ==
The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. More precisely, whereas the lexicographical order between two sequences is defined by
a1a2...ak <lex b1b2 ... bk if ai < bi for the first i where ai and bi differ,
the colexicographical order is defined by
a1a2...ak <colex b1b2...bk if ai < bi for the last i where ai and bi differ
In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly.
For example, for ordering the increasing sequences (or the sets) of two natural integers, the lexicographical order begins by
12 < 13 < 14 < 15 < ... < 23 < 24 < 25 < ... < 34 < 35 < ... < 45 < ...,
and the colexicographic order begins by
12 < 13 < 23 < 14 < 24 < 34 < 15 < 25 < 35 < 45 < ....
The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem.
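A sketch of the colex order as a Python sort key: read the sequences from the right, i.e. compare the reversed tuples; this reproduces the listing above:

from itertools import combinations

pairs = combinations([1, 2, 3, 4, 5], 2)
print(sorted(pairs, key=lambda t: t[::-1]))
# [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (1, 5), (2, 5), (3, 5), (4, 5)]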
== Monomials ==
When considering polynomials, the order of the terms does not matter in general, as addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related to Gröbner bases, a concept that requires the choice of a monomial order, that is, a total order which is compatible with the monoid structure of the monomials. Here "compatible" means that a < b implies ac < bc,
if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone.
As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example x1 x2^3 x4 x5^2) with their exponent vectors (here [1, 3, 0, 1, 2]). If n is the number of variables, every monomial order is thus the restriction to N^n of a monomial order of Z^n (see above, § Group orders of Zn, for a classification).
One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called pure lexicographical order for distinguishing it from other orders that are also related to a lexicographical order.
Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order generally has better properties.
The degree reverse lexicographical order also compares total degrees first and, in case of equality of the total degrees, uses the reverse of the colexicographical order. That is, given two exponent vectors, one has [a1, ..., an] < [b1, ..., bn] if either a1 + ⋯ + an < b1 + ⋯ + bn, or a1 + ⋯ + an = b1 + ⋯ + bn and ai > bi for the largest i for which ai ≠ bi.
For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials of the same total degree in two variables, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, the degree reverse lexicographic order gives
[0, 0, 2] < [0, 1, 1] < [1, 0, 1] < [0, 2, 0] < [1, 1, 0] < [2, 0, 0]
For the lexicographical order, the same exponent vectors are ordered as
[0, 0, 2] < [0, 1, 1] < [0, 2, 0] < [1, 0, 1] < [1, 1, 0] < [2, 0, 0].
A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greater monomial) is a multiple of this least indeterminate.
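Both orders on exponent vectors can be sketched as Python sort keys (degrevlex_key is our own helper name; for degrevlex: total degree first, ties broken by the negated reversed vector, i.e. the rightmost differing exponent decides, with the larger exponent counting as smaller):

def degrevlex_key(v):
    return (sum(v), [-e for e in reversed(v)])

vecs = [[2, 0, 0], [1, 1, 0], [0, 2, 0], [1, 0, 1], [0, 1, 1], [0, 0, 2]]
print(sorted(vecs, key=degrevlex_key))
# [[0, 0, 2], [0, 1, 1], [1, 0, 1], [0, 2, 0], [1, 1, 0], [2, 0, 0]]
print(sorted(vecs))   # plain lexicographic comparison of the vectors
# [[0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0]]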
== See also ==
Collation
Kleene–Brouwer order
Lexicographic preferences - an application of lexicographic order in economics.
Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element.
Lexicographic order topology on the unit square
Lexicographic ordering in tensor abstract index notation
Lexicographically minimal string rotation
Leximin order
Long line (topology)
Lyndon word
Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal
Star product, a different way of combining partial orders
Shortlex order
Orders on the Cartesian product of totally ordered sets
== References ==
== External links ==
Learning materials related to Lexicographic and colexicographic order at Wikiversity | Wikipedia/Lexicographically |
Algebraic specification is a software engineering technique for formally specifying system behavior. It was a very active subject of computer science research around 1980.
== Overview ==
Algebraic specification seeks to systematically develop more efficient programs by:
formally defining types of data, and mathematical operations on those data types
abstracting implementation details, such as the size of representations (in memory) and the efficiency of obtaining outcome of computations
formalizing the computations and operations on data types
allowing for automation by formally restricting operations to this limited set of behaviors and data types.
An algebraic specification achieves these goals by defining one or more data types, and specifying a collection of functions that operate on those data types. These functions can be divided into two classes:
Constructor functions: Functions that create or initialize the data elements, or construct complex elements from simpler ones. The set of available constructor functions is implied by the specification's signature. Additionally, a specification can contain equations defining equivalences between the objects constructed by these functions. Whether the underlying representation is identical for different but equivalent constructions is implementation-dependent.
Additional functions: Functions that operate on the data types, and are defined in terms of the constructor functions.
== Examples ==
Consider a formal algebraic specification for the boolean data type.
One possible algebraic specification may provide two constructor functions for the data-element: a true constructor and a false constructor. Thus, a boolean data element could be declared, constructed, and initialized to a value. In this scenario, all other connective elements, such as XOR and AND, would be additional functions. Thus, a data element could be instantiated with either "true" or "false" value, and additional functions could be used to perform any operation on the data element.
Alternatively, the entire system of boolean data types could be specified using a different set of constructor functions: a false constructor and a not constructor. In that case, an additional function true could be defined to yield the value not false, and an equation
(not not x) = x should be added.
The algebraic specification therefore describes all possible states of the data element, and all possible transitions between states.
For a more complicated example, the integers can be specified (among many other ways, and choosing one of the many formalisms) with two constructors
1 : Z
(_ - _) : Z × Z -> Z
and three equations:
(1 - (1 - p)) = p
((1 - (n - p)) - 1) = (p - n)
((p1 - n1) - (n2 - p2)) = (p1 - (n1 - (p2 - n2)))
It is easy to verify that the equations are valid, given the usual interpretation of the binary "minus" function. (The variable names have been chosen to hint at positive and negative contributions to the value.) With a little effort, it can be shown that, applied left to right, they also constitute a confluent and terminating rewriting system, mapping any constructed term to an unambiguous normal form representing the respective integer:
...
(((1 - 1) - 1) - 1)
((1 - 1) - 1)
(1 - 1)
1
(1 - ((1 - 1) - 1))
(1 - (((1 - 1) - 1) - 1))
...
Therefore, any implementation conforming to this specification will behave like the integers, or possibly a restricted range of them, like the usual integer types found in most programming languages.
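As an illustrative sketch (the term representation and rewriting strategy are our own choices, not from the source), the three equations can be run as rewrite rules in Python; innermost rewriting maps every constructed term to its normal form:

# The constant 1 is the Python int 1; a difference (a - b) is ("-", a, b).
def rewrite_root(t):
    # Try the three equations, oriented left to right, at the root.
    if isinstance(t, tuple):
        _, a, b = t
        # (1 - (1 - p))  ->  p
        if a == 1 and isinstance(b, tuple) and b[1] == 1:
            return b[2]
        # ((1 - (n - p)) - 1)  ->  (p - n)
        if b == 1 and isinstance(a, tuple) and a[1] == 1 and isinstance(a[2], tuple):
            return ("-", a[2][2], a[2][1])
        # ((p1 - n1) - (n2 - p2))  ->  (p1 - (n1 - (p2 - n2)))
        if isinstance(a, tuple) and isinstance(b, tuple):
            return ("-", a[1], ("-", a[2], ("-", b[2], b[1])))
    return None

def normalize(t):
    # Innermost rewriting until no rule applies (the system terminates).
    if isinstance(t, tuple):
        t = ("-", normalize(t[1]), normalize(t[2]))
    r = rewrite_root(t)
    return normalize(r) if r is not None else t

two = ("-", 1, ("-", ("-", 1, 1), 1))   # normal form of 2 from the list above
print(normalize(("-", two, two)))       # ("-", 1, 1), the normal form of 0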
== See also ==
Common Algebraic Specification Language
Formal specification
OBJ
== Notes == | Wikipedia/Algebraic_specification |
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.
There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements.
Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied.
A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.
== Definition ==
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.
The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a.
The words of A are the finite sequences of symbols from A, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence
ε
{\displaystyle \varepsilon }
with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:
Given two different words of the same length, say a = a1a2...ak and b = b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ (counting from the beginning of the words): a < b if and only if ai < bi in the underlying order of the alphabet A.
If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of A) at the end until the words are the same length, and then the words are compared as in the previous case.
However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order.
In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.
An important property of the lexicographical order is that for each n, the set of words of length n is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length n is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element.
== Numeral systems and dates ==
The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates.
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the variant shortlex of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger.
For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit.
When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers.
Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm.
== Monoid of words ==
The monoid of words over an alphabet A is the free monoid over A. That is, the elements of the monoid are the finite sequences (words) of elements of A (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word u is a prefix (or 'truncation') of another word v if there exists a word w such that v = uw. By this definition, the empty word (
ε
{\displaystyle \varepsilon }
) is a prefix of every word, and every word is a prefix of itself (with w
=
ε
{\displaystyle =\varepsilon }
); care must be taken if these cases are to be excluded.
With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set A, and two words a and b over A such that b is non-empty, then one has a < b under lexicographical order, if at least one of the following conditions is satisfied:
a is a prefix of b
there exists words u, v, w (possibly empty) and elements x and y of A such that
x < y
a = uxv
b = uyw
Notice that, due to the prefix condition in this definition,
ε
<
b
for all
b
≠
ε
,
{\displaystyle \varepsilon <b\,\,{\text{ for all }}b\neq \varepsilon ,}
where
ε
{\displaystyle \varepsilon }
is the empty word.
If
<
{\displaystyle \,<\,}
is a total order on
A
,
{\displaystyle A,}
then so is the lexicographic order on the words of
A
.
{\displaystyle A.}
However, in general this is not a well-order, even if the alphabet
A
{\displaystyle A}
is well-ordered. For instance, if A = {a, b}, the language {anb | n ≥ 0, b > ε} has no least element in the lexicographical order: ... < aab < ab < b.
Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called shortlex or quasi-lexicographical order, consists in considering first the lengths of the words (if length(a) < length(b), then
a
<
b
{\displaystyle a<b}
), and, if the lengths are equal, using the lexicographical order. If the order on A is a well-order, the same is true for the shortlex order.
== Cartesian products ==
The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product
E
1
×
⋯
×
E
n
{\displaystyle E_{1}\times \cdots \times E_{n}}
is a sequence whose
i
{\displaystyle i}
th element belongs to
E
i
{\displaystyle E_{i}}
for every
i
.
{\displaystyle i.}
As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets.
Specifically, given two partially ordered sets
A
{\displaystyle A}
and
B
,
{\displaystyle B,}
the lexicographical order on the Cartesian product
A
×
B
{\displaystyle A\times B}
is defined as
(
a
,
b
)
≤
(
a
′
,
b
′
)
if and only if
a
<
a
′
or
(
a
=
a
′
and
b
≤
b
′
)
,
{\displaystyle (a,b)\leq \left(a^{\prime },b^{\prime }\right){\text{ if and only if }}a<a^{\prime }{\text{ or }}\left(a=a^{\prime }{\text{ and }}b\leq b^{\prime }\right),}
The result is a partial order. If
A
{\displaystyle A}
and
B
{\displaystyle B}
are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order.
One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered.
Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from natural numbers to
{
0
,
1
}
,
{\displaystyle \{0,1\},}
also known as the Cantor space
{
0
,
1
}
ω
{\displaystyle \{0,1\}^{\omega }}
) is not well-ordered; the subset of sequences that have precisely one
1
{\displaystyle 1}
(that is, { 100000..., 010000..., 001000..., ... }) does not have a least element under the lexicographical order induced by
0
<
1
,
{\displaystyle 0<1,}
because 100000... > 010000... > 001000... > ... is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either because 011111... < 101111... < 110111 ... < ... is an infinite ascending chain.
== Functions over a well-ordered set ==
The functions from a well-ordered set
X
{\displaystyle X}
to a totally ordered set
Y
{\displaystyle Y}
may be identified with sequences indexed by
X
{\displaystyle X}
of elements of
Y
.
{\displaystyle Y.}
They can thus be ordered by the lexicographical order, and for two such functions
f
{\displaystyle f}
and
g
,
{\displaystyle g,}
the lexicographical order is thus determined by their values for the smallest
x
{\displaystyle x}
such that
f
(
x
)
≠
g
(
x
)
.
{\displaystyle f(x)\neq g(x).}
If
Y
{\displaystyle Y}
is also well-ordered and
X
{\displaystyle X}
is finite, then the resulting order is a well-order. As shown above, if
X
{\displaystyle X}
is infinite this is not the case.
== Finite subsets ==
In combinatorics, one has often to enumerate, and therefore to order the finite subsets of a given set
S
.
{\displaystyle S.}
For this, one usually chooses an order on
S
.
{\displaystyle S.}
Then, sorting a subset of
S
{\displaystyle S}
is equivalent to convert it into an increasing sequence. The lexicographic order on the resulting sequences induces thus an order on the subsets, which is also called the lexicographical order.
In this context, one generally prefer to sort first the subsets by cardinality, such as in the shortlex order. Therefore, in the following, we will consider only orders on subsets of fixed cardinal.
For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of
S
=
{
1
,
2
,
3
,
4
,
5
,
6
}
{\displaystyle S=\{1,2,3,4,5,6\}}
is
123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 <
234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456.
For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of
n
{\displaystyle n}
natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example,
12
n
<
134
{\displaystyle 12n<134}
for every
n
>
2.
{\displaystyle n>2.}
== Group orders of Zn ==
Let Z^n be the free Abelian group of rank n, whose elements are sequences of n integers, and whose operation is addition. A group order on Z^n is a total order that is compatible with addition, that is,
a < b if and only if a + c < b + c.
The lexicographical ordering is a group order on Z^n.
The lexicographical ordering may also be used to characterize all group orders on Z^n. In fact, n linear forms with real coefficients define a map from Z^n into R^n, which is injective if the forms are linearly independent (it may also be injective if the forms are dependent; see below). The lexicographic order on the image of this map induces a group order on Z^n. Robbiano's theorem states that every group order may be obtained in this way.
More precisely, given a group order on Z^n, there exist an integer s ≤ n and s linear forms with real coefficients such that the induced map φ from Z^n into R^s has the following properties:
φ is injective;
the resulting isomorphism from Z^n to the image of φ is an order isomorphism when the image is equipped with the lexicographical order on R^s.
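A minimal Python sketch of this construction (the specific weight matrix below is an illustrative assumption, not part of the theorem): each row of a real matrix acts as a linear form, and tuples of form values are compared lexicographically, yielding an order compatible with addition.

```python
def group_order_key(w_rows):
    """Sort key on Z^n tuples induced by the linear forms in w_rows."""
    def key(v):
        return tuple(sum(w_i * v_i for w_i, v_i in zip(row, v)) for row in w_rows)
    return key

# Illustrative choice: total degree first, then x, then y, on Z^3.
W = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]
key = group_order_key(W)
a, b, c = (1, 0, 0), (0, 1, 0), (0, 0, 2)
assert key(a) < key(c)  # degree 1 < degree 2
assert key(b) < key(a)  # equal degree: y < x under this choice of forms

# Compatibility with addition: the key is linear, so translating both
# sides by the same t preserves every comparison.
t = (5, -3, 2)
shift = lambda v: tuple(x + y for x, y in zip(v, t))
assert (key(a) < key(c)) == (key(shift(a)) < key(shift(c)))
```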
== Colexicographic order ==
The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from right to left instead of from left to right. More precisely, whereas the lexicographical order between two sequences is defined by
a_1 a_2 ... a_k <_lex b_1 b_2 ... b_k if a_i < b_i for the first i where a_i and b_i differ,
the colexicographical order is defined by
a_1 a_2 ... a_k <_colex b_1 b_2 ... b_k if a_i < b_i for the last i where a_i and b_i differ.
In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly.
For example, for ordering the increasing sequences (or the sets) of two natural numbers, the lexicographical order begins with
12 < 13 < 14 < 15 < ... < 23 < 24 < 25 < ... < 34 < 35 < ... < 45 < ...,
and the colexicographic order begins with
12 < 13 < 23 < 14 < 24 < 34 < 15 < 25 < 35 < 45 < ....
The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem.
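A minimal sketch of this order, assuming Python: sorting increasing tuples by their reversal realizes the colexicographic order, and the enumeration below reproduces the beginning of the chain above.

```python
from itertools import combinations

# Increasing pairs drawn from {1, ..., 6}, reordered colexicographically:
# compare the reversed tuples lexicographically.
pairs = list(combinations(range(1, 7), 2))
colex = sorted(pairs, key=lambda t: t[::-1])
print(colex[:6])  # [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]
```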
== Monomials ==
When considering polynomials, the order of the terms does not matter in general, as addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related to Gröbner bases, a concept that requires the choice of a monomial order, that is, a total order that is compatible with the monoid structure of the monomials. Here "compatible" means that
a < b implies ac < bc,
if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However, this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone.
As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example x_1 x_2^3 x_4 x_5^2) with their exponent vectors (here [1, 3, 0, 1, 2]). If n is the number of variables, every monomial order is thus the restriction to N^n of a monomial order of Z^n (see above § Group orders of Zn for a classification).
One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called pure lexicographical order for distinguishing it from other orders that are also related to a lexicographical order.
Another consists of first comparing the total degrees, and then resolving ties by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order generally has better properties.
The degree reverse lexicographical order also consists of first comparing the total degrees and, in case of equality of the total degrees, using the reverse of the colexicographical order. That is, given two exponent vectors, one has
[a_1, ..., a_n] < [b_1, ..., b_n]
if either
a_1 + ... + a_n < b_1 + ... + b_n,
or
a_1 + ... + a_n = b_1 + ... + b_n and a_i > b_i for the largest i for which a_i ≠ b_i.
For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order:
[0, 0, 2] < [0, 1, 1] < [1, 0, 1] < [0, 2, 0] < [1, 1, 0] < [2, 0, 0]
For the lexicographical order, the same exponent vectors are ordered as
[0, 0, 2] < [0, 1, 1] < [0, 2, 0] < [1, 0, 1] < [1, 1, 0] < [2, 0, 0].
A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greatest monomial) is a multiple of this least indeterminate.
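Both orders are easy to realize as sort keys; the following Python sketch (helper names are ours) reproduces the two degree-two chains shown above. For degree reverse lexicographic comparison, one compares total degree first, then the reversed exponent vector with signs flipped, which encodes "a_i > b_i at the largest differing i".

```python
def degrevlex_key(v):
    """Degree first, then reversed-and-negated vector, compared lexicographically."""
    return (sum(v), tuple(-e for e in reversed(v)))

vectors = [(0, 0, 2), (0, 1, 1), (1, 0, 1), (0, 2, 0), (1, 1, 0), (2, 0, 0)]
print(sorted(vectors))                     # lexicographic order (tuple comparison)
print(sorted(vectors, key=degrevlex_key))  # degree reverse lexicographic order
```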
== See also ==
Collation
Kleene–Brouwer order
Lexicographic preferences - an application of lexicographic order in economics.
Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element.
Lexicographic order topology on the unit square
Lexicographic ordering in tensor abstract index notation
Lexicographically minimal string rotation
Leximin order
Long line (topology)
Lyndon word
Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal
Star product, a different way of combining partial orders
Shortlex order
Orders on the Cartesian product of totally ordered sets
== References ==
== External links ==
Learning materials related to Lexicographic and colexicographic order at Wikiversity
Dis-unification, in computer science and logic, is an algorithmic process of solving inequations between symbolic expressions.
== Publications on dis-unification ==
Alain Colmerauer (1984). "Equations and Inequations on Finite and Infinite Trees". In ICOT (ed.). Proc. Int. Conf. on Fifth Generation Computer Systems. pp. 85–99.
Hubert Comon (1986). "Sufficient Completeness, Term Rewriting Systems and 'Anti-Unification'". Proc. 8th International Conference on Automated Deduction. LNCS. Vol. 230. Springer. pp. 128–140. "Anti-Unification" here refers to inequation-solving, a naming which nowadays has become quite unusual; cf. Anti-unification (computer science).
Claude Kirchner; Pierre Lescanne (1987). "Solving Disequations". Proc. LICS. pp. 347–352.
Claude Kirchner and Pierre Lescanne (1987). Solving disequations (Research Report). INRIA.
Hubert Comon (1988). Unification et disunification: Théorie et applications (PDF) (Ph.D.). I.N.P. de Grenoble.
Hubert Comon; Pierre Lescanne (Mar–Apr 1989). "Equational Problems and Disunification". J. Symb. Comput. 7 (3–4): 371–425. CiteSeerX 10.1.1.139.4769. doi:10.1016/S0747-7171(89)80017-3.
Comon, Hubert (1990). "Equational Formulas in Order-Sorted Algebras". Proc. ICALP. Comon shows that the first-order logic theory of equality and sort membership is decidable, that is, each first-order logic formula built from arbitrary function symbols, "=" and "∈", but no other predicates, can effectively be proven or disproven. Using the logical negation (¬), non-equality (≠) can be expressed in formulas, but order relations (<) cannot. As an application, he proves sufficient completeness of term rewriting systems.
Hubert Comon (1991). "Disunification: A Survey". In Jean-Louis Lassez; Gordon Plotkin (eds.). Computational Logic — Essays in Honor of Alan Robinson. MIT Press. pp. 322–359.
Hubert Comon (1993). "Complete Axiomatizations of some Quotient Term Algebras" (PDF). Proc. 18th Int. Coll. on Automata, Languages, and Programming. LNCS. Vol. 510. Springer. pp. 148–164. Retrieved 29 June 2013.
== See also ==
Unification (computer science): solving equations between symbolic expressions
Constraint logic programming: incorporating solving algorithms for particular classes of inequalities (and other relations) into Prolog
Constraint programming: solving algorithms for particular classes of inequalities
Simplex algorithm: solving algorithm for linear inequations
Inequation: kinds of inequations in mathematics in general, including a brief section on solving
Equation solving: how to solve equations in mathematics
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
== Description ==
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials.
The sides of a polynomial equation contain one or more terms. For example, the equation
Ax^2 + Bx + C - y = 0
has left-hand side Ax^2 + Bx + C - y, which has four terms, and right-hand side 0, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side.
== Properties ==
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:
Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
Multiplying or dividing both sides of an equation by a non-zero quantity.
Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function f(s) = s^2 to both sides of the equation) changes the equation to x^2 = 1, which not only has the previous solution but also introduces the extraneous solution x = -1.
Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
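As an illustration, the extraneous solution introduced by squaring can be observed with a computer algebra system; the following SymPy sketch is an illustrative aid, not part of the standard exposition:

```python
from sympy import Eq, solve, symbols

x = symbols('x')
print(solve(Eq(x, 1), x))     # [1]         : the original equation
print(solve(Eq(x**2, 1), x))  # [-1, 1]     : squaring adds the extraneous -1
```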
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
== Examples ==
=== Analogous illustration ===
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
=== Parameters and unknowns ===
Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
x^2 + y^2 = R^2.
When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax^2 + bx + c = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
3x + 5y = 2
5x + 8y = 3
has the unique solution x = −1, y = 1.
=== Identities ===
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
x^2 - y^2 = (x + y)(x - y)
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
sin^2(θ) + cos^2(θ) = 1
and
sin(2θ) = 2 sin(θ) cos(θ)
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
3 sin(θ) cos(θ) = 1,
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
(3/2) sin(2θ) = 1,
yielding the following solution for θ:
θ = (1/2) arcsin(2/3) ≈ 20.9°.
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
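The worked solution can be checked numerically; a short Python sketch:

```python
import math

# theta = (1/2) * arcsin(2/3) should satisfy 3*sin(theta)*cos(theta) = 1
# and lie between 0 and 45 degrees.
theta = 0.5 * math.asin(2 / 3)
print(math.degrees(theta))                    # about 20.9
assert abs(3 * math.sin(theta) * math.cos(theta) - 1) < 1e-12
```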
== Algebra ==
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.
=== Polynomial equations ===
In general, an algebraic equation or polynomial equation is an equation of the form P = 0, or P = Q, where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example,
x^5 - 3x + 1 = 0
is a univariate algebraic (polynomial) equation with integer coefficients and
y^4 + xy/2 = x^3/3 - xy^2 + y^2 - 1/7
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
=== Systems of linear equations ===
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,
3x + 2y - z = 1
2x - 2y + 4z = -2
-x + (1/2)y - z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
x = 1, y = -2, z = -2,
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
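Such a system can also be solved numerically; for instance, a NumPy sketch for the system above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the 3x3 system above.
A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
print(np.linalg.solve(A, b))  # [ 1. -2. -2.]
```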
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
== Geometry ==
=== Analytic geometry ===
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form ax + by + cz + d = 0, where a, b, c and d are real numbers and x, y, z are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a, b, c are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is, as the solution set of a single linear equation with values in R^2 or as the solution set of two linear equations with values in R^3.
A conic section is the intersection of a cone with equation
x^2 + y^2 = z^2
and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the foci of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
=== Cartesian equations ===
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x2 + y2 = 4.
=== Parametric equations ===
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,
x = cos t
y = sin t
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
== Number theory ==
=== Diophantine equations ===
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is ax + by = c, where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
=== Algebraic and transcendental numbers ===
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
=== Algebraic geometry ===
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
== Differential equations ==
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
=== Ordinary differential equations ===
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
=== Partial differential equations ===
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
== Types of equations ==
Equations can be classified according to the types of operations and quantities involved. Important types include:
An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:
linear equation for degree one
quadratic equation for degree two
cubic equation for degree three
quartic equation for degree four
quintic equation for degree five
sextic equation for degree six
septic equation for degree seven
octic equation for degree eight
A Diophantine equation is an equation where the unknowns are required to be integers
A transcendental equation is an equation involving a transcendental function of its unknowns
A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters appearing in the equations
A functional equation is an equation in which the unknowns are functions rather than simple quantities
Equations involving derivatives, integrals and finite differences:
A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as f′(x) = x^2. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface
An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as f′(x) = f(x - 2)
A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation
A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
== See also ==
== Notes ==
== References ==
== External links ==
Winplot: General Purpose plotter that can draw and animate 2D and 3D mathematical equations.
Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality.
A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set.
An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions.
For example, the equation x + y = 2x – 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1).
The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y.
However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations. However, for some problems, all variables may assume either role.
Depending on the context, solving an equation may consist of finding either any solution (finding a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is the best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as, generally, solving methods start from a particular solution for finding a better solution, and repeat the process until eventually finding the best solution.
== Overview ==
One general form of an equation is
f(x_1, ..., x_n) = c,
where f is a function, x_1, ..., x_n are the unknowns, and c is a constant. Its solutions are the elements of the inverse image (fiber)
f^{-1}(c) = { (a_1, ..., a_n) ∈ D | f(a_1, ..., a_n) = c },
where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions).
For example, an equation such as
3x + 2y = 21z,
with unknowns x, y and z, can be put in the above form by subtracting 21z from both sides of the equation, to obtain
3x + 2y - 21z = 0.
In this particular case there is not just one solution, but an infinite set of solutions, which can be written using set builder notation as
{ (x, y, z) | 3x + 2y - 21z = 0 }.
One particular solution is x = 0, y = 0, z = 0. Two other solutions are x = 3, y = 6, z = 1, and x = 8, y = 9, z = 2. There is a unique plane in three-dimensional space which passes through the three points with these coordinates, and this plane is the set of all points whose coordinates are solutions of the equation.
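A brute-force Python sketch (the search bounds are arbitrary illustrative choices) recovers these particular solutions among others:

```python
# Integer solutions of 3x + 2y - 21z = 0 inside a small box.
solutions = [(x, y, z)
             for z in range(3)
             for x in range(10)
             for y in range(10)
             if 3*x + 2*y == 21*z]
print(solutions)  # includes (0, 0, 0), (3, 6, 1) and (8, 9, 2)
```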
== Solution sets ==
The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities.
If the solution set is empty, then there are no values of the unknowns that satisfy simultaneously all equations and inequalities.
For a simple example, consider the equation
x^2 = 2.
This equation can be viewed as a Diophantine equation, that is, an equation for which only integer solutions are sought. In this case, the solution set is the empty set, since 2 is not the square of an integer. However, if one searches for real solutions, there are two solutions, √2 and –√2; in other words, the solution set is {√2, −√2}.
When an equation contains several unknowns, and when one has several equations with more unknowns than equations, the solution set is often infinite. In this case, the solutions cannot be listed. For representing them, a parametrization is often useful, which consists of expressing the solutions in terms of some of the unknowns or auxiliary variables. This is always possible when all the equations are linear.
Such infinite solution sets can naturally be interpreted as geometric shapes such as lines, curves (see picture), planes, and more generally algebraic varieties or manifolds. In particular, algebraic geometry may be viewed as the study of solution sets of algebraic equations.
== Methods of solution ==
The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns. The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below.
In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970.
For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success.
=== Brute force, trial and error, inspired guess ===
If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods.
As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess.
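For instance, a minimal Python sketch solving x^2 = 2 in arithmetic modulo 23 by exhaustive search (the modulus is an arbitrary illustrative choice):

```python
# The solution set is finite (23 residues), so every candidate can be tested.
solutions = [x for x in range(23) if (x * x) % 23 == 2]
print(solutions)  # [5, 18], since 5*5 = 25 = 2 (mod 23)
```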
=== Elementary algebra ===
Equations involving linear or simple rational functions of a single real-valued unknown, say x, such as
8x + 7 = 4x + 35 or (4x + 9)/(3x + 4) = 2,
can be solved using the methods of elementary algebra.
=== Systems of linear equations ===
Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra. See Gaussian elimination and numerical solution of linear systems.
=== Polynomial equations ===
Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example
4x^5 - x^3 - 3 = 0
(by using the rational root theorem), and
x^6 - 5x^3 + 6 = 0
(by using the substitution x = z^{1/3}, which simplifies this to a quadratic equation in z).
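A sketch of the second example in SymPy, using the equivalent substitution z = x^3: the sextic reduces to a quadratic in z with roots 2 and 3, and each real root of the original equation is a cube root of one of them.

```python
from sympy import solve, symbols

x, z = symbols('x z')
print(solve(z**2 - 5*z + 6, z))  # [2, 3]
real_roots = [r for r in solve(x**6 - 5*x**3 + 6, x) if r.is_real]
print(real_roots)                # the cube roots of 2 and of 3
```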
=== Diophantine equations ===
In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation
2x^5 - 5x^4 - x^3 - 7x^2 + 2x + 3 = 0
has as rational solutions x = −1/2 and x = 3, and so, viewed as a Diophantine equation, it has the unique solution x = 3.
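A minimal Python sketch of this procedure for the example above; by the rational root theorem, any rational root p/q has p dividing 3 and q dividing 2, so only a few candidates need testing.

```python
from fractions import Fraction

def poly(x):
    return 2*x**5 - 5*x**4 - x**3 - 7*x**2 + 2*x + 3

candidates = {Fraction(p, q) for p in (1, -1, 3, -3) for q in (1, 2)}
rational_roots = sorted(r for r in candidates if poly(r) == 0)
print(rational_roots)                                      # -1/2 and 3
print([r for r in rational_roots if r.denominator == 1])   # integer solutions: 3
```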
In general, however, Diophantine equations are among the most difficult equations to solve.
=== Inverse functions ===
In the simple case of a function of one variable, say, h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h.
Given a function h : A → B, the inverse function, denoted h^{-1} and defined as h^{-1} : B → A, is a function such that
h^{-1}(h(x)) = h(h^{-1}(x)) = x.
Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain
h^{-1}(h(x)) = h^{-1}(c)
x = h^{-1}(c)
and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to define, or may not be a function on all of the set B (only on some subset), and have many values at some point.
If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity
h(h^{-1}(x)) = x
holds. For example, the projection π_1 : R^2 → R defined by π_1(x, y) = x has no post-inverse, but it has a pre-inverse π_1^{-1} defined by π_1^{-1}(x) = (x, 0). Indeed, the equation π_1(x, y) = c is solved by
(x, y) = π_1^{-1}(c) = (c, 0).
Examples of inverse functions include the nth root (inverse of x^n); the logarithm (inverse of a^x); the inverse trigonometric functions; and Lambert's W function (inverse of xe^x).
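For instance, a short Python sketch solving 2^x = 10 by applying the base-2 logarithm, the inverse of h(x) = 2^x (the particular equation is an illustrative choice):

```python
import math

c = 10.0
x = math.log2(c)  # apply the inverse function to both sides of 2**x = c
assert abs(2**x - c) < 1e-12
print(x)          # about 3.3219
```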
=== Factorization ===
If the left-hand side expression of an equation P = 0 can be factorized as P = QR, the solution set of the original equation consists of the union of the solution sets of the two equations Q = 0 and R = 0.
For example, the equation
tan x + cot x = 2
can be rewritten, using the identity tan x cot x = 1 as
(tan^2 x - 2 tan x + 1)/tan x = 0,
which can be factorized into
(tan x - 1)^2/tan x = 0.
The solutions are thus the solutions of the equation tan x = 1, and are thus the set
x = π/4 + kπ, k = 0, ±1, ±2, ....
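A quick numerical check of this solution set, as a Python sketch:

```python
import math

# Every x = pi/4 + k*pi should satisfy tan(x) + cot(x) = 2 (up to rounding).
for k in range(-2, 3):
    x = math.pi / 4 + k * math.pi
    assert abs(math.tan(x) + 1 / math.tan(x) - 2) < 1e-9
```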
=== Numerical methods ===
With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms like the Newton–Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve some problem.
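A minimal Newton–Raphson sketch in Python (the tolerance, iteration cap, and example function are illustrative choices): iterate x → x − f(x)/f′(x) until the step is negligible.

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f starting from x0 by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the positive root of f(x) = x**2 - 2.
root = newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.0)
print(root)  # 1.4142135623730951
```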
There are also numerical methods for systems of linear equations.
=== Matrix equations ===
Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra.
=== Differential equations ===
There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problems are now called symbolic integration. Solutions of differential equations can be implicit or explicit.
== See also ==
Extraneous and missing solutions
Simultaneous equations
Equating coefficients
Solving the geodesic equations
Unification (computer science) — solving equations involving symbolic expressions
== References ==
Parkinson's disease dementia (PDD) is dementia that is associated with Parkinson's disease (PD). Together with dementia with Lewy bodies (DLB), it is one of the Lewy body dementias characterized by abnormal deposits of Lewy bodies in the brain.
Parkinson's disease starts as a movement disorder, but progresses in most cases to include dementia and changes in mood and behavior. The signs, symptoms and cognitive profile of PDD are similar to those of DLB; DLB and PDD are clinically similar after dementia occurs in Parkinson's disease. Parkinson's disease is a risk factor for PDD; it speeds up decline in cognition leading to PDD. Up to 78% of people with PD have dementia. Delusions in PDD are less common than in DLB, and persons with PD are typically less caught up in their visual hallucinations than those with DLB. There is a higher incidence of tremor at rest in PD than in DLB, and signs of parkinsonism in PDD are less symmetrical than in DLB.
Parkinson's disease dementia can only be definitively diagnosed after death with an autopsy of the brain. The 2017 Fourth Consensus Report established diagnostic criteria for PDD and DLB. The diagnostic criteria are the same for both conditions, except that PDD is distinguished from DLB by the time frame in which dementia symptoms appear relative to parkinsonian symptoms. DLB is diagnosed when cognitive symptoms begin before or at the same time as parkinsonism. Parkinson's disease dementia is the diagnosis when Parkinson's disease is well established before the dementia occurs; that is, the onset of dementia is more than a year after the onset of parkinsonian symptoms.
Cognitive behavioral therapy can help people with Parkinson's disease with parkinsonian pain, insomnia, depression, anxiety, and impulse disorders, if those interventions are properly adapted to the motor, cognitive and executive dysfunctions seen in Parkinson's disease, including Parkinson's dementia.
== Society and culture ==
General awareness about LBD lags well behind that of Parkinson's and Alzheimer's diseases, even though LBD is the second most common dementia, after Alzheimer's.
== References ==
== External links ==
An anti-α-synuclein drug, or an α-synuclein inhibitor, is a drug which blocks or inhibits α-synuclein. α-Synuclein is a protein which is thought to be involved in the development and progression of α-synucleinopathies including Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy. Anti-α-synuclein drugs are under development for treatment of Parkinson's disease and other α-synuclein-related diseases. Examples include the monoclonal antibodies prasinezumab and cinpanemab, which both failed to show effectiveness in slowing the progression of Parkinson's disease in phase 2 clinical trials. Other anti-α-synuclein drugs, like the monoclonal antibody exidavnemab, the α-synuclein vaccines PD01A and PD03A, and the small-molecule α-synuclein misfolding and aggregation inhibitors minzasolmin and emrusolmin, are also under development. Memantine is also being studied as a potential disease-modifying treatment for Parkinson's disease by inhibiting cell-to-cell transmission of α-synuclein and is in a phase 3 trial for this purpose.
== See also ==
Anti-amyloid drugs
== References ==
Frontotemporal dementia (FTD), also called frontotemporal degeneration disease or frontotemporal neurocognitive disorder, encompasses several types of dementia involving the progressive degeneration of the brain's frontal and temporal lobes. Men and women appear to be equally affected. FTD generally presents as a behavioral or language disorder with gradual onset. Signs and symptoms tend to appear in late adulthood, typically between the ages of 45 and 65, although it can affect people younger or older than this. There is currently no cure or approved symptomatic treatment for FTD, although some off-label drugs and behavioral methods are prescribed.
Features of FTD were first described by Arnold Pick between 1892 and 1906. The name Pick's disease was coined in 1922. This term is now reserved only for the behavioral variant of FTD, in which characteristic Pick bodies and Pick cells are present. These were first described by Alois Alzheimer in 1911. Common signs and symptoms include significant changes in social and personal behavior, disinhibition, apathy, blunting and dysregulation of emotions, and deficits in both expressive and receptive language.
Each FTD subtype is relatively rare. FTDs are mostly early onset syndromes linked to frontotemporal lobar degeneration (FTLD), which is characterized by progressive neuronal loss predominantly involving the frontal or temporal lobes, and a typical loss of more than 70% of spindle neurons, while other neuron types remain intact. The three main subtypes or variant syndromes are a behavioral variant (bvFTD) previously known as Pick's disease, and two variants of primary progressive aphasia (PPA): semantic (svPPA) and nonfluent (nfvPPA). Two rare distinct subtypes of FTD are neuronal intermediate filament inclusion disease (NIFID) and basophilic inclusion body disease (BIBD). Other related disorders include corticobasal syndrome (CBS or CBD), and FTD with amyotrophic lateral sclerosis (ALS).
== Signs and symptoms ==
Frontotemporal dementia (FTD) is an early onset disorder that mostly occurs between the ages of 45 and 65, but can begin earlier, and in 20–25% of cases onset is later. Men and women appear to be equally affected. It is the most common early presenting dementia. FTD is the second most prevalent type of early onset dementia after Alzheimer's disease.
The International Classification of Diseases recognizes the disease as a cause of disorders affecting mental and behavioural function. Characteristic social display patterns include dissociation from family, compulsive buying disorder (oniomania), vulgar speech, screaming, and an inability to control emotions, behavior, personality, and temperament. A gradual onset and progression of changes in behavior or language deficits are typically reported to have begun several years prior to presentation to a neurologist.
== Subtypes and related disorders ==
The main subtypes of frontotemporal dementia are behavioral variant FTD (bvFTD), two variants of primary progressive aphasia – semantic dementia (svPPA) and progressive nonfluent aphasia (nfvPPA) – as well as FTD associated with amyotrophic lateral sclerosis (FTD–ALS or FTD-MND). Two distinct rare subtypes are neuronal intermediate filament inclusion disease (NIFID), and basophilic inclusion body disease (BIBD). Related disorders are corticobasal syndrome (CBS or CBD), and progressive supranuclear palsy (PSP).
=== Behavioral variant frontotemporal dementia ===
Behavioral variant frontotemporal dementia (BvFTD) was previously known as Pick's disease, and is the most common of the FTD types. BvFTD is diagnosed four times as often as the PPA variants. Behavior can change in BvFTD in either of two ways—it can change to being impulsive and disinhibited, acting in socially unacceptable ways; or it can change to being listless and apathetic. About 12–13% of people with bvFTD develop motor neuron disease.
The Pick bodies which are present in behavioral variant FTD are spherical inclusion bodies found in the cytoplasm of affected cells. They consist of tau fibrils as a major component together with a number of other protein products including ubiquitin and tubulin.
=== Semantic dementia ===
Semantic dementia (SD) is characterized by the loss of semantic understanding, resulting in impaired word comprehension. However, speech remains fluent and grammatical.
=== Progressive nonfluent aphasia ===
Progressive nonfluent aphasia (PNFA) is characterized by progressive difficulties in speech production.
=== Neuronal intermediate filament inclusion disease ===
Neuronal intermediate filament inclusion disease (NIFID) is a rare distinct variant. The inclusion bodies that are present in NIFID are cytoplasmic and made up of type IV intermediate filaments. NIFID has an early age of onset between 23 and 56. Symptoms can include behavioral and personality changes, memory and cognitive impairments, language difficulties, motor weakness, and extrapyramidal symptoms. NIFID is one of the frontotemporal lobar degeneration (FTLD)-FUS proteopathies. Imaging commonly shows atrophy in the frontotemporal region, and in part of the striatum in the basal ganglia. Post-mortem studies show a marked reduction in the caudate nucleus of the striatum; frontotemporal gyri are narrowed, with widened intervening sulci, and the lateral ventricles are enlarged.
=== Basophilic inclusion body disease ===
Another rare FTD variant, also a FTLD-FUS proteopathy, is basophilic inclusion body disease (BIBD).
== Other characteristics ==
In later stages of FTD, the clinical phenotypes may overlap. People with FTD tend to struggle with binge eating and compulsive behaviors. Binge eating habits are often associated with changes in food preferences (cravings for more sweets, carbohydrates), eating inedible objects and snatching food from others. Recent findings from structural MRI research have indicated that eating changes in FTD are associated with atrophy (wasting) in the right ventral insula, striatum, and orbitofrontal cortex.
People with FTD show marked deficiencies in executive functioning and working memory. Most become unable to perform skills that require complex planning or sequencing. In addition to the characteristic cognitive dysfunction, a number of primitive reflexes known as frontal release signs can often be elicited. The palmomental reflex is usually the first of these to appear, relatively early in the disease course, whereas the palmar grasp and rooting reflexes appear late.
In rare cases, FTD can occur in people with amyotrophic lateral sclerosis (ALS), a motor neuron disease. As of 2005, the prognosis for people with ALS was worse when combined with FTD, shortening survival by about a year.
Cerebrospinal fluid leaks are a known cause of reversible frontotemporal dementia.
== Genetics ==
A higher proportion of frontotemporal dementias seem to have a familial component than other neurodegenerative diseases such as Alzheimer's disease. New mutations and genetic variants continue to be identified, so the picture of genetic influences requires constant updating.
Tau-positive frontotemporal dementia and parkinsonism linked to chromosome 17 (FTDP-17) is caused by mutations in the MAPT gene on chromosome 17 that encodes the tau protein. A direct relationship has been established between the type of tau mutation and the resulting neuropathology. Mutations at the splice junction of exon 10 of tau lead to the selective deposition of 4-repeat tau in neurons and glia. The pathological phenotype associated with mutations elsewhere in tau is less predictable, with both typical neurofibrillary tangles (consisting of both 3-repeat and 4-repeat tau) and Pick bodies (consisting of 3-repeat tau) having been described. The presence of tau deposits within glia is also variable in families with mutations outside of exon 10. This disease is now informally designated FTDP-17T. FTD shows linkage to the region of the tau locus on chromosome 17, but two loci within megabases of each other on that chromosome are believed to lead to FTD. The only other known autosomal dominant genetic cause of FTLD-tau is a hypomorphic mutation in VCP, which is associated with a unique neuropathology called vacuolar tauopathy.
FTD caused by FTLD-TDP43 has numerous genetic causes. Some cases are due to mutations in the GRN gene, also located on chromosome 17. Others are caused by hypomorphic VCP mutations, although these patients present with a complex picture of multisystem proteinopathy that can include amyotrophic lateral sclerosis, inclusion body myopathy, Paget's disease of bone, and FTD. The most recent addition to the list (as of 2019) was a hexanucleotide repeat expansion in intron 1 of C9ORF72. Only one or two cases have been reported describing TARDBP (the TDP-43 gene) mutations in a clinically pure FTD (FTD without ALS).
Several other genes have been linked to this condition. These include CYLD, OPTN, SQSTM1 and TBK1. These genes have been implicated in the autophagy pathway.
No genetic causes of FUS pathology in FTD have yet been reported.
Major alleles of TMEM106B SNPs have been found to be associated with risk of FTLD.
== Pathology ==
There are three main histological subtypes found at post-mortem: FTLD-tau, FTLD-TDP, and FTLD-FUS. In rare cases, patients with clinical FTD were found to have changes consistent with Alzheimer's disease on autopsy. The most severe brain atrophy appears to be associated with behavioral variant FTD, and corticobasal degeneration.
With regard to the genetic defects that have been found, repeat expansion in the C9orf72 gene is considered a major contribution to FTLD, although defects in the GRN and MAPT genes are also associated with it.
DNA damage and the defective repair of such damages have been etiologically linked to various neurodegenerative diseases including FTD.
== Diagnosis ==
FTD is traditionally difficult to diagnose owing to the diverse nature of the associated symptoms. Signs and symptoms are classified into three groups based on the affected functions of the frontal and temporal lobes: behavioural variant frontotemporal dementia, semantic dementia, and progressive nonfluent aphasia. Symptoms can overlap as the disease progresses and spreads through the brain regions.
Structural MRI scans often reveal frontal lobe and/or anterior temporal lobe atrophy, but in early cases the scan may seem normal. Atrophy can be either bilateral or asymmetric. Registration of images taken at different points in time (e.g., one year apart) can show evidence of atrophy that might be reported as normal at each individual time point. Many research groups have begun using techniques such as magnetic resonance spectroscopy, functional imaging, and cortical thickness measurements in an attempt to offer an earlier diagnosis to the FTD patient. Fluorine-18-fluorodeoxyglucose positron emission tomography scans classically show frontal and/or anterior temporal hypometabolism, which helps differentiate the disease from Alzheimer's disease, as the PET scan in Alzheimer's disease classically shows biparietal hypometabolism.
Meta-analyses based on imaging methods have shown that frontotemporal dementia mainly affects a frontomedial network discussed in the context of social cognition or "theory of mind". This is entirely in keeping with the notion that on the basis of cognitive neuropsychological evidence, the ventromedial prefrontal cortex is a major locus of dysfunction early on in the course of the behavioural variant of frontotemporal degeneration. The language subtypes of FTLD (semantic dementia and progressive nonfluent aphasia) can be regionally dissociated by imaging approaches in vivo.
Confusion between Alzheimer's disease and FTD is understandable given the similarities between their initial symptoms; in both, patients initially have no difficulty with movement or other motor tasks. As FTD symptoms appear, it is difficult to differentiate between a diagnosis of Alzheimer's disease and FTD. There are distinct differences in the behavioral and emotional symptoms of the two dementias, notably the blunting of affect seen in FTD patients. In the early stages of FTD, anxiety and depression are common, which may result in an ambiguous diagnosis. Over time, however, these ambiguities fade as the dementia progresses and the defining symptoms of apathy, unique to FTD, start to appear.
Studies over recent years have developed new criteria for the diagnosis of behavioral variant frontotemporal dementia (bvFTD). Confirmatory diagnosis is made by brain biopsy, but other tests can be used to help, such as MRI, EEG, CT, and physical examination and history. As of 2011, six distinct clinical features have been identified as symptoms of bvFTD:
Disinhibition
Apathy / Inertia
Loss of Sympathy / Empathy
Perseverative / Compulsive behaviors
Hyperorality
Dysexecutive neuropsychological profile
Of the six features, at least three must be present to diagnose a patient with possible bvFTD, as sketched in the example below. As with FTD generally, the primary diagnosis stems from clinical assessment identifying the associated symptoms, rather than from imaging studies. The above criteria are used to distinguish bvFTD from disorders such as Alzheimer's disease and other causes of dementia. In addition, the criteria allow for a diagnostic hierarchy distinguishing possible, probable, and definite bvFTD based on the number of symptoms present.
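As a minimal illustration of that counting rule, the following Python sketch tallies observed clinical features against the three-of-six threshold. The feature identifiers and the helper function are hypothetical, chosen for readability; this is an illustration, not a clinical instrument.

```python
# Hypothetical sketch of the three-of-six rule for "possible bvFTD".
# Feature names are informal labels for the six criteria listed above.

BVFTD_FEATURES = {
    "disinhibition",
    "apathy_inertia",
    "loss_of_sympathy_empathy",
    "perseverative_compulsive_behaviors",
    "hyperorality",
    "dysexecutive_profile",
}

def meets_possible_bvftd(observed: set) -> bool:
    """Return True if at least three of the six features are present."""
    return len(observed & BVFTD_FEATURES) >= 3

# Example: three qualifying features meet the "possible bvFTD" threshold.
print(meets_possible_bvftd({"disinhibition", "hyperorality", "apathy_inertia"}))  # True
```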
A 2021 study determined that using cerebrospinal fluid (CSF) biomarkers of pathologic amyloid plaques, tangles, and neurodegeneration, collectively called ATN, can be useful in diagnosing FTD.
=== Neuropsychological tests ===
The degeneration caused by bvFTD may follow a predictable course. It begins in the orbitofrontal cortex and medial regions such as the ventromedial prefrontal cortex, and in later stages gradually expands to the dorsolateral prefrontal cortex and the temporal lobe. Detection of dysfunction in the orbitofrontal and ventromedial cortex is therefore important in detecting early-stage bvFTD. As stated above, behavioural change may occur before any atrophy appears in the brain. Consequently, imaging such as MRI can be insensitive to the early degeneration, making early-stage bvFTD difficult to detect.
In neuropsychology, there is an increasing interest in using neuropsychological tests such as the Iowa gambling task or Faux Pas Recognition test as an alternative to imaging for the diagnosis of bvFTD. Both the Iowa gambling task and the Faux Pas test are known to be sensitive to dysfunction of the orbitofrontal cortex.
The Faux Pas Recognition test is intended to measure one's ability to detect faux pas types of social blunders (accidentally making a statement or an action that offends others). It is suggested that people with orbitofrontal cortex dysfunction show a tendency to make social blunders due to a deficit in self-monitoring. Self-monitoring is the ability of individuals to evaluate their own behavior to make sure that their behavior is appropriate in particular situations. The impairment in self-monitoring leads to a lack of social emotion signals. The social emotions such as embarrassment are important in the way that they alert the individual to adapt social behavior in an appropriate manner to maintain relationships with others. Though patients with damage to the OFC retain intact knowledge of social norms, they fail to apply it to actual behavior, because they fail to generate social emotions that promote adaptive social behavior.
The other test, the Iowa gambling task, is a psychological test intended to simulate real-life decision-making. Its underlying concept is the somatic marker hypothesis, which argues that when people have to make complex, uncertain decisions, they employ both cognitive and emotional processes to assess the values of the available choices. Each time a person makes a decision, physiological signals and evoked emotion (somatic markers) are associated with the outcomes, and these associations accumulate as experience. People tend to select options whose outcomes have been reinforced with positive stimuli; somatic markers thus bias decision-making towards certain behaviors and away from others. Somatic markers are thought to be processed in the orbitofrontal cortex.
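To make the task's structure concrete, the sketch below models a simplified four-deck setup in the spirit of the Iowa gambling task. The payoff numbers are illustrative approximations (chosen so that the "bad" decks lose 25 units per card on average and the "good" decks gain 25), not the published task parameters.

```python
import random

# Simplified, hypothetical Iowa-gambling-task decks. Decks A and B pair
# large rewards with larger average losses; decks C and D pay less but
# are advantageous in the long run.

DECKS = {
    "A": {"reward": 100, "penalty": 250,  "penalty_prob": 0.5},   # disadvantageous
    "B": {"reward": 100, "penalty": 1250, "penalty_prob": 0.1},   # disadvantageous
    "C": {"reward": 50,  "penalty": 50,   "penalty_prob": 0.5},   # advantageous
    "D": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.1},   # advantageous
}

def draw(deck: str, rng: random.Random) -> int:
    """Net outcome of drawing one card from the named deck."""
    d = DECKS[deck]
    loss = d["penalty"] if rng.random() < d["penalty_prob"] else 0
    return d["reward"] - loss

def expected_value(deck: str) -> float:
    d = DECKS[deck]
    return d["reward"] - d["penalty"] * d["penalty_prob"]

rng = random.Random(0)
print([draw("A", rng) for _ in range(5)])   # sample outcomes from a bad deck
for name in DECKS:
    print(name, expected_value(name))       # A: -25, B: -25, C: 25, D: 25 per card
```

In the standard interpretation, healthy participants gradually shift towards the advantageous decks as somatic markers accumulate, whereas impaired somatic-marker processing shows up as a persistent preference for the high-reward, net-losing decks.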
The symptoms observed in bvFTD are caused by dysfunction of the orbitofrontal cortex, so these two neuropsychological tests may be useful in detecting early-stage bvFTD. However, because self-monitoring and somatic-marker processing are complex, they likely involve other brain regions as well. Neuropsychological tests are therefore sensitive to orbitofrontal dysfunction but not specific to it: their weakness is that a deficit on them does not necessarily indicate dysfunction of the orbitofrontal cortex.
To address this problem, some researchers have combined several neuropsychological tests that detect orbitofrontal dysfunction into one battery, increasing specificity for frontal lobe degeneration in order to detect early-stage bvFTD. The resulting Executive and Social Cognition Battery comprises five neuropsychological tests:
Faux Pas test
Hotel task
Iowa gambling task
Mind in the Eyes
Multiple Errands task
Results have shown that this combined battery is more sensitive in detecting the deficits of early bvFTD.
== Management ==
Currently, there is no cure for FTD, but treatments are available to manage the behavioral symptoms. Rehabilitation services supporting everyday functioning have demonstrated some positive results, in particular the Tailored Activity Programme, which is based on occupational therapy. Positive behavior support (PBS) has also been identified as potentially beneficial for people with bvFTD. Disinhibition and compulsive behaviors can be controlled with selective serotonin reuptake inhibitors (SSRIs). Agitation can be controlled with small doses of atypical antipsychotics. Although Alzheimer's disease and FTD share certain symptoms, they cannot be treated with the same pharmacological agents because the cholinergic systems are not affected in FTD.
Because FTD often occurs in relatively younger adults (i.e. in their 40s or 50s), it can severely affect families. Patients often still have children living in the home.
== Prognosis ==
Symptoms of frontotemporal dementia progress at a rapid, steady rate. Patients with the disease can survive for 2–20 years. Eventually patients will need 24-hour care for daily function.
== History ==
Features of FTD were first described by the Czech psychiatrist Arnold Pick between 1892 and 1906. The name Pick's disease was coined in 1922. This term is now reserved only for behavioral variant FTD which shows the presence of the characteristic Pick bodies and Pick cells, which were first described by Alois Alzheimer in 1911.
In 1989, Snowden suggested the term semantic dementia to describe patients with predominant left temporal atrophy and aphasia of the kind Pick had described. The first research criteria for FTD, "Clinical and neuropathological criteria for frontotemporal dementia. The Lund and Manchester Groups", were developed in 1994. The clinical diagnostic criteria were revised in the late 1990s, when the FTD spectrum was divided into a behavioral variant, a nonfluent aphasia variant, and a semantic dementia variant. The most recent revision of the clinical research criteria was by the International Behavioural Variant FTD Criteria Consortium in 2011.
== Notable cases ==
People who have been diagnosed as having FTD (often referred to as Pick's disease in cases of the behavioral variant) include:
John Berry (1963–2016), American hardcore punk musician and founding member of the Beastie Boys
Clancy Blair (born 1960), American developmental psychologist and professor
Don Cardwell (1935–2008), Major League Baseball pitcher
Charmian Carr (1942–2016), born Charmian Anne Farnon, who played Liesl in The Sound of Music
Jerry Corbetta (1947–2016), frontman, organist and keyboardist of American psychedelic rock band Sugarloaf
Ted Darling (1935–1996), Buffalo Sabres television announcer
Robert W. Floyd (1936–2001), computer scientist
Lee Holloway (born 1982), computer scientist, co-founder of Cloudflare
Colleen Howe (1933–2009), sports agent and ice hockey team manager, known as "Mrs. Hockey"
Kazi Nazrul Islam (1899–1976), national poet of Bangladesh
Terry Jones (1942–2020), Welsh comedian (Monty Python) and director
Ralph Klein (1942–2013), former premier of Alberta, Canada
Kevin Moore (1958–2013), English footballer
Ernie Moss (1949–2021), English footballer
Nic Potter (1951–2013), British bassist for Van der Graaf Generator
Christina Ramberg (1946–1995), American painter associated with the Chicago Imagists
David Rumelhart (1942–2011), American cognitive psychologist
Sir Nicholas Wall (1945–2017), English judge
Wendy Williams (born 1964), American broadcaster
Bruce Willis (born 1955), American actor
Mark Wirtz (1943–2020), pop musician, composer and producer
== External links ==
"AFTD - The Association for Frontotemporal Degeneration". Retrieved 2025-05-15. Patient group | Wikipedia/Pick's_disease |
Signs and symptoms of Parkinson's disease are varied. Parkinson's disease affects movement, producing motor symptoms. Non-motor symptoms, which include dysautonomia, cognitive and neurobehavioral problems, and sensory and sleep difficulties, are also common. When other diseases mimic Parkinson's disease, they are categorized as parkinsonism.
== Motor ==
Four motor symptoms are considered cardinal signs in PD: slowness of movement (bradykinesia), tremor, rigidity, and postural instability. Typical for PD is an initial asymmetric distribution of these symptoms; in the course of the disease a gradual progression to bilateral symptoms develops, although some asymmetry usually persists. Other common motor problems include gait and posture disturbances such as decreased arm swing, a forward-flexed posture, and the use of small steps when walking; speech and swallowing disturbances; and other symptoms such as a mask-like facial expression or small handwriting.
=== Cardinal signs ===
Four motor signs are considered cardinal in PD: tremor, rigidity, bradykinesia, and postural instability; together they constitute parkinsonism.
Tremor is the most apparent and well-known sign. It is also the most common; though around 30% of individuals with PD do not have tremor at disease onset, most develop it as the disease progresses. It is usually a rest tremor: maximal when the limb is at rest, disappearing with voluntary movement and sleep. It affects the most distal part of the limb to the greatest extent, and at onset typically appears in only a single arm or leg, becoming bilateral later in the course of the disease. The frequency of PD tremor is between 4 and 6 hertz (cycles per second). It is a pronation–supination tremor described as "pill-rolling": the index finger tends to come into contact with the thumb and the two perform a circular movement together. The term derives from the resemblance of this movement to the old pharmaceutical technique of rolling pills by hand. Unlike essential tremor, PD tremor does not improve with alcohol intake.
Rigidity is characterized by increased muscle tone (an excessive and continuous contraction of the muscles), which produces stiffness and resistance to movement in joints. Rigidity may be associated with joint pain, and such pain is a frequent initial manifestation of the disease. When the limbs of a person with PD are passively moved by others, a "cogwheel rigidity" is commonly seen: instead of the normal fluid movement, the joint moves in ratchety, cogwheel-like jerks. When a muscle is moved externally it resists at first, then with enough force yields partially until it resists again, yielding further only with additional force. The combination of tremor and increased tone is considered to be at the origin of cogwheel rigidity.
Bradykinesia and akinesia: the former is slowness of movement, while the latter is its absence. Bradykinesia is the most characteristic clinical feature of PD and is associated with difficulties along the whole course of the movement process, from planning to initiation and finally execution. The performance of sequential and simultaneous movements is also hindered. Bradykinesia is the most disabling symptom in the early stages of the disease. Its initial manifestations are problems with daily tasks requiring fine motor control, such as writing, sewing, or getting dressed. Clinical evaluation is based on similar tasks, such as alternating movements between the two hands or feet. Bradykinesia is not equal for all movements or at all times; it is modified by the activity or emotional state of the subject, to the point that some patients who are barely able to walk can still ride a bicycle. Generally, patients have less difficulty when some sort of external cue is provided.
... immobile patients who become excited may be able to make quick movements such as catching a ball (or may be able to suddenly run if someone screams "fire"). This phenomenon (kinesia paradoxica) suggests that patients with PD have intact motor programmes, but have difficulties accessing them without an external trigger, such as a loud noise, marching music, or a visual cue requiring them to step over an obstacle.
Postural instability: in the late stages, postural instability is typical, leading to impaired balance and frequent falls, and secondarily to bone fractures. Instability is often absent in the initial stages, especially in younger people. Up to 40% of patients may experience falls, and around 10% may have falls weekly, with the number of falls being related to the severity of PD. Postural instability is produced by a failure of postural reflexes, along with other disease-related factors such as orthostatic hypotension or cognitive and sensory changes.
=== Other motor symptoms ===
Other motor symptoms include:
Gait and posture disturbances:
Shuffling gait is characterized by short steps, with feet barely leaving the ground. Small obstacles tend to cause the patient to trip.
Decreased arm-swing
Turning en bloc: rather than the usual twisting of the neck and trunk and pivoting on the toes, PD patients keep their necks and trunks rigid, requiring multiple small steps to accomplish a turn.
Camptocormia is a stooped, forward-flexed posture. In severe forms, the head and upper shoulders may be bent at a right angle relative to the trunk.
Festination is a combination of stooped posture, imbalance, and short steps. It leads to a gait that gets progressively faster and faster, often ending in a fall.
Gait freezing, also called motor blocks, is a manifestation of akinesia. Gait freezing is characterized by a sudden inability to move the lower extremities which usually lasts less than 10 seconds. It may worsen in tight, cluttered spaces, when attempting to initiate gait or turning around, or when approaching a destination. Freezing improves with treatment and also with behavioral techniques such as marching to command or following a given rhythm.
Dystonia is abnormal, sustained, sometimes painful twisting muscle contractions, often affecting the foot and ankle (mainly toe flexion and foot inversion), which often interferes with gait.
Scoliosis is abnormal curvature of the spine.
Speech and swallowing disturbances:
Hypophonia (soft speech).
Monotonic speech—quality tends to be soft, hoarse, and monotonous.
Festinating speech—excessively rapid, soft, poorly intelligible speech.
Drooling is most likely caused by a weak, infrequent swallow.
Dysphagia is an impaired ability to swallow, which in the case of PD is probably related to an inability to initiate the swallowing reflex or to excessively prolonged laryngeal or oesophageal movement. It can lead to aspiration pneumonia.
Dysarthria is a form of speech disorder.
Other motor symptoms and signs:
Fatigue
Hypomimia (a mask-like face).
Difficulty rolling in bed or rising from a seated position.
Micrographia (small, cramped handwriting).
Impaired fine-motor dexterity and motor coordination
Impaired gross-motor coordination.
Akathisia (an unpleasant desire to move, often related to medication).
Reemergence of primitive reflexes.
Glabellar reflex
== Neuropsychiatric ==
Parkinson's disease causes neuropsychiatric disturbances, which mainly include cognitive disorders, mood disorders, and behavior problems, and can be as disabling as motor symptoms.
Since L-Dopa, the widely used drug in Parkinson's disease treatment, is decarboxylated by aromatic L-amino acid decarboxylase (AADC), which is found in both dopaminergic and serotonergic neurons, serotonergic neurons can convert L-Dopa into dopamine and generate excessive neuronal death by creating reactive oxygen species and quinoproteins. Because serotonin is associated with mood and cognition, the resulting serotonin deficit may explain some of the side-effects observed in patients treated with L-Dopa.
In most cases, motor symptoms predominate at early PD stages, while cognitive disturbances (such as mild cognitive impairment or dementia) emerge later. The onset of parkinsonism relative to dementia is used as an arbitrary criterion to clinically distinguish Parkinson's disease dementia (PDD) from dementia with Lewy bodies (DLB) using a 'one-year rule': dementia onset within 12 months of, or at the same time as, the motor dysfunction qualifies as DLB, whereas in PDD parkinsonism must precede dementia by at least one year.
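Because the one-year rule is a simple temporal cutoff, it can be stated in a few lines of code. The helper below is a hypothetical illustration of the rule as just described, not a validated diagnostic tool; handling of the exact twelve-month boundary is simplified.

```python
def one_year_rule(months_motor_to_dementia: float) -> str:
    """Classify by the 'one-year rule' described above.

    Argument: interval from parkinsonism onset to dementia onset in months;
    zero or negative values mean dementia began first or at the same time.
    Illustrative only.
    """
    if months_motor_to_dementia <= 12:
        return "dementia with Lewy bodies (DLB)"
    return "Parkinson's disease dementia (PDD)"

print(one_year_rule(6))   # dementia within a year of motor onset -> DLB
print(one_year_rule(36))  # parkinsonism precedes dementia by 3 years -> PDD
```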
Cognitive disturbances occur even in the initial stages of the disease in some cases. A very high proportion of patients have mild cognitive impairment as the disease advances. Most common deficits in nondemented patients are:
Executive dysfunction, which translates into impaired set shifting, poor problem solving, and fluctuations in attention among other difficulties
Slowed cognitive speed (bradyphrenia)
Memory problems can occur, specifically in recalling learned information, with marked improvement when cues are given. Recognition memory is less impaired than free recall, pointing towards a retrieval rather than an encoding problem.
Regarding language, patients are found to have problems in verbal fluency tests.
Visuospatial difficulties, seen, for example, when the person with PD is asked to perform tests of face perception and perception of line orientation.
Deficits tend to aggravate with time, developing in many cases into dementia. A person with PD has a six-fold increased risk of developing it, and the overall rate in people with the disease is around 30%. Moreover, prevalence of dementia increases in relation to disease duration, going up to 80%. Dementia has been associated with a reduced quality of life in patients and caregivers, increased mortality, and a higher probability of moving to a nursing home.
Cognitive problems and dementia are usually accompanied by behavior and mood alterations, although these kinds of changes are also more common in those patients without cognitive impairment than in the general population. Most frequent mood difficulties include:
Depression is well recognized in PD, having been identified as "melancholia" by James Parkinson in his original report of the disease in 1817. Estimated prevalence rates of depression vary widely according to the population sampled and methodology used, although depressive symptoms, irrespective of classically defined DSM criteria for depression, are present in 35% of patients. Any individual with depression is at increased risk of going on to develop Parkinson's disease at a later date. Depression is increasingly thought to be a consequence of the disease rather than an emotional reaction to disability, although ample evidence shows that the relationship between depression and PD is bidirectional. General risk factors for depression are actually stronger markers for depression in PD patients than PD-specific factors. Since Parkinson's affects many areas of the brain that control mood (specifically the frontal lobe as well as those areas that produce serotonin, norepinephrine, and dopamine), depression may result. Depression is one of the most common neuropsychiatric conditions found in patients who have PD, and it is associated with more rapid progression of physical symptoms and a greater decline in cognitive skills. Depression in patients with PD was found to be more predictive of overall disability than was the motor disability itself. Interestingly, although a high rate of depression is seen in patients with PD, the incidence of suicide is lower in this group. Many of the symptoms of PD may overlap with those of depression, making diagnosis difficult.
Apathy
Anxiety is seen; 70% of individuals with PD diagnosed with pre-existing depression go on to develop anxiety. About 90% of PD patients with pre-existing anxiety subsequently develop depression, apathy, or abulia.
Obsessive–compulsive behaviors (also known as impulse-control disorders) such as craving, binge eating, hypersexuality, pathological gambling, punding, or others, can also appear in PD, and have been related to a dopamine dysregulation syndrome associated with the medications for the disease.
Psychotic symptoms are common in PD, generally associated with dopamine therapy. Symptoms of psychosis, or impaired reality testing, are either hallucinations, typically visual, less commonly auditory, and rarely in other domains including tactile, gustatory, or olfactory, or delusions, that is, irrational beliefs. Hallucinations are generally stereotyped and without emotional content. Initially, patients usually have insight, so the hallucinations are benign in terms of their immediate impact, but they have poor prognostic implications, with increased risk of dementia, worsened psychotic symptoms, and mortality. Delusions occur in about 5–10% of treated patients and are considerably more disruptive, being paranoid in nature, typically concerning spousal infidelity or family abandonment. Psychosis is an independent risk factor for nursing-home placement.
Hallucinations can occur in parkinsonian syndromes for a variety of reasons. An overlap exists between PD and dementia with Lewy bodies, so that where Lewy bodies are present in the visual cortex, hallucinations may result. Hallucinations can also be brought about by excessive dopaminergic stimulation. Most hallucinations are visual in nature, often formed as familiar people or animals, and are generally not threatening in nature. Some patients find them comforting; however, their caregivers often find this part of the disease most disturbing, and the occurrence of hallucinations is a major risk factor for hospitalisation. Treatment options consist of modifying the dosage of dopaminergic drugs taken each day, adding an antipsychotic drug such as quetiapine, or offering caregivers a psychosocial intervention to help them cope with the hallucinations.
== Sleep ==
Sleep problems can be worsened by medications for PD, but they are a core feature of the disease. Sleep dysfunction in PD has significant negative impacts on both patient and carer quality of life. Some common symptoms are:
Excessive daytime somnolence
Insomnia, characterized mostly by sleep fragmentation
Disturbances in rapid eye movement (REM) sleep: disturbingly vivid dreams, and REM sleep behavior disorder, characterized by acting out of dream content. The latter appears in a third of patients and is a risk factor for PD in the general population.
== Perception ==
Impaired proprioception (the awareness of bodily position in three-dimensional space)
Reduction or loss of sense of smell (hyposmia or anosmia) may be an early marker of the disease.
Paresthesias
== Autonomic ==
Orthostatic hypotension leading to dizziness and fainting
Oily skin
Urinary incontinence (typically in later disease progression) and nocturia (getting up in the night to pass urine)
Altered sexual function is characterized by profound impairment of sexual arousal, behavior, orgasm, and drive, and is found in mid- and late PD.
Excessive sweating
== Gastrointestinal ==
Parkinson's disease causes constipation and gastric dysmotility that can be severe enough to endanger comfort and even health. A factor in this is the appearance of Lewy bodies and Lewy neurites in the neurons of the enteric nervous system that control gut function, even before these changes affect the functioning of the substantia nigra.
== Neuro-ophthalmological ==
PD is related to different ophthalmological abnormalities produced by the neurological changes. Among them are:
Decreased blink rate
Irritation of the eye surface
Alteration in the tear film
Visual hallucinations
Decreased eye convergence
Blepharospasm
Abnormalities in ocular pursuit, ocular fixation and saccadic movements
Difficulties opening the eyelids. This can have particular relevance when driving: people with Parkinson's have been shown to be less accurate in spotting landmarks and road signs whilst driving.
Limitations in upward gaze
Blurred vision
Diplopia (double vision), produced by reduced eye convergence.
Conversion disorder (CD) was a formerly diagnosed psychiatric disorder characterized by abnormal sensory experiences and movement problems during periods of high psychological stress. Individuals diagnosed with CD presented with highly distressing neurological symptoms, such as numbness, blindness, paralysis, or convulsions, that were not consistent with a well-established organic cause and could instead be traced back to a psychological trigger. CD is no longer diagnosed and was superseded by functional neurologic disorder (FND), a similar diagnosis that notably removed the requirement for a psychological stressor to be present.
These symptoms were thought to arise in response to stressful situations affecting a patient's mental health. Individuals diagnosed with conversion disorder had a greater chance of experiencing certain psychiatric disorders, including anxiety disorders, mood disorders, and personality disorders, compared to those diagnosed with neurological disorders.
Conversion disorder was partly retained in the DSM-5-TR and ICD-11, but was renamed to functional neurological symptom disorder (FNsD) and dissociative neurological symptom disorder (DNSD), respectively. FNsD covers a similar range of symptoms found in conversion disorder, but does not include the requirements for a psychological stressor to be present. The new criteria no longer require feigning to be disproven before diagnosing FNsD. A fifth criterion describing a limitation in sexual functioning that was included in the DSM-IV was removed in the DSM-5 as well. The ICD-11 classifies DNSD as a dissociative disorder with unspecified neurological symptoms.
== Signs and symptoms ==
Conversion disorder presented with symptoms following exposure to a certain stressor, typically associated with trauma or psychological distress. Usually, the physical symptoms of the disorder affected the senses or movement. Common symptoms included blindness, partial or total paralysis, inability to speak, deafness, numbness, difficulty swallowing, incontinence, balance problems, non-epileptic seizures, tremors, and difficulty walking. Feelings of breathlessness were also said to possibly indicate conversion disorder or sleep paralysis.
Sleep paralysis and narcolepsy can be ruled out with sleep tests. Symptoms were attributed to conversion disorder when a medical explanation for them could not be found. Symptoms of conversion disorder usually occurred suddenly. Conversion disorder was typically observed in people aged 10 to 35, affecting between 0.011% and 0.5% of the general population.
Conversion disorder presented motor or sensory symptoms including:
Motor symptoms or deficits:
Impaired coordination or balance
Weakness/paralysis of a limb or the entire body (hysterical paralysis or motor conversion disorders)
Impairment or loss of speech (hysterical aphonia)
Difficulty swallowing (dysphagia) or a sensation of a lump in the throat
Urinary retention
Psychogenic non-epileptic seizures or convulsions
Persistent dystonia
Tremor, myoclonus or other movement disorders
Gait problems (astasia-abasia)
Loss of consciousness (fainting)
Sensory symptoms or deficits:
Impaired vision, double vision
Impaired hearing
Loss or disturbance of touch or pain sensation
Conversion symptoms typically do not conform to known anatomical pathways and physiological mechanisms. It has sometimes been stated that the presenting symptoms tend to reflect the patient's own understanding of anatomy and that the less medical knowledge a person has, the more implausible are the presenting symptoms. However, no systematic studies have yet been performed to substantiate this statement.
Sexual dysfunction and pain were also considered symptoms of conversion disorder, but a patient presenting with these symptoms alone would instead have been diagnosed with sexual pain disorder or pain disorder.
== Diagnosis ==
=== Definition ===
Conversion disorder is now partly contained under functional neurological symptom disorder (FNsD). In cases of conversion disorder, there is a psychological stressor.
The diagnostic criteria for functional neurologic symptom disorder, as set out in the DSM-5, include the following specifiers (a schematic encoding of them follows the list):
Specify type of symptom or deficit as:
With weakness or paralysis
With abnormal movement (e.g. tremor, dystonic movement, myoclonus, gait disorder)
With swallowing symptoms
With speech symptoms (e.g. dysphonia, slurred speech)
With attacks or seizures
With amnesia or memory loss
With special sensory loss symptoms (e.g. visual blindness, olfactory loss, or hearing disturbance)
With mixed symptoms.
Specify if:
Acute episode: symptoms present for less than six months
Persistent: symptoms present for six months or more.
Specify if:
Psychological stressor (conversion disorder)
No psychological stressor (functional neurologic symptom disorder)
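Read as data, these specifiers form a three-part label: a symptom type, a duration qualifier with a six-month cutoff, and a stressor qualifier that separates the old conversion disorder label from FNsD. The sketch below encodes that structure; the class and field names are invented for illustration only.

```python
from dataclasses import dataclass

# Hypothetical encoding of the DSM-5 specifiers listed above.

SYMPTOM_TYPES = {
    "weakness_or_paralysis", "abnormal_movement", "swallowing_symptoms",
    "speech_symptoms", "attacks_or_seizures", "amnesia_or_memory_loss",
    "special_sensory_loss", "mixed_symptoms",
}

@dataclass
class FNsDSpecifiers:
    symptom_type: str
    months_of_symptoms: float
    psychological_stressor: bool

    def duration(self) -> str:
        # DSM-5 cutoff: under six months is an acute episode.
        return "acute episode" if self.months_of_symptoms < 6 else "persistent"

    def stressor(self) -> str:
        return ("psychological stressor (conversion disorder)"
                if self.psychological_stressor
                else "no psychological stressor (functional neurologic symptom disorder)")

case = FNsDSpecifiers("attacks_or_seizures", months_of_symptoms=8,
                      psychological_stressor=True)
assert case.symptom_type in SYMPTOM_TYPES
print(case.duration())  # persistent
print(case.stressor())  # psychological stressor (conversion disorder)
```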
=== Exclusion of neurological disease ===
Conversion disorder presents with symptoms that typically resemble a neurological disorder such as stroke, multiple sclerosis, epilepsy, hypokalemic periodic paralysis, or narcolepsy. The neurologist must carefully exclude neurological disease, through examination and appropriate investigations. However, it is not uncommon for patients with neurological disease to also have conversion disorder.
In excluding neurological disease, the neurologist has traditionally relied partly on the presence of positive signs of conversion disorder (i.e., certain aspects of the presentation that were thought to be rare in neurological disease but common in conversion disorder). The validity of many of these signs has been questioned by a study showing that they also occur in neurological disease. One such symptom, for example, is la belle indifférence, described in DSM-IV as "a relative lack of concern about the nature or implications of the symptoms". In a 2006 study, no evidence was found that patients with functional symptoms are any more likely to exhibit this than patients with a confirmed organic disease. In the DSM-5, la belle indifférence was removed as a diagnostic criterion.
Another feature thought to be important was that symptoms tended to be more severe on the non-dominant, usually left side of the body. There have been a number of theories about this, such as the relative involvement of cerebral hemispheres in emotional processing, or more simply, that it was "easier" to live with a functional deficit on the non-dominant side. However, a literature review of 121 studies established that this was not true, with publication bias the most likely explanation for this commonly held view. Although agitation is often assumed to be a positive sign of conversion disorder, release of epinephrine is a well-demonstrated cause of paralysis from hypokalemic periodic paralysis.
Misdiagnosis does sometimes occur. In a highly influential study from the 1960s, Eliot Slater demonstrated that misdiagnoses had occurred in one third of his 112 patients with conversion disorder. Later authors have argued that the paper was flawed. A 2005 meta-analysis has shown that misdiagnosis rates since that paper was published are around four percent, the same as for other neurological diseases.
=== Psychological mechanism ===
The psychological mechanism of conversion can be the most difficult aspect of a conversion disorder diagnosis. Even if there is a clear antecedent trauma or other possible psychological trigger, it is still not clear exactly how this gives rise to the symptoms observed. Patients with medically unexplained neurological symptoms may not have any psychological stressor, hence the use of the term "functional neurological symptom disorder" in the DSM-5, as opposed to "conversion disorder", and the DSM-5's removal of the need for a psychological trigger. The change of name in the DSM-5 also came with a change of criteria: the connection to sexual functioning and the relation to other medical conditions were removed, while a connection to social and occupational functioning was added.
== Treatment ==
Treatments for conversion disorder included hypnosis, psychotherapy, physical therapy, stress management, and transcranial magnetic stimulation (TMS). Treatment plans consider the duration and presentation of symptoms and may combine one or more of the above treatments, along with the following:
Occupational therapy to maintain autonomy in activities of daily living.
Treatment of comorbid depression or anxiety if present.
Educating patients on the causes of their symptoms might help them learn to manage both the psychiatric and physical aspects of their condition. Psychological counseling is often warranted given the known relationship between conversion disorder and emotional trauma. This approach ideally takes place alongside other types of treatment.
Medications such as serotonin–norepinephrine reuptake inhibitors (SNRIs), a class of antidepressants, and sedatives such as benzodiazepines may help reduce stress and also relieve or prevent symptoms from occurring.
There is little evidence-based treatment for conversion disorder. Other treatments, such as cognitive behavioral therapy (CBT), hypnosis, EMDR, psychodynamic psychotherapy, and EEG brain biofeedback, need further trials. Psychoanalytic treatment may possibly be helpful. Most studies assessing the efficacy of these treatments are of poor quality, and larger, better-controlled studies are urgently needed. CBT is the most common treatment, with a 13% improvement rate.
== Prognosis ==
Empirical studies have found that the prognosis for conversion disorder varies widely, with some cases resolving in weeks, and others enduring for years or decades. Although patients may go into remission, they can relapse at any point.
== Epidemiology ==
=== Frequency ===
Information on the frequency of conversion disorder is limited, in part due to the complexities of the diagnostic process. In neurology clinics, the reported prevalence of unexplained symptoms among new patients is very high, between 30 and 60%. However, diagnosis of conversion disorder typically required an additional psychiatric evaluation, and since few patients see a psychiatrist, it is unclear what proportion of the unexplained symptoms were actually due to the disorder. In 1976, large-scale psychiatric registers in the U.S. and Iceland found incidence rates of 22 and 11 newly diagnosed cases per 100,000 person-years, respectively. Estimates from 2002 suggest that between 0.011% and 0.5% of the general population have conversion disorder.
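As a worked example of what such rates imply, the snippet below converts an incidence quoted per 100,000 person-years into the number of new diagnoses expected each year in a population of a given size. This is purely illustrative arithmetic based on the figures above.

```python
# Convert incidence per 100,000 person-years into expected annual cases.

def expected_new_cases(incidence_per_100k: float, population: int) -> float:
    return incidence_per_100k * population / 100_000

for label, rate in [("U.S. register, 1976", 22), ("Iceland register, 1976", 11)]:
    print(label, expected_new_cases(rate, population=1_000_000))
# 22 per 100,000 person-years -> ~220 new cases per year per million people;
# 11 per 100,000 -> ~110.
```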
=== Culture ===
Although it is often thought that the frequency of conversion may be higher outside of the West, perhaps in relation to cultural and medical attitudes, evidence of this is limited. A 2007 community survey of urban Turkey found a prevalence of 5.6%. Many authors have found occurrence of conversion to be more frequent in rural, lower socio-economic groups, where technological investigation of patients is limited and people may know less about medical and psychological concepts.
=== Gender ===
In recent surveys of conversion disorder, females predominate, with between two and six female patients for every male. Some research suggests however that this gender disparity may be confounded by higher rates of violence against women.
=== Age ===
Conversion disorder may present at any age, but is rare in children younger than ten or in the elderly. Studies suggest a peak onset in the mid-to-late 30s.
== History ==
The first evidence of hysteria dates back to 1900 B.C., when the symptoms were blamed on the uterus moving within the female body. The treatment varied "depending on the position of the uterus, which must be forced to return to its natural position. If the uterus had moved upwards, this could be done by placing malodorous and acrid substances near the woman's mouth and nostrils, while scented ones were placed near her vagina; on the contrary, if the uterus had lowered, the document recommends placing the acrid substances near her vagina and the perfumed ones near her mouth and nostrils."
In Greek mythology, hysteria, a similarly described condition, was thought to be caused by a lack of orgasms, uterine melancholy, and not procreating. Plato, Aristotle, and Hippocrates believed that a lack of sex causes complications in the uterus. Many Greeks believed it could be prevented and cured with wine and orgies. Hippocrates argued that a lack of regular sexual intercourse led to the uterus producing toxic fumes, causing it to move in the body. Therefore, he argued, all women should be married and enjoy a satisfactory sexual life.
Donald Capps argues that the diseases Jesus allegedly healed, such as paralysis and blindness, were actually forms of conversion disorder. He describes Jesus as a "village psychiatrist", who believed that his words had power.
From the 13th century, women with hysteria were exorcised, as it was believed that they were possessed by the devil. It was believed that if doctors could not find the cause of a disease or illness, it must be caused by the devil.
At the beginning of the 16th century, women were sexually stimulated by midwives in order to relieve their symptoms. Gerolamo Cardano and Giambattista della Porta believed polluted water and fumes caused the symptoms of hysteria. Towards the end of the century, the role of the uterus was no longer thought central to the disorder, with Thomas Willis discovering that the brain and central nervous system were the cause of the symptoms. Thomas Sydenham argued that the symptoms of hysteria may have an organic cause. He also proved the uterus is not the cause of symptoms.
In 1692, in the U.S. town of Salem, Massachusetts, there was a reported outbreak of hysteria. This led to the Salem witch trials, where women who were accused of being witches had symptoms such as sudden movements, staring eyes, and uncontrollable jumping.
During the 18th century, there was a move from the idea of hysteria being caused by the uterus to it being caused by the brain. This led to an understanding that it could affect both sexes. Jean-Martin Charcot argued that hysteria was caused by "a hereditary degeneration of the nervous system, namely a neurological disorder".
In the 19th century, hysteria moved from being considered a neurological disorder to being considered a psychological disorder, when Pierre Janet argued that "dissociation appears autonomously for neurotic reasons, and in such a way as to adversely disturb the individual's everyday life". As early as 1874, doctors including W. B. Carpenter and J. A. Omerod began to speak out against the hysteria phenomenon as there was no evidence to prove its existence.
Sigmund Freud referred to the condition as both hysteria and conversion disorder throughout his career. He believed those with the condition could not live in a mature relationship, and that those with the condition were unwell in order to achieve a "secondary gain", in that they are able to manipulate their situation to fit their needs or desires. He also found that both men and women could have the disorder.
Freud's model suggested the emotional charge deriving from painful experiences would be consciously repressed as a way of managing the pain, but that the emotional charge would be somehow "converted" into neurological symptoms. Freud later argued that the repressed experiences were of a sexual nature. As Peter Halligan comments, conversion has "the doubtful distinction among psychiatric diagnoses of still invoking Freudian mechanisms".
Pierre Janet, a highly noted psychologist during the early 20th century, argued that symptoms arose through the power of suggestion, acting on a personality vulnerable to dissociation. In this hypothetical process, the subject's experience of their leg, for example, is split off from the rest of their consciousness, resulting in paralysis or numbness in that leg.
Some support for the Freudian model comes from findings of high rates of childhood sexual abuse in conversion patients. Support for the dissociation model comes from studies showing heightened suggestibility in patients with conversion disorder. Critics argued that finding organic pathologies for all symptoms can be challenging, so the practice of diagnosing patients with such symptoms as having hysteria rendered the label meaningless, vague, and a sham diagnosis, as it did not refer to any definable disease.
Throughout its history, many patients have been misdiagnosed with hysteria or conversion disorder when they had organic disorders such as tumors, epilepsy, or vascular diseases. This has led to patient deaths, a lack of appropriate care, and suffering for the patients.
Eliot Slater, after studying the condition in the 1950s, stated: "The diagnosis of 'hysteria' is all too often a way of avoiding a confrontation with our own ignorance. This is especially dangerous when there is an underlying organic pathology, not yet recognised. In this penumbra we find patients who know themselves to be ill but, coming up against the blank faces of doctors who refuse to believe in the reality of their illness, proceed by way of emotional lability, overstatement and demands for attention ... Here is an area where catastrophic errors can be made. In fact it is often possible to recognise the presence though not the nature of the unrecognisable, to know that a man must be ill or in pain when all the tests are negative. But it is only possible to those who come to their task in a spirit of humility. In the main the diagnosis of 'hysteria' applies to a disorder of the doctor–patient relationship. It is evidence of non-communication, of a mutual misunderstanding ... We are, often, unwilling to tell the full truth or to admit to ignorance ... Evasions, even untruths, on the doctor's side are among the most powerful and frequently used methods he has for bringing about an efflorescence of 'hysteria'".
The onset of conversion disorder often correlates with a traumatic or stressful event. Certain populations are considered at risk for conversion disorder, including people with a medical illness or condition and people with personality disorders or dissociative disorders. No biomarkers have yet been found to support the idea that conversion disorder is caused by a psychiatric condition.
There has been much recent interest in using functional neuroimaging to study conversion. As researchers identify the mechanisms which underlie conversion symptoms, it is hoped they will enable the development of a neuropsychological model. A number of such studies have been performed, including some which suggest the blood-flow in patients' brains may be abnormal while they are unwell. The studies have all been too small to be confident of the generalisability of their findings, so no neuropsychological model has been clearly established.
An evolutionary psychology explanation for conversion disorder is that the symptoms may have been evolutionarily advantageous during warfare. A non-combatant with these symptoms signals non-verbally, possibly to someone speaking a different language, that she or he is not dangerous as a combatant and may also be carrying some form of dangerous infectious disease. This may explain why conversion disorder can develop following a threatening situation, why there may be a group effect in which many people simultaneously develop similar symptoms (as in mass psychogenic illness), and the gender difference in prevalence.
== See also ==
Body-centred countertransference
Functional neurologic disorder (FND)
Post-traumatic stress disorder (PTSD) and Complex post-traumatic stress disorder (C-PTSD)
Somatic symptom disorder
Functional disorder
Parkin is a 465-amino acid residue E3 ubiquitin ligase, a protein that in humans and mice is encoded by the PARK2 gene. Parkin plays a critical role in ubiquitination – the process whereby molecules are covalently labelled with ubiquitin (Ub) and directed towards degradation in proteasomes or lysosomes. Ubiquitination involves the sequential action of three enzymes. First, an E1 ubiquitin-activating enzyme binds to inactive Ub in eukaryotic cells via a thioester bond and mobilises it in an ATP-dependent process. Ub is then transferred to an E2 ubiquitin-conjugating enzyme before being conjugated to the target protein via an E3 ubiquitin ligase. There exists a multitude of E3 ligases, which differ in structure and substrate specificity to allow selective targeting of proteins to intracellular degradation.
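The three-enzyme relay described above can be pictured as a hand-off pipeline. The sketch below is a loose conceptual model in Python; all names are invented for illustration, and the underlying chemistry (ATP consumption, thioester bonds) is deliberately not modelled.

```python
# Conceptual sketch of the sequential E1 -> E2 -> E3 ubiquitination relay.
# Names are invented for illustration; no real chemistry is modelled.

def e1_activate(ub: str) -> str:
    # E1 binds inactive ubiquitin and mobilises it (ATP-dependent in vivo).
    return f"E1~{ub}"

def e2_conjugate(activated: str) -> str:
    # Ubiquitin is handed from E1 to an E2 conjugating enzyme.
    return activated.replace("E1~", "E2~")

def e3_ligate(conjugated: str, substrate: str) -> str:
    # An E3 ligase such as parkin transfers ubiquitin onto its substrate,
    # marking it for proteasomal or lysosomal degradation.
    return f"{substrate}-{conjugated.replace('E2~', '')}"

tagged = e3_ligate(e2_conjugate(e1_activate("Ub")), substrate="Mfn1")
print(tagged)  # "Mfn1-Ub": the substrate now carries a ubiquitin label
```

Substrate selectivity lives in the E3 step, which is why the multitude of E3 ligases mentioned above matters: each recognises a different set of targets.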
In particular, parkin recognises proteins on the outer membrane of mitochondria upon cellular insult and mediates the clearance of damaged mitochondria via autophagy and proteasomal mechanisms. Parkin also enhances cell survival by suppressing both mitochondria-dependent and -independent apoptosis. Mutations are associated with mitochondrial dysfunction, leading to neuronal death in Parkinson's disease and aberrant metabolism in tumourigenesis.
== Structure ==
The precise function of parkin is unknown; however, the protein is a component of a multiprotein E3 ubiquitin ligase complex which in turn is part of the ubiquitin-proteasome system that mediates the targeting of proteins for degradation. Mutations in this gene are known to cause a familial form of Parkinson's disease known as autosomal recessive juvenile Parkinson's disease (AR-JP). Moreover, parkin is described to be necessary for mitophagy (autophagy of mitochondria).
However, how loss of function of the parkin protein leads to dopaminergic cell death in this disease is unclear. The prevailing hypothesis is that parkin helps degrade one or more proteins toxic to dopaminergic neurons. Putative substrates of parkin include synphilin-1, CDC-rel1, cyclin E, p38 tRNA synthase, Pael-R, synaptotagmin XI, sp22 and parkin itself (see also ubiquitin ligase). Additionally, parkin contains a C-terminal motif that binds PDZ domains. Parkin has been shown to associate in a PDZ dependent manner with the PDZ domain containing proteins CASK and PICK1.
Like other members of the RING-between-RING (RBR) family of E3 ligases, parkin possesses two RING finger domains and an in-between-RING (IBR) region. RING1 forms the binding site for E2 Ub-conjugating enzyme while RING2 contains the catalytic cysteine residue (Cys431) that cleaves Ub off E2 and transiently binds it to E3 via a thioester bond. Ub transfer is aided by neighbouring residues histidine His433, which accepts a proton from Cys431 to activate it, and glutamate Glu444, which is involved in autoubiquitination. Together these form the catalytic triad, whose assembly is required for parkin activation. Parkin also contains an N-terminal Ub-like domain (Ubl) for specific substrate recognition, a unique RING0 domain and a repressor (REP) region that tonically suppresses ligase activity.
Under resting conditions, the tightly coiled conformation of parkin renders it inactive, as access to the catalytic RING2 residue is sterically blocked by RING0, while the E2 binding domain on RING1 is occluded by Ubl and REP. Activating stimuli disrupt these interdomain interactions and induce parkin to collapse along the RING1-RING0 interface. The active site of RING2 is drawn towards E2-Ub bound to RING1, facilitating formation of the Ub-thioester intermediate. Parkin activation requires phosphorylation of serine Ser65 in Ubl by serine/threonine kinase, PINK1. Addition of a charged phosphate destabilises hydrophobic interactions between Ubl and neighbouring subregions, reducing autoinhibitory effects of this N-terminus domain. Ser65Ala missense mutations were found to ablate Ub-parkin binding whilst inhibiting parkin recruitment to damaged mitochondria. PINK1 also phosphorylates Ub at Ser65, accelerating its discharge from E2 and enhancing its affinity for parkin.
Although structural changes following phosphorylation are uncertain, crystallisation of parkin revealed a cationic pocket in RING0 formed by lysine and arginine residues Lys161, Arg163 and Lys211 that forms a putative phosphate binding site. Considering that RING0 is unique to parkin and that its hydrophobic interface with RING1 buries Cys431 in inactive parkin, targeting of phosphorylated Ub and/or Ubl towards this binding niche might be critical in dismantling autoinhibitory complexes during parkin activation.
== Function ==
=== Mitophagy ===
Parkin plays a crucial role in mitophagy and clearance of reactive oxygen species. Mitophagy is the elimination of damaged mitochondria in autophagosomes, and is dependent on a positive feedback cycle involving synergistic action of parkin and PINK1. Following severe cellular insult, rundown of mitochondrial membrane potential prevents import of PINK1 into the mitochondrial matrix and causes it to aggregate on the outer mitochondrial membrane (OMM). Parkin is recruited to mitochondria following depolarisation and phosphorylated by PINK1, which simultaneously phosphorylates Ub pre-conjugated to mitochondrial membrane proteins. PINK1 and Ub phosphorylation facilitate parkin activation and further assembly of mono- and poly-Ub chains. Considering the proximity of these chains to PINK1, further phosphorylation of Ub at Ser65 is likely, potentiating parkin mobilisation and substrate ubiquitination in a self-reinforcing cycle.
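The self-reinforcing character of this cycle can be illustrated with a toy discrete-time model: phospho-Ub recruits parkin, parkin deposits more Ub, and PINK1 phosphorylates the new Ub, amplifying the signal until the pool of outer-membrane substrates saturates. The gain, seed level and logistic cap below are invented parameters for illustration, not measured kinetics.

```python
# Toy sketch of the PINK1/parkin positive feedback loop; parameters are assumed.
def mitophagy_feedback(steps: int = 10, seed_phospho_ub: float = 0.01,
                       gain: float = 2.0, capacity: float = 1.0) -> list[float]:
    """Return the phospho-Ub signal per round on a damaged mitochondrion.

    Each round: phospho-Ub recruits parkin, parkin conjugates more Ub, and
    PINK1 phosphorylates it, amplifying the signal until the outer-membrane
    substrate pool (capacity) saturates.
    """
    level = seed_phospho_ub
    trace = [level]
    for _ in range(steps):
        recruited_parkin = gain * level                             # recruitment scales with signal
        new_phospho_ub = recruited_parkin * (1 - level / capacity)  # saturating growth
        level = min(capacity, level + new_phospho_ub)
        trace.append(level)
    return trace

print(mitophagy_feedback())  # rises slowly, accelerates, then plateaus near 1.0
```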
Parkin substrates include the mitofusins Mfn1 and Mfn2, large GTPases that promote the fusion of mitochondria into dynamic, tubular complexes that maximise the efficiency of oxidative phosphorylation. However, upon mitochondrial damage, degradation of fusion proteins is necessary to separate the damaged organelle from the network via mitochondrial fission and prevent the corruption of healthy mitochondria. Parkin is therefore required before mitophagy, as it ubiquitinates Mfn1/2, labelling them for proteasomal degradation. Proteomic studies identified additional OMM proteins as parkin substrates, including the fission protein FIS1, its adaptor TBC1D15, and the translocases TOMM20 and TOMM70 that facilitate the movement of proteins such as PINK1 across the OMM. Miro (or RHOT1/RHOT2) is an OMM protein critical for axonal transport, and may be ubiquitinated and targeted towards proteasomal degradation by parkin. Miro breakdown produced a marked decrease in the migration of compromised mitochondria along the axons of mouse hippocampal neurons, reinforcing the importance of parkin in segregating defective mitochondria from their functioning counterparts and limiting the spatial spread of mitochondrial dysfunction prior to autophagy.
During mitophagy, parkin targets VDAC1, a voltage-gated anion channel that undergoes a conformational change upon mitochondrial membrane depolarisation, exposing a cytosolic domain for ubiquitination. Silencing of VDAC1 expression in HeLa cells significantly reduced parkin recruitment to depolarised mitochondria and their subsequent clearance, highlighting the critical role of VDAC1 as a selective marker of mitochondrial damage and instigator of mitophagy. Following Ub conjugation, parkin recruits autophagy receptors such as p62, TAX1BP1 and CALCOCO2, facilitating assembly of autophagosomes that digest defective mitochondria.
=== Cell survival ===
Through activation of NF-κB signalling, parkin enhances survival and protects cells from stress-induced apoptosis. Upon cellular insult, parkin activates the catalytic HOIP subunit of another E3 ligase LUBAC. HOIP triggers assembly of linear Ub polymers on NF-κB essential modulator (NEMO), potentiating transcription of mitochondrial GTPase OPA1. Increased OPA1 translation maintains cristae structure and reduces cytochrome C release from mitochondria, inhibiting caspase-mediated apoptosis. Importantly, parkin activates HOIP with greater potency than other LUBAC-associated factors HOIL-1 and sharpin, meaning that parkin mobilisation significantly enhances tolerance to moderate stressors.
Parkin possesses DNA binding affinity and produces a dose-dependent reduction in the transcription and activity of the pro-apoptotic factor p53. Co-transfection of a p53 promoter construct with truncated versions of parkin into SH-SY5Y neurons revealed that parkin directly binds to the p53 promoter via its RING1 domain. Conversely, parkin may be a transcriptional target of p53 in H460 lung cells, where it mediates the tumour suppressor action of p53. Considering its role in mitochondrial homeostasis, parkin aids p53 in maintaining mitochondrial respiration while limiting glucose uptake and lactate production, thus preventing onset of the Warburg effect during tumourigenesis. Parkin further elevates cytosolic glutathione levels and protects against oxidative stress, characterising it as a critical tumour suppressor with anti-glycolytic and antioxidant capabilities.
== Clinical significance ==
=== Parkinson's disease ===
PARK2 (OMIM *602544) is the parkin gene that may cause a form of autosomal recessive juvenile Parkinson disease (OMIM 600116) due to a mutation in the parkin protein. This form of genetic mutation may be one of the most common known genetic causes of early-onset Parkinson disease. In one study of patients with onset of Parkinson disease prior to age 40 (10% of all PD patients), 18% had parkin mutations, with 5% homozygous mutations. Patients with an autosomal recessive family history of parkinsonism are much more likely to carry parkin mutations if age at onset is less than 20 (80% vs. 28% with onset over age 40).
Patients with parkin mutations (PARK2) do not have Lewy bodies. Such patients develop a syndrome that closely resembles the sporadic form of PD; however, they tend to develop symptoms at a much younger age.
In humans, loss-of-function mutations in the parkin gene PARK2 have been implicated in 50% of inherited and 15% of juvenile-onset sporadic forms of Parkinson's disease (PD). While PD is traditionally regarded as a late-onset neurodegenerative condition characterised by alpha-synuclein-enriched Lewy bodies, autosomal recessive PD due to parkin mutations is often early-onset and lacks the ubiquitinated protein deposits pathognomonic for sporadic PD. Parkin-mutant PD could also involve loss of noradrenergic neurons in the locus coeruleus alongside the hallmark degeneration of dopaminergic neurons in the substantia nigra pars compacta (SNpc). However, its symptoms resemble those of idiopathic PD, with patients presenting with resting tremor, postural instability and bradykinesia.
While mitochondria are essential for ATP generation in any eukaryotic cell, catecholaminergic neurons are particularly reliant on their proper function for clearance of the reactive oxygen species produced by dopamine metabolism, and to supply the high energy requirements of catecholamine synthesis. Their susceptibility to oxidative damage and metabolic stress renders catecholaminergic neurons vulnerable to the neurotoxicity associated with aberrant regulation of mitochondrial activity, as is postulated to occur in both inherited and idiopathic PD. For example, enhanced oxidative stress in neurons, skeletal muscle and platelets, together with reduced activity of complex I of the electron transport chain, has been reported in PD patients, while deletions in the mitochondrial genome have been found in the SNpc.
In accordance with its critical role in mitochondrial quality control, more than 120 pathogenic, PD-inducing mutations have been characterised on parkin. Such mutations may be hereditary or stochastic and are associated with structural instability, reduced catalytic efficiency and aberrant substrate binding and ubiquitination. Mutations can generally be categorised into three groups, depending on their location. Firstly, those clustered around Zn-coordinating residues on RING and IBR might compromise structural integrity and impair catalysis. A second class of mutations, including Thr240Arg, affect residues in and around the E2 binding site and alter autoinhibition of RING1 by REP. Finally, Cys431Phe and Gly430Asp mutations impair ligase activity at the catalytic site and significantly reduce parkin function.
The discovery of numerous non-mitochondrial parkin substrates reinforces the importance of parkin in neuronal homeostasis, beyond its role in mitochondrial regulation. Potent neuroprotective abilities of parkin in attenuating dopaminergic neurotoxicity, mitochondrial swelling and excitotoxicity were demonstrated in cell cultures over-expressing parkin, although the existence of such mechanisms at physiological parkin levels in vivo remains unconfirmed. Another parkin substrate, synphilin-1 (encoded by SNCAIP), is an alpha-synuclein-interacting protein that is enriched in the core of Lewy bodies and ubiquitinated by parkin in a manner abolished by familial PD-associated mutations. Parkin might promote aggregation of alpha-synuclein and synphilin-1 into Lewy bodies, which are conjugated to Lys63-linked poly-Ub chains and directed towards autophagic degradation. Parkin mutations therefore inhibit this mechanism, leading to a toxic accumulation of soluble proteins that overloads the proteasome. Protein aggregation triggers neuronal toxicity, whilst accounting for the lack of ubiquitinated Lewy bodies in parkin-mutant PD. Similarly, native parkin reduces the death of SH-SY5Y neurons by ubiquitinating other Lewy body constituents, such as the p38 subunit of the aminoacyl-tRNA synthetase complex and far upstream element-binding protein 1, through the addition of Lys48-linked poly-Ub chains that direct them towards proteasomal degradation. Parkin also influences axonal transport and vesicle fusion through ubiquitination of tubulin and synaptotagmin XI (SYT11) respectively, giving it a modulatory role in synapse function.
Finally, parkin protects dopaminergic neurons from cytotoxicity induced by the PD-mimetic 6-OHDA, mediated by suppression of neuronal p53 expression and its downstream activation of the apoptotic cascade. Several PD-associated parkin mutations are localised to RING1 and might impair its ability to bind and downregulate the p53 promoter, leading to enhanced p53 expression. Parkin-mutant PD patients also exhibit a four-fold elevation in p53 immunoreactivity, suggesting that failure of parkin-mediated anti-apoptosis might be involved in the etiology of PD.
=== Tumourigenesis ===
Consistent with parkin's potent anti-tumourigenic abilities, inactivating mutations and deletions of parkin have been reported in various tumours. For example, PARK2 copy number was reduced in 85% of glioblastoma samples, while lung cancers were associated with heterozygous deletion of PARK2 at the 6q25-q27 locus. Parkin deficiency further diminished disease-free survival in irradiated mice without increasing tumour incidence, suggesting that parkin deficiencies increase susceptibility to tumour-promoting events rather than initiating tumour formation. Similarly, chromosomal breaks in PARK2 suppressed expression of the afadin scaffold protein in breast cancer, thereby compromising epithelial integrity, enhancing metastatic potential and worsening overall prognosis. Haploinsufficient PARK2 expression, whether due to reduced copy number or DNA hypermethylation, was further detected in spontaneous colorectal cancer, where it accelerated all stages of intestinal adenoma development in mouse models. Parkin is therefore a potent modulator of tumour progression, without directly instigating tumourigenesis.
== Interactions ==
Parkin (ligase) has been shown to interact with:
== References ==
== Further reading ==
== External links ==
GeneReviews/NCBI/NIH/UW entry on Parkin Type of Juvenile Parkinson Disease
parkin+protein at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Parkin_(protein) |
Parkinson's disease (PD) is a complicated neurodegenerative disease that progresses over time and is marked by bradykinesia (slowed movements), tremor (rhythmic shaking), and stiffness. As the condition worsens, some patients may also experience postural instability, in which it becomes difficult to balance and maintain an upright posture. Parkinson's disease is primarily caused by the gradual degeneration of dopaminergic neurons (which produce the chemical messenger dopamine) in the region of the brain known as the substantia nigra, along with other monoaminergic cell groups throughout the brainstem, increased activation of microglia, and the build-up of Lewy bodies (clumps of proteins in the brain) and Lewy neurites in surviving dopaminergic neurons.
Because the etiology of about 80% of PD cases is unknown, they are classified as idiopathic, whereas the other 20% are thought to be genetic. PD risk is increased by variants in specific genes. Research has indicated that the risk of Parkinson's disease (PD) is increased by mutations in the genes encoding leucine-rich repeat kinase 2 (LRRK2), Parkinson's disease-associated deglycase (PARK7), PRKN, PINK1, or SNCA (alpha-synuclein).
Exposure to pesticides, metals, solvents, and other toxicants has been studied as a factor in the development of Parkinson's disease.
== Genetic factors ==
Traditionally, Parkinson's disease has been considered a non-genetic disorder. However, around 15% of individuals with PD have a first-degree relative who has the disease. At least 5–15% of cases are known to occur because of a mutation in one of several specific genes, transmitted in either an autosomal-dominant or autosomal-recessive pattern.
Mutations in specific genes have been conclusively shown to cause PD. A large number of these genes are linked to translation. Genes implicated in autosomal-dominant PD include PARK1 and PARK4, PARK5, PARK8, PARK11 and PARK13, which code for alpha-synuclein (SNCA), UCHL1, leucine-rich repeat kinase 2 (LRRK2 or dardarin), GIGYF2 and HTRA2, respectively. Genes such as PARK2, PARK6, PARK7 and PARK9, which code for parkin (PRKN), PTEN-induced putative kinase 1 (PINK1), DJ-1 and ATP13A2, respectively, have been implicated in the development of autosomal-recessive PD.
Furthermore, mutations in genes including those that code for SNCA, LRRK2 and glucocerebrosidase (GBA) have been found to be risk factors for sporadic PD. In most cases, people with these mutations will develop PD. With the exception of LRRK2, however, they account for only a small minority of cases of PD. The most extensively studied PD-related genes are SNCA and LRRK2.
At least 11 autosomal dominant and nine autosomal recessive gene mutations have been implicated in the development of PD. The autosomal dominant genes include SNCA, PARK3, UCHL1, LRRK2, GIGYF2, HTRA2, EIF4G1, TMEM230, CHCHD2, RIC3, and VPS35. Autosomal recessive genes include PRKN, PINK1, DJ-1, ATP13A2, PLA2G6, FBXO7, DNAJC6, SYNJ1, and VPS13C. Some genes are X-linked or have an unknown inheritance pattern; those include USP24, PARK12, and PARK16. A 22q11 deletion is known to be associated with PD. An autosomal dominant form has been associated with mutations in the LRP10 gene.
=== SNCA gene ===
The role of the SNCA gene is significant in PD because the alpha-synuclein protein is the main component of Lewy bodies, which appear as a primary biomarker in the disease. Missense mutations of the gene (in which a single nucleotide is changed), and duplications and triplications of the locus containing it, have been found in different groups with familial PD. The level of alpha-synuclein expression correlates with disease onset and progression, with SNCA triplication producing earlier onset and faster progression than duplication. Missense mutations in SNCA are rare. On the other hand, multiplications of the SNCA locus account for around 2% of familial cases. Multiplications have been found in asymptomatic carriers, which indicates that penetrance is incomplete or age-dependent.
=== LRRK2 gene ===
The LRRK2 gene (PARK8) encodes a protein called dardarin. The name dardarin was taken from a Basque word for tremor, because this gene was first identified in families from England and the north of Spain. A significant number of autosomal-dominant Parkinson's disease cases are associated with mutations in the LRRK2 gene. Mutations in LRRK2 are the most common known cause of familial and sporadic PD, accounting for approximately 5% of individuals with a family history of the disease and 3% of sporadic cases. There are many different mutations described in LRRK2; however, unequivocal proof of causation exists for only a small number. Mutations in PINK1, PRKN, and DJ-1 may cause mitochondrial dysfunction, an element of both idiopathic and genetic PD. Of related interest are mutations in the progranulin gene that have been found to cause the corticobasal degeneration seen in dementia. This could be relevant in PD cases associated with dementia.
=== GBA gene ===
Mutations in GBA are known to cause Gaucher's disease. Genome-wide association studies, which search for mutated alleles with low penetrance in sporadic cases, have now yielded many positive results. Mendelian genetics are not strictly observed in GBA mutations found in inherited parkinsonism. Incidentally, both gain-of-function and loss-of-function GBA mutations are proposed to contribute to parkinsonism through effects such as increased alpha-synuclein levels. In patients with Parkinson's disease, the OR for carrying a GBA mutation was 5.43 (95% CI 3.89–7.57), confirming that mutations in this gene are a common risk factor for Parkinson's disease.
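For readers unfamiliar with the statistic, an odds ratio with a 95% confidence interval like the one quoted above is the kind of figure derived from a 2×2 case-control table. The sketch below shows the standard Wald calculation; the counts are hypothetical and are not the data behind the quoted estimate.

```python
# Minimal sketch of an odds ratio and its Wald 95% CI; counts are invented.
from math import log, exp, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """OR and 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = exp(log(or_) - 1.96 * se_log_or)
    hi = exp(log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 70 GBA carriers among 1000 PD cases vs 14 among 1000 controls.
print(odds_ratio_ci(70, 930, 14, 986))  # OR ~5.3 with a CI bracketing it
```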
=== Genes underlying familial Parkinson's disease ===
== Environmental factors ==
Exposure to pesticides, metals, solvents, and other toxicants has been studied as a factor in the development of Parkinson's disease. No definitive causal relationship has yet been established. Recent studies also indicate that individuals who sustain mild head injuries (concussions) have an increased risk of developing the disease. As discussed below, exercise and caffeine consumption appear to help decrease the risk.
=== Pesticides ===
Evidence from epidemiological, animal, and in vitro studies suggests that exposure to pesticides increases the risk for Parkinson's disease. One meta-analysis found a risk ratio of 1.6 for ever being exposed to a pesticide, with herbicides and insecticides carrying the most risk. Rural living, drinking well water, and farming were also associated with Parkinson's, which may be partly explained by pesticide exposure. These factors are pertinent to many communities, among them South Asian populations. Organochlorine pesticides (which include DDT) have received the most attention, with several studies reporting that exposure to such pesticides is associated with a doubling of risk for Parkinson's.
Carbon disulfide is a risk factor that has been identified in case studies of industrial workers and has induced parkinsonism in mice. It is mainly used in the manufacture of viscose rayon, cellophane film, rubber and carbon tetrachloride.
=== Metals ===
Lead, which was used in gasoline until 1995 and paint until 1978, is known to damage the nervous system in various ways. A few studies have found that people with high levels of lead in their body had twice the risk of Parkinson's disease. Epidemiological studies on lead, however, have found little evidence for a link with Parkinson's. Iron has been implicated in the etiology of Parkinson's disease, but there is no strong evidence that environmental exposure to it is associated with Parkinson's.
=== Solvent ===
=== Air pollution ===
=== Head injuries ===
A 2012 study suggests that players in the National Football League are three times more likely to die from neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, than the general US population. A 2018 study found a 56% increase in the risk of Parkinson's disease among US military veterans who had suffered traumatic brain injury.
=== Exercise ===
While many environmental factors may exacerbate Parkinson's disease, exercise is considered to be one of the main protective factors for neurodegenerative disorders, including Parkinson's disease. The types of exercise interventions that have been studied can be categorized as either aerobic or goal-based. Aerobic exercise includes physical activity that increases the heart rate. Aerobic exercise is beneficial to the overall brain through mechanisms that promote neuroplasticity, or the rewiring of the brain circuitry. Goal-based exercises are often developed with the guidance of a physical therapist to use movement to improve motor task performance and enhance motor learning. While exercise has consistently been shown to be beneficial, the optimal interventional benefit is still being researched.
=== Caffeine consumption ===
Smokers and nonsmokers with different rates of caffeine consumption were monitored for their susceptibility to PD. The results indicate that higher coffee/caffeine intake is associated with a significantly lower incidence of PD and that this effect appeared to be independent of smoking.
=== Dietary factors ===
Emerging research suggests that diet may influence the risk of developing Parkinson's. A 2023 study found that adherence to a Western dietary pattern—characterized by high consumption of red and processed meats, fried foods, high-fat dairy products, and refined grains—is associated with an increased risk of Parkinson's. Individuals with the highest adherence to this dietary pattern had significantly higher odds—approximately seven times—of developing the disease. Conversely, diets rich in fruits, vegetables, whole grains, and lean proteins have been associated with a reduced risk of Parkinson's, potentially offering protective benefits. Further research is needed to establish causality and better understand the mechanisms underlying these associations.
== Environmental-Genetic factors ==
=== Polymorphism of CYP2D6 gene and pesticide exposure ===
The CYP2D6 gene is primarily expressed in the liver and is responsible for the enzyme cytochrome P450 2D6. A study showed that those who had a mutation of this gene and were exposed to pesticides were twice as likely to develop Parkinson's Disease; those that had the mutation and were not exposed to pesticides were not found to be at an increased risk of developing PD; the pesticides only had a "modest effect" for those without the mutation of the gene.
== See also ==
Trichloroethylene
== References == | Wikipedia/Causes_of_Parkinson's_disease |
In molecular biology and pharmacology, a small molecule or micromolecule is a low molecular weight (≤ 1000 daltons) organic compound that may regulate a biological process, with a size on the order of 1 nm. Many drugs are small molecules; the terms are equivalent in the literature. Larger structures such as nucleic acids and proteins, and many polysaccharides are not small molecules, although their constituent monomers (ribo- or deoxyribonucleotides, amino acids, and monosaccharides, respectively) are often considered small molecules. Small molecules may be used as research tools to probe biological function as well as leads in the development of new therapeutic agents. Some can inhibit a specific function of a protein or disrupt protein–protein interactions.
Pharmacology usually restricts the term "small molecule" to molecules that bind specific biological macromolecules and act as an effector, altering the activity or function of the target. Small molecules can have a variety of biological functions or applications, serving as cell signaling molecules, drugs in medicine, pesticides in farming, and in many other roles. These compounds can be natural (such as secondary metabolites) or artificial (such as antiviral drugs); they may have a beneficial effect against a disease (such as drugs) or may be detrimental (such as teratogens and carcinogens).
== Molecular weight cutoff ==
The upper molecular-weight limit for a small molecule is approximately 900 daltons, which allows it to rapidly diffuse across cell membranes so that it can reach intracellular sites of action. This molecular weight cutoff is also a necessary but insufficient condition for oral bioavailability, as it allows for transcellular transport through intestinal epithelial cells. In addition to intestinal permeability, the molecule must also possess a reasonably rapid rate of dissolution into water, adequate water solubility and moderate to low first-pass metabolism. A somewhat lower molecular weight cutoff of 500 daltons (as part of the "rule of five") has been recommended for oral small molecule drug candidates, based on the observation that clinical attrition rates are significantly reduced if the molecular weight is kept below this limit.
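As a rough illustration of how such cutoffs are applied in practice, the sketch below screens a candidate against the rule-of-five criteria. The thresholds follow Lipinski's published heuristics; the example descriptor values are approximate and supplied by hand rather than computed from a real structure.

```python
# Hedged sketch of a rule-of-five screen; descriptor values are supplied manually.
def passes_rule_of_five(mol_weight: float, logp: float,
                        h_bond_donors: int, h_bond_acceptors: int) -> bool:
    """True if no more than one Lipinski criterion is violated."""
    violations = sum([
        mol_weight > 500,        # daltons
        logp > 5,                # octanol-water partition coefficient
        h_bond_donors > 5,
        h_bond_acceptors > 10,
    ])
    return violations <= 1

# Aspirin-like descriptors (approximate): MW ~180 Da, logP ~1.2, 1 donor, 4 acceptors.
print(passes_rule_of_five(180.2, 1.2, 1, 4))  # True -> plausible oral small molecule
```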
== Drugs ==
Most pharmaceuticals are small molecules, although some drugs can be proteins (e.g., insulin and other biologic medical products). With the exception of therapeutic antibodies, many proteins are degraded if administered orally and most often cannot cross cell membranes. Small molecules are more likely to be absorbed, although some of them are only absorbed after oral administration if given as prodrugs. One advantage that small molecule drugs (SMDs) have over "large molecule" biologics is that many small molecules can be taken orally whereas biologics generally require injection or another parenteral administration. Small molecule drugs are also typically simpler to manufacture and cheaper for the purchaser. A downside is that not all targets are amenable to modification with small-molecule drugs; bacteria and cancers are often resistant to their effects.
== Secondary metabolites ==
A variety of organisms including bacteria, fungi, and plants, produce small molecule secondary metabolites also known as natural products, which play a role in cell signaling, pigmentation and in defense against predation. Secondary metabolites are a rich source of biologically active compounds and hence are often used as research tools and leads for drug discovery. Examples of secondary metabolites include:
== Research tools ==
Enzymes and receptors are often activated or inhibited by endogenous proteins, but can also be inhibited or activated by endogenous or exogenous small-molecule inhibitors or activators, which can bind to the active site or to an allosteric site.
An example is the teratogen and carcinogen phorbol 12-myristate 13-acetate, a plant terpene that activates protein kinase C and promotes cancer, making it a useful investigative tool. There is also interest in creating small-molecule artificial transcription factors to regulate gene expression; examples include wrenchnolol (a wrench-shaped molecule).
Binding of ligand can be characterised using a variety of analytical techniques such as surface plasmon resonance, microscale thermophoresis or dual polarisation interferometry to quantify the reaction affinities and kinetic properties and also any induced conformational changes.
== Anti-genomic therapeutics ==
Small-molecule anti-genomic therapeutics, or SMAT, refers to a biodefense technology that targets DNA signatures found in many biological warfare agents. SMATs are new, broad-spectrum drugs that unify antibacterial, antiviral and anti-malarial activities into a single therapeutic that offers substantial cost benefits and logistic advantages for physicians and the military.
== See also ==
Pharmacology
Druglikeness
Lipinski's rule of five
Metabolite
Chemogenomics
Neurotransmitter
Peptidomimetic
Macromolecule
== References ==
== External links ==
Small+Molecule+Libraries at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Small_molecules |
An anti-α-synuclein drug, or an α-synuclein inhibitor, is a drug which blocks or inhibits α-synuclein. α-Synuclein is a protein which is thought to be involved in the development and progression of α-synucleinopathies including Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy. Anti-α-synuclein drugs are under development for treatment of Parkinson's disease and other α-synuclein-related diseases. Examples include the monoclonal antibodies prasinezumab and cinpanemab, which both failed to show effectiveness in slowing the progression of Parkinson's disease in phase 2 clinical trials. Other anti-α-synuclein drugs, like the monoclonal antibody exidavnemab, the α-synuclein vaccines PD01A and PD03A, and the small-molecule α-synuclein misfolding and aggregation inhibitors minzasolmin and emrusolmin, are also under development. Memantine is also being studied as a potential disease-modifying treatment for Parkinson's disease by inhibiting cell-to-cell transmission of α-synuclein and is in a phase 3 trial for this purpose.
== See also ==
Anti-amyloid drugs
== References == | Wikipedia/Anti-α-synuclein_drug |
Dopaminergic cell groups, DA cell groups, or dopaminergic nuclei are collections of neurons in the central nervous system that synthesize the neurotransmitter dopamine. In the 1960s, dopaminergic neurons or dopamine neurons were first identified and named by Annica Dahlström and Kjell Fuxe, who used histochemical fluorescence. The subsequent discovery of genes encoding enzymes that synthesize dopamine, and transporters that incorporate dopamine into synaptic vesicles or reclaim it after synaptic release, enabled scientists to identify dopaminergic neurons by labeling gene or protein expression that is specific to these neurons.
In the mammalian brain, dopaminergic neurons form a semi-continuous population extending from the midbrain through the forebrain, with eleven named collections or clusters among them.
== Cell group A8 ==
Group A8 is a small group of dopaminergic cells in rodents and primates. It is located in the midbrain reticular formation dorsolateral to the substantia nigra at the level of the red nucleus and caudally. In the mouse it is identified with the retrorubral field as defined by classical stains.
== Cell group A9 ==
Group A9 is the most densely packed group of dopaminergic cells, and is located in the ventrolateral midbrain of rodents and primates. It is for the most part identical with the pars compacta of the substantia nigra as seen from the accumulation of neuromelanin pigment in the midbrain of healthy, adult humans.
== Cell group A10 ==
Group A10 is the largest group of dopaminergic cells in the ventral midbrain tegmentum of rodents and primates. The cells are located for the most part in the ventral tegmental area, the linear nucleus and, in primates, the part of central gray of the midbrain located between the left and right oculomotor nuclear complexes.
== Cell group A11 ==
Group A11 is a small group of dopaminergic cells located in the posterior periventricular nucleus and the intermediate periventricular nucleus of the hypothalamus in the macaque. In the rat, small numbers of cells assigned to this group are also found in the posterior nucleus of hypothalamus, the supramammillary area and the reuniens nucleus. Dopaminergic cells in A11 may be important in the modulation of auditory processing.
== Cell group A12 ==
Group A12 is a small group of cells in the arcuate nucleus of the hypothalamus in primates. In the rat a few cells belonging to this group are also seen in the anteroventral portion of the paraventricular nucleus of the hypothalamus.
== Cell group A13 ==
Group A13 is distributed in clusters that, in the primate, are ventral and medial to the mammillothalamic tract of the hypothalamus; a few extend into the reuniens nucleus of the thalamus. In the mouse, A13 is located ventral to the mammillothalamic tract of the thalamus in the zona incerta.
== Cell group A14 ==
Group A14 consists of a few cells observed in and near the preoptic periventricular nucleus of the primate. In the mouse, cells in the anterodorsal preoptic nucleus are assigned to this group.
== Cell group A15 ==
Group A15 has been demonstrated as dopaminergic in a few species, such as sheep, and is immunoreactive for tyrosine hydroxylase, an enzyme in the dopamine synthesis pathway, in many other species including rodents and primates. It is located in ventral and dorsal components within the preoptic periventricular nucleus and adjacent parts of the anterior hypothalamic region. It is continuous caudally with the dopaminergic group A14.
== Cell group A16 ==
Group A16 is located in the olfactory bulb of vertebrates, including rodents and primates.
== Cell group Aaq ==
Group Aaq is a sparse group of cells located in the rostral half of the central gray of the midbrain in primates. It is more prominent in the squirrel monkey (Saimiri) than the macaque.
== Telencephalic group ==
This group is a population of cells immunoreactive for dopamine and tyrosine hydroxylase that are broadly distributed in the rostral forebrain, including such structures as: substantia innominata, diagonal band, olfactory tubercle, prepyriform area, striatum (at levels rostral to the anterior commissure), claustrum, and deep cortical layers of all gyri of the frontal lobe rostral to the head of the caudate nucleus; the cells are also numerous in intervening white matter, including the external capsule, extreme capsule and frontal white matter. They are found in the rodent, the macaque and the human.
== See also ==
Dopaminergic pathways
History of catecholamine research
== Footnotes ==
== References ==
Dahlstrom A, Fuxe K (1964). "Evidence for the existence of monoamine-containing neurons in the central nervous system". Acta Physiologica Scandinavica. 62: 1–55. PMID 14229500.
Dubach MF (1994). "11:Telencephalic dopamine cells in monkeys, humans and rats". In Smeets WJ, Reiner A (eds.). Phylogeny and Development of Catecholamine Systems in the CNS of Vertebrates. Cambridge, England: University Press. ISBN 978-0-5214-4251-0. OCLC 29952121.
Felten DL, Sladek Jr JR (1983). "Monoamine distribution in primate brain V. Monoaminergic nuclei: anatomy, pathways and local organization". Brain Research Bulletin. 10 (2): 171–284. doi:10.1016/0361-9230(83)90045-x. PMID 6839182.
Fuxe K, Hoekfelt T, Ungerstedt U (1970). "Morphological and functional aspects of central monoamine neurons". International Review of Neurobiology. 13: 93–126. doi:10.1016/S0074-7742(08)60167-1.
Nevue AA, Felix II RA, Portfors CV (November 2016). "Dopaminergic projections of the subparafascicular thalamic nucleus to the auditory brainstem". Hearing Research. 341: 202–209. doi:10.1016/j.heares.2016.09.001. PMC 5111623. PMID 27620513.
Paxinos G, Franklin KB (2001). The Mouse Brain in Stereotaxic Coordinates (2nd ed.). San Diego: Academic Press. ISBN 978-0-1254-7636-2. OCLC 493265554.
Smeets WJ, Reiner A (1994). "20:Catecholamines in the CNS of vertebrates: current concepts of evolution and functional significance". In Smeets WJ, Reiner A (eds.). Phylogeny and Development of Catecholamine Systems in the CNS of Vertebrates. Cambridge, England: University Press. ISBN 978-0-5214-4251-0. OCLC 29952121.
Tillet Y (1994). "9: Catecholaminergic neuronal systems in the diencephalon of mammals". In Smeets WJ, Reiner A (eds.). Phylogeny and Development of Catecholamine Systems in the CNS of Vertebrates. Cambridge, England: University Press. ISBN 978-0-5214-4251-0. OCLC 29952121. | Wikipedia/Dopaminergic_neuron |
Impulse-control disorder (ICD) is a class of psychiatric disorders characterized by impulsivity – failure to resist a temptation, an urge, or an impulse – or by an inability to refrain from voicing a thought.
The fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5) that was published in 2013 includes a new chapter on disruptive, impulse-control, and conduct disorders covering disorders "characterized by problems in emotional and behavioral self-control". Five behavioral stages characterize impulsivity: an impulse, growing tension, pleasure on acting, relief from the urge, and finally guilt (which may or may not arise).
== Types ==
Disorders characterized by impulsivity that were not categorized elsewhere in the DSM-IV-TR were also included in the category "Impulse-control disorders not elsewhere classified". Trichotillomania (hair-pulling) and skin-picking were moved in DSM-5 to the obsessive-compulsive chapter. Additionally, other disorders not specifically listed in this category are often classed as impulsivity disorders. Terminology was changed in the DSM-5 from "Not Otherwise Classified" to "Not Elsewhere Classified".
=== Sexual compulsion ===
Sexual compulsion includes an increased urge in sexual behavior and thoughts. This compulsion may also lead to several consequences in the individual's life, including risky partner selection, increased chance for STIs and depression, as well as unwanted pregnancy. There is not yet a reliable estimate of its prevalence, owing to the secretiveness of the disorder. However, research conducted in the early 1990s in the United States gave prevalence estimates between 5 and 6% of the U.S. population, with rates higher in males than in females.
=== Internet addiction ===
The disorder of Internet addiction has only recently been taken into consideration and has been added as a form of ICD. It is characterized by excessive and damaging usage of the Internet, with increased amounts of time spent chatting, web surfing, gambling, shopping or consuming pornography. Excessive and problematic Internet use has been reported across all age, social, economic, and educational ranges. Although initially thought to occur mostly in males, increasing rates have also been observed in females. However, no epidemiological study has yet been conducted to understand its prevalence.
=== Compulsive shopping ===
Compulsive shopping or buying is characterized by a frequent irresistible urge to shop even if the purchases are not needed or cannot be afforded. The prevalence of compulsive buying in the U.S. has been estimated to be 2–8% of the general adult population, with 80–95% of these cases being females. The onset is believed to occur in late teens or early twenties and the disorder is considered to be generally chronic.
=== Pyromania ===
Pyromania is characterized by impulsive and repetitive urges to deliberately start fires.
Because of its nature, the number of studies performed on fire-setting is understandably limited. However, studies done on children and adolescents with pyromania have reported its prevalence to be between 2.4 and 3.5% in the United States. It has also been observed that the incidence of fire-setting is more common in juvenile and teenage boys than in girls of the same age.
=== Intermittent explosive disorder ===
Intermittent explosive disorder or IED is a clinical condition of experiencing recurrent aggressive episodes that are out of proportion to any given stressor. Earlier studies reported a prevalence rate between 1–2% in a clinical setting, but a study done by Coccaro and colleagues in 2004 reported about 11.1% lifetime prevalence and 3.2% one-month prevalence in a moderately sized sample of individuals (n=253). Based on the study, Coccaro and colleagues estimated that 1.4 million individuals in the US have current IED and 10 million have lifetime IED.
=== Kleptomania ===
Kleptomania is characterized by an impulsive urge to steal purely for the sake of gratification.
In the U.S. the prevalence of kleptomania is unknown but has been estimated at 6 per 1000 individuals. Kleptomania is also thought to be the cause of 5% of annual shoplifting in the U.S. If true, 100,000 arrests are made in the U.S. annually due to kleptomaniac behavior.
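The back-calculation implied by these figures is straightforward: if 100,000 arrests correspond to the 5% of shoplifting attributed to kleptomania, the text presupposes roughly two million shoplifting arrests per year. A minimal sketch of that arithmetic:

```python
# Back-calculating the total implied by the figures above. The 5% share and
# 100,000 arrests come from the text; the two-million total is derived, not sourced.
kleptomania_share = 0.05
kleptomania_arrests = 100_000
implied_total_arrests = kleptomania_arrests / kleptomania_share
print(f"{implied_total_arrests:,.0f} shoplifting arrests implied per year")  # 2,000,000
```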
== Signs and symptoms ==
The signs and symptoms of impulse-control disorders vary based on the age of the persons with them, the actual type of impulse-control that they are struggling with, the environment in which they are living, and whether they are male or female.
=== Co-morbidity ===
Complications of late Parkinson's disease may include a range of impulse-control disorders, including eating, buying, compulsive gambling, sexual behavior, and related behaviors (punding, hobbyism and walkabout). Prevalence studies suggest that 13.6–36.0% of Parkinson's patients exhibit at least one form of ICD. There is a significant co-occurrence of pathological gambling (PG) and personality disorder, which is suggested to be caused partly by their common "genetic vulnerability". The degree of heritability of ICD is similar to that of other psychiatric disorders, including substance use disorder. A genetic factor has also been found in the development of ICD, just as there is for substance use disorder. About 12–20% of the genetic risk and 3–8% of the environmental risk for subclinical PG in a population is shared with the risk for alcohol dependence. There is a high rate of co-morbidity between ADHD and other impulse-control disorders.
== Mechanism ==
Dysfunction of the striatum may prove to be the link between obsessive–compulsive disorder (OCD), ICD and substance use disorder (SUD). According to research, the "impulsiveness" that occurs in the later stages of OCD is caused by progressive dysfunction of the ventral striatal circuit, whereas in ICD and SUD, increased dysfunction of the dorsal striatal circuit increases the "ICD and SUD behaviours that are driven by the compulsive processes". OCD and ICD have traditionally been viewed as two very different disorders, the former generally driven by the desire to avoid harm and the latter by reward-seeking behaviour. Still, certain behaviors are similar in both, for example the compulsive actions of ICD patients and the reward-seeking behavior (for example hoarding) of OCD patients.
== Treatment ==
Impulse-control disorders have two treatment options: psychosocial and pharmacological. Treatment methodology is informed by the presence of comorbid conditions.
=== Medication ===
In the case of pathological gambling, along with fluvoxamine, clomipramine has been shown to be effective in treatment, reducing the problems of pathological gambling in a subject by up to 90%. In trichotillomania, the use of clomipramine has again been found to be effective, while fluoxetine has not produced consistent positive results. Fluoxetine has, however, produced positive results in the treatment of pathological skin picking disorder, although more research is needed to confirm this. Fluoxetine has also been evaluated in treating IED and demonstrated significant improvement in reducing the frequency and severity of impulsive aggression and irritability in a sample of 100 subjects who were randomized into a 14-week, double-blind study. Despite a large decrease in impulsive aggressive behavior from baseline, only 44% of fluoxetine responders and 29% of all fluoxetine subjects were considered to be in full remission at the end of the study. Paroxetine has been shown to be somewhat effective, although the results are inconsistent. Another medication, escitalopram, has been shown to improve the condition of subjects with pathological gambling and anxiety symptoms. Overall, although SSRIs have shown positive results in the treatment of pathological gambling, the inconsistency of these results might suggest a neurological heterogeneity across the impulse-control disorder spectrum.
=== Psychosocial ===
The psychosocial approach to the treatment of ICDs includes cognitive behavioral therapy (CBT) which has been reported to have positive results in the case of treatment of pathological gambling and sexual addiction. There is general consensus that cognitive-behavioural therapies offer an effective intervention model.
Pyromania
Pyromania is harder to control in adults due to lack of co-operation; however, CBT is effective in treating child pyromaniacs. (Frey 2001)
Intermittent explosive disorder
Along with several other methods of treatment, cognitive behavioural therapy has also been shown to be effective in the case of intermittent explosive disorder. Cognitive Relaxation and Coping Skills Therapy (CRCST) consists of 12 sessions, beginning with relaxation training, followed by cognitive restructuring and then exposure therapy. The later sessions focus on resisting aggressive impulses and taking other preventative measures.
Kleptomania
In the case of kleptomania, the cognitive behaviour techniques used in these cases consists of covert sensitization, imaginal desensitization, systematic desensitization, aversion therapy, relaxation training, and "alternative sources of satisfaction".
Compulsive buying
Although compulsive buying falls under the category of Impulse-control disorder – Not Otherwise Specified in the DSM-IV-TR, some researchers have suggested that it consists of core features of impulse-control disorders: preceding tension, difficult-to-resist urges, and relief or pleasure after acting. The efficacy of cognitive behavior therapy for compulsive buying has not been fully determined; however, common techniques for treatment include exposure and response prevention, relapse prevention, cognitive restructuring, covert sensitization, and stimulus control.
== See also ==
Behavioral addiction
Binge eating disorder
Body-focused repetitive behavior
Child pyromaniac
Dopamine dysregulation syndrome
Biopsychosocial model
== References ==
== External links == | Wikipedia/Impulse_control_disorders |
Parkinson's disease (PD), the second most common neurodegenerative disease after Alzheimer's disease, affects 1% of people over 60 years of age. In the past three decades, the number of PD cases has more than doubled globally, from 2.5 million in 1990 to 6.1 million in 2016. As of 2022, there are ~10 million PD cases globally. In the United States, the prevalence of PD by 2030 is estimated to be ~1.24 million. These numbers are expected to increase as life expectancy and the age of the general population increase. PD is considered to be a multisystem and multifactorial disease, in which many factors, such as the environment, gut, lifestyle and genetics, play a significant role in the onset and progression of the disease.
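As a quick check on these figures, the 1990-to-2016 rise corresponds to a compound annual growth rate of roughly 3.5%; extending that constant rate to 2022 yields only about 7.5 million cases, so the reported ~10 million implies the growth has accelerated. A minimal sketch of the arithmetic (case counts from the text; the constant-rate assumption is the simplification):

```python
# Worked check of the growth figures above; constant exponential growth is assumed.
cases_1990, cases_2016 = 2.5e6, 6.1e6
years = 2016 - 1990
cagr = (cases_2016 / cases_1990) ** (1 / years) - 1
print(f"{cagr:.1%}")                          # ~3.5% per year
print(f"{cases_2016 * (1 + cagr) ** 6:.2e}")  # ~7.5e6 by 2022, below the reported ~10 million
```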
== Pathology ==
The neuropathological hallmarks of PD include the loss of dopaminergic neurons in the substantia nigra pars compacta region of the brain and the presence of aggregated alpha-synuclein. Under physiological conditions, alpha-synuclein, a protein encoded by the SNCA gene, is found at the synapses of neurons, where it regulates synaptic signaling and plasticity by modulating the release of neurotransmitters. It is most abundant in the brain and is found to a smaller extent in other tissues, such as the gut and heart. Under pathological conditions in PD, alpha-synuclein undergoes a conformational change, resulting in a misfolded, insoluble protein that aggregates into beta-sheets and forms protein inclusions called Lewy bodies. Aggregated alpha-synuclein loses its ability to bind at the membrane, disrupting cellular processes and synaptic formation. It is hypothesized to propagate in a prion-like manner, spreading within and between cells and eventually leading to neurodegeneration and the loss of dopaminergic neurons. These pathological changes are also found peripherally (outside of the central nervous system, CNS) in early stages of PD. However, the mechanisms involved in these changes are not well understood.
== Symptomology ==
The clinical presentation of PD includes both motor and non-motor symptoms. The cardinal motor symptoms of PD are rigidity, abnormal gait, resting tremor, stiffness, bradykinesia, and dystonia. Non-motor symptoms include autonomic dysfunction, olfactory dysfunction (hyposmia), cognitive impairment, urogenital complications, depression, asymmetric vague shoulder pain, gastrointestinal (GI) dysfunction, and REM sleep behavior disorder (acting out dreams during REM sleep). In early stages of PD, non-motor symptoms occur prior to the onset of motor symptoms, contributing to a delay in PD diagnosis and even misdiagnosis in up to 15% of cases. By the time motor symptoms appear and treatment is initiated, more than 50% of dopaminergic neurons in the substantia nigra have already been lost. Therefore, non-motor symptoms are valuable biomarkers of early stages of PD and provide a potential avenue for early disease diagnosis and early intervention.
== Gastrointestinal dysfunction ==
GI symptoms can occur up to 20 years prior to the onset of clinical motor symptoms. The potential involvement of the gut in PD was first suggested over 200 years ago by James Parkinson, who described PD as "a disordered state of the stomach and bowels (that) may induce a morbid action in a part of the medulla spinalis". However, this crosstalk between the gut and the brain was not fully understood and was not extensively explored in PD until the last two decades. Increasing evidence points to a role for gastrointestinal (GI) dysfunction in the initiation of neurodegeneration as well as in the pathogenesis of PD.
In the upper GI tract, dysphagia is a swallowing impairment that results in inadequate mastication (chewing), a body mass index below 20, weight loss and malnutrition. Drooling is also common, resulting from the difficulties with swallowing rather than from saliva secretion, which is actually decreased in PD. Oropharyngeal dysphagia results in choking or aspiration. Swallowing involves three phases (oral, pharyngeal and esophageal), of which the first two are affected in oropharyngeal dysphagia. This motor symptom affects 35% of patients and worsens with disease progression, but does improve with medication. Gastroparesis, a paralysis of the stomach, contributes to 50% of patients feeling bloated and full, while 15% experience vomiting and nausea. Solid-meal scintigraphy as well as a breath test are used to measure gastric emptying time (GET), which is prolonged in PD patients; other methods include MRI-based imaging and electromagnetic capsule systems. Small intestinal bacterial overgrowth (SIBO) results in diarrhea, abdominal discomfort and bloating, and can lead to absorption issues with PD medications. In the lower GI tract, constipation, characterized by straining during defecation or having fewer than 3 bowel movements per week, occurs in 40–50% of PD patients.
== Microbiome-GBA dysfunction in PD ==
=== Braak's hypothesis ===
Aggregated alpha-synuclein pathology in the enteric nervous system (ENS) of the GI tract of PD patients was only unveiled in the 1980s. Within the GI tract, pathology follows a rostral-caudal gradient, with no pathology in the upper esophagus, the most affected regions in the lower esophagus (contributing to the swallowing symptoms) and the stomach, and sparse pathology in the colon. Autopsy studies performed in PD patients showed pathology in the dorsal motor nucleus of the vagus (DMNV), olfactory bulb and vagus nerve. Based on these findings, Braak et al. proposed a retrograde spreading of alpha-synuclein (known as Braak's hypothesis), whereby dysfunction of the gut (resulting from altered microbiota or other contributing factors discussed below) triggers the aggregation of alpha-synuclein within the gut prior to spreading to the brain. This was further supported by the decrease in PD risk after truncal vagotomy, a procedure that involves cutting the fibers of the vagus nerve that connect to the stomach. Additionally, many animal studies have shown the bi-directional movement of alpha-synuclein between the CNS and ENS. Alpha-synuclein can be detected in the visceral motor nerve terminals and the preganglionic vagus nerve after the overexpression of alpha-synuclein in the midbrain of rats. Conversely, injections of preformed fibrils (pathological alpha-synuclein) into the colon of mice induced pathological changes in endogenous alpha-synuclein in the brainstem.
=== Altered microbiota in PD ===
The microbiota, located throughout the GI tract, contains thousands of different microbial species that have evolved to form a mutualistic and symbiotic relationship with the host. The microbiota exhibits various functions: structural, metabolic, and immune-based. Structurally, it maintains the intestinal barrier and regulates the growth of the epithelial cells. Metabolically, it is involved in the synthesis or degradation of many compounds, such as amino acids, vitamins, lipids, bile acids and indigestible food. It also regulates the immune response, protecting the host from pathogens. Gut dysbiosis occurs when there is an alteration in the composition of the gut microbiota that leads to dysfunction and an unhealthy state.
An overgrowth of bacteria in the small intestine can metabolize levodopa into dopamine, preventing it from reaching the brain.
=== Contributing factors of Microbiome-GBA Dysfunction in PD ===
There are many key factors involved in the modulation and dysfunction of the microbiome-GBA in PD.
==== Genetics ====
Genome-wide association studies (GWAS) have linked several autosomal dominant (SNCA, LRRK2, GBA) and recessive (PINK1, DJ-1 (PARK7), parkin) mutations to the development of PD. However, there is variable penetrance of even the most common genetic risk factor for PD, LRRK2, where <30% of carriers develop PD. This further suggests the involvement of other factors, such as the environment, in the increased vulnerability to developing the disease and in the clinical presentation of symptoms of genetic forms of PD.
LRRK2: LRRK2 is expressed by innate and adaptive immune cells as well as by enteric neurons in the small intestine. After exposure to certain enteric pathogens, LRRK2 modulates the intestinal inflammatory response via the secretion of anti-microbial components. This is also seen in patients with Crohn's disease, where greater levels of LRRK2 are found in the colon. In in vitro studies, the LRRK2 G2019S mutation results in changes in intestinal epithelial gene expression associated with GI impairment.
SNCA: many pathogens have been associated with the SNCA gene.
PINK1 and PRKN: these genes play a role in the clearance of damaged mitochondria and are associated with the mitochondrial dysfunction of PD. Infection with intestinal Gram-negative bacteria in PINK1-knockout mice results in an increased inflammatory response, dopaminergic degeneration and PD-like motor symptoms.
==== Aging ====
Aging, a major risk factor for PD, results in alterations to the gut microbiota's biodiversity, which increases from infancy to adulthood and begins to decline with age. Many factors contribute to this decline, such as the immune system, changes in lifestyle, the environment, medications, other diseases, and organ dysfunction. The decrease in biodiversity with age is associated with a decrease in intestinal epithelial barrier integrity, resulting in the leakage of neurotransmitters, lipopolysaccharide (LPS, an endotoxin found on Gram-negative bacteria), short-chain fatty acids (SCFA, which are systemically anti-inflammatory) and bacterial antigens, as well as the breakdown of the neuro-immune system.
==== Inflammation ====
Inflammation plays a critical role in PD. Intestinal and peripheral inflammation further worsen the neuroinflammatory response and PD progression. Helicobacter pylori (HP) infection may play a role in the pathogenesis and symptomology of PD. HP occurs at a higher prevalence in PD and has been associated in some cases with more severe motor symptoms of the disease. Some studies showed an improvement of symptoms with the eradication of HP, while others reported a 45% increased risk of PD. The elimination of HP can also increase the bioavailability of L-dopa.
Some PD patients show intestinal inflammation and a breakdown of intestinal epithelial barrier integrity, with elevated markers of intestinal inflammation and barrier dysfunction. Peripheral immune cells are also found in the brains of patients with PD.
There are similarities between PD and both inflammatory bowel disease (IBD) and irritable bowel syndrome (IBS).
==== Environmental toxins ====
There is an increased risk of PD with exposure to herbicides and pesticides on farms, as well as to bacteria found in drinking well water. Exposure to herbicides and pesticides in animal models results in movement disorders and the loss of dopaminergic neurons. In other animal studies, exposure to the pesticide rotenone resulted in alpha-synuclein being released from enteric neurons into the extracellular matrix. In vitro studies also showed that secreted alpha-synuclein can undergo transneuronal retrograde movement, where it can be taken up by other neurons or non-neuronal cell types. Moreover, the gut of PD patients exposed to herbicides and pesticides showed an increase in xenobiotic degradation pathways.
==== Lifestyle ====
Food: Many epidemiological studies demonstrate the significant impact of diet on the onset and exacerbation of PD through its influence on the composition of the gut microbiota. There is a lower incidence and slower progression of PD with the consumption of a Mediterranean diet. Western diets have less dietary fiber and more fats and sugars, while Mediterranean diets consist of vegetables, nuts, fruits, whole grains and healthy fats. Diets rich in fiber increase bacteria that produce SCFA, which have an anti-inflammatory effect, whereas Western diets result in a lower abundance of such SCFA-producing bacteria.
Fluids: Caffeine drinkers and smokers have a decreased risk of PD, by 60% and 30%, respectively, potentially through the modulation of the gut-brain axis. The consumption of caffeine or smoking alters the microbiota composition, which may lower intestinal inflammation and decrease alpha-synuclein aggregation. This is further supported by animal and human studies that have demonstrated an increase in Bifidobacteria, which have anti-inflammatory effects, after coffee consumption. Other components of coffee, such as polyphenols, increase gut motility and regulate the microbiome. Caffeine antagonizes (blocks) the adenosine A2A receptor, resulting in a neuroprotective effect on dopaminergic neurons. Flavonoids (found in tea, red wine, oranges, apples and berry fruits) have antioxidant and antimicrobial properties and have been linked to a lower risk of PD. There is no association between PD risk and dairy products. Urate, a potent antioxidant, is associated with a decreased risk and slower progression of PD. There are conflicting results on the association between alcohol and PD risk: while some studies report an increased risk, other studies demonstrate a decreased risk that may depend on the type of alcohol.
Exercise: exercise has also been associated with enriching the microbiota with more beneficial bacteria, such as Erysipelotrichaceae, Roseburia, Clostridiales and Lachnospiraceae.
=== Targeting the Microbiota-GBA in PD ===
== References == | Wikipedia/Parkinson's_disease_and_gut-brain_axis |
The peripheral nervous system (PNS) is one of two components that make up the nervous system of bilateral animals, with the other part being the central nervous system (CNS). The PNS consists of nerves and ganglia, which lie outside the brain and the spinal cord. The main function of the PNS is to connect the CNS to the limbs and organs, essentially serving as a relay between the brain and spinal cord and the rest of the body. Unlike the CNS, the PNS is not protected by the vertebral column and skull, or by the blood–brain barrier, which leaves it exposed to toxins.
The peripheral nervous system can be divided into a somatic division and an autonomic division. Each of these can further be differentiated into a sensory and a motor sector. In the somatic nervous system, the cranial nerves are part of the PNS, with the exceptions of the olfactory nerve (cranial nerve I) and olfactory epithelia, and the optic nerve (cranial nerve II) along with the retina, which are considered parts of the central nervous system based on developmental origin. The second cranial nerve is not a true peripheral nerve but a tract of the diencephalon. Cranial nerve ganglia, as with all ganglia, are part of the PNS. The autonomic nervous system exerts involuntary control over smooth muscle and glands.
== Structure ==
The peripheral nervous system can be divided into a somatic and an autonomic division, which are part of the somatic nervous system and the autonomic nervous system, respectively. The somatic nervous system is under voluntary control, and transmits signals from the brain to end organs such as muscles. The sensory nervous system is part of the somatic nervous system and transmits signals from senses such as taste and touch (including fine touch and gross touch) to the spinal cord and brain. The autonomic nervous system is a "self-regulating" system which influences the function of organs outside voluntary control, such as the heart rate, or the functions of the digestive system.
=== Somatic nervous system ===
The somatic nervous system includes the sensory nervous system (e.g. the somatosensory system) and consists of sensory nerves and somatic nerves, as well as many nerves that serve both functions.
In the head and neck, cranial nerves carry somatosensory data. There are twelve cranial nerves, ten of which originate from the brainstem, and mainly control the functions of the anatomic structures of the head with some exceptions. One unique cranial nerve is the vagus nerve, which receives sensory information from organs in the thorax and abdomen. The other unique cranial nerve is the accessory nerve which is responsible for innervating the sternocleidomastoid and trapezius muscles, neither of which are located exclusively in the head.
For the rest of the body, spinal nerves are responsible for somatosensory information. These arise from the spinal cord. Usually they arise as a web ("plexus") of interconnected nerve roots that arrange to form single nerves. These nerves control the functions of the rest of the body. In humans, there are 31 pairs of spinal nerves: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal. These nerve roots are named according to the spinal vertebrae to which they are adjacent. In the cervical region, the spinal nerve roots come out above the corresponding vertebrae (i.e., the nerve root between the skull and the 1st cervical vertebra is called spinal nerve C1). From the thoracic region to the coccygeal region, the spinal nerve roots come out below the corresponding vertebrae. This method creates a problem when naming the spinal nerve root between C7 and T1 (so it is called spinal nerve root C8). In the lumbar and sacral regions, the spinal nerve roots travel within the dural sac, and below the level of L2 they travel as the cauda equina.
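Because the naming convention above is purely rule-based, it can be captured in a short program. The following Python snippet is a minimal illustrative sketch of that rule; the function name and output phrasing are invented for this example and are not any standard anatomical nomenclature tool.

```python
# Minimal sketch of the spinal nerve root naming rule described above.
# Assumption: input is a root name such as "C3", "T5", "L4" or "S1".

def exit_position(root: str) -> str:
    """Describe where a spinal nerve root exits relative to the vertebrae."""
    region, number = root[0].upper(), int(root[1:])
    if region == "C":
        if number == 1:
            return "C1 exits between the skull and the 1st cervical vertebra"
        if number <= 7:
            return f"C{number} exits above the C{number} vertebra"
        return "C8 exits between the C7 and T1 vertebrae"  # there is no C8 vertebra
    # Thoracic, lumbar, sacral and coccygeal roots exit below the
    # correspondingly numbered vertebra.
    return f"{region}{number} exits below the {region}{number} vertebra"

for root in ("C1", "C5", "C8", "T1", "L4"):
    print(exit_position(root))
```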
==== Cervical spinal nerves (C1–C4) ====
The first four cervical spinal nerves, C1 through C4, split and recombine to produce a variety of nerves that serve the neck and the back of the head.
Spinal nerve C1 is called the suboccipital nerve, which provides motor innervation to muscles at the base of the skull.
C2 and C3 form many of the nerves of the neck, providing both sensory and motor control. These include the greater occipital nerve, which provides sensation to the back of the head, the lesser occipital nerve, which provides sensation to the area behind the ears, the greater auricular nerve and the lesser auricular nerve.
The phrenic nerve, which arises from nerve roots C3, C4 and C5, is essential for survival. It supplies the thoracic diaphragm, enabling breathing. If the spinal cord is transected above C3, spontaneous breathing is not possible.
==== Brachial plexus (C5–T1) ====
The last four cervical spinal nerves, C5 through C8, and the first thoracic spinal nerve, T1, combine to form the brachial plexus, or plexus brachialis, a tangled array of nerves, splitting, combining and recombining, to form the nerves that subserve the upper-limb and upper back. Although the brachial plexus may appear tangled, it is highly organized and predictable, with little variation between people. See brachial plexus injuries.
==== Lumbosacral plexus (L1–Co1) ====
The anterior divisions of the lumbar nerves, sacral nerves, and coccygeal nerve form the lumbosacral plexus, the first lumbar nerve being frequently joined by a branch from the twelfth thoracic. For descriptive purposes this plexus is usually divided into three parts:
lumbar plexus
sacral plexus
pudendal plexus
=== Autonomic nervous system ===
The autonomic nervous system (ANS) controls involuntary responses to regulate physiological functions. The brain and spinal cord of the central nervous system are connected by ganglionic neurons with organs that have smooth muscle or cardiac muscle, such as the heart, bladder, and other cardiac, exocrine, and endocrine related organs. The most notable physiological effects of autonomic activity are pupil constriction and dilation, and salivation. The autonomic nervous system is always activated, but is either in the sympathetic or the parasympathetic state. Depending on the situation, one state can overshadow the other, resulting in the release of different kinds of neurotransmitters.
==== Sympathetic nervous system ====
The sympathetic system is activated during a "fight or flight" situation in which mental stress or physical danger is encountered. Neurotransmitters such as norepinephrine and epinephrine are released, which increases heart rate and blood flow in certain areas like muscle, while simultaneously decreasing the activity of functions that are non-critical for survival, like digestion. The systems are independent of each other, which allows activation of certain parts of the body while others remain rested.
==== Parasympathetic nervous system ====
Primarily using the neurotransmitter acetylcholine (ACh) as a mediator, the parasympathetic system allows the body to function in a "rest and digest" state. Consequently, when the parasympathetic system dominates the body, there are increases in salivation and digestive activity, while heart rate and other sympathetic responses decrease. Unlike the sympathetic system, humans have some voluntary control in the parasympathetic system. The most prominent examples of this control are urination and defecation.
==== Enteric nervous system ====
There is a lesser-known division of the autonomic nervous system known as the enteric nervous system. Located only around the digestive tract, this system allows for local control without input from the sympathetic or the parasympathetic branches, though it can still receive and respond to signals from the rest of the body. The enteric system is responsible for various functions related to the gastrointestinal system.
== Disease ==
Diseases of the peripheral nervous system can be specific to one or more nerves, or affect the system as a whole.
Damage to any peripheral nerve or nerve root is called a mononeuropathy. Such damage can result from injury or trauma, or from compression. Compression of nerves can occur because of a tumour mass or injury. Alternatively, if a nerve is in an area with a fixed size, it may be trapped if the other components increase in size, as in carpal tunnel syndrome and tarsal tunnel syndrome. Common symptoms of carpal tunnel syndrome include pain and numbness in the thumb, index and middle finger. In peripheral neuropathy, the function of one or more nerves is damaged through a variety of means. Toxic damage may occur because of diabetes (diabetic neuropathy), alcohol, heavy metals or other toxins; some infections; or autoimmune and inflammatory conditions such as amyloidosis and sarcoidosis. Peripheral neuropathy is associated with a sensory loss in a "glove and stocking" distribution that begins at the periphery and slowly progresses upwards, and may also be associated with acute and chronic pain. Peripheral neuropathy is not limited to the somatosensory nerves, but can involve the autonomic nervous system too (autonomic neuropathy).
== See also ==
Classification of peripheral nerves
Connective tissue in the peripheral nervous system
Preferential motor reinnervation
== References ==
== External links ==
Peripheral nervous system photomicrographs
Peripheral Neuropathy Archived 2016-12-15 at the Wayback Machine from the US NIH
Neuropathy: Causes, Symptoms and Treatments from Medical News Today
Peripheral Neuropathy at the Mayo Clinic | Wikipedia/Peripheral_nervous_systems |
Impulse-control disorder (ICD) is a class of psychiatric disorders characterized by impulsivity – failure to resist a temptation, an urge, or an impulse, or an inability to keep from speaking a thought.
The fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5) that was published in 2013 includes a new chapter on disruptive, impulse-control, and conduct disorders covering disorders "characterized by problems in emotional and behavioral self-control". Five behavioral stages characterize impulsivity: an impulse, growing tension, pleasure on acting, relief from the urge, and finally guilt (which may or may not arise).
== Types ==
Disorders characterized by impulsivity that were not categorized elsewhere in the DSM-IV-TR were also included in the category "Impulse-control disorders not elsewhere classified". Trichotillomania (hair-pulling) and skin-picking were moved in DSM-5 to the obsessive-compulsive chapter. Additionally, other disorders not specifically listed in this category are often classed as impulsivity disorders. Terminology was changed in the DSM-5 from "Not Otherwise Classified" to "Not Elsewhere Classified".
=== Sexual compulsion ===
Sexual compulsion includes an increased urge in sexual behavior and thoughts. This compulsion may also lead to several consequences in the individual's life, including risky partner selection, increased chance of STIs and depression, as well as unwanted pregnancy. There has not yet been a determined estimate of its prevalence due to the secretiveness of the disorder. However, research conducted in the early 1990s in the United States gave prevalence estimates of 5–6% in the U.S. population, with prevalence higher among males than females.
=== Internet addiction ===
The disorder of Internet addiction has only recently been taken into consideration and has been added as a form of ICD. It is characterized by excessive and damaging usage of the Internet, with increased amounts of time spent chatting, web surfing, gambling, shopping or consuming pornography. Excessive and problematic Internet use has been reported across all age, social, economic, and educational ranges. Although initially thought to occur mostly in males, increasing rates have also been observed in females. However, no epidemiological study has yet been conducted to establish its prevalence.
=== Compulsive shopping ===
Compulsive shopping or buying is characterized by a frequent irresistible urge to shop even if the purchases are not needed or cannot be afforded. The prevalence of compulsive buying in the U.S. has been estimated to be 2–8% of the general adult population, with 80–95% of these cases being females. The onset is believed to occur in late teens or early twenties and the disorder is considered to be generally chronic.
=== Pyromania ===
Pyromania is characterized by impulsive and repetitive urges to deliberately start fires.
Because of its nature, the number of studies performed on fire-setting is understandably limited. However, studies done on children and adolescents with pyromania have reported its prevalence to be between 2.4% and 3.5% in the United States. It has also been observed that the incidence of fire-setting is more common in juvenile and teenage boys than in girls of the same age.
=== Intermittent explosive disorder ===
Intermittent explosive disorder or IED is a clinical condition of experiencing recurrent aggressive episodes that are out of proportion to any given stressor. Earlier studies reported a prevalence rate of 1–2% in clinical settings, but a study done by Coccaro and colleagues in 2004 reported about 11.1% lifetime prevalence and 3.2% one-month prevalence in a moderate-sized sample (n=253). Based on the study, Coccaro and colleagues estimated that 1.4 million individuals in the US have current IED and 10 million have lifetime IED.
=== Kleptomania ===
Kleptomania is characterized by an impulsive urge to steal purely for the sake of gratification.
In the U.S. the prevalence of kleptomania is unknown, but it has been estimated at 6 per 1000 individuals. Kleptomania is also thought to be the cause of 5% of annual shoplifting in the U.S. If true, 100,000 arrests are made in the U.S. annually due to kleptomaniac behavior.
== Signs and symptoms ==
The signs and symptoms of impulse-control disorders vary based on the age of the persons with them, the actual type of impulse-control that they are struggling with, the environment in which they are living, and whether they are male or female.
=== Co-morbidity ===
Complications of late Parkinson's disease may include a range of impulse-control disorders, including eating, buying, compulsive gambling, sexual behavior, and related behaviors (punding, hobbyism and walkabout). Prevalence studies suggest that 13.6–36.0% of Parkinson's patients exhibit at least one form of ICD. There is a significant co-occurrence of pathological gambling (PG) and personality disorder, which is suggested to be caused partly by their common "genetic vulnerability". The degree of heritability of ICD is similar to that of other psychiatric disorders, including substance use disorder. A genetic factor has also been found in the development of ICD, just as there is for substance use disorder. The risk for subclinical PG in a population is accounted for by the risk of alcohol dependence by about 12–20% genetic and 3–8% environmental factors. There is a high rate of co-morbidity between ADHD and other impulse-control disorders.
== Mechanism ==
Dysfunction of the striatum may prove to be the link between obsessive–compulsive disorder (OCD), ICD and substance use disorder (SUD). According to research, the "impulsiveness" that occurs in the later stages of OCD is caused by progressive dysfunction of the ventral striatal circuit, whereas in ICD and SUD, increased dysfunction of the dorsal striatal circuit increases "ICD and SUD behaviours that are driven by the compulsive processes". OCD and ICD have traditionally been viewed as two very different disorders, the former generally driven by the desire to avoid harm and the latter driven "by reward-seeking behaviour". Still, certain behaviors are similar in both, for example the compulsive actions of ICD patients and the reward-seeking behavior (for example, hoarding) of OCD patients.
== Treatment ==
Impulse-control disorders have two treatment options: psychosocial and pharmacological. Treatment methodology is informed by the presence of comorbid conditions.
=== Medication ===
In the case of pathological gambling, clomipramine, along with fluvoxamine, has been shown to be effective in treatment, reducing the problems of pathological gambling in a subject by up to 90%. In trichotillomania, the use of clomipramine has again been found to be effective, while fluoxetine has not produced consistent positive results. Fluoxetine has, however, produced positive results in the treatment of pathological skin picking disorder, although more research is needed to confirm this. Fluoxetine has also been evaluated in treating IED and demonstrated significant improvement in reducing the frequency and severity of impulsive aggression and irritability in a sample of 100 subjects who were randomized into a 14-week, double-blind study. Despite a large decrease in impulsive aggressive behavior from baseline, only 44% of fluoxetine responders and 29% of all fluoxetine subjects were considered to be in full remission at the end of the study. Paroxetine has been shown to be somewhat effective, although the results are inconsistent. Another medication, escitalopram, has been shown to improve the condition of pathological gambling subjects with anxiety symptoms. The results suggest that although SSRIs have shown positive results in the treatment of pathological gambling, the inconsistent results obtained with their use might suggest a neurological heterogeneity in the impulse-control disorder spectrum.
=== Psychosocial ===
The psychosocial approach to the treatment of ICDs includes cognitive behavioral therapy (CBT) which has been reported to have positive results in the case of treatment of pathological gambling and sexual addiction. There is general consensus that cognitive-behavioural therapies offer an effective intervention model.
Pyromania
Pyromania is harder to control in adults due to lack of co-operation; however, CBT is effective in treating child pyromaniacs. (Frey 2001)
Intermittent explosive disorder
Along with several other methods of treatment, cognitive behavioural therapy has also been shown to be effective in the case of intermittent explosive disorder. Cognitive Relaxation and Coping Skills Therapy (CRCST) consists of 12 sessions, starting with relaxation training, followed by cognitive restructuring and then exposure therapy. Later sessions focus on resisting aggressive impulses and on other preventative measures.
Kleptomania
In the case of kleptomania, the cognitive behaviour techniques used consist of covert sensitization, imaginal desensitization, systematic desensitization, aversion therapy, relaxation training, and "alternative sources of satisfaction".
Compulsive buying
Although compulsive buying falls under the category of impulse-control disorder – Not Otherwise Specified in the DSM-IV-TR, some researchers have suggested that it consists of core features that characterize impulse-control disorders, including preceding tension, difficult-to-resist urges, and relief or pleasure after the action. The efficacy of cognitive behavior therapy for compulsive buying has not been fully determined; however, common techniques for treatment include exposure and response prevention, relapse prevention, cognitive restructuring, covert sensitization, and stimulus control.
== See also ==
Behavioral addiction
Binge eating disorder
Body-focused repetitive behavior
Child pyromaniac
Dopamine dysregulation syndrome
Biopsychosocial model
== References ==
== External links == | Wikipedia/Impulse-control_disorder |
Micrographia is an acquired disorder characterized by abnormally small, cramped handwriting. It is commonly associated with neurodegenerative disorders of the basal ganglia, such as in Parkinson's disease, but it has also been ascribed to subcortical focal lesions. O'Sullivan and Schmitz describe it as an abnormally small handwriting that is difficult to read. Micrographia is also seen in patients with Wilson's disease, obsessive compulsive disorder, metamorphopsia, or with isolated focal lesions of the midbrain or basal ganglia.
== Parkinson's disease ==
A common feature of Parkinson's disease (PD) is difficulty in routine activities due to lack of motor control. Patients have trouble maintaining the scale of movements and have reduced amplitude of movement (hypokinesia). In PD, the trouble in scaling and controlling the amplitude of movement affects complex, sequential movements, so that micrographia is a common symptom. Another cause of micrographia is lack of physical dexterity.
James Parkinson may have been aware of micrographia in patients with shaking palsy (later renamed Parkinson's disease), when he described "the hand failing to answer with exactness to the dictates of the will".
=== Occurrence in Parkinson's ===
Micrographia is often seen amongst patients with Parkinson’s disease, although the precise prevalence is uncertain, with reported figures of between 9% and 75%. Often appearing before other symptoms, it can help in diagnosis.
=== Pharmacological management ===
Micrographia may worsen when a PD patient is under-medicated and when the medication is wearing off.
== References ==
== Bibliography ==
O'Sullivan, Susan; Schmitz, Thomas (2007). Physical Rehabilitation. Philadelphia, PA: F.A. Davis Company. pp. 857–1339. | Wikipedia/Micrographia_(handwriting) |
In psychology and neuroscience, executive dysfunction, or executive function deficit, is a disruption to the efficacy of the executive functions, a group of cognitive processes that regulate, control, and manage other cognitive processes. Executive dysfunction can refer to both neurocognitive deficits and behavioural symptoms. It is implicated in numerous neurological and mental disorders, as well as in short-term and long-term changes in non-clinical executive control. It can encompass cognitive difficulties such as planning, organizing, initiating tasks, and regulating emotions. It is a core characteristic of attention deficit hyperactivity disorder (ADHD) and can elucidate numerous other recognized symptoms. Extreme executive dysfunction is the cardinal feature of dysexecutive syndrome.
== Overview ==
Executive functioning is a theoretical construct representing a domain of cognitive processes that regulate, control, and manage other cognitive processes. Executive functioning is not a unitary concept; it is a broad description of the set of processes involved in certain areas of cognitive and behavioural control. Executive processes are integral to higher brain function, particularly in the areas of goal formation, planning, goal-directed action, self-monitoring, attention, response inhibition, and coordination of complex cognition and motor control for effective performance. Deficits of the executive functions are observed in all populations to varying degrees, but severe executive dysfunction can have devastating effects on cognition and behaviour in both individual and social contexts on a day-to-day basis.
Executive dysfunction does occur to a minor degree in all individuals on both short-term and long-term scales. In non-clinical populations, the activation of executive processes appears to inhibit further activation of the same processes, suggesting a mechanism for normal fluctuations in executive control. Decline in executive functioning is also associated with both normal and clinical aging. The decline of memory processes as people age appears to affect executive functions, which also points to the general role of memory in executive functioning.
Executive dysfunction appears to consistently involve disruptions in task-oriented behavior, which requires executive control in the inhibition of habitual responses and goal activation. Such executive control is responsible for adjusting behaviour to reconcile environmental changes with goals for effective behaviour. Impairments in set shifting ability are a notable feature of executive dysfunction; set shifting is the cognitive ability to dynamically change focus between points of fixation based on changing goals and environmental stimuli. This offers a parsimonious explanation for the common occurrence of impulsive, hyperactive, disorganized, and aggressive behaviour in clinical patients with executive dysfunction. A 2011 study confirmed that executive dysfunction is associated with a lack of self-control, greater impulsivity, and greater disorganization, leading to greater amounts of aggressive behavior.
Executive dysfunction, particularly in working memory capacity, may also lead to varying degrees of emotional dysregulation, which can manifest as chronic depression, anxiety, or hyperemotionality. Russell Barkley proposed a hybrid model of the role of behavioural disinhibition in the presentation of ADHD, which has served as the basis for much research of both ADHD and broader implications of the executive system.
Other common and distinctive symptoms of executive dysfunction include utilization behaviour, which is compulsive manipulation/use of nearby objects due simply to their presence and accessibility (rather than a functional reason); and imitation behaviour, a tendency to rely on imitation as a primary means of social interaction. Research also suggests that executive set shifting is a co-mediator with episodic memory of feeling-of-knowing (FOK) accuracy, such that executive dysfunction may reduce FOK accuracy.
There is some evidence suggesting that executive dysfunction may produce beneficial effects as well as maladaptive ones. Abraham et al. demonstrate that creative thinking in schizophrenia is mediated by executive dysfunction, and they establish a firm etiology for creativity in psychoticism, pinpointing a cognitive preference for broader top-down associative thinking versus goal-oriented thinking, which closely resembles aspects of ADHD. It is postulated that elements of psychosis are present in both ADHD and schizophrenia/schizotypy due to dopamine overlap.
== Cause ==
The cause of executive dysfunction is heterogeneous, as many neurocognitive processes are involved in the executive system and each may be compromised by a range of genetic and environmental factors. Learning and development of long-term memory play a role in the severity of executive dysfunction through dynamic interaction with neurological characteristics. Studies in cognitive neuroscience suggest that executive functions are widely distributed throughout the brain, though a few areas have been isolated as primary contributors. Executive dysfunction is studied extensively in clinical neuropsychology as well, allowing correlations to be drawn between such dysexecutive symptoms and their neurological correlates. A 2015 study confirmed that executive dysfunction has a positive correlation with neurodevelopmental disorders such as autism spectrum disorder (ASD) or attention deficit hyperactivity disorder (ADHD).
Executive processes are closely integrated with memory retrieval capabilities for overall cognitive control; in particular, goal/task-information is stored in both short-term and long-term memory, and effective performance requires effective storage and retrieval of this information.
Executive dysfunction characterizes many of the symptoms observed in numerous clinical populations. In the case of acquired brain injury and neurodegenerative diseases there is a clear neurological etiology producing dysexecutive symptoms. Conversely, syndromes and disorders are defined and diagnosed based on their symptomatology rather than etiology. Thus, while Parkinson's disease, a neurodegenerative condition, causes executive dysfunction, a disorder such as ADHD is a classification given to a set of subjectively-determined symptoms implicating executive dysfunction – models from the 1990s and 2000s indicate that such clinical symptoms are caused by executive dysfunction.
=== Neurophysiology ===
Executive functioning is not a unitary concept. Many studies have been conducted in an attempt to pinpoint the exact regions of the brain that lead to executive dysfunction, producing a vast amount of often conflicting information indicating wide and inconsistent distribution of such functions. A common assumption is that disrupted executive control processes are associated with pathology in prefrontal brain regions. This is supported to some extent by the primary literature, which shows both pre-frontal activation and communication between the pre-frontal cortex and other areas associated with executive functions such as the basal ganglia and cerebellum.
In most cases of executive dysfunction, deficits are attributed to either frontal lobe damage or dysfunction, or to disruption in fronto-subcortical connectivity. Neuroimaging with PET and fMRI has confirmed the relationship between executive function and functional frontal pathology. Neuroimaging studies have also suggested that some constituent functions are not discretely localized in prefrontal regions. Functional imaging studies using different tests of executive function have implicated the dorsolateral prefrontal cortex to be the primary site of cortical activation during these tasks. In addition, PET studies of patients with Parkinson's disease have suggested that tests of executive function are associated with abnormal function in the globus pallidus and appear to be the genuine result of basal ganglia damage.
With substantial cognitive load, fMRI signals indicate a common network of frontal, parietal and occipital cortices, thalamus, and the cerebellum. This observation suggests that executive function is mediated by dynamic and flexible networks that are characterized using functional integration and effective connectivity analyses. The complete circuit underlying executive function includes both a direct and an indirect circuit. The neural circuit responsible for executive functioning is, in fact, located primarily in the frontal lobe. This main circuit originates in the dorsolateral prefrontal cortex/orbitofrontal cortex and then projects through the striatum and thalamus to return to the prefrontal cortex.
Not surprisingly, plaques and tangles in the frontal cortex can cause disruption in functions as well as damage to the connections between prefrontal cortex and the hippocampus. Another important point is in the finding that structural MRI images link the severity of white matter lesions to deficits in cognition.
The emerging view suggests that cognitive processes materialize from networks that span multiple cortical sites with closely collaborative and overlapping functions. A challenge for future research will be to map the multiple brain regions that might combine with each other in a vast number of ways, depending on the task requirements.
=== Genetics ===
Certain genes have been identified with a clear correlation to executive dysfunction and related psychopathologies. According to Friedman et al. (2008), the heritability of executive functions is among the highest of any psychological trait. The dopamine receptor D4 gene (DRD4) with the 7-repeat polymorphism (7R) has been repeatedly shown to correlate strongly with an impulsive response style on psychological tests of executive dysfunction, particularly in clinical ADHD. The catechol-O-methyltransferase gene (COMT) codes for an enzyme that degrades catecholamine neurotransmitters (DA and NE), and its Val158Met polymorphism is linked with the modulation of task-oriented cognition and behavior (including set shifting) and the experience of reward, which are major aspects of executive functioning. COMT is also linked to methylphenidate (stimulant medication) response in children with ADHD. Both the DRD4/7R and COMT/Val158Met polymorphisms are also correlated with executive dysfunction in schizophrenia and schizotypal behaviour.
=== Evolutionary perspective ===
The prefrontal lobe controls two related executive functioning domains. The first is mediation of abilities involved in planning, problem solving, and understanding information, as well as engaging in working memory processes and controlled attention. In this sense, the prefrontal lobe is involved with dealing with basic, everyday situations, especially those involving metacognitive functions. The second domain involves the ability to fulfill biological needs through the coordination of cognition and emotions which are both associated with the frontal and prefrontal areas.
From an evolutionary perspective, it has been hypothesized that the executive system may have evolved to serve several adaptive purposes. The prefrontal lobe in humans has been associated both with metacognitive executive functions and emotional executive functions. Theory and evidence suggest that the frontal lobes in other primates also mediate and regulate emotion, but do not demonstrate the metacognitive abilities that are demonstrated in humans. This uniqueness of the executive system to humans implies that there was also something unique about the environment of ancestral humans, which gave rise to the need for executive functions as adaptations to that environment. Some examples of possible adaptive problems that would have been solved by the evolution of an executive system are: social exchange, imitation and observational learning, enhanced pedagogical understanding, tool construction and use, and effective communication.
In a similar vein, some have argued that the unique metacognitive capabilities demonstrated by humans arose out of the development of a sophisticated language (symbolization) system and culture. Moreover, in a developmental context, it has been proposed that each executive function capability originated as a form of public behaviour directed at the external environment, which then became self-directed, and finally became private to the individual over the course of the development of self-regulation. These shifts in function illustrate the evolutionarily salient strategy of maximizing longer-term social consequences over near-term ones, through the development of internal control of behaviour.
== Testing and measurement ==
There are several measures that can be employed to assess the executive functioning capabilities of an individual. Although a trained non-professional working outside of an institutionalized setting can legally and competently perform many of these measures, a trained professional administering the test in a standardized setting will yield the most accurate results.
=== Clock drawing test ===
The Clock drawing test (CDT) is a brief cognitive task that can be used by physicians who suspect neurological dysfunction based on history and physical examination. It is relatively easy to train non-professional staff to administer a CDT. Therefore, this is a test that can easily be administered in educational and geriatric settings and can be utilized as a precursory measure to indicate the likelihood of further/future deficits. Also, generational, educational and cultural differences are not perceived as impacting the utility of the CDT.
The procedure of the CDT begins with the instruction to the participant to draw a clock reading a specific time (generally 11:10). After the task is complete, the test administrator draws a clock with the hands set at the same specific time. Then the patient is asked to copy the image. Errors in clock drawing are classified according to the following categories: omissions, perseverations, rotations, misplacements, distortions, substitutions and additions. Memory, concentration, initiation, energy, mental clarity and indecision are all measures that are scored during this activity. Those with deficits in executive functioning will often make errors on the first clock but not the second. In other words, they will be unable to generate their own example, but will show proficiency in the copying task.
=== Stroop task ===
The cognitive mechanism involved in the Stroop task is referred to as directed attention. The Stroop task requires the participant to engage in and allows assessment of processes such as attention management, speed and accuracy of reading words and colours and of inhibition of competing stimuli. The stimulus is a colour word that is printed in a different colour than what the written word reads. For example, the word "red" is written in a blue font. One must verbally classify the colour that the word is displayed/printed in, while ignoring the information provided by the written word. In the aforementioned example, this would require the participant to say "blue" when presented with the stimulus. Although the majority of people will show some slowing when given incompatible text versus font colour, this is more severe in individuals with deficits in inhibition. The Stroop task takes advantage of the fact that most humans are so proficient at reading colour words that it is extremely difficult to ignore this information, and instead acknowledge, recognize and say the colour the word is printed in. The Stroop task is an assessment of attentional vitality and flexibility. More modern variations of the Stroop task tend to be more difficult and often try to limit the sensitivity of the test.
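The stimulus structure of the task is simple enough to sketch in code. The following Python snippet is a minimal, hypothetical illustration of how congruent and incongruent word/ink-colour pairs of the kind described above could be generated; the function name and colour set are invented for this example and are not part of any standardized test software.

```python
# Minimal sketch of Stroop stimulus generation as described above: a colour
# word printed in an ink colour; the correct response names the ink colour.
import random

COLOURS = ["red", "blue", "green", "yellow"]

def make_stimulus(congruent: bool) -> tuple:
    """Return a (word, ink_colour) pair."""
    word = random.choice(COLOURS)
    if congruent:
        return word, word
    ink = random.choice([c for c in COLOURS if c != word])  # mismatched ink
    return word, ink

for _ in range(3):
    word, ink = make_stimulus(congruent=False)
    print(f"word={word!r} printed in {ink}: say {ink!r}, not {word!r}")
```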
=== Trail-making test ===
Another prominent test of executive dysfunction is known as the Trail-making test. This test is composed of two main parts (Part A & Part B). Part B differs from Part A specifically in that it assesses more complex factors of motor control and perception. Part B of the Trail-making test consists of multiple circles containing letters (A-L) and numbers (1-12). The participant's objective for this test is to connect the circles in order, alternating between number and letter (e.g. 1-A-2-B) from start to finish. The participant is required not to lift their pencil from the page. The task is also timed as a means of assessing speed of processing. Set-switching tasks in Part B have low motor and perceptual selection demands, and therefore provide a clearer index of executive function. Throughout this task, some of the executive function skills that are being measured include impulsivity, visual attention and motor speed.
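As a concrete illustration, the alternating number-letter ordering that Part B requires (1-A-2-B-...-12-L) can be generated in a few lines of Python; this is only a sketch of the target sequence, not test administration software.

```python
# Generate the Trail-making Part B target sequence described above:
# numbers 1-12 alternating with letters A-L.
from string import ascii_uppercase

sequence = []
for i in range(12):
    sequence.append(str(i + 1))          # 1, 2, ..., 12
    sequence.append(ascii_uppercase[i])  # A, B, ..., L
print("-".join(sequence))                # 1-A-2-B-...-12-L
```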
=== Wisconsin card sorting test ===
The Wisconsin Card Sorting Test (WCST) is used to determine an individual's competence in abstract reasoning, and the ability to change problem-solving strategies when needed. These abilities are primarily determined by the frontal lobes and basal ganglia, which are crucial components of executive functioning; making the WCST a good measure for this purpose.
The WCST utilizes a deck of 128 response cards together with four stimulus cards. The figures on the cards differ with respect to color, quantity, and shape. The participants are given the pile of response cards and are asked to match each one to one of the stimulus cards. Typically, children between ages 9 and 11 are able to show the cognitive flexibility that is needed for this test.
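The test's core contingency, matching under a hidden rule that shifts after a run of correct sorts, can be sketched as a small simulation. The snippet below is a simplified, hypothetical model: the attribute values and the perseverative "always sort by colour" participant are invented for illustration, while the switch after ten consecutive correct sorts follows the common description of the test.

```python
# Simplified sketch of the WCST contingency: cards are matched to one of four
# stimulus cards, feedback follows a hidden rule, and the rule shifts after a
# run of correct sorts. Illustrative only; not the standardized test.
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    colour: str
    shape: str
    number: int

COLOURS = ["red", "green", "blue", "yellow"]
SHAPES = ["circle", "star", "triangle", "cross"]
RULES = ["colour", "shape", "number"]

# Four fixed stimulus cards, one per colour, with distinct shapes and numbers.
STIMULI = [Card(c, s, n + 1) for n, (c, s) in enumerate(zip(COLOURS, SHAPES))]

def is_correct(response: Card, chosen: Card, rule: str) -> bool:
    """The examiner's feedback: does the chosen stimulus match on the hidden rule?"""
    return getattr(response, rule) == getattr(chosen, rule)

hidden_rule, streak = "colour", 0
for trial in range(64):
    response = Card(random.choice(COLOURS), random.choice(SHAPES), random.randint(1, 4))
    # A perseverative participant keeps sorting by colour regardless of feedback:
    chosen = next(s for s in STIMULI if s.colour == response.colour)
    streak = streak + 1 if is_correct(response, chosen, hidden_rule) else 0
    if streak == 10:  # after ten consecutive correct sorts the rule shifts, unannounced
        hidden_rule = random.choice([r for r in RULES if r != hidden_rule])
        streak = 0
```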
== In clinical populations ==
The executive system's broad range of functions relies on, and is instrumental in, a broad range of neurocognitive processes. Clinical presentation of severe executive dysfunction that is unrelated to a specific disease or disorder is classified as a dysexecutive syndrome, and often appears following damage to the frontal lobes of the cerebral cortex. As a result, executive dysfunction is implicated etiologically or co-morbidly in many psychiatric illnesses, which often show the same symptoms as the dysexecutive syndrome. It has been assessed and researched extensively in relation to cognitive developmental disorders, psychotic disorders, mood disorders, and conduct disorder, as well as neurodegenerative diseases and acquired brain injury (ABI).
Environmental dependency syndrome is a dysexecutive syndrome marked by significant behavioural dependence on environmental cues and is marked by excessive imitation and utilization behaviour. It has been observed in patients with a variety of etiologies including ABI, exposure to phendimetrazine tartrate, stroke, and various frontal lobe lesions.
=== Attention deficit hyperactivity disorder ===
A triad of core symptoms – inattention, hyperactivity, and impulsivity – characterize attention deficit hyperactivity disorder (ADHD). Individuals with ADHD often experience problems with organization, discipline, and setting priorities, and these difficulties often persist from childhood through adulthood. In both children and adults with ADHD, an underlying executive dysfunction involving the prefrontal regions and other interconnected subcortical structures has been found. As a result, people with ADHD commonly perform more poorly than matched controls on interference control, mental flexibility and verbal fluency. Also, a more central impairment in self-regulation is noted in cases of ADHD. However, some research has suggested the possibility that the severity of executive dysfunction in individuals with ADHD declines with age as they learn to compensate for the aforementioned deficits. Thus, a decrease in executive dysfunction in adults with ADHD as compared to children with ADHD is thought to reflect compensatory strategies employed by the adults (e.g. using schedules to organize tasks) rather than neurological differences.
Although ADHD has typically been conceptualized in a categorical diagnostic paradigm, it has also been proposed that this disorder should be considered within a more dimensional behavioural model that links executive functions to observed deficits. Proponents argue that classic conceptions of ADHD falsely localize the problem at perception (input) rather than focusing on the inner processes involved in producing appropriate behaviour (output). Moreover, others have theorized that the appropriate development of inhibition (something that is seen to be lacking in individuals with ADHD) is essential for the normal performance of other neuropsychological abilities such as working memory, and emotional self-regulation. Thus, within this model, deficits in inhibition are conceptualized to be developmental and the result of atypically operating executive systems.
Both ADHD and obesity are complicated disorders, and each has a large impact on an individual's social well-being. Because the combination involves both a physical and a psychological disorder, obese individuals with ADHD need more treatment time (with associated costs) and are at a higher risk of developing physical and emotional complications. The cognitive ability to develop a comprehensive self-construct and to demonstrate capable emotion regulation is a core deficit observed in people with ADHD and is linked to deficits in executive function. Overall, the low executive functioning seen in individuals with ADHD has been correlated with tendencies to overeat, as well as with emotional eating. The relationship between ADHD and obesity is rarely clinically assessed and may deserve more attention in future research.
=== Schizophrenia ===
Schizophrenia is commonly described as a mental disorder in which a person becomes detached from reality because of disruptions in the pattern of thinking and perception. Although the etiology is not completely understood, it is closely related to dopaminergic activity and is strongly associated with both neurocognitive and genetic elements of executive dysfunction. Individuals with schizophrenia may demonstrate amnesia for portions of their episodic memory. Observed damage to explicit, consciously accessed memory is generally attributed to the fragmented thoughts that characterize the disorder. These fragmented thoughts are suggested to produce a similarly fragmented organization in memory during encoding and storage, making retrieval more difficult. However, implicit memory is generally preserved in patients with schizophrenia.
Patients with schizophrenia demonstrate spared performance on measures of visual and verbal attention and concentration, as well as on immediate digit span recall, suggesting that observed deficits cannot be attributed to deficits in attention or short-term memory. However, impaired performance was measured on psychometric measures assumed to assess higher order executive function. Working memory and multi-tasking impairments typically characterize the disorder. Persons with schizophrenia also tend to demonstrate deficits in response inhibition and cognitive flexibility.
Patients often demonstrate noticeable deficits in the central executive component of working memory as conceptualized by Baddeley and Hitch. However, performance on tasks associated with the phonological loop and visuospatial sketchpad are typically less affected. More specifically, patients with schizophrenia show impairment to the central executive component of working memory, specific to tasks in which the visuospatial system is required for central executive control. The phonological system appears to be more generally spared overall.
=== Autism spectrum disorder ===
Autism is diagnosed based on the presence of markedly abnormal or impaired development in social interaction and communication and a markedly restricted or repetitive repertoire of stereotypic movements, activities, and/or interests. It is a disorder that is defined according to behaviour as no specific biological markers are known. Due to the variability in severity and impairment in functioning exhibited by autistic people, the disorder is typically conceptualized as existing along a continuum (or spectrum) of severity.
Autistic individuals commonly show impairment in three main areas of executive functioning:
Fluency. Fluency refers to the ability to generate novel ideas and responses. Although adult populations are largely underrepresented in this area of research, findings have suggested that autistic children generate fewer novel words and ideas and produce less complex responses than matched controls.
Planning. Planning refers to a complex, dynamic process, wherein a sequence of planned actions must be developed, monitored, re-evaluated and updated. Autistic persons demonstrate impairment on tasks requiring planning abilities relative to typically functioning controls, with this impairment maintained over time. As might be suspected, in the case of autism comorbid with learning disability, an additive deficit is observed in many cases.
Flexibility. Poor mental flexibility, as demonstrated in autistic individuals, is characterized by perseverative, stereotyped behaviour and deficits in both the regulation and modulation of motor acts. Some research has suggested that autistic individuals experience a sort of 'stuck-in-set' perseveration that is specific to the disorder, rather than a more global perseveration tendency. These deficits have been exhibited in cross-cultural samples and have been shown to persist over time. Autistic individuals have also been shown to react more slowly and to perform more slowly in tasks that require mental flexibility when compared to their non-autistic peers.
Although there has been some debate, inhibition is generally no longer considered to be an executive function deficit in autistic people. Autistic individuals have demonstrated differential performance on various tests of inhibition, with results being taken to indicate a general difficulty in the inhibition of a habitual response. However, performance on the Stroop task, for example, has been unimpaired relative to matched controls. An alternative explanation has suggested that executive function tests that demonstrate a clear rationale are passed by autistic individuals. In this light, it is the design of the measures of inhibition that have been implicated in the observation of impaired performance rather than inhibition being a core deficit.
In general, autistic individuals show relatively spared performance on tasks that do not require mentalization. These include: use of desire and emotion words, sequencing behavioural pictures, and the recognition of basic facial emotional expressions. In contrast, autistic individuals typically demonstrated impaired performance on tasks that do require mentalizing. These include: false beliefs, use of belief and idea words, sequencing mentalistic pictures, and recognizing complex emotions such as scheming.
=== Bipolar disorder ===
Bipolar disorder is a mood disorder that is characterized by both highs (mania) and lows (depression) in mood. These changes in mood sometimes alternate rapidly (changes within days or weeks) and sometimes not so rapidly (within weeks or months). A 2006 study provided strong evidence of cognitive impairments in individuals with bipolar disorder, particularly in executive function and verbal learning. Moreover, these cognitive deficits appear to be consistent cross-culturally, indicating that these impairments are characteristic of the disorder and not attributable to differences in cultural values, norms, or practice. Functional neuroimaging studies have implicated abnormalities in the dorsolateral prefrontal cortex and the anterior cingulate cortex as being volumetrically different in individuals with bipolar disorder.
Individuals affected by bipolar disorder exhibit deficits in strategic thinking, inhibitory control, working memory, attention, and initiation that are independent of affective state. In contrast to the more generalized cognitive impairment demonstrated in persons with schizophrenia, for example, deficits in bipolar disorder are typically less severe and more restricted. It has been suggested that a "stable dys-regulation of prefrontal function or the subcortical-frontal circuitry [of the brain] may underlie the cognitive disturbances of bipolar disorder". Executive dysfunction in bipolar disorder is suggested to be associated particularly with the manic state, and is largely accounted for in terms of the formal thought disorder that is a feature of mania. It is important to note, however, that patients with bipolar disorder with a history of psychosis demonstrated greater impairment on measures of executive functioning and spatial working memory compared with bipolar patients without a history of psychosis, suggesting that psychotic symptoms are correlated with executive dysfunction.
=== Parkinson's disease ===
Parkinson's disease (PD) primarily involves damage to subcortical brain structures and is usually associated with movement difficulties, in addition to problems with memory and thought processes. Persons affected by PD often demonstrate difficulties in working memory, a component of executive functioning. Cognitive deficits found early in the PD process appear to involve primarily the fronto-executive functions. Moreover, studies of the role of dopamine in the cognition of PD patients have suggested that PD patients with inadequate dopamine supplementation are more impaired in their performance on measures of executive functioning. This suggests that dopamine may contribute to executive control processes. Increased distractibility, problems in forming, maintaining and shifting attentional sets, and deficits in executive functions such as self-directed planning, problem solving, and working memory have been reported in PD patients. In terms of working memory specifically, persons with PD show deficits in the areas of: a) spatial working memory; b) central executive aspects of working memory; c) loss of episodic memories; d) locating events in time.
Spatial working memory
PD patients often demonstrate difficulty in updating changes in spatial information and often become disoriented. They do not keep track of spatial contextual information in the same way that a typical person would do almost automatically. Similarly, they often have trouble remembering the locations of objects that they have recently seen, and thus also have trouble with encoding this information into long-term memory.
Central executive aspects
PD is often characterized by a difficulty in regulating and controlling one's stream of thought, and how memories are utilized in guiding future behaviour. Also, persons affected by PD often demonstrate perseverative behaviours such as continuing to pursue a goal after it is completed, or an inability to adopt a new strategy that may be more appropriate in achieving a goal. However, some research from 2007 suggests that PD patients may actually be less persistent in pursuing goals than typical persons and may abandon tasks sooner when they encounter problems of a higher level of difficulty.
Loss of episodic memories
The loss of episodic memories in PD patients typically demonstrates a temporal gradient wherein older memories are generally more preserved than newer memories. Also, while forgetting of event content is less compromised in Parkinson's than in Alzheimer's, the opposite is true for memories of when events occurred (event dating).
Locating events in time
PD patients often demonstrate deficits in their ability to sequence information, or to date events. Part of the problem is hypothesized to be due to a more fundamental difficulty in coordinating or planning retrieval strategies, rather than a failure at the level of encoding or storing information in memory. This deficit is also likely to be due to an underlying difficulty in properly retrieving script information. PD patients often exhibit signs of irrelevant intrusions, incorrect ordering of events, and omission of minor components in their script retrieval, leading to disorganized and inappropriate application of script information.
== Treatment ==
=== Medication ===
Methylphenidate- and amphetamine-based medications are first-line treatments for ADHD. On average, these stimulants are more effective at treating core ADHD symptoms including executive dysfunction than psychosocial treatment alone. Their efficacy treating ADHD is among the highest of any psychotropic medication treating any psychiatric condition. Treatment with methylphenidate or other ADHD medications reduces core ADHD symptoms equally well with or without psychosocial treatment. However, psychosocial treatment may confer other benefits.
=== Psychosocial treatment ===
Since 1997, there has been experimental and clinical practice of psychosocial treatment for adults with executive dysfunction, and particularly attention-deficit/hyperactivity disorder (ADHD). Psychosocial treatment addresses the many facets of executive difficulties, and as the name suggests, covers academic, occupational and social deficits. Psychosocial treatment facilitates marked improvements in major symptoms of executive dysfunction such as time management, organization and self-esteem. One kind of psychosocial treatment has been found to be particularly helpful, Behavioral Parent Training (BPT). Behavioral Parent Training (BPT) helps parents learn, through the help of a trained mental health professional, how to help their child behave better. This outlines proper use of reward and punishment with the child, mostly using methods of positive and negative reinforcement rather than punishment. For example, taking away a positive reinforcement such as praise, as opposed to adding a punishment. Psychosocial treatments are effective for adults with attention-deficit/hyperactivity disorder (ADHD) as well. One study shows that there are a number of useful psychosocial interventions that help adults with ADHD live better lives too. These included mindfulness training, cognitive based behavioral therapy, as well as education to help the participants recognize problem behaviors in their lives.
=== Cognitive behavioral therapy and group rehabilitation ===
Cognitive behavioral therapy (CBT) is a frequently suggested treatment for executive dysfunction, but has shown limited effectiveness. However, a study of CBT in a group rehabilitation setting showed a significant increase in positive treatment outcome compared with individual therapy. Patients' self-reported symptoms on 16 different ADHD/executive-related items were reduced following the treatment period.
=== Treatment for patients with acquired brain injury ===
The use of auditory stimuli has been examined in the treatment of dysexecutive syndrome. The presentation of auditory stimuli causes an interruption in current activity, which appears to aid in preventing "goal neglect" by increasing the patients' ability to monitor time and focus on goals. With such stimuli, subjects no longer performed below the average IQ of their age group.
Patients with acquired brain injury (ABI) have also been exposed to goal management training (GMT). GMT skills are associated with paper-and-pencil tasks that are suitable for patients who have difficulty setting goals. These studies support the effectiveness of GMT in treating executive dysfunction due to ABI.
== Developmental context ==
An understanding of how executive dysfunction shapes development has implications for how we conceptualize executive functions and their role in shaping the individual. Disorders affecting children, such as ADHD, along with oppositional defiant disorder, conduct disorder, high-functioning autism, and Tourette's syndrome, have all been suggested to involve executive functioning deficits. The main focus of research in the 2000s was on working memory, planning, set shifting, inhibition, and fluency. This research suggests that differences exist between typically functioning, matched controls and clinical groups on measures of executive functioning.
Some research has suggested a link between a child's ability to gain information about the surrounding world and the ability to override emotions in order to behave appropriately. One study required children to perform a task from a series of psychological tests, with their performance used as a measure of executive function. The tests included assessments of executive functions (self-regulation, monitoring, attention, flexibility in thinking), language, sensorimotor skills, visuospatial skills, and learning, in addition to social perception. The findings suggested that the development of theory of mind in younger children is linked to executive control abilities, with development impaired in individuals who exhibit signs of executive dysfunction.
Young children with behavioral problems have been shown to display poor verbal ability and executive functioning. The exact contributions of parenting style and family structure to child development remain somewhat unclear. However, in infancy and early childhood, parenting is among the most critical external influences on child reactivity. In Mahoney's study of maternal communication, results indicated that the way mothers interacted with their children accounted for almost 25% of the variability in children's rate of development. Every child is unique, making parenting an emotional challenge that should be closely matched to the child's level of emotional self-regulation (persistence, frustration and compliance). A promising approach being investigated in 2006 among intellectually disabled children and their parents is responsive teaching, an early intervention curriculum designed to address the cognitive, language, and social needs of young children with developmental problems. Based on the principle of "active learning", responsive teaching was praised in the 1980s as adaptable to individual caregivers, children and their combined needs. The effect of parenting styles on the development of children remains an important and continually evolving area of research.
== Comorbidity ==
Flexibility problems are more likely to be related to anxiety, and metacognition problems are more likely to be related to depression.
== Socio-cultural implications ==
=== Education ===
In the classroom environment, children with executive dysfunction typically demonstrate skill deficits that can be categorized into two broad domains: (a) self-regulatory skills and (b) goal-oriented skills. McDougall's summary provides an overview of the specific executive function deficits commonly observed in a classroom environment, along with examples of how these deficits are likely to manifest in behaviour.
Teachers play a crucial role in the implementation of strategies aimed at improving academic success and classroom functioning in individuals with executive dysfunction. In a classroom environment, the goal of intervention should ultimately be to apply external control, as needed (e.g. adapt the environment to suit the child, provide adult support) in an attempt to modify problem behaviours or supplement skill deficits. Ultimately, executive function difficulties should not be attributed to negative personality traits or characteristics (e.g. laziness, lack of motivation, apathy, and stubbornness) as these attributions are neither useful nor accurate.
Several factors should be considered in the development of intervention strategies. These include, but are not limited to: developmental level of the child, comorbid disabilities, environmental changes, motivating factors, and coaching strategies. It is also recommended that strategies should take a proactive approach in managing behaviour or skill deficits (when possible), rather than adopt a reactive approach. For example, an awareness of where a student may have difficulty throughout the course of the day can aid the teacher in planning to avoid these situations or in planning to accommodate the needs of the student.
People with executive dysfunction have a slower cognitive processing speed and thus often take longer to complete tasks than people who demonstrate typical executive function capabilities. This can be frustrating for the individual and can serve to impede academic progress.
Moreover, some people with ADHD report frequent feelings of drowsiness, which can hinder their attention during lectures, readings, and assignments. Individuals with this disorder have also been found to require more stimuli for information processing in reading and writing. Slow processing may be misread as a lack of motivation on the learner's part; in fact, it reflects an impairment of the ability to coordinate and integrate multiple skills and information sources.
The main concern regarding learning in individuals with autism is the imitation of skills. This can be a barrier in many aspects, such as learning about others' intentions, mental states, speech, language, and general social skills. Individuals with autism tend to be dependent on routines they have already mastered and have difficulty initiating new, non-routine tasks. Although an estimated 25–40% of people with autism also have a learning disability, many demonstrate an impressive rote memory and memory for factual knowledge. As such, repetition is the primary and most successful method of instruction when teaching people with autism.
Remaining attentive and focused is difficult for people with Tourette's syndrome, who tend to be easily distracted and to act impulsively. A quiet setting with few distractions is therefore important for an effective learning environment. Focusing is particularly difficult for those whose Tourette's syndrome is comorbid with other disorders such as ADHD or obsessive-compulsive disorder. These individuals may also repeat words or phrases consistently, either immediately after they are learned or after a delay.
=== Criminal behaviour ===
Prefrontal dysfunction has been found to be a marker for persistent criminal behavior. The prefrontal cortex is involved in mental functions including the affective range of emotions, forethought, and self-control. Individuals with dysfunction in this area display reduced control over their behavior, reduced flexibility and self-control, and difficulty conceiving of behavioral consequences, which may result in unstable (or criminal) behavior. In a 2008 study conducted by Barbosa and Monteiro, the recurrent criminals considered in the study were found to have executive dysfunction. These findings are consistent with the observation that abnormalities in executive function can limit how people respond to rehabilitation and re-socialization programs. Statistically significant relations have been found between anti-social behavior and executive function deficits. These findings relate to the emotional instability connected with executive dysfunction, a detrimental symptom that can also be linked to criminal behavior. Conversely, it is unclear how specific anti-social behavior is to executive function deficits, as opposed to other generalized neuropsychological deficits. An uncontrolled deficiency of executive function increases the likelihood of aggressive behavior that can result in a criminal act. Orbitofrontal injury also hinders the ability to be risk-avoidant and to make social judgments, and may cause reflexive aggression. A common retort to these findings is that the higher incidence of cerebral lesions among the criminal population may be due to the peril associated with a life of crime. On this reasoning, some other personality trait would be responsible for the disregard of social acceptability and the reduction in social aptitude.
Furthermore, some think the dysfunction cannot be entirely to blame. There are interacting environmental factors that also have an influence on the likelihood of criminal action. This theory proposes that individuals with this deficit are less able to control impulses or foresee the consequences of actions that seem attractive at the time (see above) and are also typically provoked by environmental factors. One must recognize that the frustrations of life, combined with a limited ability to control life events, can easily cause aggression and/or other criminal activities.
== See also ==
Autonoetic consciousness
== References == | Wikipedia/Executive_dysfunction |
Ablative brain surgery (also known as brain lesioning) is the surgical ablation by various methods of brain tissue to treat neurological or psychological disorders. The word "ablation" stems from the Latin ablatus, meaning "carried away". In most cases, however, ablative brain surgery does not involve removing brain tissue, but rather destroying tissue and leaving it in place. The lesions it causes are irreversible. Common target nuclei for ablative surgery and deep brain stimulation are the motor thalamus, the globus pallidus, and the subthalamic nucleus.
Ablative brain surgery was first introduced by Pierre Flourens (1794–1867), a French physiologist. He removed different parts of the nervous system from animals and observed what effects were caused by the removal of certain parts. For example, if an animal could not move its arm after a certain part was removed, it was assumed that the region would control arm movements. The method of removal of part of the brain was termed "experimental ablation". With the use of experimental ablation, Flourens claimed to find the area of the brain that controlled heart rate and breathing.
Ablative brain surgery is also often used as a research tool in neurobiology. For example, by ablating specific brain regions and observing differences in animals subjected to behavioral tests, researchers can infer the functions of the removed areas.
Experimental ablation is used in research on animals. Such research is considered unethical on humans due to the irreversible damage caused by lesioning and ablating brain tissue. However, the effects of brain lesions (caused by accidents or diseases) on behavior can be observed to draw conclusions about the functions of different parts of the brain.
== Uses ==
=== Parkinson's disease ===
Parkinson's disease (PD) is a progressive degenerative disease of the basal ganglia, characterized by the loss of dopaminergic cells of the substantia nigra, pars compacta (SNc). Surgical ablation has been used to treat Parkinson's disease. In the 1990s, the pallidum was a common surgical target. Unilateral pallidotomy improves tremor and dyskinesia on one side of the body (opposite the side of the brain surgery), but bilateral pallidotomy was found to cause irreversible deterioration in speech and cognition.
Two other rapidly evolving or potential surgical approaches to Parkinson's disease are deep brain stimulation (DBS) and restorative therapies.
Deep brain stimulation is a surgical treatment involving the implantation of a neurostimulator medical device, sometimes called a 'brain pacemaker', which sends electrical impulses to specific parts of the brain. Generally, deep brain stimulation surgery is considered preferable to ablation because it has the same effect and is adjustable and reversible.
The advent of deep brain stimulation has been an important advance in the treatment of Parkinson's disease. DBS may be employed in the management of medication-refractory tremor or treatment-related motor complications, and may benefit between 4.5% and 20% of patients at some stage of their disease course. DBS at high frequency often has behavioral effects that are similar to those of lesioning.
In Australia, patients with PD are reviewed by specialized DBS teams who assess the likely benefits and risks associated with DBS for each individual. The aim of these guidelines is to assist neurologists and general physicians identify patients who may benefit from referral to a DBS team. Common indications for referral are motor fluctuations and/or dyskinesias that are not adequately controlled with optimised medical therapy, medication-refractory tremor, and intolerance to medical therapy. Early referral for consideration of DBS is recommended as soon as optimised medical therapy fails to offer satisfactory motor control.
The thalamus is another potential target for treating tremor; in some countries, so is the subthalamic nucleus, although not in the United States due to its severe side effects. Stimulation or lesioning of portions of the thalamus has been used for various psychiatric and neurological conditions; when practiced for movement disorders, the target is in the motor nuclei of the thalamus. Thalamotomy is another surgical option in the treatment of Parkinson's disease. However, rigidity is not fully controlled after successful thalamotomy; it is replaced by hypotonia. Furthermore, significant complications can occur: for example, left ventral-lateral thalamotomy in a right-handed patient results in verbal deterioration, while right thalamotomy causes visual-spatial defects. However, for patients for whom DBS is not feasible, ablation of the subthalamic nucleus has been shown to be safe and effective. DBS is not suitable for certain patients, for example those with immunodeficiencies. A major practical barrier to DBS, however, is its cost, which puts it out of reach in less wealthy regions of the world. In such circumstances, a permanent lesion in the subthalamic nucleus (STN) is created instead, as it is a more affordable surgical procedure. The procedure is performed on the non-dominant side of the brain, and a lesion may be favoured to avoid numerous pacemaker replacements. Moreover, patients who gain relief from stimulation without any side effects and who need a pacemaker change may have a lesion performed in the same position; the stimulation parameters act as a guide for the preferred size of the lesion. To identify the part of the brain that is to be destroyed, new techniques such as microelectrode mapping have been developed.
=== Cluster headaches ===
Cluster headaches occur in cyclical patterns or clusters, which gives the condition its name. Cluster headache is one of the most painful types of headache. Bouts of frequent cluster headaches may last from weeks to months. Attempts have been made to treat cluster headaches via ablation of the trigeminal nerve, but they have not been very effective. Other surgical treatments for cluster headaches are currently under investigation.
=== Psychiatric disorders ===
Ablative psychosurgery continues to be used in a few centres in various countries. In the US there are a few centres including Massachusetts General Hospital that carry out ablative psychosurgical procedures. Belgium, the United Kingdom, and Venezuela are other examples of countries where the technique is still used.
In the People's Republic of China, surgical ablation was used to treat psychological and neurological disorders, particularly schizophrenia, but also clinical depression and obsessive-compulsive disorder. The official Xinhua News Agency has since reported that China's Ministry of Health has banned the procedure for schizophrenia and severely restricted the practice for other conditions. In recent studies, deep brain stimulation (DBS) has begun to replace ablative brain surgery for severe psychiatric conditions that are generally treatment-resistant, such as obsessive-compulsive disorder.
== Methods ==
Experimental ablation involves drilling holes in the skull of an animal and inserting an electrode or a small tube called a cannula into the brain using a stereotactic apparatus. A brain lesion can be created by conducting electricity through the electrode, which damages the targeted area of the brain. Likewise, chemicals can be delivered through the cannula to damage the area of interest. By comparing the animal's behavior before and after the lesion, researchers can infer the function of the damaged brain region. Recently, lasers have been shown to be effective in the ablation of both cerebral and cerebellar tissue. MRI-guided laser ablation, for example, allows great precision in the location and size of the lesion and causes little to no thermal damage to adjacent tissue. The Texas Children's Hospital is one of the first to use this MRI-guided method to destroy and treat brain lesions effectively and precisely; one patient there no longer experiences frequent seizures thanks to the success of this treatment. MRI-guided laser ablation is also used for ablating brain, prostate and liver tumors. Heating and freezing are alternative methods of ablative brain surgery.
=== Sham lesions ===
A sham lesion is a way for researchers to give a placebo lesion to animals involved in experimental ablation. Whenever a cannula or electrode is placed into brain tissue, unintended additional damage is caused by the instrument itself. A sham lesion is simply the placement of the lesioning instrument into the same spot it would be placed in a regular lesion, only there is no chemical or electrical process. This technique allows researchers to properly compare to an appropriate control group by controlling for the damage done separate from the intended lesion.
=== Excitotoxic lesions ===
An excitotoxic lesion is the process of an excitatory amino acid being injected into the brain using a cannula. The amino acid is used to kill neurons by essentially stimulating them to death. Kainic acid is an example of an excitatory amino acid used in this type of lesion. One crucial benefit to this lesion is its specificity. The chemicals are selective in that they do not damage the surrounding axons of nearby neurons, but only the target neurons.
=== Radio frequency lesions ===
Radio frequency (RF) lesions are produced by electrodes placed in the brain tissue. RF current is an alternating current of very high frequency. The process during which the current passes through tissue produces heat that kills cells in the surrounding area. Unlike excitotoxic lesions, RF lesions destroy everything in the nearby vicinity of the electrode tip.
According to Dr. Charles O'Brien, the use of ablative brain surgery on the nucleus accumbens is the wrong method to treat addictions. Dr. John Adler, however, believes ablation can provide valuable information about how the nucleus accumbens works.
== See also ==
Ablation (artificial intelligence), analogous process used in artificial neural networks
== References ==
== Further reading ==
Bain, P (1 November 2003). "Surgical treatment of Parkinson's disease and other movement disorders". Journal of Neurology, Neurosurgery & Psychiatry. 74 (11): 1601. doi:10.1136/jnnp.74.11.1601-a. PMC 1738211. ProQuest 1781234131.
Gabriel, Eric M.; Nashold, Blaine S. (March 1998). "Evolution of Neuroablative Surgery for Involuntary Movement Disorders: An Historical Review". Neurosurgery. 42 (3): 575–591. doi:10.1097/00006123-199803000-00027. PMID 9526992. | Wikipedia/Ablative_brain_surgery
Research in Parkinson's disease (also known as clinical trials, medical research, research studies, or clinical studies) refers to any study intended to help answer questions about the etiology, diagnostic approaches or new treatments of Parkinson's disease (PD) by studying their effects on human subjects. Clinical trials are designed and conducted by scientists and medical experts, who invite participants to test new vaccines, therapies, or treatments.
Only a small fraction of patients with Parkinson's disease participate in clinical research, and especially in clinical trials. Low participation in clinical trials significantly delays the development of new drugs and treatments.
== Research directions ==
One of the purposes of clinical research is to test the safety and efficacy of new treatments. Clinical research may also be conducted to learn other things about medical treatments or procedures, such as how to make an earlier diagnosis or how the treatment interacts with other drugs.
Though there are many types of clinical research, the two most common are interventional and observational. For example, researchers trying to identify causes of PD may conduct an observational study to examine genetic or environmental factors that may have triggered the disease in an individual. Natural history studies that evaluate how Parkinson's affects different people and how it changes over time are another example of observational research. Diagnostic accuracy studies investigate how well a test, or a series of tests, is able to correctly identify diseased patients.
Researchers conducting clinical trials test the impact of treatments. These can include changing behavior, taking medications, or performing surgery. Interventional and observational research are equally important in helping to answer questions, develop new treatments, and ultimately find a cure for Parkinson's. Clinical trials are conducted in a series of phases.
Among the interventional and observational studies for Parkinson's disease, research is ongoing in a number of specific areas.
=== Quality of life ===
Quality of life research investigates the role that physical therapy, occupational therapy, exercise or other interventions may play in the quality of life of persons with Parkinson's disease. Persons with Parkinson's disease may experience motor symptoms (tremors, rigidity, slowness of movement, postural instability and gait dysfunctions) as well as non-motor symptoms (neuropsychiatric symptoms, autonomic dysfunction, or others; see Parkinson's disease). Due to this diversity of symptoms, Parkinson's disease may impact an individual's physical, social and mental well-being. For example, difficulties with movement can lead to difficulties with self-care, embarrassment, social isolation, and depression.
Research may investigate whether there is a relationship between quality of life and a symptom of Parkinson's disease. Research on Parkinson's disease has investigated the link between quality of life and axial rigidity, personality traits, and patient education.
Alternatively, a study may evaluate the effectiveness of an intervention in mitigating symptoms, and the subsequent impact on quality of life. For example, an ongoing clinical study exploring vitamin D as a possible therapy to improve balance and decrease the risk of falling in people with Parkinson's expects a subsequent increase in safety and well-being. Another recent study used data mining and analysis of previous clinical research to explore the improvement in motor function that people with Parkinson's disease experience after treatment with levodopa. The study concluded that motor learning in the presence of levodopa may improve the body's ability to adapt to Parkinson's disease.
Quality of life measures are increasingly being incorporated into clinical trials; therefore, much research has gone into validating quality of life measures for persons with Parkinson's disease.
=== Neuroprotection ===
Neuroprotection is treatment that may slow down, stop, or reverse the progression of Parkinson's. Researchers are attempting to develop neuroprotective agents for Parkinson's disease, as well as other neurodegenerative brain disorders.
Several molecules have been proposed as potential neuroprotective treatments. However, none of them has been conclusively demonstrated to reduce degeneration in clinical trials. Agents currently under investigation as neuroprotective agents include anti-apoptotic drugs (omigapil, CEP-1347), antiglutamatergic agents, monoamine oxidase inhibitors (selegiline, rasagiline), promitochondrial drugs (coenzyme Q10, creatine), calcium channel blockers (isradipine) and growth factors (GDNF).
Researchers are also investigating vaccines for Parkinson's disease that produce cells that change the way the body's immune system responds to the loss of dopamine. This treatment has shown success in reversing Parkinson's in mice, and researchers are investigating the viability of clinical studies in people.
Exercise may be neuroprotective. Animal studies show exercise may protect against dopaminergic neurotoxins, and research conducted via prospective studies shows the risk of Parkinson's disease in humans is reduced significantly by midlife exercise. More research is needed to investigate the benefits of exercise in the early stage of Parkinson's, the most suitable type of exercise, when exercise should be implemented, and the optimal duration of exercises.
A 2009 review of 11 systematic reviews and 230 randomized controlled trials showed the effectiveness of Chinese herbal medicine (CHM) as a paratherapy for Parkinson's disease patients.
=== Genetics ===
Only a small percentage of people with PD inherit the disease. However, the study of genetic forms of Parkinson's can help scientists learn more about the non-inherited forms. Several current studies are examining the genetic factors of Parkinson's disease. An example of genetic research is a recent study that investigated the GBA gene as a suspected cause of early-onset Parkinson's.
A genetic study involving researchers from BGI Genomics revealed a genetic cause of Parkinson's disease. The study, published in Neuroscience Bulletin, discovered that a mutation in the cysteinyl-tRNA synthetase (CARS) gene (c.2384A>T; p.Glu795Val; E795V) is responsible, offering a new path for prevention and control of the disease.
=== Surgery ===
Advances in surgical procedures and neuroimaging techniques have ensured that surgical approaches can be as effective as medication at relieving some PD symptoms. Deep brain stimulation (DBS) is a surgical technique whereby a tiny electrode is inserted deep in the brain. The electrode is connected to a battery pack implanted under the collarbone via a subcutaneous wire. DBS is effective in suppressing symptoms of PD, especially tremor. A recent clinical study led to recommendations on identifying which Parkinson's patients are most likely to benefit from DBS.
== Astrocytes ==
In an animal model, manipulating glial precursor cells produced astrocytes that repaired multiple types of neurological damage caused by Parkinson's. The researchers implanted cells only in rats with disease signs. The astrocytes used in the study differ from other types of astrocytes present in the mature brain. When implanted into the brains of rats with the disease, the new cells acted similarly to astrocytes in the developing brain, which are more effective at building connections between nerves. The implanted astrocytes restored health and stability and allowed the nerve cells to resume normal activity.
Successful long-term therapy must both protect the areas of the brain under attack and foster the repair of damage to dopaminergic neurons and other brain cell populations. Astrocyte dysfunction can contribute to multiple neurological disorders.
After transplantation, dopaminergic neurons, interneurons and synaptophysin were all rescued. Interneurons play an important role in information processing and movement control and are lost in Parkinson's. Synaptophysin is a protein that is essential for communication between nerve cells. The transplanted rats recovered motor skills to normal levels, essentially reversing all symptoms. No previous therapies had rescued these cells.
== Participant groups ==
Parkinson's clinical research studies need volunteers at all stages of the disease to help solve the unanswered questions about Parkinson's and to develop new treatments. Some studies seek to enroll specific groups of people.
=== Newly diagnosed ===
A number of Parkinson's disease clinical research studies seek to enroll people newly diagnosed with PD that are not currently undergoing any treatment. These trials vary in scope, some focusing on neuroprotection in which researchers seek to determine whether a certain compound might offer protection to dopamine-producing cells, thus helping to slow or stop the progression of the disease.
=== Healthy controls ===
In addition to patients with PD, healthy controls, including friends and family members of those with Parkinson's, are also needed for clinical trials. Family members may participate in genetic studies, and healthy people can participate in trials that require a control group of participants without PD. Control groups are necessary as a baseline against which the treatments being studied can be compared.
== Participating ==
=== Benefits ===
People with PD, their friends, and their family members all have many reasons to consider participating in clinical research. Many participants believe that their involvement benefits themselves and the future of other people with the disease. Without clinical research participants, many of the advances in treating PD would not have happened. In addition to furthering the scientific community's knowledge of Parkinson's, clinical trial participation may offer access to leading healthcare professionals and potentially useful new drugs and therapies. This care is often provided free of charge in exchange for participation in the study. Finally, by participating in clinical studies, those whose lives are impacted by PD may increase knowledge and understanding of the disease.
=== How to participate ===
It can be a challenge to find the right clinical trial, and it can be even more challenging for the trial team members to find volunteers. People with PD may consult their doctors, discuss with their family members, and speak to other clinical trial participants about their experiences. Online resources for participation can be found at www.FoxTrialFinder.org and www.ClinicalTrials.gov.
== Clinical research resources ==
People with Parkinson's disease who are considering participating in clinical research have resources available to help them navigate the clinical research process.
=== Fox Trial Finder ===
Led by The Michael J. Fox Foundation for Parkinson's Research, the Fox Trial Finder is a matching site that connects clinical trials to potential volunteers. Since its launch in 2012, the Fox Trial Finder has registered more than 19,000 volunteers across multiple continents. Volunteers enter their information—from location to the medicines they take—into a profile on Fox Trial Finder, which then matches them to nearby trials seeking volunteers with their particular criteria. The Fox Trial Finder seeks volunteers both with and without Parkinson's disease.
=== Parkinson's Advocates in Research ===
The Parkinson's Disease Foundation's Parkinson's Advocates in Research (PAIR) program is a patient-based initiative that ensures people with Parkinson's disease have a role in shaping the clinical research process. By training advocates with Parkinson's disease to serve as patient representatives on clinical research advisory boards, the PAIR program aims to improve outcomes by helping researchers identify and overcome barriers in research that they may otherwise overlook. Participants in the PAIR program receive training through PDF's Clinical Research Learning Institute, an annual multi-day training that focuses on education via training sessions, workshops led by clinical researchers, and interaction with study coordinators and representatives from both government and industry.
=== Parkinson's Disease Biomarkers Program ===
The NINDS Parkinson's Disease Biomarkers Program brings together various stakeholders to create a resource of longitudinal biofluid samples from PD patients and controls and their associated clinical assessment data for biomarker discovery research. Neuroimaging and genomic data are also available for some of the samples. All samples are stored at the NINDS Human Genetics Repository at Coriell Institute and can be requested through the PDBP Data Management Resource.
== Research organizations ==
The National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health (NIH), is a major funder of Parkinson's disease research in the US. In 2012, the NINDS funded approximately $98 million out of a total of $154 million in NIH-supported PD research. The NINDS supports basic, translational, and clinical PD research programs through a variety of mechanisms, including the Morris K. Udall Centers of Excellence for Parkinson's Disease Research and the Parkinson's Disease Biomarkers Program (PDBP). NINDS has just completed a major planning effort to determine priorities for future Parkinson's disease research.
The Parkinson's Disease Foundation is a leading national presence in the United States in Parkinson's disease research, education and public advocacy. PDF works on behalf of people who live with Parkinson's disease by funding promising clinical research to find treatments and cures for Parkinson's. PDF was founded in 1957 and has since invested more than $115 million in scientific research.
The Michael J. Fox Foundation aims to develop a cure for Parkinson's disease. As the largest private foundation for Parkinson's disease research in the US, it has spent $325 million on research. In 2010, the foundation launched the first large-scale clinical study of biomarkers of the disease's progression, at a cost of $40 million over five years.
The CRC for Mental Health is an Australian Federal Government funded research consortium researching biomarkers, imaging reagents and therapeutics for early diagnosis of Parkinson's Disease.
The Cure Parkinson's Trust, set up in the UK in 2005 by Tom Isaacs, was instrumental in arranging a ground-breaking clinical trial of the drug GDNF at the University of Bristol during the 2010s.
== References ==
== External links ==
Parkinson's Disease Foundation
Fox Trial Finder
NINDS Parkinson's Disease Biomarkers Program
NINDS Parkinson's Disease Research page
NINDS Human Genetics Repository at Coriell Institute | Wikipedia/Research_in_Parkinson's_disease |
Exact Sciences Corp. is an American molecular diagnostics company based in Madison, Wisconsin, specializing in the detection of early-stage cancers. The company's initial focus was on the early detection and prevention of colorectal cancer; in 2014 it launched Cologuard, the first stool DNA test for colorectal cancer. Since then, Exact Sciences has grown its product portfolio to encompass screening and precision oncology tests for other types of cancer.
== History ==
Exact Sciences was founded in 1995 in Marlborough, Massachusetts, by Stanley Lapidus and Anthony Shuber as a company focused on the development of a non-invasive test for colorectal cancer. The company eventually went public with an initial offering on the NASDAQ in 2001. In the early years, there was much speculation that the company would be acquired by a competitor or exit the market; during this time the company's share price fell to less than one dollar.
A significant turnaround in the company's fortunes began with the announcement of a mutual collaboration and licensing agreement between Exact Sciences and the Mayo Clinic in June 2009. In the same year, the company appointed Kevin Conroy as CEO & president and moved its head office to Madison, Wisconsin.
In August 2014, Exact Sciences received premarket approval from the Food and Drug Administration for the use and marketing of its flagship product, Cologuard. This breakthrough heralded the beginning of a period of rapid growth for Exact Sciences and the start of its first foray into the acquisitions market.
In August 2017, the company made its first major acquisition when it purchased Sampleminded, a healthcare information technology company based in Salt Lake City, Utah, for $3.2 million. This was followed by the January 2018 announcement that Exact Sciences had completed a $690 million convertible bond offering and the revelation, during that year's J.P. Morgan Healthcare Conference, that the company was to acquire Armune Bioscience, a cancer diagnostics developer based in Kalamazoo and Ann Arbor, Michigan. Third-quarter financial reports valued the Armune Bioscience acquisition at $12 million, plus $17.5 million in incentives for certain milestones. Later, in October 2018, Exact Sciences announced its purchase of Biomatrica, a developer of sample preservation technology based in San Diego, California.
In summer 2019, Exact Sciences opened a new 169,000-square-foot lab and warehouse facility to expand its testing capacity for Cologuard and, in its largest acquisition yet, announced its intention to buy Genomic Health, a genetic cancer detection company based in Redwood City, California, for $2.8 billion. The reasons given for this latest acquisition were to expand Exact Sciences' product portfolio through the addition of Genomic Health's OncotypeIQ suite of precision tests and to expand into markets outside the US on the back of Genomic Health's existing network.
In March 2020, Exact Sciences purchased Paradigm Diagnostics and Viomics, two companies based in Phoenix, Arizona, that would expand its lab testing and research and development capabilities. Later, in October 2020, the company announced another round of acquisitions: Thrive Earlier Detection Corp. (based in Cambridge, Massachusetts) and Base Genomics (based in Oxford, England), two companies specializing in blood-based cancer screening, one of Exact Sciences' pipeline areas.
Exact Sciences responded to the 2020 COVID-19 pandemic by temporarily refocusing a portion of its diagnostic capacity to testing for the disease. The company received FDA regulatory approval to provide home testing kits in April 2020, becoming one of the first companies in the U.S. to do so.
In early 2021, Exact Sciences announced its acquisition of Ashion Analytics and plans to collaborate in research with TGen, the City of Hope's genomics institute. This news came shortly after the company's decision to purchase an exclusive-use license for TGen's proprietary liquid biopsy-based test technology, Tardis.
=== Acquisition history ===
The following is an illustration of the company's major mergers and acquisitions and historical predecessors (this is not a comprehensive list):
== Partnerships ==
Since 2009, Exact Sciences has maintained a collaboration with Mayo Clinic for its current and future products. In 2009, Exact Sciences also completed a licensing agreement with Hologic for its molecular detection platform. In April 2017, Exact Sciences and MDxHealth agreed to share technology on a variety of epigenetics and molecular diagnostics applications for five years. In August 2018, Exact Sciences and Pfizer announced an agreement through 2021 to co-promote Cologuard. In November 2018, Exact Sciences announced a partnership with Epic Systems for order entries.
== Products ==
=== Current products ===
=== Pipeline ===
Pipeline products include esophageal, breast, lung, liver, and pancreatic cancer testing. The company is also working with the Mayo Clinic to identify biomarkers associated with the 15 deadliest cancers. Other initiatives focus on:
Using experience gained from the development of Cologuard to create a wider cancer detection platform
Expanding the range of Oncotype IQ products to include liquid- and tissue-based tests
Adapting biomarker-based technologies to create a liquid biopsy capable of detecting cancers and precancers from a blood sample
Improving analytical sensitivity using the company's existing multi-marker approach to better identify cancerous samples
=== Former products ===
== References ==
== External links ==
Official website
Business data for Exact Sciences Corporation: | Wikipedia/Exact_Sciences_(company) |
In computer science and operations research, the artificial bee colony algorithm (ABC) is an optimization algorithm based on the intelligent foraging behaviour of a honey bee swarm, proposed by Derviş Karaboğa (Erciyes University) in 2005.
== Algorithm ==
In the ABC model, the colony consists of three groups of bees: employed bees, onlookers and scouts. It is assumed that there is only one artificial employed bee for each food source; in other words, the number of employed bees in the colony is equal to the number of food sources around the hive. Employed bees go to their food source, come back to the hive and dance in this area. An employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. Onlookers watch the dances of employed bees and choose food sources depending on the dances. The main steps of the algorithm are given below:
Initial food sources are produced for all employed bees
REPEAT
Each employed bee goes to a food source in her memory and determines a neighbouring source, then evaluates its nectar amount and dances in the hive
Each onlooker watches the dances of employed bees and chooses one of their sources depending on the dances, then goes to that source. After choosing a neighbour around it, she evaluates its nectar amount.
Abandoned food sources are determined and are replaced with the new food sources discovered by scouts.
The best food source found so far is registered.
UNTIL (requirements are met)
In ABC, a population-based algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of solutions in the population. At the first step, a randomly distributed initial population (food source positions) is generated. After initialization, the population is subjected to repeated cycles of the search processes of the employed, onlooker, and scout bees. An employed bee produces a modification of the source position in her memory and discovers a new food source position. Provided that the nectar amount of the new one is higher than that of the previous source, the bee memorizes the new source position and forgets the old one; otherwise she keeps the position of the previous one in her memory. After all employed bees complete the search process, they share the position information of the sources with the onlookers in the dance area. Each onlooker evaluates the nectar information taken from all employed bees and then chooses a food source depending on the nectar amounts of the sources. As in the case of the employed bee, she produces a modification of the source position in her memory and checks its nectar amount. Provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. Abandoned sources are identified, and new sources are randomly produced by artificial scouts to replace them.
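As a concrete illustration of this cycle, the sketch below gives a minimal Python implementation. It is only a toy rendering of the steps above, not reference code: it assumes a fitness function that returns positive values to be maximized, searches the unit hypercube, and uses illustrative names and parameter defaults throughout.

import random

def abc_sketch(fitness, dim, n_sources=10, limit=20, max_cycles=100):
    """Minimal sketch of one ABC run: employed, onlooker and scout phases."""
    # One food source per employed bee, initialized uniformly in [0, 1]^dim.
    sources = [[random.random() for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources            # cycles without improvement, per source
    best = max(sources, key=fitness)

    def try_neighbour(i):
        # Perturb one random dimension relative to a randomly chosen other source.
        j = random.choice([s for s in range(n_sources) if s != i])
        k = random.randrange(dim)
        candidate = sources[i][:]
        candidate[k] += random.uniform(-1, 1) * (sources[i][k] - sources[j][k])
        if fitness(candidate) > fitness(sources[i]):    # greedy selection
            sources[i] = candidate
            trials[i] = 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):                      # employed bee phase
            try_neighbour(i)
        fits = [fitness(s) for s in sources]            # "dance" information
        for _ in range(n_sources):                      # onlooker bee phase
            i = random.choices(range(n_sources), weights=fits)[0]
            try_neighbour(i)
        for i in range(n_sources):                      # scout bee phase
            if trials[i] > limit:
                sources[i] = [random.random() for _ in range(dim)]
                trials[i] = 0
        best = max(sources + [best], key=fitness)
    return best

For example, abc_sketch(lambda x: 1.0 / (1.0 + sum(v * v for v in x)), dim=2) would search for the minimum of the sphere function by maximizing a reciprocal-based fitness.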
== Artificial bee colony algorithm ==
The artificial bee colony (ABC) algorithm is an optimization technique that simulates the foraging behavior of honey bees and has been successfully applied to various practical problems. ABC belongs to the group of swarm intelligence algorithms and was proposed by Karaboga in 2005.
A set of honey bees, called a swarm, can successfully accomplish tasks through social cooperation. In the ABC algorithm, there are three types of bees: employed bees, onlooker bees, and scout bees. The employed bees search for food around the food source in their memory; meanwhile they share the information about these food sources with the onlooker bees. The onlooker bees tend to select good food sources from those found by the employed bees. A food source of higher quality (fitness) has a larger chance of being selected by the onlooker bees than one of lower quality. The scout bees are translated from a few employed bees, which abandon their food sources and search for new ones.
In the ABC algorithm, the first half of the swarm consists of employed bees, and the second half constitutes the onlooker bees.
The number of employed bees or the onlooker bees is equal to the number of solutions in the swarm. The ABC generates a randomly distributed initial population of SN solutions (food sources), where SN denotes the swarm size.
Let $X_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$ represent the $i$-th solution in the swarm, where $n$ is the dimension size. Each employed bee $X_i$ generates a new candidate solution $V_i$ in the neighborhood of its present position, as in the equation below:

$$v_{i,k} = x_{i,k} + \Phi_{i,k} \cdot (x_{i,k} - x_{j,k})$$
where $X_j$ is a randomly selected candidate solution ($i \neq j$), $k$ is a random dimension index selected from the set $\{1, 2, \ldots, n\}$, and $\Phi_{i,k}$ is a random number within $[-1, 1]$. Once the new candidate solution $V_i$ is generated, a greedy selection is used: if the fitness value of $V_i$ is better than that of its parent $X_i$, then $X_i$ is updated to $V_i$; otherwise $X_i$ is kept unchanged. After all employed bees complete the search process, they share the information about their food sources with the onlooker bees through waggle dances. An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. This probabilistic selection is a roulette wheel selection mechanism, described by the equation below:

$$p_i = \frac{\mathrm{fit}_i}{\sum_{j=1}^{SN} \mathrm{fit}_j}$$
where $\mathrm{fit}_i$ is the fitness value of the $i$-th solution in the swarm. The better the solution $i$, the higher the probability of the $i$-th food source being selected. If a position cannot be improved over a predefined number of cycles (called the limit), the food source is abandoned. Assume that the abandoned source is $X_i$; the scout bee then discovers a new food source to replace it, as in the equation below:

$$x_{i,k} = lb_k + \Phi_{i,k} \cdot (ub_k - lb_k)$$
where $\Phi_{i,k} = \mathrm{rand}(0, 1)$ is a random number drawn uniformly from $[0, 1]$, and $lb_k$ and $ub_k$ are the lower and upper boundaries of the $k$-th dimension, respectively.
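Read together, the three equations correspond to three small update rules. The following Python sketch, offered as a minimal illustration rather than a canonical implementation, expresses them directly; the function names, the list-based solution representation, and the sampling calls are assumptions made for the sketch:

import random

def employed_bee_move(X, i, j, k):
    """v_{i,k} = x_{i,k} + Phi_{i,k} * (x_{i,k} - x_{j,k}), with Phi in [-1, 1]."""
    phi = random.uniform(-1.0, 1.0)
    V = X[i][:]                       # copy the parent solution X_i
    V[k] = X[i][k] + phi * (X[i][k] - X[j][k])
    return V

def roulette_wheel_index(fitnesses):
    """Pick index i with probability fit_i / sum_j fit_j (fits must be non-negative)."""
    return random.choices(range(len(fitnesses)), weights=fitnesses)[0]

def scout_bee_reinit(lb, ub):
    """x_{i,k} = lb_k + Phi_{i,k} * (ub_k - lb_k), with Phi uniform in [0, 1]."""
    return [l + random.random() * (u - l) for l, u in zip(lb, ub)]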
== See also ==
Evolutionary computation
Evolutionary multi-modal optimization
Particle swarm optimization
Swarm intelligence
Bees algorithm
Fish School Search
List of metaphor-based metaheuristics
== References ==
== External links ==
Artificial Bee Colony (ABC) Algorithm Homepage, Turkey: Intelligent Systems Research Group, Department of Computer Engineering, Erciyes University | Wikipedia/Artificial_bee_colony_algorithm |
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants.
The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones to direct each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect.
This algorithm is a member of the ant colony algorithms family, in swarm intelligence methods, and it constitutes some metaheuristic optimizations. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several variants have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms.
== Overview ==
In the natural world, ants of some species (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely to stop travelling at random and instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, is marched over more frequently, and thus the pheromone density becomes higher on shorter paths than longer ones. Pheromone evaporation also has the advantage of avoiding the convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is very important in artificial systems.
The overall result is that when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to many ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to be solved.
=== Ambient networks of intelligent objects ===
New concepts are required since “intelligence” is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have been known to lead to the production of IT systems in which data processing, control units and calculating power are centralized. These centralized units have continually increased their performance and can be compared to the human brain. The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems that are even more diffused and based on nanotechnology, will profoundly change this concept. Small devices that can be compared to insects do not possess a high intelligence on their own. Indeed, their intelligence can be classed as fairly limited. It is, for example, impossible to integrate a high performance calculator with the power to solve any kind of mathematical problem into a biochip that is implanted into the human body or integrated in an intelligent tag designed to trace commercial articles. However, once those objects are interconnected they develop a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain.
Nature offers several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects perfectly illustrate this model which greatly differs from human societies. This model is based on the cooperation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and only possess a very limited amount of information to do so. A colony of ants, for example, represents numerous qualities that can also be applied to a network of ambient objects. Colonies of ants have a very high capacity to adapt themselves to changes in the environment, as well as great strength in dealing with situations where one individual fails to carry out a given task. This kind of flexibility would also be very useful for mobile networks of objects which are perpetually developing. Parcels of information that move from a computer to a digital object behave in the same way as ants would do. They move through the network and pass from one node to the next with the objective of arriving at their final destination as quickly as possible.
=== Artificial pheromone system ===
Pheromone-based communication is one of the most effective ways of communication widely observed in nature. Pheromones are used by social insects such as bees, ants and termites, both for inter-agent and agent-swarm communications. Due to its feasibility, artificial pheromones have been adopted in multi-robot and swarm robotic systems. Pheromone-based communication has been implemented by different means, such as chemical or physical (RFID tags, light, sound) ones. However, those implementations have not been able to replicate all the aspects of pheromones as seen in nature.
The use of projected light was presented in a 2007 IEEE paper by Simon Garnier et al. as an experimental setup to study pheromone-based communication with micro autonomous robots. Another study presented a system in which pheromones were implemented via a horizontal LCD screen on which the robots moved, the robots having downward-facing light sensors to register the patterns beneath them.
== Algorithm and formula ==
In the ant colony optimization algorithms, an artificial ant is a simple computational agent that searches for good solutions to a given optimization problem. To apply an ant colony algorithm, the optimization problem needs to be converted into the problem of finding the shortest path on a weighted graph. In the first step of each iteration, each ant stochastically constructs a solution, i.e. the order in which the edges in the graph should be followed. In the second step, the paths found by the different ants are compared. The last step consists of updating the pheromone levels on each edge.
procedure ACO_MetaHeuristic is
    while not terminated do
        generateSolutions()
        daemonActions()
        pheromoneUpdate()
    end while
end procedure
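A minimal Python rendering of this loop might look as follows; the problem object and its methods (construct_solution, cost, update_pheromones) are assumed here for illustration and correspond to the three steps named in the pseudocode, not to any particular library:

def aco_metaheuristic(problem, n_ants=20, n_iterations=100):
    """Skeleton of the ACO loop: generate solutions, apply daemon
    actions (here, best-so-far bookkeeping), then update pheromones."""
    best = None
    for _ in range(n_iterations):                    # while not terminated
        # generateSolutions(): each ant builds one candidate solution
        solutions = [problem.construct_solution() for _ in range(n_ants)]
        # daemonActions(): centralized steps, e.g. tracking the best solution
        for s in solutions:
            if best is None or problem.cost(s) < problem.cost(best):
                best = s
        # pheromoneUpdate(): evaporation plus reinforcement
        problem.update_pheromones(solutions, best)
    return best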
=== Edge selection ===
Each ant needs to construct a solution to move through the graph. To select the next edge in its tour, an ant will consider the length of each edge available from its current position, as well as the corresponding pheromone level. At each step of the algorithm, each ant moves from a state $x$ to a state $y$, corresponding to a more complete intermediate solution. Thus, each ant $k$ computes a set $A_{k}(x)$ of feasible expansions to its current state in each iteration, and moves to one of these probabilistically. For ant $k$, the probability $p_{xy}^{k}$ of moving from state $x$ to state $y$ depends on the combination of two values: the attractiveness $\eta_{xy}$ of the move, as computed by some heuristic indicating the a priori desirability of that move, and the trail level $\tau_{xy}$ of the move, indicating how proficient it has been in the past to make that particular move. The trail level represents an a posteriori indication of the desirability of that move.

In general, the $k$th ant moves from state $x$ to state $y$ with probability

$$p_{xy}^{k}={\frac{(\tau_{xy}^{\alpha})(\eta_{xy}^{\beta})}{\sum_{z\in\mathrm{allowed}_{y}}(\tau_{xz}^{\alpha})(\eta_{xz}^{\beta})}}$$

where $\tau_{xy}$ is the amount of pheromone deposited for the transition from state $x$ to $y$, $\alpha\geq 0$ is a parameter to control the influence of $\tau_{xy}$, $\eta_{xy}$ is the desirability of state transition $xy$ (a priori knowledge, typically $1/d_{xy}$, where $d_{xy}$ is the distance), and $\beta\geq 1$ is a parameter to control the influence of $\eta_{xy}$. $\tau_{xz}$ and $\eta_{xz}$ represent the trail level and attractiveness for the other possible state transitions.
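A sketch of this edge-selection rule in Python, assuming tau and eta are dictionaries that map each allowed next state to the trail level and attractiveness of the corresponding move:

import random

def transition_probabilities(tau, eta, allowed, alpha=1.0, beta=2.0):
    """p_xy = (tau_xy^alpha * eta_xy^beta) / sum over allowed z of
    (tau_xz^alpha * eta_xz^beta), as in the formula above."""
    weights = {y: (tau[y] ** alpha) * (eta[y] ** beta) for y in allowed}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def choose_next_state(tau, eta, allowed, alpha=1.0, beta=2.0):
    """Sample the next state from the distribution computed above."""
    p = transition_probabilities(tau, eta, allowed, alpha, beta)
    states = list(p)
    return random.choices(states, weights=[p[y] for y in states])[0]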
=== Pheromone update ===
Trails are usually updated when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively. An example of a global pheromone updating rule is

$$\tau_{xy}\leftarrow(1-\rho)\tau_{xy}+\sum_{k=1}^{m}\Delta\tau_{xy}^{k}$$

where $\tau_{xy}$ is the amount of pheromone deposited for a state transition $xy$, $\rho$ is the pheromone evaporation coefficient, $m$ is the number of ants and $\Delta\tau_{xy}^{k}$ is the amount of pheromone deposited by the $k$th ant, typically given for a TSP problem (with moves corresponding to arcs of the graph) by

$$\Delta\tau_{xy}^{k}={\begin{cases}Q/L_{k}&{\text{if ant }}k{\text{ uses curve }}xy{\text{ in its tour}}\\0&{\text{otherwise}}\end{cases}}$$

where $L_{k}$ is the cost of the $k$th ant's tour (typically length) and $Q$ is a constant.
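A sketch of this global update in Python, assuming tau maps arcs (x, y) to trail levels, each tour is given as a list of arcs, and rho and Q play the roles they have in the formulas above:

def global_pheromone_update(tau, tours, costs, rho=0.5, Q=1.0):
    """tau_xy <- (1 - rho) * tau_xy + sum over ants k of delta_tau_xy^k,
    with delta_tau_xy^k = Q / L_k on arcs used by ant k, else 0."""
    for arc in tau:
        tau[arc] *= (1.0 - rho)          # evaporation on every arc
    for tour, L_k in zip(tours, costs):
        for arc in tour:
            tau[arc] += Q / L_k          # deposit proportional to tour quality
    return tau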
== Common extensions ==
Here are some of the most popular variations of ACO algorithms.
=== Ant system (AS) ===
The ant system was the first ACO algorithm; it corresponds to the algorithm presented above and was developed by Dorigo.
=== Ant colony system (ACS) ===
In the ant colony system algorithm, the original ant system was modified in three aspects:
The edge selection is biased towards exploitation (i.e. favoring the probability of selecting the shortest edges with a large amount of pheromone);
While building a solution, ants change the pheromone level of the edges they are selecting by applying a local pheromone updating rule;
At the end of each iteration, only the best ant is allowed to update the trails by applying a modified global pheromone updating rule.
=== Elitist ant system ===
In this algorithm, the global best solution deposits pheromone on its trail after every iteration (even if this trail has not been revisited), along with all the other ants. The objective of the elitist strategy is to direct the search of all ants so that they construct solutions containing links of the current best route.
=== Max-min ant system (MMAS) ===
This algorithm controls the maximum and minimum pheromone amounts on each trail. Only the global best tour or the iteration best tour is allowed to add pheromone to its trail. To avoid stagnation of the search algorithm, the range of possible pheromone amounts on each trail is limited to an interval [τmin, τmax]. All edges are initialized to τmax to force a higher exploration of solutions, and the trails are reinitialized to τmax when nearing stagnation.
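The bounding step at the heart of MMAS can be expressed in a few lines; a sketch, assuming trails are stored in a dictionary keyed by edge:

def clamp_trails(tau, tau_min, tau_max):
    """MMAS bound: keep every trail inside [tau_min, tau_max]
    so no edge becomes unselectable or completely dominant."""
    for arc in tau:
        tau[arc] = max(tau_min, min(tau_max, tau[arc]))
    return tau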
=== Rank-based ant system (ASrank) ===
All solutions are ranked according to their length. Only a fixed number of the best ants in the iteration are allowed to update their trails. The amount of pheromone deposited is weighted for each solution, such that solutions with shorter paths deposit more pheromone than solutions with longer paths.
=== Parallel ant colony optimization (PACO) ===
An ant colony system (ACS) with communication strategies was developed in which the artificial ants are partitioned into several groups. Seven communication methods for updating the pheromone level between groups were proposed and tested on the traveling salesman problem.
=== Continuous orthogonal ant colony (COAC) ===
The pheromone deposit mechanism of COAC enables ants to search for solutions collaboratively and effectively. By using an orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently, with enhanced global search capability and accuracy. The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms to deliver wider advantages in solving practical problems.
=== Recursive ant colony optimization ===
Recursive ant colony optimization is a recursive form of the ant system which divides the whole search domain into several subdomains and solves the objective on each of them. The results from all the subdomains are compared, and the best few of them are promoted to the next level. The subdomains corresponding to the selected results are further subdivided, and the process is repeated until an output of the desired precision is obtained. This method has been tested on ill-posed geophysical inversion problems and works well.
== Convergence ==
For some versions of the algorithm, it is possible to prove that it is convergent (i.e., able to find the global optimum in finite time). The first proof of convergence for an ant colony algorithm was given in 2000 for the graph-based ant system algorithm, and later for the ACS and MMAS algorithms. As with most metaheuristics, it is very difficult to estimate the theoretical speed of convergence. A performance analysis of a continuous ant colony algorithm with respect to its various parameters (edge selection strategy, distance measure metric, and pheromone evaporation rate) showed that its performance and rate of convergence are sensitive to the chosen parameter values, especially to the value of the pheromone evaporation rate. In 2004, Zlochin and his colleagues showed that ACO-type algorithms are closely related to stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms. They proposed the umbrella term "model-based search" to describe this class of metaheuristics.
== Applications ==
Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multiple targets, and parallel implementations.
It has also been used to produce near-optimal solutions to the travelling salesman problem. Ant colony algorithms have an advantage over simulated annealing and genetic algorithm approaches to similar problems when the graph may change dynamically: the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
The first ACO algorithm was called the ant system, and it was aimed at solving the travelling salesman problem, in which the goal is to find the shortest round-trip linking a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to some rules (a minimal Python sketch implementing them follows the list):
It must visit each city exactly once;
A distant city has less chance of being chosen (the visibility);
The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;
Having completed its journey, the ant deposits more pheromones on all edges it traversed, if the journey is short;
After each iteration, trails of pheromones evaporate.
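The following is a minimal, self-contained sketch of these rules in Python. The distance matrix and all parameter values here are illustrative assumptions rather than canonical choices, and the sketch implements the plain ant system rather than any of the later variants.

import random

def ant_system_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   rho=0.5, Q=1.0):
    """Minimal ant system for the TSP: each ant builds a round trip,
    then trails evaporate and are reinforced on short tours."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                  # initial trails
    eta = [[1.0 / dist[i][j] if i != j else 0.0          # visibility 1/d
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:                         # visit each city once
                i = tour[-1]
                choices = [j for j in range(n) if j not in visited]
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta)
                           for j in choices]
                nxt = random.choices(choices, weights=weights)[0]
                tour.append(nxt)
                visited.add(nxt)
            tours.append(tour)

        for i in range(n):                               # evaporation everywhere
            for j in range(n):
                tau[i][j] *= (1.0 - rho)

        for tour in tours:                               # deposit Q/L on used edges
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length                  # symmetric distances assumed

    return best_tour, best_len

For a small symmetric instance such as dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]], the function returns a complete tour of the three cities together with its length.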
=== Scheduling problem ===
Sequential ordering problem (SOP)
Job-shop scheduling problem (JSP)
Open-shop scheduling problem (OSP)
Permutation flow shop problem (PFSP)
Single machine total tardiness problem (SMTTP)
Single machine total weighted tardiness problem (SMTWTP)
Resource-constrained project scheduling problem (RCPSP)
Group-shop scheduling problem (GSP)
Single-machine total tardiness problem with sequence dependent setup times (SMTTPDST)
Multistage flowshop scheduling problem (MFSP) with sequence dependent setup/changeover times
Assembly sequence planning (ASP) problems
=== Vehicle routing problem ===
Capacitated vehicle routing problem (CVRP)
Multi-depot vehicle routing problem (MDVRP)
Period vehicle routing problem (PVRP)
Split delivery vehicle routing problem (SDVRP)
Stochastic vehicle routing problem (SVRP)
Vehicle routing problem with pick-up and delivery (VRPPD)
Vehicle routing problem with time windows (VRPTW)
Time dependent vehicle routing problem with time windows (TDVRPTW)
Vehicle routing problem with time windows and multiple service workers (VRPTWMS)
=== Assignment problem ===
Quadratic assignment problem (QAP)
Generalized assignment problem (GAP)
Frequency assignment problem (FAP)
Redundancy allocation problem (RAP)
=== Set problem ===
Set cover problem (SCP)
Partition problem (SPP)
Weight constrained graph tree partition problem (WCGTPP)
Arc-weighted l-cardinality tree problem (AWlCTP)
Multiple knapsack problem (MKP)
Maximum independent set problem (MIS)
=== Device sizing problem in nanoelectronics physical design ===
Ant colony optimization (ACO)-based optimization of a 45 nm CMOS-based sense amplifier circuit could converge to optimal solutions in minimal time.
ACO-based reversible circuit synthesis could improve efficiency significantly.
=== Antennas optimization and synthesis ===
Ant colony algorithms can be used to optimize the form of antennas. Examples include RFID-tag antennas based on ant colony algorithms (ACO), using 10×10 loopback and unloopback vibrators.
=== Image processing ===
The ACO algorithm is used in image processing for image edge detection and edge linking.
Edge detection:
The graph here is the 2-D image, and the ants traverse from one pixel to another, depositing pheromone as they move. The movement of ants from one pixel to another is directed by the local variation of the image's intensity values. This movement causes the highest density of pheromone to be deposited at the edges.
The following are the steps involved in edge detection using ACO:
Step 1: Initialization. Randomly place $K$ ants on the image $I_{M_{1}M_{2}}$, where $K=(M_{1}\cdot M_{2})^{1/2}$. The pheromone matrix $\tau_{(i,j)}$ is initialized with random values. The major challenge in the initialization process is determining the heuristic matrix.
There are various methods to determine the heuristic matrix. For the example below, the heuristic matrix is calculated from the local statistics at the pixel position $(i,j)$:
$$\eta_{(i,j)}={\frac{1}{Z}}\,Vc(I_{(i,j)}),$$
where $I$ is the image of size $M_{1}\times M_{2}$,

$$Z=\sum_{i=1}^{M_{1}}\sum_{j=1}^{M_{2}}Vc(I_{i,j})$$

is a normalization factor, and
$$\begin{aligned}Vc(I_{i,j})=&f\left(\left\vert I_{(i-2,j-1)}-I_{(i+2,j+1)}\right\vert+\left\vert I_{(i-2,j+1)}-I_{(i+2,j-1)}\right\vert\right.\\&+\left\vert I_{(i-1,j-2)}-I_{(i+1,j+2)}\right\vert+\left\vert I_{(i-1,j-1)}-I_{(i+1,j+1)}\right\vert\\&+\left\vert I_{(i-1,j)}-I_{(i+1,j)}\right\vert+\left\vert I_{(i-1,j+1)}-I_{(i-1,j-1)}\right\vert\\&+\left.\left\vert I_{(i-1,j+2)}-I_{(i-1,j-2)}\right\vert+\left\vert I_{(i,j-1)}-I_{(i,j+1)}\right\vert\right)\end{aligned}$$
The function $f(\cdot)$ can be calculated using one of the following:
$$f(x)=\lambda x,\quad{\text{for }}x\geq 0\qquad(1)$$

$$f(x)=\lambda x^{2},\quad{\text{for }}x\geq 0\qquad(2)$$

$$f(x)={\begin{cases}\sin\left({\frac{\pi x}{2\lambda}}\right),&{\text{for }}0\leq x\leq\lambda\\0,&{\text{else}}\end{cases}}\qquad(3)$$

$$f(x)={\begin{cases}\pi x\sin\left({\frac{\pi x}{2\lambda}}\right),&{\text{for }}0\leq x\leq\lambda\\0,&{\text{else}}\end{cases}}\qquad(4)$$
The parameter $\lambda$ in each of the above functions adjusts the respective function's shape.
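A sketch of this Step 1 heuristic computation with NumPy, using function (1), f(x) = λx. The pixel-pair offsets follow the definition of Vc above; leaving a two-pixel border at zero is a simplifying assumption:

import numpy as np

def heuristic_matrix(img, lam=1.0):
    """Compute eta(i, j) = Vc(I_ij) / Z using f(x) = lambda * x.
    img is a 2-D array of intensities; a 2-pixel border stays zero."""
    I = img.astype(float)
    Vc = np.zeros_like(I)
    # pixel-pair offsets taken from the definition of Vc above
    pairs = [((-2, -1), (2, 1)), ((-2, 1), (2, -1)),
             ((-1, -2), (1, 2)), ((-1, -1), (1, 1)),
             ((-1, 0), (1, 0)), ((-1, 1), (-1, -1)),
             ((-1, 2), (-1, -2)), ((0, -1), (0, 1))]
    M1, M2 = I.shape
    for i in range(2, M1 - 2):
        for j in range(2, M2 - 2):
            x = sum(abs(I[i + a, j + b] - I[i + c, j + d])
                    for (a, b), (c, d) in pairs)
            Vc[i, j] = lam * x                      # f(x) = lambda * x
    Z = Vc.sum()                                    # normalization factor
    return Vc / Z if Z > 0 else Vc                  # eta = Vc / Z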
Step 2: Construction process. The ant's movement is based on 4-connected or 8-connected pixels. The probability with which an ant moves is given by the edge-selection probability $P_{x,y}$ defined earlier.

Step 3 and step 5: Update process. The pheromone matrix is updated twice: in step 3 the trail of the ant (given by $\tau_{(x,y)}$) is updated, whereas in step 5 the evaporation rate of the trail is updated, which is given by:
$$\tau_{new}\leftarrow(1-\psi)\tau_{old}+\psi\tau_{0},$$

where $\psi$ is the pheromone decay coefficient and $0<\tau<1$.
Step 7: Decision process. Once the $K$ ants have moved a fixed distance $L$ for $N$ iterations, the decision whether each pixel is an edge or not is based on a threshold $T$ applied to the pheromone matrix $\tau$. The threshold for the example below is calculated based on Otsu's method.
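A sketch of this decision step, assuming the final pheromone matrix is a 2-D array; threshold_otsu from scikit-image is used as one common implementation of Otsu's method:

import numpy as np
from skimage.filters import threshold_otsu  # one common Otsu implementation

def edges_from_pheromone(tau):
    """Classify pixel (i, j) as an edge when its pheromone value
    exceeds the Otsu threshold T computed over the whole matrix."""
    arr = np.asarray(tau, dtype=float)
    T = threshold_otsu(arr)
    return arr > T        # boolean edge map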
Image edge detected using ACO: the images below were generated using the different functions given by equations (1) to (4).
Edge linking: ACO has also proven effective in edge linking algorithms.
=== Other applications ===
Bankruptcy prediction
Classification
Connection-oriented network routing
Connectionless network routing
Data mining
Discounted cash flows in project scheduling
Distributed information retrieval
Energy and electricity network design
Grid workflow scheduling problem
Inhibitory peptide design for protein protein interactions
Intelligent testing system
Power electronic circuit design
Protein folding
System identification
== Definition difficulty ==
With an ACO algorithm, the shortest path in a graph between two points A and B is built from a combination of several paths. It is not easy to give a precise definition of what is or is not an ant colony algorithm, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as population-based metaheuristics in which each solution is represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. In their versions for combinatorial problems, they use an iterative construction of solutions. According to some authors, the thing which distinguishes ACO algorithms from other relatives (such as algorithms to estimate the distribution of solutions or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, the best solution may eventually be found even though no individual ant proves effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travel the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists. The collective behaviour of social insects remains a source of inspiration for researchers. The wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", a very general framework into which ant colony algorithms fit.
== Stigmergy algorithms ==
There are in practice a large number of algorithms claiming to be "ant colonies" without always sharing the general framework of optimization by canonical ant colonies. In practice, the use of an exchange of information between ants via the environment (a principle called "stigmergy") is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behavior based on the search for food, the sorting of larvae, the division of labour and cooperative transportation.
== Related methods ==
Genetic algorithms (GA)
These maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution, with solutions being combined or mutated to alter the pool of solutions, with solutions of inferior quality being discarded.
Estimation of distribution algorithm (EDA)
An evolutionary algorithm that substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as probabilistic graphical models, from which new solutions can be sampled or generated from guided-crossover.
Simulated annealing (SA)
A related global optimization technique which traverses the search space by generating neighboring solutions of the current solution. A superior neighbor is always accepted. An inferior neighbor is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search.
Reactive search optimization
Focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution.
Tabu search (TS)
Similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest fitness of those generated. To prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
Artificial immune system (AIS)
Modeled on vertebrate immune systems.
Particle swarm optimization (PSO)
A swarm intelligence method.
Intelligent water drops (IWD)
A swarm-based optimization algorithm based on natural water drops flowing in rivers.
Gravitational search algorithm (GSA)
A swarm intelligence method.
Ant colony clustering method (ACCM)
A method that makes use of a clustering approach, extending ACO.
Stochastic diffusion search (SDS)
An agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial-functions.
== History ==
Chronology of ant colony optimization algorithms.
1959, Pierre-Paul Grassé invented the theory of stigmergy to explain the behavior of nest building in termites;
1983, Deneubourg and his colleagues studied the collective behavior of ants;
1988, Moyson and Manderick publish an article on self-organization among ants;
1989, the work of Goss, Aron, Deneubourg and Pasteels on the collective behavior of Argentine ants, which will give the idea of ant colony optimization algorithms;
1989, implementation of a model of behavior for food by Ebling and his colleagues;
1991, M. Dorigo proposed the ant system in his doctoral thesis (which was published in 1992). A technical report extracted from the thesis and co-authored by V. Maniezzo and A. Colorni was published five years later;
1994, Appleby and Steward of British Telecommunications Plc published the first application to telecommunications networks
1995, Gambardella and Dorigo proposed ant-q, the preliminary version of ant colony system, as the first extension of ant system;
1996, Gambardella and Dorigo proposed ant colony system
1996, publication of the article on ant system;
1997, Dorigo and Gambardella proposed ant colony system hybridized with local search;
1997, Schoonderwoerd and his colleagues published an improved application to telecommunication networks;
1998, Dorigo launches first conference dedicated to the ACO algorithms;
1998, Stützle proposes initial parallel implementations;
1999, Gambardella, Taillard and Agazzi proposed macs-vrptw, first multi ant colony system applied to vehicle routing problems with time windows,
1999, Bonabeau, Dorigo and Theraulaz publish a book dealing mainly with artificial ants
2000, special issue of the Future Generation Computer Systems journal on ant algorithms
2000, Hoos and Stützle invent the max-min ant system;
2000, first applications to the scheduling, scheduling sequence and the satisfaction of constraints;
2000, Gutjahr provides the first evidence of convergence for an algorithm of ant colonies
2001, the first use of ACO algorithms by companies (Eurobios and AntOptima);
2001, Iredi and his colleagues published the first multi-objective algorithm
2002, first applications in the design of schedule, Bayesian networks;
2002, Bianchi and her colleagues suggested the first algorithm for stochastic problem;
2004, Dorigo and Stützle publish the Ant Colony Optimization book with MIT Press
2004, Zlochin and Dorigo show that some algorithms are equivalent to the stochastic gradient descent, the cross-entropy method and algorithms to estimate distribution
2005, first applications to protein folding problems.
2012, Prabhakar and colleagues publish research relating to the operation of individual ants communicating in tandem without pheromones, mirroring the principles of computer network organization. The communication model has been compared to the Transmission Control Protocol.
2016, first application to peptide sequence design.
2017, successful integration of the multi-criteria decision-making method PROMETHEE into the ACO algorithm (HUMANT algorithm).
== References ==
== Publications (selected) ==
M. Dorigo, 1992. Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy.
M. Dorigo, V. Maniezzo & A. Colorni, 1996. "Ant System: Optimization by a Colony of Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 26 (1): 29–41.
M. Dorigo & L. M. Gambardella, 1997. "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem". IEEE Transactions on Evolutionary Computation, 1 (1): 53–66.
M. Dorigo, G. Di Caro & L. M. Gambardella, 1999. "Ant Algorithms for Discrete Optimization". Artificial Life, 5 (2): 137–172.
E. Bonabeau, M. Dorigo et G. Theraulaz, 1999. Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press. ISBN 0-19-513159-2
M. Dorigo & T. Stützle, 2004. Ant Colony Optimization, MIT Press. ISBN 0-262-04219-3
M. Dorigo, 2007. "Ant Colony Optimization". Scholarpedia.
C. Blum, 2005 "Ant colony optimization: Introduction and recent trends". Physics of Life Reviews, 2: 353-373
M. Dorigo, M. Birattari & T. Stützle, 2006 Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. TR/IRIDIA/2006-023
Mohd Murtadha Mohamad,"Articulated Robots Motion Planning Using Foraging Ant Strategy", Journal of Information Technology - Special Issues in Artificial Intelligence, Vol. 20, No. 4 pp. 163–181, December 2008, ISSN 0128-3790.
N. Monmarché, F. Guinand & P. Siarry (eds), "Artificial Ants", August 2010 Hardback 576 pp. ISBN 978-1-84821-194-0.
A. Kazharov, V. Kureichik, 2010. "Ant colony optimization algorithms for solving transportation problems", Journal of Computer and Systems Sciences International, Vol. 49. No. 1. pp. 30–43.
C-M. Pintea, 2014, Advances in Bio-inspired Computing for Combinatorial Optimization Problem, Springer ISBN 978-3-642-40178-7
K. Saleem, N. Fisal, M. A. Baharudin, A. A. Ahmed, S. Hafizah and S. Kamilah, "Ant colony inspired self-optimized routing protocol based on cross layer architecture for wireless sensor networks", WSEAS Trans. Commun., vol. 9, no. 10, pp. 669–678, 2010. ISBN 978-960-474-200-4
K. Saleem and N. Fisal, "Enhanced Ant Colony algorithm for self-optimized data assured routing in wireless sensor networks", Networks (ICON) 2012 18th IEEE International Conference on, pp. 422–427. ISBN 978-1-4673-4523-1
Abolmaali S, Roodposhti FR. Portfolio Optimization Using Ant Colony Method a Case Study on Tehran Stock Exchange. Journal of Accounting. 2018 Mar;8(1).
== External links ==
Scholarpedia Ant Colony Optimization page
Ant Colony Optimization Home Page
"Ant Colony Optimization" - Russian scientific and research community
AntSim - Simulation of Ant Colony Algorithms
MIDACO-Solver General purpose optimization software based on ant colony optimization (Matlab, Excel, VBA, C/C++, R, C#, Java, Fortran and Python)
University of Kaiserslautern, Germany, AG Wehn: Ant Colony Optimization Applet Visualization of Traveling Salesman solved by ant system with numerous options and parameters (Java Applet)
Ant algorithm simulation (Java Applet)
Java Ant Colony System Framework
Ant Colony Optimization Algorithm Implementation (Python Notebook) | Wikipedia/Ant_colony_optimization_algorithm |
The energy industry refers to all of the industries involved in the production and sale of energy, including fuel extraction, manufacturing, refining and distribution. Modern society consumes large amounts of fuel, and the energy industry is a crucial part of the infrastructure and maintenance of society in almost all countries.
In particular, the energy industry comprises:
the fossil fuel industries, which include petroleum industries (oil companies, petroleum refiners, fuel transport and end-user sales at gas stations), coal industries (extraction and processing), and the natural gas industries (natural gas extraction, and coal gas manufacture, as well as distribution and sales);
the electrical power industry, including electricity generation, electric power distribution, and sales;
the nuclear power industry;
the renewable energy industry, comprising alternative energy and sustainable energy companies, including those involved in hydroelectric power, wind power, and solar power generation, and the manufacture, distribution and sale of alternative fuels; and,
traditional energy industry based on the collection and distribution of firewood, the use of which, for cooking and heating, is particularly common in poorer countries.
The increased dependence during the 20th century on carbon-emitting energy sources, such as fossil fuels, and carbon-emitting renewables, such as biomass, means that the energy industry has frequently contributed to pollution and environmental impacts on the economy. Until recently, fossil fuels were the primary source of energy generation in most parts of the world and are a significant contributor to global warming and pollution. Many economies are investing in renewable and sustainable energy to limit global warming and reduce air pollution.
== History ==
The use of energy has been key to the development of human societies, helping them to control and adapt to the environment. Managing the use of energy is inevitable in any functional society. In the industrialized world, the development of energy resources has become essential for agriculture, transportation, waste collection, information technology, and communications, which have become prerequisites of a developed society. The increasing use of energy since the Industrial Revolution has also brought with it a number of serious problems, some of which, such as global warming, present potentially grave risks to the world.
In some industries, the word energy is used as a synonym for energy resources, which refer to substances like fuels, petroleum products, and electricity in general. This is because a significant portion of the energy contained in these resources can easily be extracted to serve a useful purpose. After a useful process has taken place, the total energy is conserved. Still, the resource itself is not conserved since a process usually transforms the energy into unusable forms (such as unnecessary or excess heat).
Ever since humanity discovered the various energy resources available in nature, it has been inventing devices, known as machines, that make life more comfortable by using those resources. Thus, although primitive man knew the utility of fire to cook food, the invention of devices like gas burners and microwave ovens added new ways of using energy. The trend is the same in every other field of social activity, be it the construction of social infrastructure, the manufacture of textiles for covering, printing and decorating, air conditioning, the communication of information, or the movement of people and goods (automobiles).
== Economics ==
Production and consumption of energy resources is very important to the global economy. All economic activity requires energy resources, whether to manufacture goods, provide transportation, or run computers and other machines.
Widespread demand for energy may encourage competing energy utilities and the formation of retail energy markets, which include an "Energy Marketing and Customer Service" (EMACS) sub-sector.
The energy sector accounts for 4.6% of outstanding leveraged loans, compared with 3.1% a decade ago, while energy bonds make up 15.7% of the $1.3 trillion junk bond market, up from 4.3% over the same period.
== Management ==
Since the cost of energy has become a significant factor in the performance of societies' economies, the management of energy resources has become crucial. Energy management involves using the available energy resources more effectively, that is, with minimum incremental costs. Simple management techniques can often save energy expenditures without incorporating fresh technology. Energy management is most often the practice of using energy more efficiently by eliminating energy wastage or balancing justifiable energy demand with appropriate energy supply. The process couples energy awareness with energy conservation.
== Classifications ==
=== Government ===
The United Nations developed the International Standard Industrial Classification, which is a list of economic and social classifications. There is no distinct classification for an energy industry, because the classification system is based on activities, products, and expenditures according to purpose.
Countries in North America use the North American Industry Classification System (NAICS). The NAICS sectors No. 21 and No. 22 (mining and utilities) might roughly define the energy industry in North America. This classification is used by the U.S. Securities and Exchange Commission.
=== Financial market ===
The Global Industry Classification Standard used by Morgan Stanley define the energy industry as comprising companies primarily working with oil, gas, coal and consumable fuels, excluding companies working with certain industrial gases.
== Environmental impact ==
Government encouragement in the form of subsidies and tax incentives for energy-conservation efforts has increasingly fostered the view of conservation as a major function of the energy industry: saving an amount of energy provides economic benefits almost identical to generating that same amount of energy. This is compounded by the fact that the economics of delivering energy tend to be priced for capacity as opposed to average usage. One of the purposes of a smart grid infrastructure is to smooth out demand so that capacity and demand curves align more closely.
Some parts of the energy industry generate considerable pollution, including toxic and greenhouse gases from fuel combustion, nuclear waste from the generation of nuclear power, and oil spillages as a result of petroleum extraction. Government regulations to internalize these externalities form an increasing part of doing business, and the trading of carbon credits and pollution credits on the free market may also result in energy-saving and pollution-control measures becoming even more important to energy providers.
Consumption of energy resources (e.g. turning on a light) requires resources and has an effect on the environment. Many electric power plants burn coal, oil or natural gas to generate electricity for energy needs. While burning these fossil fuels produces a readily available and instantaneous supply of electricity, it also generates air pollutants including carbon dioxide (CO2), sulfur dioxide and trioxide (SOx) and nitrogen oxides (NOx). Carbon dioxide is an important greenhouse gas, known to be responsible, along with methane, nitrous oxide, and fluorinated gases, for the rapid increase in global warming since the Industrial Revolution. Global temperature records of the 20th century are significantly higher than temperature records from thousands of years ago, reconstructed from ice cores taken in Arctic regions. Burning fossil fuels for electricity generation also releases trace metals such as beryllium, cadmium, chromium, copper, manganese, mercury, nickel, and silver into the environment, which also act as pollutants.
The large-scale use of renewable energy technologies would "greatly mitigate or eliminate a wide range of environmental and human health impacts of energy use". Renewable energy technologies include biofuels, solar heating and cooling, hydroelectric power, solar power, and wind power. Energy conservation and the efficient use of energy would also help.
In addition it is argued that there is also the potential to develop a more efficient energy sector. This can be done by:
Fuel switching in the power sector from coal to natural gas;
Power plant optimisation and other measures to improve the efficiency of existing CCGT power plants;
Combined heat and power (CHP), from micro-scale residential to large-scale industrial;
Waste heat recovery
Best available technology (BAT) offers supply-side efficiency levels far higher than global averages. The relative benefits of gas compared to coal are influenced by the development of increasingly efficient energy production methods. According to an impact assessment carried out for the European Commission, the efficiency of newly built coal-fired plants has increased to 46–49%, compared with coal plants built before the 1990s (32–40%). At the same time, gas can reach 58–59% efficiency levels with the best available technology, while combined heat and power can offer efficiency rates of 80–90%.
== Politics ==
Since energy now plays an essential role in industrial societies, the ownership and control of energy resources plays an increasing role in politics. At the national level, governments seek to influence the sharing (distribution) of energy resources among various sections of society through pricing mechanisms, or even who owns resources within their borders. They may also seek to influence the use of energy by individuals and businesses in an attempt to tackle environmental issues.
The most recent international political controversy regarding energy resources is in the context of the Iraq Wars. Some political analysts maintain that the hidden reason for both the 1991 and 2003 wars can be traced to strategic control of international energy resources. Others counter this analysis with the numbers related to its economics: according to the latter group of analysts, the U.S. has spent about $336 billion in Iraq, as compared with a background current value of a $25 billion per year budget for the entire U.S. oil import dependence.
=== Policy ===
Energy policy is the manner in which a given entity (often governmental) has decided to address issues of energy development, including energy production, distribution and consumption. The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques.
=== Security ===
Energy security is the intersection of national security and the availability of natural resources for energy consumption. Access to cheap energy has become essential to the functioning of modern economies. However, the uneven distribution of energy supplies among countries has led to significant vulnerabilities. Threats to energy security include the political instability of several energy-producing countries, the manipulation of energy supplies, competition over energy sources, attacks on supply infrastructure, as well as accidents, natural disasters, the funding of foreign dictators, rising terrorism, and the reliance of dominant countries on foreign oil supplies. The limited supplies, uneven distribution, and rising costs of fossil fuels, such as oil and gas, create a need to change to more sustainable energy sources in the foreseeable future. Given the extent of current dependence on oil and the approaching peak of oil production, economies and societies will begin to feel the decline of the resource they depend upon, and countries have moved to protect the resources that remain. Advances in renewable resources, such as geothermal, solar power, wind power, and hydroelectric power, have reduced the pressure on companies that produce the world's oil. Although these are not the only current and possible alternatives as oil depletes, the most critical issue is protecting these vital resources from future threats; the new resources will become more valuable as the price of exporting and importing oil increases with rising demand.
== Development ==
Producing energy to sustain human needs is an essential social activity, and a great deal of effort goes into the activity. While most of such effort is limited towards increasing the production of electricity and oil, newer ways of producing usable energy resources from the available energy resources are being explored. One such effort is to explore means of producing hydrogen fuel from water. Though hydrogen use is environmentally friendly, its production requires energy, and existing technologies for making it are not very efficient. Research is underway to explore enzymatic decomposition of biomass.
Other forms of conventional energy resources are also being used in new ways. Coal gasification and liquefaction are recent technologies that are becoming attractive after the realization that oil reserves, at present consumption rates, may be rather short lived. See alternative fuels.
Energy is the subject of significant research activities globally. For example, the UK Energy Research Centre is the focal point for UK energy research while the European Union has many technology programmes as well as a platform for engaging social science and humanities within energy research.
== Transportation ==
All societies require materials and food to be transported over distances, generally against some force of friction. Since application of force over distance requires the presence of a source of usable energy, such sources are of great worth in society.
While energy resources are an essential ingredient for all modes of transportation in society, the transportation of energy resources is becoming equally important. Energy resources are frequently located far from the place where they are consumed. Therefore, their transportation is always in question. Some energy resources like liquid or gaseous fuels are transported using tankers or pipelines, while electricity transportation invariably requires a network of grid cables. The transportation of energy, whether by tanker, pipeline, or transmission line, poses challenges for scientists and engineers, policy makers, and economists to make it more risk-free and efficient.
== Crisis ==
Economic and political instability can lead to an energy crisis. Notable oil crises are the 1973 oil crisis and the 1979 oil crisis. The advent of peak oil, the point in time when the maximum rate of global petroleum extraction is reached, will likely precipitate another energy crisis.
== Mergers and acquisitions ==
Between 1985 and 2018 there were around 69,932 deals in the energy sector, with an overall value of US$9,578 billion. The most active year was 2010, with about 3,761 deals. In terms of value, 2007 was the strongest year (US$684 billion), followed by a steep decline until 2009 (−55.8%).
== See also ==
== References ==
== Further reading ==
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
Bradley, Robert (2004). Energy: The Master Resource. Kendall Hunt. p. 252. ISBN 978-0757511691.
Fouquet, Roger, and Peter J.G. Pearson. "Seven Centuries of Energy Services: The Price and Use of Light in the United Kingdom (1300-2000)". Energy Journal 27.1 (2006).
Gales, Ben, et al. "North versus South: Energy transition and energy intensity in Europe over 200 years". European Review of Economic History 11.2 (2007): 219–253.
Nye, David E. Consuming power: A social history of American energies (MIT Press, 1999)
Pratt, Joseph A. Exxon: Transforming Energy, 1973–2005 (2013) 600pp
Smil, Vaclav (1994). Energy in World History. Westview Press. ISBN 978-0-8133-1902-5.
Stern, David I. "The role of energy in economic growth". Annals of the New York Academy of Sciences 1219.1 (2011): 26-51.
Warr, Benjamin, et al. "Energy use and economic development: A comparative analysis of useful work supply in Austria, Japan, the United Kingdom and the US during 100 years of economic growth". Ecological Economics 69.10 (2010): 1904–1917.
Yergin, Daniel (2011). The Quest: Energy, Security, and the Remaking of the Modern World. Penguin. p. 816. ISBN 978-1594202834. | Wikipedia/Energy_company |
Xcel Energy Inc. is a U.S. regulated electric utility and natural gas delivery company based in Minneapolis, Minnesota, serving more than 3.7 million electric customers and 2.1 million natural gas customers across parts of eight states (Colorado, Minnesota, Wisconsin, Michigan, North Dakota, South Dakota, Texas and New Mexico). It consists of four operating subsidiaries: Northern States Power-Minnesota, Northern States Power-Wisconsin, Public Service Company of Colorado, and Southwestern Public Service Co.
In December 2018, Xcel Energy announced it would deliver 100 percent clean, carbon-free electricity by 2050, with an 80 percent carbon reduction by 2030 (from 2005 levels). This made Xcel the first major US utility to set such a goal.
== History ==
Xcel Energy was built on three companies: Minneapolis-based Northern States Power Company (NSP), Denver-based Public Service Company of Colorado (PSCo), and Amarillo-based Southwestern Public Service (SPS). Southwestern Public Service Co. (SPS) dates its origins to 1904 and the Pecos Valley in New Mexico when Maynard Gunsell received an electricity franchise for the city of Roswell, New Mexico and its 2,000 residents. The financial strain of creating this new enterprise soon overwhelmed him and he sold the franchise to W.H. Gillenwater, who named his utility the Roswell Electric Light Co. He later sold the company to an investment firm in Cleveland, Ohio, which already owned the Roswell Gas Co.
Northern States Power Company's timeline begins with the organization of the Washington County Light & Power Co. in 1909. When H. M. Byllesby began building his utility holding company across the Northwestern region of the US, he renamed it the Consumers Power Co. in 1910; it was renamed the Northern States Power Co. in 1916. While the bulk of NSP's territory grew across central and southern Minnesota (starting from the Twin Cities), it acquired territory in North Dakota (centering on Fargo, Grand Forks, and Minot) and extended southwest into South Dakota (centering on Sioux Falls). NSP's system also extended east into Wisconsin, but because of utility ownership laws in that state, it was operated as an entity separate from the rest of the company.
Public Service Company of Colorado (PSCo) was formed in 1923 to provide an electric generating station for the Denver area. By 1924, it had acquired most of the electric companies in northern and central Colorado. Originally a subsidiary of Cities Service Company, it became an independent autonomous operation in November 1943. By this time, it served 80 percent of Colorado's gas and electricity needs. As demand for energy continued to grow, so did PSCo. Eventually, the company merged with SPS to form New Century Energies (NCE) in 1995.
Northern States Power and Wisconsin Energy Corporation had planned to merge into a new company that was to be called Primergy, but in 1997 the merger fell through because of the time it was taking to gain the required approvals from state and federal agencies. After the failed Primergy merger, NSP (both the Minnesota and Wisconsin companies) merged with New Century Energies to form Xcel Energy. In 2005, Xcel sold Cheyenne Light, Fuel and Power to Black Hills Corporation. Cheyenne Light, Fuel and Power had been a subsidiary of PSCo since the 1920s, and had become an operating company of NCE after the merger with SPS.
In December 2018, Xcel Energy became the first major US utility to pledge to go carbon-free, aiming for 80% carbon reduction by 2030, and 100% reduction by 2050.
Utility industry magazine Utility Dive awarded Xcel Energy its 2018 "Utility of the Year" award for its plans to add 12 wind farms, its project with Google to develop new ways for customers to personalize energy management, and its plan to retire 50 percent of its coal-powered capacity by 2026 (replacing it with a combination of renewable energy, efficiency, and natural gas).
On May 20, 2019, Xcel Energy announced its intent to close all of its remaining coal-fired plants in Minnesota by 2030 while compensating by increasing solar production capacity by 1,400%. It also declared its plans to continue operating its Monticello nuclear plant near Monticello, Minnesota, until at least 2040.
== Generation portfolio ==
Xcel Energy primarily provides energy from wind, natural gas, and coal. It has increased its share of carbon-free energy generation over time, primarily by adding wind capacity, and has lowered its greenhouse gas emissions by reducing coal consumption in favor of natural gas. Xcel plans to continue this trend by converting some coal plants to natural gas and closing others. Some environmental activists argue that Xcel is not acting quickly enough on its plans to stop using coal plants.
Xcel Energy owns and operates three wind farms. In October 2011, Xcel Energy set a world record for electricity from wind power, with an hourly penetration of 55.6% of production from wind. At peak generation, wind is the largest source of energy capacity for Xcel.
Xcel Energy owns and operates two nuclear power plants:
Monticello Nuclear Generating Plant near Monticello, Minnesota
Prairie Island Nuclear Generating Plant near Red Wing, Minnesota
and stores the spent fuel from these nuclear plants on site in independent spent fuel storage installations (ISFSIs).
Biomass electricity comes from organic fuel sources. Xcel Energy has contracts for about 110 megawatts of electricity from biomass generators. Two in northern Minnesota are fueled by forest harvest residue, such as treetops and limbs. A third facility, brought on line in 2007 in western Minnesota, generates power using turkey litter.
Xcel Energy's Bay Front plant in Ashland, Wisconsin, is a three-unit generating station that has become a model for the creative use of fuels: coal, waste wood, railroad ties, discarded tires, natural gas, and petroleum coke. Two of the three Bay Front operating units already use biomass as their primary fuel. Xcel Energy recently proposed a plan to install biomass gasification technology at Bay Front. The waste-to-energy facilities use waste that would otherwise end up in landfills. The Wisconsin waste-to-energy plant burns wood waste in combination with refuse-derived fuel (RDF).
== Transmission and distribution ==
=== Transmission ===
Electric power transmission refers to the high-voltage power lines used for long-distance transport of electric energy. Xcel Energy owns and operates 110,000 miles of such lines. Xcel has proposed significant plans for updating its transmission system, with a budget of $7.4 billion for 2022 to 2027.
The Colorado Power Pathway, approved by regulators in 2022, gives Xcel $1.7 billion to expand transmission infrastructure in eastern Colorado. Construction is subcontracted to Quanta Infrastructure Services Group, and current timelines have most of the project completed by 2027.
Another proposal in Colorado involved nearly $3 billion in new investment; this was more than nine times the state budget and was in addition to the budget for the Colorado Power Pathway program.
Expanding transmission infrastructure is important for adding new renewable systems to the grid, but the cost can be significant. Colorado regulators did not approve Xcel's plan to build additional transmission in Baca County, an area with large wind energy potential. Instead, they sought proposals involving renewable energy sources connected to the existing grid.
In Minnesota, the budget for a new 465-mile transmission line doubled to $1.14 billion, which Xcel attributed to inflationary pressures. Opponents of Colorado's Power Pathway program highlighted that cost overruns in transmission construction would be pushed to consumers. The U.S. Energy Information Administration stated in 2023 that transmission lines typically cost $1.17 million to $8.62 million per mile.
Under the Power for the Plains initiative, Xcel Energy built hundreds of miles of transmission lines and substations totaling $3 billion in investment. The lines supply Kiowa, New Mexico, and Lubbock, Texas, and connect other towns in the Texas Panhandle.
The transmission system is operated on a non-discriminatory basis under the open access requirements of the Federal Government. This means that all wholesale buyers and sellers of electricity can use the transmission system under the same terms and conditions used to serve Xcel Energy's own retail customers.
=== Grid security ===
In 2017, Xcel Energy partnered with the Financial Services Information Sharing and Analysis Center to create a new "threat information sharing community" intended to share cyber and physical security intelligence with the energy sector. The new community is called the Energy Analytic Security Exchange (EASE). It is run by the FS-ISAC Sector Services team; FS-ISAC is an organization that gathers cyber and physical risk intelligence for the financial services industry. Additionally, the North American Electric Reliability Council (NERC) manages the Electricity Information Sharing and Analysis Center, which is another resource that the energy sector uses to gather threat intelligence.
==== Advanced Grid in Colorado ====
In 2016, Xcel Energy announced the Advanced Grid Intelligence and Security (AGIS) initiative, a long-term effort related to power reliability, distributed generation, and information sharing with customers. Through the initiative, Xcel would build an "intelligent grid" in Colorado in order to improve grid security. The company filed a request for permission with the Colorado Public Utilities Commission for the program, which would cost $500 million.
== Programs ==
Since 1998, Xcel Energy's Windsource program has allowed customers to designate that part or all of their electricity comes from a renewable energy source. In 2015, about 96,000 people were enrolled in Windsource. In 2011, more than 2.3 million electric and 261,800 natural gas customers took part in Xcel Energy's energy efficiency programs for homes and businesses.
Xcel Energy also offers customers incentives to install solar panels. At the end of 2011, more than 10,600 photovoltaic systems had been installed, with a capacity of about 121 megawatts (DC). In early 2011, Xcel Energy suspended the solar rebate program before reaching a settlement a month later with representatives of solar power companies to restore the solar incentive program until it is fully reviewed by the Public Utilities Commission.
== Controversies ==
On August 1, 2002, Xcel Energy Inc. was sued for engaging in "round-trip" energy trades that provided no economic benefit for the company, and for lacking the internal controls necessary to adequately monitor the trading of its power. Xcel paid $80,000,000 in a settlement.
=== Nuclear waste, radioactive water management ===
Xcel Energy's predecessor companies, and later Xcel Energy itself, have operated the Prairie Island Nuclear Power Plant since 1973. Over that time, nuclear waste produced by the power plant has been stored adjacent to the Prairie Island Indian Community; the homes of some members of the community are within 600 yards of the nuclear waste containment site. The island is very vulnerable to seasonal flooding because it sits on the bank of the Mississippi River. While the Minnesota Public Utilities Commission capped the storage of nuclear waste on the island at 17 casks in 1991, the legislature has since permitted this number to increase. Environmentalists and members of the Prairie Island Indian Community have been working since 1994 to have this nuclear waste transported away from their reservation, owing to the combined risks of the temporary design of the storage facility, unpredictable flooding, and the single evacuation road available in the event nuclear waste is released from containment.
On March 16, 2023, Xcel Energy announced that a significant unplanned release of radioactive water had taken place at its Monticello nuclear power plant on November 21, 2022. The leak had been reported to state and federal authorities but was withheld from the public until the announcement. Xcel estimated the leak at 400,000 gallons of contaminated water containing radioactive tritium; it occurred in a water pipe that runs between two buildings.
=== Fires ===
The Cabin Creek Fire occurred on October 2, 2007, at Xcel Energy's hydropower generation plant in Georgetown, Colorado, killing five workers employed by contractor RPI Coating. On June 1, 2011, federal prosecutors opened their case charging Xcel Energy with criminal liability for the workers' deaths; the jury found Xcel Energy not guilty. On December 19, 2011, RPI Coating pleaded guilty to workplace safety violations and paid $1.55 million in a cash settlement, taking responsibility for the deaths of the five workers and the injuries to three others.
Xcel Energy faces more than 200 lawsuits filed by victims of the Marshall Fire, which started on December 30, 2021, in Boulder County, Colorado, and spread through the towns of Superior and Louisville. Two people died and more than 1,000 homes and commercial properties were destroyed, resulting in more than $2 billion in property damage; it was the most destructive wildfire in Colorado history. The official investigation blamed the fire on a loose wire owned by Xcel and on a smoldering, week-old fire started by members of the Twelve Tribes.
Following the Smokehouse Creek Fire in the Texas Panhandle, which began on February 26, 2024, Xcel Energy acknowledged that its facilities played a role in the ignition of the fire. The fire became the largest wildfire in Texas history, burning more than 1 million acres and resulting in at least two deaths and extensive property damage and livestock losses. An investigation by the Texas A&M Forest Service found that power lines ignited the fire.
== See also ==
Plant X, Lamb County, Texas
== References ==
== External links ==
Official website
Business data for Xcel Energy Inc.
Gilead Sciences, Inc. is an American biopharmaceutical company headquartered in Foster City, California, that focuses on researching and developing antiviral drugs used in the treatment of HIV/AIDS, hepatitis B, hepatitis C, influenza, and COVID-19, including ledipasvir/sofosbuvir and sofosbuvir. Gilead is a member of the Nasdaq-100 and the S&P 100.
Gilead was founded in 1987 under the name Oligogen by Michael L. Riordan. The original name was a reference to oligonucleotides, small strands of DNA used to target genetic sequences. Gilead held its initial public offering in 1992, and successfully developed drugs like Tamiflu and Vistide that decade.
In the 2000s, Gilead received approval for drugs including Viread and Hepsera, among others. It began evolving from a biotechnology company into a pharmaceutical company, acquiring several subsidiaries, though it still relied heavily on contracting to manufacture its drugs.
The company continued its growth in the 2010s. However, it came under heavy scrutiny over its business practices, including extremely high pricing of drugs such as Sovaldi and Truvada in the United States relative to production cost and cost in the developing world.
== History ==
=== Foundation ===
In June 1987, Gilead Sciences was originally founded under the name Oligogen by Michael L. Riordan, a medical doctor. Riordan graduated from Washington University in St. Louis, the Johns Hopkins School of Medicine, and the Harvard Business School. The idea for Gilead began as a research project at Menlo Ventures, where Riordan was an associate. Three scientific advisers worked with Riordan to create the company: Peter Dervan of Caltech, Doug Melton of Harvard, and Harold M. Weintraub of the Fred Hutchinson Cancer Research Center. H. Dubose Montgomery, one of Menlo Ventures' founders, was also involved, and Menlo Ventures made the first investment in Gilead, of $2 million. Riordan served as CEO from the company's founding until 1996. He also recruited scientific advisers including Harold Varmus, a Nobel laureate who later became Director of the National Institutes of Health, and Jack Szostak, recipient of the Nobel Prize in Physiology or Medicine in 2009.
The company's primary therapeutic focus was in antiviral medicines, a field that piqued Riordan's interest after he contracted dengue fever. Riordan recruited Donald Rumsfeld to join the board of directors in 1988, followed by Benno C. Schmidt, Sr., Gordon Moore, and George P. Shultz. Riordan tried to recruit Warren Buffett as an investor and board member but was unsuccessful.
The company focused its early research on making small strands of DNA (oligomers, or more particularly, oligonucleotides) to target specific genetic code sequences, a form of gene therapy known as antisense therapy. According to Riordan, he had wanted to use the name Gilead Sciences all along, but used Oligogen as a temporary name while he resolved a trademark clearance issue with a California nonprofit organization that was already using the word Gilead in its name. He had first heard of the Balm of Gilead when he read Lanford Wilson's play Balm in Gilead while in medical school, then learned that naturally occurring acetylsalicylic acid (aspirin) had been found in modern times in a willow tree species from that part of the world, and was therefore inspired to name his company Gilead. After founding Oligogen, he contacted the nonprofit about the naming issue and secured the right to use the Gilead Sciences name in exchange for a $1,000 donation.
By 1988, the company had moved its headquarters to Foster City's Vintage Park neighborhood, where it has been based ever since. The company began to develop small molecule antiviral therapeutics in 1991, when the company in-licensed a group of nucleotide compounds including tenofovir.
Riordan later recalled that Gilead's first decade as a startup was an extremely stressful experience for him, as a young venture capitalist serving for the first time as the founder, chairman, and chief executive officer of his own biotech company. The new company had no products and very little income, and narrowly escaped going out of business on several occasions: "It was touch and go for a long time". Finding a way for Gilead to make money was Riordan's top priority "every second of the day for eight years".
=== 1990–1999: IPO ===
Gilead's antisense intellectual property portfolio was sold to Ionis Pharmaceuticals. Gilead debuted on the NASDAQ in January 1992. Its initial public offering raised $86.25 million in proceeds.
In June 1996, Gilead launched Vistide (cidofovir injection) for the treatment of cytomegalovirus (CMV) retinitis in patients with AIDS.
In January 1997, Donald Rumsfeld was appointed chairman, but left the board in January 2001 when he was appointed United States Secretary of Defense during George W. Bush's first term as president.
In March 1999, Gilead acquired NeXstar Pharmaceuticals of Boulder, Colorado. At the time, NeXstar's annual sales of $130 million were three times Gilead's; it sold AmBisome, an injectable fungal treatment, and DaunoXome, an oncology drug taken by HIV patients. That same year, Roche announced FDA approval of Tamiflu (oseltamivir) for the treatment of influenza. Tamiflu was originally discovered by Gilead and licensed to Roche for late-phase development and marketing.
One reason for entering into the Tamiflu licensing agreement was that with only 350 employees, Gilead still did not yet have the capability to sell its drugs directly to overseas buyers. To avoid having to license future drugs in order to access international markets, Gilead simply acquired the 480-employee NeXstar, which had already built its own sales force in Europe to market AmBisome there.
=== 2000 to 2009 ===
Viread (tenofovir) achieved first approval in 2001 for the treatment of HIV.
In 2002, Gilead changed its corporate strategy to focus exclusively on antivirals, and sold its cancer assets to OSI Pharmaceuticals for $200 million.
In December 2002, Gilead and Triangle Pharmaceuticals announced that Gilead would acquire Triangle for around $464 million; Triangle's lead drug candidate, emtricitabine, was near FDA approval, and it had two other antivirals in its pipeline. The company also announced its first full year of profitability. Hepsera (adefovir) was approved for the treatment of chronic hepatitis B in 2002, and Emtriva (emtricitabine) for the treatment of HIV in 2003.
During this era, Gilead completed its gradual evolution from a biotech startup into a pharmaceutical company. The San Francisco Chronicle noted that by 2003, the Gilead corporate campus in Foster City had expanded to "seven low-slung sand-colored buildings around a tiny lake on which ducks happily paddle." Like many startups, Gilead originally leased its space, but in 2004, the company paid $123 million to buy all its headquarters buildings from its landlords. However, even as Gilead developed its ability to distribute and sell its own drugs, it remained distinct from most pharmaceutical companies in terms of its strong reliance on subcontracting most of its manufacturing to contract manufacturing organizations.
In 2004, during the avian flu pandemic scare, Gilead Sciences' revenue from Tamiflu almost quadrupled to $44.6 million as more than 60 national governments stockpiled the antiviral drug, though the firm had made a loss in 2003 before concern about the flu started. As the stock soared, US Defense Secretary Donald Rumsfeld sold shares of the company, receiving more than $5 million in capital gains, while still holding up to $25 million worth of shares by the end of the year. Sales of Tamiflu almost quadrupled again in 2005, to $161.6 million, during which time the share price tripled. A 2005 report showed that, in all, Rumsfeld owned shares worth up to $95.9 million, from which he received an income of up to $13 million.
In 2006, the company acquired Corus Pharma, Inc. for $365 million. The acquisition of Corus signaled Gilead's entry into the respiratory arena. Corus was developing aztreonam lysine for the treatment of patients with cystic fibrosis who are infected with Pseudomonas aeruginosa.
In July 2006, the U.S. Food and Drug Administration (FDA) approved Atripla, a once-daily single-tablet regimen for HIV, combining Sustiva (efavirenz), a Bristol-Myers Squibb product, and Truvada (emtricitabine and tenofovir disoproxil), a Gilead product.
Gilead purchased Raylo Chemicals, Inc. in November 2006 for US$133.3 million. Raylo Chemicals, based in Edmonton, Alberta, was a wholly owned subsidiary of Degussa AG, a German company, and a custom manufacturer of active pharmaceutical ingredients and advanced intermediates for the pharmaceutical and biopharmaceutical industries.
Later in the same year, Gilead acquired Myogen, Inc. for $2.5 billion, then its largest acquisition. With two drugs in development (ambrisentan and darusentan) and one marketed product (Flolan) for pulmonary diseases, the acquisition of Myogen solidified Gilead's position in this therapeutic arena. Under an agreement with GlaxoSmithKline, Myogen marketed Flolan (epoprostenol sodium) in the United States for the treatment of primary pulmonary hypertension. Additionally, Myogen was developing (in Phase 3 studies) darusentan, also an endothelin receptor antagonist, for the potential treatment of resistant hypertension.
Gilead expanded its move into respiratory therapeutics in 2007 by entering into a licensing agreement with Parion for an epithelial sodium channel inhibitor for the treatment of pulmonary diseases, including cystic fibrosis, chronic obstructive pulmonary disease and bronchiectasis.
In 2009, the company acquired CV Therapeutics, Inc. for $1.4 billion, bringing Ranexa, a cardiovascular drug used to treat chest pain related to coronary artery disease, and Lexiscan into Gilead; both products and the associated pipeline built out Gilead's cardiovascular franchise. Later that year, the company was named one of the Fastest Growing Companies by Fortune.
=== 2010 to 2019 ===
In 2010, the company acquired CGI Pharmaceuticals for $120 million, expanding Gilead's research expertise into kinase biology and chemistry. Later that year, the company acquired Arresto Biosciences, Inc. for $225 million, obtaining developmental-stage research for treating fibrotic diseases and cancer.
In February 2011, the company acquired Calistoga Pharmaceuticals for US$375 million ($225 million plus milestone payments), boosting Gilead's oncology and inflammation areas. Later that year, Gilead made its most important, and at the time most expensive, acquisition with the US$10.4 billion purchase of Pharmasset, Inc. This transaction helped cement Gilead as the leader in treatment of the hepatitis C virus by giving it control of sofosbuvir (see below).
In October 2011, Gilead broke ground on a massive multi-year expansion of its 17-building headquarters campus in Foster City. By replacing eight one- and two-story buildings with seven new structures of up to 10 stories, Gilead nearly doubled its headquarters real estate footprint, from about 620,000 square feet to about 1.2 million square feet.
On July 16, 2012, the FDA approved Gilead's Truvada for prevention of HIV infection (it was already approved for treating HIV). The pill was a preventive measure (PrEP) for people at high risk of getting HIV through sexual activity.
In 2013, the company acquired YM Biosciences, Inc. for $510 million. The acquisition brought the drug candidate CYT387, an orally administered, once-daily, selective inhibitor of the Janus kinase (JAK) family, specifically JAK1 and JAK2, into Gilead's oncology pipeline. The JAK enzymes have been implicated in myeloproliferative diseases, inflammatory disorders, and certain cancers.
In 2015, the company made a trio of acquisitions:
It bought Phenex Pharmaceuticals for $470 million. Its Farnesoid X receptor (FXR) program used small-molecule FXR agonists in the treatment of liver diseases such as non-alcoholic steatohepatitis.
It bought EpiTherapeutics for $65 million. This acquisition gave Gilead first-in-class small molecule inhibitors of histone demethylases involved in regulating gene transcription in cancer.
It paid $425 million for a 15% equity stake in Galapagos NV, with additional payments for Gilead to license the experimental anti-inflammatory drug filgotinib, which may treat rheumatoid arthritis, ulcerative colitis, and Crohn's disease.
In 2016, the company acquired Nimbus Apollo, Inc. for $400 million, giving Gilead control of the compound NDI-010976 (an ACC inhibitor) and other preclinical ACC inhibitors for the treatment of non-alcoholic steatohepatitis and the potential treatment of hepatocellular carcinoma. Also in 2016, the company was named the most generous company on the Fortune list of The Most Generous Companies of the Fortune 500; its charitable donations to HIV/AIDS and liver disease organizations had totaled over $440 million in 2015.
In August 2017, the company announced it would acquire Kite Pharma for $11.9 billion, equating to $180 cash per share, a 29% premium over the closing price of the shares. The deal was Gilead's entry into the cell therapy market and added a chimeric antigen receptor T cell (CAR-T) therapy candidate to the company's portfolio. By 2022 this acquisition had led to two marketed products for lymphoma: Yescarta and Tecartus. In November, the company announced it would acquire Cell Design Labs for up to $567 million, having already indirectly acquired a 12.2% stake via the Kite Pharma deal.
On May 9, 2019, the U.S. Department of Health and Human Services announced that Gilead Sciences would donate Truvada, the only drug then approved to prevent HIV infection, for free to 200,000 patients annually for 11 years. On December 3, 2019, HHS explained how the government would distribute the donated drugs: HHS Secretary Alex Azar said the U.S. government would pay Gilead $200 per 30-pill bottle to cover the costs of getting the drug from factories into the hands of patients.
=== 2020 onwards ===
In March 2020, the company announced it would acquire Forty Seven Inc. for $95.50 a share ($4.9 billion in total). On April 7, 2020, Gilead completed acquisition of Forty Seven, Inc. for "$95.50 per share, net to the seller in cash, without interest, or approximately $4.9 billion in the aggregate."
In June 2020, Bloomberg reported that AstraZeneca Plc had made a preliminary approach to Gilead for a potential merger, worth almost $240 billion. In the same month, the company announced it would acquire a 49.9% stake in privately held Pionyr Immunotherapeutics Inc for $275 million.
In September 2020, Gilead announced it had reached a deal to acquire Immunomedics for $21 billion ($88 per share), gaining control of the cancer treatment Trodelvy (Sacituzumab govitecan-hziy) – a first-in-class Trop-2 antibody-drug conjugate. In December, the business announced it would acquire German biotech, MYR GmbH, for €1.15 billion plus up to a further €300 million. MYR focuses on the treatment of chronic hepatitis delta virus.
On August 11, 2021, U.S. Senator Rand Paul disclosed that his wife Kelley Paul had purchased a stake in Gilead Sciences on February 26, 2020.
In November 2021, the company was added to the Dow Jones Sustainability World Index.
In January 2022, Gilead pulled its cancer drug Zydelig (idelalisib) from its accelerated approval in relapsed follicular B-cell non-Hodgkin lymphoma (FL) and relapsed small lymphocytic leukemia (SLL). In September, the company completed its acquisition of MiroBio for $405 million.
In February 2023, the business, through Kite Pharma, completed its acquisition of Tmunity Therapeutics. In May, the business announced it would acquire XinThera and its small molecule inhibitors.
In February 2024, the company acquired CymaBay Therapeutics, and in September, paid Genesis Therapeutics $35 million for AI-based drug discovery work.
In May 2025, Gilead Sciences announced it would pay $10 million for sole ownership of arenavirus immunotherapies for hepatitis B (HBV) and HIV resulting from the company's collaboration with Hookipa Pharma.
== Treatments for hepatitis C ==
The drug sofosbuvir had been part of the 2011 acquisition of Pharmasset. In 2013, the FDA approved the drug, under the trade name Sovaldi, as a treatment for the hepatitis C virus. Forbes magazine ranked Gilead fourth among drug companies, citing a market capitalization of US$113 billion and stock appreciation of 100%, and describing the 2011 purchase of Pharmasset for $11 billion as "one of the best pharma acquisitions ever". Deutsche Bank estimated Sovaldi sales in the year's final quarter would be $53 million, and Barron's noted the FDA approval and subsequent strong sales of the "potentially revolutionary" drug as a positive indicator for the stock.
On July 11, 2014, the United States Senate Committee on Finance investigated Sovaldi's high price ($1,000 per pill; $84,000 for the full 12-week regimen). Senators questioned the extent to which the market was operating "efficiently and rationally", and committee chairman Ron Wyden (D-Oregon) and ranking minority member Chuck Grassley (R-Iowa) wrote to CEO John C. Martin asking Gilead to justify the price for this drug. The committee hearings did not result in new law, but in 2014 and 2015, due to negotiated and mandated discounts, Sovaldi was sold well below the list price. For poorer countries, Gilead licensed multiple companies to produce generic versions of Sovaldi; in India, a pill's price was as low as $4.29.
Gilead later combined Sovaldi with other antivirals in single-pill combinations. First, Sovaldi was combined with ledipasvir and marketed as Harvoni. This treatment for hepatitis C cures the patient in 94% to 99% of cases (HCV genotype 1). By 2017, Gilead was reporting drastic drops in Sovaldi revenue from year to year, not only because of pricing pressure but because the number of suitable patients decreased. Later single-pill combinations were Epclusa (with velpatasvir) and Vosevi (with velpatasvir and voxilaprevir).
== Finances ==
For the fiscal year 2017, Gilead Sciences reported earnings of US$4.628 billion and annual revenue of US$26.107 billion, a decline of 14.1% over the previous fiscal cycle. Gilead Sciences's shares traded at over $70 per share, and its market capitalization was valued at US$93.4 billion in October 2018.
=== Prospects for the future ===
As of 2017, Gilead's challenge is to develop or acquire new blockbuster drugs before its current revenue-producers wane or their patent protection expires. Gilead benefited from the expansion of Medicaid in the ACA; Leerink analyst Geoffrey Porges wrote that Gilead's HIV drugs could face funding pressure under reform proposals. Gilead has $32 billion in cash, but $27.4 billion is outside the U.S. and is unavailable for acquisitions unless Gilead pays U.S. tax on it, though it could borrow against it. Gilead would benefit from proposals to let companies repatriate offshore capital with minimal further taxation.
Gilead's entospletinib has shown a 90% complete response rate in MLL-type acute myeloid leukemia (AML).
== Criticisms ==
=== TAF development delays ===
Several mass tort lawsuits have been filed against Gilead alleging that the company deliberately delayed development of antiretroviral drugs based on tenofovir alafenamide fumarate (TAF) in order to maximize profits from previous-generation medications containing tenofovir disoproxil fumarate (TDF). Plaintiffs allege that Gilead suspended TAF in 2004 despite clear evidence indicating that TAF-based medications were safer than TDF, a compound whose long-term use was associated with adverse side effects such as nephrotoxicity and bone density loss.
Gilead's first TAF medication, marketed under the trade name Genvoya, came out in 2015. Lawsuits allege that in the interim period, many HIV patients who continuously took Gilead's older TDF-based drugs suffered severe side effects, including nephrotoxicity.
=== Pricing ===
==== Biktarvy ====
In 2023, the Institute for Clinical and Economic Review (ICER) identified Biktarvy (bictegravir/emtricitabine/tenofovir alafenamide) as one of five high-expenditure drugs that experienced significant net price increases without new clinical evidence to justify the hikes. Specifically, Biktarvy's wholesale acquisition cost rose by 5.49%, leading to an additional $815 million in costs to U.S. payers.
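As an illustration of how these two figures relate, the sketch below treats the reported $815 million as the 5.49% increase applied to prior-year net spending; the implied baseline is an inference from that simplifying assumption, not a number reported by ICER.

```python
# Back-of-envelope reading of the ICER figures quoted above.
# The implied baseline spending is an inference from the two reported
# numbers under a simple model, not a figure reported by ICER itself.

price_increase = 0.0549    # reported net price increase (5.49%)
added_cost_usd = 815e6     # reported additional cost to U.S. payers

# If the added cost equals the increase applied to prior net spending:
implied_baseline = added_cost_usd / price_increase
print(f"Implied baseline spending: ${implied_baseline / 1e9:.1f} billion")
# -> roughly $14.8 billion
```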
==== Sovaldi ====
Gilead came under intense criticism for its high pricing of its patented drug sofosbuvir (sold under the brand name Sovaldi), used to treat hepatitis C. In the US, for instance, it was launched at $1,000 per pill or $84,000 for the standard 84-day course, but it was drastically cheaper in the developing world; in India, it dropped as low as $4.29 per pill. While Sovaldi represented a significant improvement over contemporary treatments, the controversy surrounding its price ignited a national debate in the US, according to Reuters.
The United States Senate Committee on Finance launched an 18-month investigation of Gilead's Sovaldi pricing, and argued in its 2015 report that Gilead set prices high in disregard of the human cost and in order to set the stage for a higher eventual price for Sovaldi's successor, Harvoni. The committee's investigation, based in part on internal documents obtained from Gilead, revealed that the company had considered prices ranging from $50,000 to $115,000 per year, trying to strike a balance between revenue and predicted activist and public relations blowback, with little regard to research and development costs.
The high prices forced state Medicaid programs to ration treatment, delaying therapy for less advanced hepatitis C cases. In Oregon, for example, 10,000 Medicaid patients were deemed good candidates for Sovaldi therapy, but the Oregon Health Authority estimated that treating half of these patients would more than double the state's total drug expenditures. The state thus opted to limit treatment to 500 patients per year.
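The scale of the dilemma can be checked against the $84,000 list price per course quoted above; the sketch below uses list prices only, so it overstates what Medicaid, with its mandatory rebates, would actually have paid.

```python
# Rough illustration of the Oregon Medicaid numbers, using the $84,000
# list price for a full Sovaldi course quoted earlier in this article.
# Actual Medicaid net prices were lower due to mandatory rebates, so
# these totals are upper bounds.

list_price_per_course = 84_000   # USD, full 12-week regimen at list price
candidates = 10_000              # Oregon patients deemed good candidates

cost_half = (candidates // 2) * list_price_per_course
cost_capped = 500 * list_price_per_course

print(f"Treating half: ${cost_half / 1e6:.0f} million")    # $420 million
print(f"500/year cap:  ${cost_capped / 1e6:.0f} million")  # $42 million
```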
==== Truvada and Descovy ====
Truvada was introduced to the market by Gilead in 2004 to treat HIV infections. In the following years, the United States government conducted research demonstrating that Truvada was able to prevent HIV infection. The US Centers for Disease Control holds the patent for this use of Truvada as pre-exposure prophylaxis (PrEP).
Gilead introduced Truvada for PrEP in 2012, at which point a prescription cost approximately $1,200 per month in the United States. By 2018, this price had increased to up to $2,000, despite the drug generally costing less than $100 outside the U.S. Gilead made over $3 billion in sales of Truvada in 2018.
The high price drew the ire of activist groups such as ACT UP and was the subject of a Congressional hearing in May 2019. Gilead's CEO defended its pricing in the hearing by noting the large sums the company spends on HIV/AIDS research. Activists pressured the US government to enforce its patent on Truvada in order to combat the high prices set by Gilead.
In May 2019, Gilead announced it would donate enough Truvada to treat up to 200,000 patients annually for up to 11 years, the result of discussions with the Department of Health and Human Services under Trump. Dr. Rochelle Walensky noted that the donations still covered less than one-fifth of the people who needed the drug, and argued the donation was possibly a move to help the company market Descovy, a more advanced successor drug. Walensky led a 2020 study that concluded the high costs of Descovy would on the whole negate any comparative advantage of prescribing it over a generic Truvada alternative.
In July 2021, Gilead announced it would decrease 340B Drug Pricing Program reimbursements to clinics serving primarily low-income communities; clinics argued this severely hinders their ability to provide HIV/AIDS prevention and treatment services among vulnerable populations.
=== Anticompetitive behavior ===
Gilead has also been accused of stifling competition. A lawsuit filed in the United States in 2019 alleged that the company entered "pay for delay" agreements with other manufacturers, wherein the manufacturers agreed to delay releasing generic versions of Truvada. In 2021, CVS Pharmacy and Rite Aid filed a lawsuit on similar grounds against Gilead, Bristol-Myers Squibb, and Teva Pharmaceuticals.
In response to criticisms over the price of Sovaldi, Gilead began licensing the rights to produce generic versions of the drug to select producers in India in 2015. Included in the licensing agreements were 'anti-diversion' provisions, designed to prevent the drug from being exported back to developed countries where the cheaper, generic alternatives were still unavailable. (In India, a one-month treatment cost approximately US$300, versus $1,000 per pill in the United States.) Gilead required the Indian producers to screen patients to determine who could buy Sovaldi, which was criticized by Médecins Sans Frontières since it could lead to the exclusion of vulnerable groups like refugees and migrants from accessing the medicines. In response to the criticism, Gilead eventually relaxed these requirements.
=== Tax structures and tax avoidance ===
Gilead has been criticized for tax avoidance. Tax avoidance, as opposed to tax evasion, is the use of legal means to shift profits from the jurisdiction where revenue is primarily generated to overseas affiliates that pay a lower tax rate.
A 2016 report by the liberal think tank Americans for Tax Fairness argued that Gilead was able to avoid up to $10 billion in taxes on U.S. sales through mechanisms such as transfer pricing, the sale of assets between affiliated entities. In particular, Gilead sells intellectual property to an Irish subsidiary, which then sells the finished products, such as Sovaldi, in the United States and elsewhere, paying the low Irish tax rate on profits. The practice is common among multinational pharmaceutical companies like Gilead.
On December 26, 2018, The Times reported that Gilead had used the Double Irish arrangement to avoid U.S. corporate taxes on global profits, stating that the firm "used a controversial tax loophole arrangement to shift almost €20 billion in profits through an Irish entity in just two years" without paying Irish taxes. The company repatriated a portion of the Irish subsidiary's holdings, $28 billion, to the United States in 2018 following reductions of the corporate tax rate. For this it paid an estimated $5.5 billion in tax.
== Remdesivir ==
Gilead sought and obtained orphan drug designation for remdesivir from the US Food and Drug Administration (FDA) on March 23, 2020. This designation is intended to encourage the development of drugs affecting fewer than 200,000 Americans by granting strengthened and extended legal monopoly rights to the manufacturer, along with waivers on taxes and government fees.
Remdesivir became a candidate for treating COVID-19; at the time the status was granted, fewer than 200,000 Americans had COVID-19, but numbers were climbing rapidly as the COVID-19 pandemic reached the US, and crossing the threshold soon was considered inevitable. Gilead retains 20-year remdesivir patents in more than 70 countries.
In 2021, remdesivir (tradename Veklury) generated more than $4.5 billion in annual revenues, and was Gilead's highest selling product.
=== COVID-19 ===
Emergency use authorization for remdesivir was granted in the U.S. on May 1, 2020, for people hospitalized with severe COVID-19. In September 2020, following a review of the evidence, the WHO issued guidance not to use remdesivir for people with COVID-19, as there was no good evidence of benefit. However, over 2020–22, with further clinical research, remdesivir was approved for treatment of hospitalized people with COVID-19 in the United States, European Union, and multiple other countries. In 2022, the Canadian component of the WHO's international Solidarity Trial reported that hospitalized people with COVID-19 treated with remdesivir had lower death rates (by about 4%) and reduced need for oxygen and mechanical ventilation compared with people receiving standard-of-care treatments.
==== Regulatory approval ====
Veklury received approval from the US Food and Drug Administration (FDA) in October 2020 for use in hospitalized adults and children 12 years and older for the treatment of severe COVID-19 infections. In January 2022, the FDA gave regulatory approval to Veklury for use in adults and children 12 years of age and older who weigh at least 40 kilograms (88 lb), test positive for COVID-19, are not hospitalized, and are at high risk of progression to severe COVID-19, including hospitalization or death.
The FDA also provided Emergency Use Authorization for Veklury treatment of children under age 12 who are COVID-positive and not hospitalized, but have mild-to-moderate COVID-19 with high risk of developing severe COVID-19, including hospitalization or death.
== References ==
== External links ==
Official website
Business data for Gilead Sciences, Inc.
A president is a leader of an organization, company, community, club, trade union, university or other group. The relationship between a president and a chief executive officer varies, depending on the structure of the specific organization. In a similar vein to a chief operating officer, the title of corporate president as a separate position (as opposed to being combined with a "C-suite" designation, such as "president and chief executive officer" or "president and chief operating officer") is also loosely defined; the president is usually the legally recognized highest rank of corporate officer, ranking above the various vice presidents (including senior vice president and executive vice president), but on its own generally considered subordinate, in practice, to the CEO. The powers of a president vary widely across organizations, and such powers come from specific authorization in the bylaws or in an adopted parliamentary authority such as Robert's Rules of Order (e.g. the president can make an "executive decision" only if the bylaws allow for it).
== History ==
Originally, the term president was used in the same way that foreman or overseer is used now (the term is still used in that sense today). It has now also come to mean "chief officer" in terms of administrative or executive duties.
== Powers and authority ==
The powers of the president vary widely across organizations. In some organizations the president has the authority to hire staff and make financial decisions; in others the president only makes recommendations to a board of directors; and in still others the president has no executive powers and is mainly a spokesperson for the organization. The amount of power given to the president depends on the type of organization, its structure, and the rules it has created for itself.
In addition to administrative or executive duties in organizations, a president has the duties of presiding over meetings. Such duties at meetings include:
calling the meeting to order
determining if a quorum is present
announcing the items on the order of business or agenda as they come up
recognition of members to have the floor
enforcing the rules of the group
putting all questions (motions) to a vote
adjourning the meeting
While presiding, a president remains impartial and does not interrupt speakers if a speaker has the floor and is following the rules of the group. In committees or small boards, the president votes along with the other members. However, in assemblies or larger boards, the president should vote only when it can affect the result. At a meeting, the president has only one vote (i.e. the president cannot vote twice and cannot override the decision of the group unless the organization has specifically given the president such authority).
== Disciplinary procedures ==
If the president exceeds the given authority, engages in misconduct, or fails to perform the duties, the president may face disciplinary procedures. Such procedures may include censure, suspension, or removal from office. The rules of the particular organization would provide details on who can perform these disciplinary procedures and the extent that they can be done. Usually, whoever appointed or elected the president has the power to discipline this officer.
== President-elect ==
Some organizations may have a position of president-elect in addition to the position of president. Generally, the membership of the organization elects a president-elect, and when the term of the president-elect is complete, that person automatically becomes president.
== Immediate past president ==
Some organizations may have a position of immediate past president in addition to the position of president. In those organizations, when the term of the president is complete, that person automatically fills the position of immediate past president. The organization can have such a position only if the bylaws provide it. The duties of such a position would also have to be provided in the bylaws.
== Life president ==
Life president is an honorary title often given to someone who has already served the organization for a long period in a major role.
== References ==
== Further reading ==
Bennett, Nathan; Stephen A. Miles (2006). Riding Shotgun: The Role of the COO. Stanford, California: Stanford University Press. ISBN 0-8047-5166-8.
National Association of Parliamentarians, Education Committee (1993). Spotlight on You the President. Independence, MO: National Association of Parliamentarians. ISBN 1-884048-15-3.
Diamondback Energy is an American company engaged in hydrocarbon exploration headquartered in Midland, Texas.
As of December 31, 2020, the company had 1,788 million barrels of oil equivalent (1.094×1010 GJ) of estimated proved reserves, of which 52% was petroleum, 24% was natural gas, and 24% was natural gas liquids. The company's reserves are all in the Permian Basin.
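The parenthetical gigajoule figure follows from the standard barrel-of-oil-equivalent definition (about 6.118 GJ, or 5.8 million BTU, per BOE); the short sketch below reproduces it under that assumption.

```python
# Check of the barrel-of-oil-equivalent (BOE) to gigajoule conversion in
# the reserve figure above. The conversion factor of ~6.118 GJ per BOE
# follows from the standard 5.8 MMBtu-per-BOE definition; treat it as an
# assumption for this sketch.

GJ_PER_BOE = 6.1178632          # 5.8 million BTU expressed in gigajoules

reserves_boe = 1_788e6          # 1,788 million BOE, year-end 2020
reserves_gj = reserves_boe * GJ_PER_BOE
print(f"{reserves_gj:.3e} GJ")  # ~1.094e+10 GJ, matching the article

# Reserve mix as reported (sums to 100%):
mix = {"petroleum": 0.52, "natural gas": 0.24, "natural gas liquids": 0.24}
assert abs(sum(mix.values()) - 1.0) < 1e-9
```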
As of February 2024, it is ranked 400th on the Fortune 500.
== History ==
The company began operations in December 2007 with the acquisition of 4,174 net acres in the Permian Basin.
In October 2012, the company became a public company via an initial public offering, issuing 12,500,000 shares of common stock at a price of $17.50 per share.
In March 2017, the company acquired assets from Brigham Resources for $2.55 billion.
In October 2018, the company acquired the assets of Ajax Resources for $1.25 billion.
In November 2018, the company acquired Energen.
In February 2021, the company acquired leasehold interests and assets from Guidon Resources for $375 million in cash and 10.68 million shares.
In March 2021, the company acquired QEP Resources.
A 2023 Bloomberg news story identified the company, along with Permian Resources, as a major contributor to the increase in gas flaring in the Permian oil field.
In February 2024, it was reported Diamondback Energy and Endeavor Energy Resources were in final discussions toward a merger that would create an oil-and-gas giant worth more than $50 billion.
== Antitrust lawsuit ==
In January 2024, a class action lawsuit was filed accusing Diamondback, along with seven other US oil and gas producers, of an illegal price-fixing scheme to constrain production of shale oil, allegedly leading to drivers in the US paying more for gasoline than they would have in a competitive market.
== References ==
== External links ==
Official website
Business data for Diamondback Energy, Inc.
MicroStrategy Incorporated, doing business as Strategy, is an American software company that provides business intelligence (BI), mobile software, and cloud-based services. Founded in 1989 by Michael J. Saylor, Sanju Bansal, and Thomas Spahr, the firm develops software to analyze internal and external data in order to make business decisions and to develop mobile apps. It is a public company headquartered in Tysons Corner, Virginia, in the Washington metropolitan area. Its primary business analytics competitors include SAP AG Business Objects, IBM Cognos, and Oracle Corporation's BI Platform. Saylor is the executive chairman and, from 1989 to 2022, was the CEO.
Since 2020, the company's securities have been widely considered a bitcoin proxy due to MicroStrategy's holdings of the cryptocurrency. The company's executive chairman has compared it to a bitcoin spot leveraged ETF, though it is not a regulated investment fund.
== History ==
Saylor started MicroStrategy in 1989 with a consulting contract from DuPont, which provided Saylor with $250,000 in start-up capital and office space in Wilmington, Delaware. Saylor was soon joined by company co-founder Sanju Bansal, whom he had met while the two were students at Massachusetts Institute of Technology (MIT). The company produced software for data mining and business intelligence using nonlinear mathematics, an idea inspired by a course on systems-dynamics theory that they took at MIT.
In 1992, MicroStrategy gained its first major client when it signed a $10 million contract with McDonald's. It increased revenues by 100% each year between 1990 and 1996. In 1994, the company's offices and its 50 employees moved from Delaware to Tysons Corner, Virginia.
On June 11, 1998, MicroStrategy became a public company via an initial public offering. The company sold 36 million shares of its common stock, each share priced at $6, under the stock ticker "MSTR" on the NASDAQ stock exchange.
In 2000, MicroStrategy founded Alarm.com as part of its research and development unit.
On March 20, 2000, after a review of its accounting practices, MicroStrategy announced that it would restate its financial results for the preceding two years. Its stock price, which had risen from $7 per share to as high as $333 per share in a year, fell 62% in a single day, to $120 per share, in what is regarded as part of the bursting of the dot-com bubble.
Following MicroStrategy Inc.'s March 20, 2000 announcement that it had significantly overstated its 1998 and 1999 revenues, approximately two dozen class action securities fraud lawsuits were filed against the company in the United States District Court for the Eastern District of Virginia. In December 2000, the U.S. Securities and Exchange Commission brought fraud charges against the company and its executives. That month, Saylor, Bansal, and the company's former CFO settled with the SEC without admitting wrongdoing, each paying $350,000 in fines; the officers also paid a combined total of $10 million in disgorgement. As part of its settlement, the company hired an independent director to ensure regulatory compliance.
In February 2009, MicroStrategy sold Alarm.com to venture capital firm ABS Capital Partners for $27.7 million. Around this time, the company introduced OLAP Services, which used a shared data set cache to accelerate reports and ad hoc queries. In 2010, the company began developing and deploying business intelligence software for mobile platforms such as the iPhone and iPad.
In 2011, MicroStrategy expanded its offerings to include a cloud-based service, MicroStrategy Cloud.
In 2013, MicroStrategy sold Angel to Genesys Telecommunications Laboratories for $110 million.
In January 2014, the company announced a new feature of the platform called PRIME (Parallel Relational In-Memory Engine), co-developed with Facebook.
In October 2014, the company announced plans to lay off 770 employees, a month after reducing Saylor's salary from $875,000 to $1 at his request.
In June 2015, MicroStrategy announced the general availability of MicroStrategy 10.
In the fall of 2018, the company released MicroStrategy 11.
In January 2019, MicroStrategy announced the general availability of MicroStrategy 2019.
In February 2020, the company announced MicroStrategy 2020.
In August 2022, the Attorney General for the District of Columbia sued Saylor for tax fraud, accusing him of illegally avoiding more than $25 million in D.C. taxes by pretending to be a resident of other jurisdictions. MicroStrategy was accused of collaborating with Saylor to facilitate his tax evasion by misreporting his residential address to local and federal tax authorities and failing to withhold D.C. taxes. MicroStrategy called the case "a personal tax matter involving Mr. Saylor", called the claims against the company "false", and said it would "defend aggressively against this overreach." In June 2024, Saylor and MicroStrategy reached a $40 million settlement agreement with the District of Columbia.
Saylor resigned as CEO effective August 8, 2022. Phong Le, who had been president, succeeded him. Saylor remains the executive chairman of MicroStrategy. In a press release announcing the transition, Saylor said that he would focus on the company's bitcoin acquisition strategy and that Phong would manage overall corporate operations.
== Bitcoin purchases ==
In August 2020, MicroStrategy invested $250 million in bitcoin as a treasury reserve asset, citing declining returns from cash, a weakening dollar, and other global macroeconomic factors. The company went on to make several additional large purchases of bitcoin; as of September 19, 2022, MicroStrategy and its subsidiaries held approximately 130,000 BTC, acquired at an aggregate purchase price of $3.98 billion at an average purchase price of $30,639 per bitcoin. Saylor is the main driver behind this strategy.
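The reported average price is simply the aggregate purchase cost divided by the holdings; the sketch below reproduces it from the figures above (the small difference from the reported value reflects rounding and fees in the underlying disclosures).

```python
# How the average purchase price above follows from the aggregate
# figures; the inputs are the approximations reported as of
# September 19, 2022.

total_btc = 130_000          # approximate bitcoin holdings
aggregate_cost_usd = 3.98e9  # aggregate purchase price

avg_price = aggregate_cost_usd / total_btc
print(f"Average cost basis: ${avg_price:,.0f} per bitcoin")
# -> ~$30,615, consistent with the reported ~$30,639, which is
#    inclusive of fees and rounding in the underlying disclosures
```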
On the company's quarterly earnings call on May 3, 2022, MicroStrategy CFO Phong Le stated that the company would face a margin call if bitcoin's price fell to about $21,000, which would obligate the company to sell some of its bitcoin holdings; Le added that the company could post additional collateral against its loan to avoid such a situation. After bitcoin's price fell to about $20,800 in June 2022, the company said that it had not received a margin call and that it had enough capital to withstand further volatility. On December 22, 2022, MicroStrategy sold 704 BTC for around $11.8 million, its first sale of any of its bitcoin.
On September 25, 2023, MicroStrategy announced that, during the period between August 1, 2023, and September 24, 2023, MicroStrategy and its subsidiaries acquired approximately 5,445 bitcoins for approximately $147.3 million in cash, at an average price of approximately $27,053 per bitcoin, inclusive of fees and expenses.
As of December 8, 2024, MicroStrategy was reported to own 423,650 bitcoins, worth $42.43 billion, and is the largest corporate holder of the asset. MicroStrategy purchased 149,880 bitcoins in the month beginning on November 11, 2024. On the strength of this asset, MicroStrategy was included in the Nasdaq-100 effective December 23, 2024.
== Products ==
MicroStrategy 2020 is the latest platform release of the company's business intelligence software.
MicroStrategy 2019, the prior platform release, attempted to improve connectivity to data sources and applications and allow for easier mobile application development. It also offered Bluetooth identity detection and voice technology. The earlier suite of software, MicroStrategy 10, consisted of MicroStrategy Analytics, MicroStrategy Mobile, and Usher. MicroStrategy 10.10, released in December 2017, added MicroStrategy Workstation. The platform uses business intelligence and predictive analytics to search through and perform analytics on big data from a variety of sources, including data warehouses, Excel files, and Apache Hadoop distributions.
MicroStrategy Mobile, introduced in 2010, incorporates analytics capabilities to apps for iPhone, iPad, Android, and BlackBerry.
Usher is a digital credential and identity intelligence product for organizations to control digital and physical access. It replaces physical badges and passwords with secure digital badges, and generates information on user behavior and resource usage.
== References ==
== External links ==
Official website
Business data for MicroStrategy Incorporated
Palo Alto Networks, Inc. is an American multinational cybersecurity company with headquarters in Santa Clara, California. The core product is a platform that includes advanced firewalls and cloud-based offerings that extend those firewalls to cover other aspects of security. The company serves over 70,000 organizations in over 150 countries, including 85 of the Fortune 100. It is home to the Unit 42 threat research team and hosts the Ignite cybersecurity conference. It is a partner organization of the World Economic Forum.
In June 2018, former Google and SoftBank executive Nikesh Arora joined the company as Chairman and CEO.
== History ==
Palo Alto Networks was founded in 2005 by Nir Zuk, a former engineer from Check Point and NetScreen Technologies. Zuk, an Israeli native, began working with computers during his mandatory military service in the Israel Defense Forces in the early 1990s and served as head of software development in Unit 8200, a branch of the Israeli Intelligence Corps.
The company debuted on the NYSE on July 20, 2012, raising $260 million with its initial public offering, which was the 4th-largest tech IPO of 2012. It remained on the NYSE until October 2021 when the company transferred its listing to Nasdaq.
In 2014, Palo Alto Networks founded the Cyber Threat Alliance with Fortinet, McAfee, and NortonLifeLock, a not-for-profit organization with the goal of improving cybersecurity "for the greater good" by encouraging cybersecurity organizations to collaborate by sharing cyber threat intelligence among members. By 2018, the organization had 20 members including Cisco, Check Point, Juniper Networks, and Sophos.
In 2018, the company began opening cybersecurity training facilities around the world as part of the Global Cyber Range Initiative.
In May 2018, the company announced Application Framework, an open cloud-delivered ecosystem where developers can publish security services as SaaS applications that can be instantly delivered to customers.
In 2019, the company announced the K2-Series, a 5G-ready next-generation firewall developed for service providers with 5G and IoT requirements. In February 2019, the company announced Cortex, an AI-based continuous security platform.
=== Acquisitions ===
January 2014: Morta Security
April 2014: Cyvera for approximately $200 million
May 2015: CirroSecure
March 2017: LightCyber for approximately $100 million
March 2018: Cloud Security company Evident.io for $300 million. This acquisition created the Prisma Cloud division.
April 2018: Secdo
October 2018: RedLock for $173 million
February 2019: Demisto for $560 million
May 2019: Twistlock for $410 million
June 2019: PureSec for $47 million
September 2019: Zingbox for $75 million
November 2019: Aporeto, Inc. for $150 million
April 2020: CloudGenix, Inc. for $420 million
August 2020: Crypsis Group for $265 million
December 2020: Expanse for $1.25 billion (initially announced for $800 million in November 2020).
February 2021: Bridgecrew for $156 million
November 2022: Cider Security for $300 million.
October 2023: Announced its intent to acquire Dig Security for $400 million
November 2023: Talon Cyber Security for $625 million
December 2023: Dig Security for $400 million
== Threat research ==
Unit 42 is the Palo Alto Networks threat intelligence and security consulting team. It is a group of cybersecurity researchers and industry experts who use data collected by the company's security platform to discover new cyber threats, such as new forms of malware and malicious actors operating across the world. The group runs a popular blog where it posts technical reports analyzing active threats and adversaries. Multiple Unit 42 researchers have been named in the MSRC Top 100, Microsoft's annual ranking of the top 100 security researchers. In April 2020, the Crypsis Group business unit, which provided digital forensics, incident response, risk assessment, and other consulting services, merged with the Unit 42 threat intelligence team.
According to the FBI, Palo Alto Networks Unit 42 has helped solve multiple cybercrime cases, such as the Mirai botnet and Clickfraud botnet cases and the LuminosityLink RAT case, and assisted with "Operation WireWire".
In 2018, Unit 42 discovered Gorgon, a hacking group believed to be operating out of Pakistan and targeting government organizations in the United Kingdom, Spain, Russia, and the United States. The group was detected sending spear-phishing emails with infected Microsoft Word documents attached, using an exploit commonly employed by cybercriminals and cyber-espionage campaigns.
In September 2018, Unit 42 discovered Xbash, a ransomware that also performs cryptomining, believed to be tied to the Chinese threat actor "Iron". Xbash is able to propagate like a worm and deletes databases stored on victim hosts. In October, Unit 42 warned of new cryptocurrency-mining malware that delivered the XMRig miner bundled with fake Adobe Flash updates, using victims' computer resources to mine Monero cryptocurrency.
In November 2018, Palo Alto Networks announced the discovery of "Cannon", a trojan being used to target United States and European government entities. The hackers behind the malware are believed to be Fancy Bear, the Russian hacking group believed to be responsible for hacking the Democratic National Committee in 2016. The malware communicates with its command and control server with email and uses encryption to evade detection.
== References ==
== External links ==
Official website
Business data for Palo Alto Networks, Inc.
Aquion Energy was a Pittsburgh, Pennsylvania–based company that manufactured sodium ion batteries (salt water batteries) and electricity storage systems.
The company claimed to provide a low-cost way to store large amounts of energy (e.g. for an electricity grid) through thousands of battery cycles, using a non-toxic product made from widely available materials that operates safely and reliably across a wide range of temperatures and operating environments.
== History ==
The company was founded in 2008 by Jay F. Whitacre, a professor at Carnegie Mellon University, and Ted Wiley. It set up research and development offices in Lawrenceville, where it produced pilot-stage batteries. The company raised funding from Kleiner Perkins, Foundation Capital, Bill Gates, Nick and Joby Pritzker, Bright Capital and Advanced Technology Ventures, among others.
In 2011, an individual battery stack was advertised as storing 1.5 kWh, and a shipping-container-sized unit as storing 180 kWh. The company claimed the battery could not overheat, and expected its products to last through many charge/discharge cycles, twice as long as a lead-acid battery, at roughly the same cost as lead-acid.
In October 2014, the company announced a new generation, with a single stack reaching 2.4 kWh and a multi-stack module holding 25.5 kWh.
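Taken together, the quoted capacities imply the following rough comparison between the two generations; the stack count per unit is inferred from the totals above, not a reported figure.

```python
# Back-of-envelope comparison of the two Aquion product generations,
# using only the capacities quoted above. The stack count per unit is
# inferred from the quoted totals, not a reported figure.

GEN_2011_STACK_KWH = 1.5    # individual stack, 2011
GEN_2011_UNIT_KWH = 180.0   # shipping-container-sized unit, 2011
GEN_2014_STACK_KWH = 2.4    # single stack, October 2014

# Implied stack count of the 2011 container-sized unit.
stacks_per_unit = GEN_2011_UNIT_KWH / GEN_2011_STACK_KWH
print(f"2011 unit ~ {stacks_per_unit:.0f} stacks")   # ~120 stacks

# Per-stack capacity growth between generations.
growth = GEN_2014_STACK_KWH / GEN_2011_STACK_KWH - 1
print(f"Per-stack capacity increase: {growth:.0%}")  # 60%
```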
In 2015, the company announced it would supply batteries for a Hawaii microgrid, serving as backup for a 176-kilowatt solar panel array and storing 1,000 kilowatt-hours of electricity. In April 2015 the company announced that it had received Cradle to Cradle design certification. It was also reported to be reducing headcount.
In March 2017, Aquion Energy filed for voluntary bankruptcy under Chapter 11.
In June 2017, the company's assets went to auction, with bidding starting from a stalking horse offer of $2.8 million from an Austrian battery firm, BlueSky Energy. The winning bid of $9.16 million came from Juline-Titans LLC, which the Pittsburgh Post-Gazette initially reported to be an affiliate of the China Titans Energy Technology Group; the identification error was attributed to the recent creation of Juline-Titans LLC, which is not related to the China Titans Energy Technology Group. The new owners chose to keep the inventor, Jay Whitacre, involved with the Aquion Energy products. The auction price was a small fraction of the reported $190 million in venture capital and debt the company had raised from investors including Bill Gates, Gentry Venture Partners, Kleiner Perkins Caufield & Byers, Foundation Capital, Bright Capital, Advanced Technology Ventures, Trinity Capital Investment and CapX Partners, Yung's Enterprise, and Nick and Joby Pritzker.
The company was sued in April 2017 for violation of the WARN Act. In August 2017, MIT Technology Review reported that the China Titans acquisition would mean that Aquion "will continue operating as an independent entity, with research and development probably remaining in Pittsburgh. But manufacturing may move elsewhere, potentially somewhere in China."
In September 2017, Juline-Titans closed the East Huntingdon Township facility and moved production to China.
Reports regarding Juline-Titans LLC being a company of Chinese origin continued to hinder progress for Aquion Energy. The Wilson-Kramer Army Reserve Center in Bethlehem, Pennsylvania was purchased in September 2017 for administration and training, along with other properties in the US acquired through GSA Auctions, including the former USDA Cotton Annex in Washington, D.C.
== Technology ==
The battery materials are non-toxic. As of early 2014, the cathode used manganese oxide and relied on intercalation reactions. The anode was a titanium phosphate (NaTi2(PO4)3). The electrolyte was sodium perchlorate (NaClO4) at a concentration below 5 M. A synthetic cotton separator was reported. The electrode layers were unusually thick (>2 mm). The device used Siemens power inverter technology.
== Production ==
The company set up manufacturing facilities at a former Sony and Volkswagen assembly plant in East Huntingdon, Pennsylvania, initially proposing a production capacity of 500 megawatt-hours per year for 2013 and 2014. In March 2014, the company announced that commercial shipments of batteries would begin in mid-2014, and in May 2014 it announced it had shipped 100 units.
In March 2017, Aquion Energy filed for Chapter 11 bankruptcy, citing the inability to obtain additional funding.
== See also ==
NanoFlowcell
== References ==
== External links ==
Official website
US application 20110052945
EnergySolutions, headquartered in Salt Lake City, Utah, is one of the largest processors of low-level waste (LLW) in America, making it also one of the world's largest nuclear waste processors. It was formed in 2006 when Envirocare acquired three other nuclear waste disposal companies: Scientech D&D, BNG America, and Duratek.
EnergySolutions has operations in over 40 states, with a licensed landfill for the disposal of radioactive waste approximately 60 miles (97 km) west of Salt Lake City in Tooele County, Utah. It also operates a disposal site in Barnwell County, South Carolina. The company possesses the technology to convert waste into alternative materials such as durable glass and is contracted by the United States Department of Energy to assist in waste conversion efforts. The company held the naming rights to the Utah Jazz's home arena, EnergySolutions Arena (now Delta Center), from November 20, 2006, until October 26, 2015, when Vivint, a home security system provider based in Provo, Utah, acquired the naming rights.
In June 2007 the company took over operation and management of several Magnox atomic plants from British Nuclear Fuels plc in the United Kingdom through the acquisition of the BNFL subsidiary, Reactor Sites Management Company (RSMC).
== Formation of EnergySolutions ==
Envirocare of Utah purchased the Connecticut-based Scientech D&D division in October 2005. On February 2, 2006, Envirocare announced the $90 million purchase of BNG America, a subsidiary of British Nuclear Fuels (BNFL) based in Virginia. Envirocare of Utah was renamed EnergySolutions, with corporate headquarters in Salt Lake City, Utah. On February 7, 2006, EnergySolutions announced it would buy Maryland-based Duratek, a publicly traded company, for $396 million in an all-cash deal. The leveraged buyout was financed by banks led by Citigroup, effectively taking the company private.
After the acquisitions, EnergySolutions had 2,500 employees in 40 states with an annual revenue of $280 million. EnergySolutions owns two of the nation's four commercial low-level nuclear-waste repositories; its primary competitor, Waste Control Specialists, built a fourth repository in Texas.
=== Envirocare ===
Envirocare of Utah, Inc. (Envirocare) buried Class A low level radioactive waste (LLRW) in an engineered landfill. It began operations in 1990 in Clive, Utah.
Envirocare was founded by Iranian immigrant Khosrow Semnani in 1988. Semnani served as president of the company until May 1997, when Envirocare's largest customer—the Department of Energy—requested that he step down in the wake of a bribery scandal.
In mid-December 2004, Semnani sold Envirocare for an undisclosed sum, and Steve Creamer became the company's new CEO. The deal was financed by private equity firms led by Lindsay Goldberg & Bessemer of New York, together with Creamer Investments and Peterson Partners, both of Salt Lake City. Envirocare management promised to drop its plans to bury hotter class B and C nuclear waste in Utah, in deference to growing political opposition in the state, which was poised to ban the waste. Envirocare subsequently made the acquisitions and became EnergySolutions.
=== Duratek ===
Based in Columbia, Maryland, Duratek was founded in 1983. In 1990, the company merged with General Technical Services (GTS); the resulting company was known as GTS Duratek. That year, the company formed a joint venture with another firm — Chem-Nuclear Systems, Inc. — to build a commercial vitrification system.
In 1997, GTS Duratek acquired the Scientific Ecology Group (SEG). In 2000, the company purchased the nuclear services business arm of Waste Management Inc. One year later, the company announced that it was dropping GTS from its name, and was once again known as Duratek.
Duratek was purchased by EnergySolutions at a 25.7% premium over the February 7, 2006 stock price when the merger was announced.
=== Energy Solutions ===
Since its inception, Energy Solutions has collected primarily domestic, Class A nuclear waste for its west Utah desert site.
On June 7, 2007, the company announced the acquisition of the UK based BNFL subsidiary – Reactor Sites Management Company (RSMC). The sale included Magnox Electric Limited (MEL), a wholly owned subsidiary of RSMC, which holds the contracts and licences to operate ten nuclear reactor sites in the UK on behalf of the Nuclear Decommissioning Authority (NDA). Through the acquisition, the company took over operational and management responsibilities of several Magnox atomic plants from British Nuclear Fuels plc.
In 2009 it attempted to bring 20,000 tons of waste from Italy's shuttered nuclear power program through the ports of either Charleston, South Carolina, or New Orleans. After processing in Tennessee, about 1,600 tons would be disposed of in Utah. The importation attempt was eventually abandoned.
EnergySolutions sought permission in 2011 from the State of Utah for its "Semprasafe" process to blend, or dilute, the currently allowed Class A low-level radioactive waste with more radioactive Class B and Class C wastes until the mixture just meets the Class A waste levels its license allows per container at its Clive disposal site. Some estimates projected that this could add 19,184 to 28,470 curies each year to the Utah site's 2010 total of 7,450 curies per annum. The Division of Radiation Control of Utah considered, but rejected, blending that would allow Class B and Class C waste into Utah. This would have made Utah the second state in the US, after Texas, to allow the importation of Class B and C radioactive wastes.
On November 15, 2015, EnergySolutions announced that it had signed a definitive agreement to purchase Waste Control Specialists for $270 million in cash and $20 million in stock. This sale was blocked by the DOJ for breaching anti-trust law.
In November 2015, EnergySolutions sold its Projects, Products and Technology division to WS Atkins plc for $318 million; Energy Capital Partners was the seller. The deal included EnergySolutions' North American government, Europe, and Asia businesses, and about 650 employees. EnergySolutions retained its logistics, processing, and disposal ("LP&D") business, its reactor decommissioning business, including current projects at Zion, Illinois and La Crosse, Wisconsin, and its North American utility services.
Most of the radioactive waste from the decommissioning of the San Onofre Nuclear Generating Station is going to the Energy Solutions facility in Clive, Utah, and is being transported by rail.
== References ==
== External links ==
Official site | Wikipedia/EnergySolutions |
The exponential mechanism is a technique for designing differentially private algorithms. It was developed by Frank McSherry and Kunal Talwar in 2007. Their work was recognized as a co-winner of the 2009 PET Award for Outstanding Research in Privacy Enhancing Technologies.
Most of the initial research in the field of differential privacy revolved around real-valued functions which have relatively low sensitivity to change in the data of a single individual and whose usefulness is not hampered by small additive perturbations. A natural question is what happens in the situation when one wants to preserve more general sets of properties. The exponential mechanism helps to extend the notion of differential privacy to address these issues. Moreover, it describes a class of mechanisms that includes all possible differentially private mechanisms.
== The mechanism ==
=== Algorithm ===
In very generic terms, a privacy mechanism maps a set of $n$ inputs from a domain $\mathcal{D}$ to a range $\mathcal{R}$. The map may be randomized, in which case each element of the domain $\mathcal{D}$ corresponds to a probability distribution over the range $\mathcal{R}$. The privacy mechanism makes no assumption about the nature of $\mathcal{D}$ and $\mathcal{R}$ apart from a base measure $\mu$ on $\mathcal{R}$. Let us define a function $q : \mathcal{D}^n \times \mathcal{R} \rightarrow \mathbb{R}$. Intuitively this function assigns a score to the pair $(d, r)$, where $d \in \mathcal{D}^n$ and $r \in \mathcal{R}$. The score reflects the appeal of the pair $(d, r)$, i.e. the higher the score, the more appealing the pair is. Given the input $d \in \mathcal{D}^n$, the mechanism's objective is to return an $r \in \mathcal{R}$ such that the function $q(d, r)$ is approximately maximized. To achieve this, set up the mechanism $\mathcal{E}_q^{\varepsilon}(d)$ as follows:
Definition: For any function $q : (\mathcal{D}^n \times \mathcal{R}) \rightarrow \mathbb{R}$ and a base measure $\mu$ over $\mathcal{R}$, define
$$\mathcal{E}_q^{\varepsilon}(d) := \text{choose } r \text{ with probability proportional to } e^{\varepsilon q(d,r)} \times \mu(r),$$
where $d \in \mathcal{D}^n$ and $r \in \mathcal{R}$.
This definition implies that the probability of returning a particular $r$ increases exponentially with the value of $q(d,r)$. Ignoring the base measure $\mu$, the value $r$ which maximizes $q(d,r)$ has the highest probability. Moreover, this mechanism is differentially private; a proof of this claim follows. One technicality that should be kept in mind is that in order to properly define $\mathcal{E}_q^{\varepsilon}(d)$, the integral $\int_r e^{\varepsilon q(d,r)} \times \mu(r)$ should be finite.
Theorem (differential privacy): $\mathcal{E}_q^{\varepsilon}(d)$ gives $(2\varepsilon\Delta q)$-differential privacy, where $\Delta q$ is the sensitivity of $q$, i.e. the largest change in $q$ that a change in a single individual's data can cause.
Proof: The probability density of $\mathcal{E}_q^{\varepsilon}(d)$ at $r$ equals
$$\frac{e^{\varepsilon q(d,r)}\mu(r)}{\int e^{\varepsilon q(d,r)}\mu(r)\,dr}.$$
Now, if a single change in $d$ changes $q$ by at most $\Delta q$, then the numerator can change by at most a factor of $e^{\varepsilon \Delta q}$ and the denominator by at least a factor of $e^{-\varepsilon \Delta q}$. Thus, the ratio of the new probability density (i.e. with the new $d$) to the earlier one is at most $\exp(2\varepsilon \Delta q)$.
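To make the definition concrete, here is a minimal Python sketch for the common special case of a finite candidate set with a uniform base measure. The function names are illustrative, not from the original paper; the exponent is scaled by $1/(2\Delta q)$ so that, per the theorem above, the output is $\varepsilon$-differentially private.

```python
import numpy as np

def exponential_mechanism(data, candidates, score, sensitivity, epsilon, rng=None):
    """Sample a candidate r with probability proportional to
    exp(epsilon * score(data, r) / (2 * sensitivity)).
    By the theorem above, this scaling yields epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    scores = np.array([score(data, r) for r in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    # Subtract the max before exponentiating for numerical stability;
    # this leaves the normalized probabilities unchanged.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Example: privately select the most common value in a tiny dataset.
# Each counting query has sensitivity 1 (one record moves a count by 1).
data = [0, 1, 1, 2, 1, 0]
count = lambda d, r: sum(1 for x in d if x == r)
winner = exponential_mechanism(data, [0, 1, 2], count, sensitivity=1, epsilon=1.0)
```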
=== Accuracy ===
We would ideally want the random draws of $r$ from the mechanism $\mathcal{E}_q^{\varepsilon}(d)$ to nearly maximize $q(d,r)$. If we denote $\max_r q(d,r)$ by $OPT$, then we can show that the probability of the mechanism deviating far from $OPT$ is low, as long as there is a sufficient mass (in terms of $\mu$) of values $r$ with $q$ close to the optimum.
Lemma: Let $S_t = \{r : q(d,r) > OPT - t\}$ and $\bar{S}_{2t} = \{r : q(d,r) \leq OPT - 2t\}$. Then $p(\bar{S}_{2t})$ is at most $\exp(-\varepsilon t)/\mu(S_t)$, where the probability is taken over $\mathcal{R}$.
Proof: The probability $p(\bar{S}_{2t})$ is at most $p(\bar{S}_{2t})/p(S_t)$, as the denominator can be at most one. Since both probabilities share the same normalizing term,
$$\frac{p(\bar{S}_{2t})}{p(S_t)} = \frac{\int_{\bar{S}_{2t}} \exp(\varepsilon q(d,r))\mu(r)\,dr}{\int_{S_t} \exp(\varepsilon q(d,r))\mu(r)\,dr} \leq \exp(-\varepsilon t)\,\frac{\mu(\bar{S}_{2t})}{\mu(S_t)}.$$
The value of $\mu(\bar{S}_{2t})$ is at most one, and so this bound implies the lemma statement.
Theorem (accuracy): For those values of $t \geq \ln\left(\frac{OPT}{t\,\mu(S_t)}\right)/\varepsilon$, we have $E[q(d, \mathcal{E}_q^{\varepsilon}(d))] \geq OPT - 3t$.
Proof: It follows from the previous lemma that the probability of the score being at least $OPT - 2t$ is $1 - \exp(-\varepsilon t)/\mu(S_t)$. By hypothesis, $t \geq \ln\left(\frac{OPT}{t\,\mu(S_t)}\right)/\varepsilon$. Substituting this value of $t$, the probability is at least $1 - t/OPT$. Multiplying by $OPT - 2t$ yields the desired bound.
We can assume $\mu(A) \leq 1$ for all $A \subseteq \mathcal{R}$ in these computations, because we can always normalize with $\mu(\mathcal{R})$.
== Example application ==
Before we get into the details of the example let us define some terms which we will be using extensively throughout our discussion.
Definition (global sensitivity): The global sensitivity of a query $Q$ is its maximum difference when evaluated on two neighbouring datasets $D_1, D_2 \in \mathcal{D}^n$:
$$GS_Q = \max_{D_1, D_2 : d(D_1, D_2) = 1} |Q(D_1) - Q(D_2)|.$$
Definition: A predicate query $Q_\varphi$ for any predicate $\varphi$ is defined to be
$$Q_\varphi = \frac{|\{x \in D : \varphi(x)\}|}{|D|}.$$
Note that $GS_{Q_\varphi} \leq 1/n$ for any predicate $\varphi$.
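As a quick illustration of why this sensitivity bound holds, consider the following hypothetical one-liner for a predicate query: changing a single one of the $n$ records moves the satisfied count by at most one, so the released fraction moves by at most $1/n$.

```python
def predicate_query(data, predicate):
    """Q_phi: the fraction of records satisfying the predicate.
    One record changes the numerator by at most 1, so the global
    sensitivity of this query is at most 1/len(data)."""
    return sum(1 for x in data if predicate(x)) / len(data)
```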
=== Release mechanism ===
The following is due to Avrim Blum, Katrina Ligett and Aaron Roth.
Definition (usefulness): A mechanism $\mathcal{A}$ is $(\alpha, \delta)$-useful for queries in class $H$ with probability $1 - \delta$ if, for every $h \in H$ and every dataset $D$, for $\widehat{D} = \mathcal{A}(D)$, $|Q_h(\widehat{D}) - Q_h(D)| \leq \alpha$.
Informally, it means that with high probability the query $Q_h$ will behave in a similar way on the original dataset $D$ and on the synthetic dataset $\widehat{D}$.
Consider a common problem in data mining. Assume there is a database $D$ with $n$ entries, each consisting of a $k$-tuple of the form $(x_1, x_2, \dots, x_k)$ where $x_i \in \{0,1\}$. Now, a user wants to learn a linear halfspace of the form $\pi_1 x_1 + \pi_2 x_2 + \cdots + \pi_{k-1} x_{k-1} \geq x_k$. In essence the user wants to figure out the values of $\pi_1, \pi_2, \dots, \pi_{k-1}$ such that the maximum number of tuples in the database satisfy the inequality. The algorithm described below can generate a synthetic database $\widehat{D}$ which will allow the user to learn (approximately) the same linear halfspace while querying on this synthetic database. The motivation for such an algorithm is that the new database is generated in a differentially private manner and thus assures privacy to the individual records in the database $D$.
In this section we show that it is possible to release a dataset which is useful for concepts from a polynomial VC-dimension class and at the same time adhere to $\varepsilon$-differential privacy, as long as the size of the original dataset is at least polynomial in the VC-dimension of the concept class. To state formally:
Theorem: For any class of functions $H$ and any dataset $D \subset \{0,1\}^k$ such that
$$|D| \geq O\left(\frac{k \cdot \operatorname{VCDim}(H)\log(1/\alpha)}{\alpha^3 \varepsilon} + \frac{\log(1/\delta)}{\alpha\varepsilon}\right),$$
we can output an $(\alpha, \delta)$-useful dataset $\widehat{D}$ that preserves $\varepsilon$-differential privacy. As mentioned earlier, the algorithm need not be efficient.
One interesting fact is that the algorithm generates a synthetic dataset whose size is independent of the original dataset; it depends only on the VC-dimension of the concept class and the parameter $\alpha$. The algorithm outputs a dataset of size $\tilde{O}(\operatorname{VCDim}(H)/\alpha^2)$.
We borrow the uniform convergence theorem from combinatorics and state a corollary of it which aligns with our need.
Lemma: Given any dataset $D$, there exists a dataset $\widehat{D}$ of size $O(\operatorname{VCDim}(H)\log(1/\alpha))/\alpha^2$ such that $\max_{h \in H} |Q_h(D) - Q_h(\widehat{D})| \leq \alpha/2$.
Proof: We know from the uniform convergence theorem that
$$\Pr\left[\left|Q_h(D) - Q_h(\widehat{D})\right| \geq \frac{\alpha}{2} \text{ for some } h \in H\right] \leq 2\left(\frac{em}{\operatorname{VCDim}(H)}\right)^{\operatorname{VCDim}(H)} \cdot e^{-\alpha^2 m/8},$$
where probability is over the distribution of the dataset.
Thus, if the RHS is less than one, then we know for sure that the dataset $\widehat{D}$ exists. To bound the RHS below one we need $m \geq \lambda(\operatorname{VCDim}(H)\log(m/\operatorname{VCDim}(H))/\alpha^2)$, where $\lambda$ is some positive constant. Since we stated earlier that we will output a dataset of size $\tilde{O}(\operatorname{VCDim}(H)/\alpha^2)$, using this bound on $m$ we get $m \geq \lambda(\operatorname{VCDim}(H)\log(1/\alpha)/\alpha^2)$. Hence the lemma.
Now we invoke the exponential mechanism.
Definition: For any function $q : ((\{0,1\}^k)^n \times (\{0,1\}^k)^m) \rightarrow \mathbb{R}$ and input dataset $D$, the exponential mechanism outputs each dataset $\widehat{D}$ with probability proportional to $e^{q(D,\widehat{D})\varepsilon n/2}$.
From the exponential mechanism we know this preserves $(\varepsilon n\, GS_q)$-differential privacy. Let us get back to the proof of the theorem.
We define $q(D, \widehat{D}) = -\max_{h \in H} |Q_h(D) - Q_h(\widehat{D})|$.
To show that the mechanism satisfies $(\alpha, \delta)$-usefulness, we should show that it outputs some dataset $\widehat{D}$ with $q(D, \widehat{D}) \geq -\alpha$ with probability $1 - \delta$.
There are at most $2^{km}$ output datasets, and the probability that $q(D, \widehat{D}) \leq -\alpha$ is at most proportional to $e^{-\varepsilon\alpha n/2}$. Thus by a union bound, the probability of outputting any such dataset $\widehat{D}$ is at most proportional to $2^{km} e^{-\varepsilon\alpha n/2}$.
Again, we know that there exists some dataset $\widehat{D} \in (\{0,1\}^k)^m$ for which $q(D, \widehat{D}) \geq -\alpha/2$. Therefore, such a dataset is output with probability at least proportional to $e^{-\alpha\varepsilon n/4}$.
Let $A$ denote the event that the exponential mechanism outputs some dataset $\widehat{D}$ such that $q(D, \widehat{D}) \geq -\alpha/2$, and let $B$ denote the event that the exponential mechanism outputs some dataset $\widehat{D}$ such that $q(D, \widehat{D}) \leq -\alpha$.
Therefore,
$$\frac{\Pr[A]}{\Pr[B]} \geq \frac{e^{-\alpha\varepsilon n/4}}{2^{km} e^{-\alpha\varepsilon n/2}} = \frac{e^{\alpha\varepsilon n/4}}{2^{km}}.$$
Now setting this quantity to be at least $1/\delta \geq (1-\delta)/\delta$, we find that it suffices to have
$$n \geq \frac{4}{\varepsilon\alpha}\left(km + \ln\frac{1}{\delta}\right) \geq O\left(\frac{k \cdot \operatorname{VCDim}(H)\log(1/\alpha)}{\alpha^3\varepsilon} + \frac{\log(1/\delta)}{\alpha\varepsilon}\right).$$
And hence we prove the theorem.
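For intuition, the release mechanism just analyzed can be instantiated by brute force over all candidate synthetic datasets. The Python sketch below does exactly that; it is exponential-time in $km$ and purely illustrative, and the function names and the predicate-query helper are our own assumptions, not part of the original construction.

```python
import itertools, math, random

def private_synthetic_dataset(D, predicates, k, m, epsilon, rng=random):
    """Brute-force release mechanism: score each candidate dataset Dhat by
    q(D, Dhat) = -max_h |Q_h(D) - Q_h(Dhat)| and sample one with probability
    proportional to exp(q * epsilon * n / 2), as in the definition above."""
    n = len(D)
    def Q(h, data):  # predicate query: fraction of rows satisfying h
        return sum(1 for x in data if h(x)) / len(data)
    rows = list(itertools.product([0, 1], repeat=k))
    # All (2^k)^m ordered candidate datasets; feasible only for tiny k and m.
    candidates = list(itertools.product(rows, repeat=m))
    def q(Dhat):
        return -max(abs(Q(h, D) - Q(h, Dhat)) for h in predicates)
    weights = [math.exp(q(c) * epsilon * n / 2) for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

With k = m = 2 and a handful of predicates this runs instantly; anything larger illustrates why the theorem stresses that the algorithm need not be efficient.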
== Applications in other domains ==
In the above example, the exponential mechanism is used to output a synthetic dataset in a differentially private manner, and that dataset can then be used to answer queries with good accuracy. Other private mechanisms, such as posterior sampling, which returns parameters rather than datasets, can be made equivalent to the exponential one.
Apart from the setting of privacy, the exponential mechanism has also been studied in the context of auction theory and classification algorithms. In the case of auctions the exponential mechanism helps to achieve a truthful auction setting.
== References ==
== External links ==
The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth, 2014. | Wikipedia/Exponential_mechanism_(differential_privacy) |
Local differential privacy (LDP) is a model of differential privacy with the added requirement that if an adversary has access to the personal responses of an individual in the database, that adversary will still be unable to learn much of the user's personal data. This is contrasted with global differential privacy, a model of differential privacy that incorporates a central aggregator with access to the raw data.
Local differential privacy is an approach to mitigate the concern that data fusion and analysis techniques can be used to expose individuals to attacks and disclosures. LDP is a well-known privacy model for distributed architectures that aims to provide privacy guarantees for each user while collecting and analyzing data, protecting both client and server from privacy leaks. LDP has been widely adopted to alleviate contemporary privacy concerns in the era of big data.
== History ==
The randomized response survey technique proposed by Stanley L. Warner in 1965 is frequently cited as an example of local differential privacy. Warner's innovation was the introduction of what could now be called the “untrusted curator” model, where the entity collecting the data may not be trustworthy. Before users' responses are sent to the curator, the answers are randomized in a controlled manner, guaranteeing differential privacy while still allowing valid population-wide statistical inferences.
In 2003, Alexandre V. Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant gave a definition equivalent to local differential privacy.
In 2008, Kasiviswanathan et al. first used the term "local private learning" and showed it to be equivalent to randomized response.
== Applications ==
The era of big data exhibits a high demand for machine learning services that provide privacy protection for users. Demand for such services has pushed research into algorithmic paradigms that provably satisfy specific privacy requirements.
=== Anomaly Detection ===
Anomaly detection is formally defined as the process of identifying unexpected items or events in data sets. The rise of social networking has led to many potential concerns related to information privacy: as more users rely on social networks, they are often threatened by privacy breaches, unauthorized access to personal information, and leakage of sensitive data. To address this issue, the authors of "Anomaly Detection over Differential Preserved Privacy in Online Social Networks" propose a privacy-preserving model that sanitizes the collection of user information from a social network utilizing restricted local differential privacy (LDP) to save synthetic copies of collected data. The model uses the reconstructed data to classify user activity and detect abnormal network behavior. The experimental results demonstrate that the proposed method achieves high data utility on the basis of improved privacy preservation, and that LDP-sanitized data are suitable for use in subsequent analyses: anomaly detection on the reconstructed data achieves a detection accuracy similar to that on the original data.
=== Blockchain Technology ===
Potential combinations of blockchain technology with local differential privacy have received research attention. Blockchains implement distributed, secured, and shared ledgers used to record and track data within a decentralized network, and they have successfully replaced certain prior systems of economic transactions within and between organizations. Increased usage of blockchains has raised some questions regarding privacy and security of data they store, and local differential privacy of various kinds has been proposed as a desirable property for blockchains containing sensitive data.
=== Context-Free Privacy ===
Local differential privacy provides context-free privacy even in the absence of a trusted data collector, though often at the expense of a significant drop in utility. The classical definition of LDP assumes that all elements in the data domain are equally sensitive; however, in many applications some symbols are more sensitive than others. A context-aware framework of local differential privacy allows a privacy designer to incorporate the application's context into the privacy definition. For binary data domains, algorithmic research has provided a universally optimal privatization scheme and highlighted its connections to Warner's randomized response (RR) and Mangat's improved response. For k-ary data domains, motivated by geolocation and web search applications, researchers have considered at least two special cases of context-aware LDP: block-structured LDP and high-low LDP. The research has provided communication-efficient, sample-optimal schemes and information-theoretic lower bounds for both models.
=== Facial Recognition ===
Facial recognition has become widespread in recent years. Recent smartphones, for example, use facial recognition to unlock the user's phone and to authorize payments with a credit card. Though convenient, this poses privacy concerns: facial recognition is a resource-intensive task that often involves third parties, resulting in a gap where the user's privacy could be compromised. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can constitute a significant privacy leak, as biometrics can be correlated with sensitive data such as healthcare or financial records. In his academic article, Chamikara proposes a privacy-preserving technique for "controlled information release" that disguises an original face image and prevents leakage of the biometric features while still identifying a person. He introduces a privacy-preserving face recognition protocol named PEEP (Privacy using Eigenface Perturbation) that utilizes local differential privacy: PEEP applies perturbation to eigenfaces and stores only the perturbed data in third-party servers, which run a standard eigenface recognition algorithm. As a result, the trained model is not vulnerable to privacy attacks such as membership inference and model memorization attacks. This model shows a potential solution to the issue of privacy leaks.
=== Federated Learning (FL) ===
Federated learning aims to protect data privacy through distributed learning methods that keep the data in its storage, while differential privacy (DP) aims to improve the protection of data privacy by measuring the privacy loss in the communication among the elements of federated learning. The prospective match between federated learning and differential privacy for data privacy protection has led to the release of several software tools that support their functionalities, but these tools lack a unified vision of the techniques and a methodological workflow that supports their usage. In a study sponsored by the Andalusian Research Institute in Data Science and Computational Intelligence, researchers developed Sherpa.ai FL, an open-research unified FL and DP framework that aims to foster the research and development of AI services at the edge and to preserve data privacy. The characteristics of FL and DP tested and summarized in the study suggest that they are good candidates for supporting AI services at the edge while preserving data privacy; in particular, the study found that setting $\epsilon$ to lower values guarantees higher privacy at the cost of lower accuracy.
=== Health Data Aggregation ===
With the rapid growth of health data, the limited storage and computation resources of wireless body area sensor networks are becoming a barrier to the development of the health industry. Aiming to solve this, outsourcing encrypted health data to the cloud has been an appealing strategy, but it comes with potential downsides: data aggregation becomes more difficult, and the sensitive information of healthcare patients becomes more vulnerable to data breaches. In the academic article "Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees," Hao Ren and his team propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. The aggregation function is designed to protect the aggregated data from cloud servers. The performance evaluation in the study shows that the proposal leads to less communication overhead than the existing data aggregation models currently in place.
=== Internet Connected Vehicles ===
A growing number of vehicles offer internet connections for the convenience of their users, which poses yet another threat to user privacy. The Internet of Vehicles (IoV) is expected to enable intelligent traffic management, intelligent dynamic information services, intelligent vehicle control, and more. However, vehicles' data privacy is argued to be a major barrier to the application and development of IoV, and it has therefore attracted wide attention. Local differential privacy (LDP) is the relaxed version of the differential privacy standard, and it can protect users' data privacy against an untrusted third party in the worst-case adversarial setting. The computational cost of using LDP is one concern among researchers, as it is expensive to implement for a model that requires high mobility and short connection times. Furthermore, as the number of vehicles increases, the frequent communication between vehicles and the cloud server incurs unexpected amounts of communication cost. To avoid the privacy threat and reduce the communication cost, researchers propose integrating federated learning and local differential privacy (LDP) to facilitate crowdsourcing applications that train the machine learning model.
=== Phone Blacklisting ===
Since spam phone calls are a growing nuisance in the digital world, researchers have been looking at potential solutions for minimizing this issue. Federal agencies such as the US Federal Trade Commission (FTC) have been working with telephone carriers to design systems for blocking robocalls, and a number of commercial and smartphone apps that promise to block spam phone calls have been created; but these come with a subtle cost: the private information a user surrenders by giving an app access to block spam calls may be leaked without the user's consent or knowledge. In one study, researchers analyze the challenges and trade-offs related to using local differential privacy, evaluate an LDP-based system on real-world user-reported call records collected by the FTC, and show that it is possible to learn a phone blacklist using a reasonable overall privacy budget while preserving users' privacy and maintaining utility for the learned blacklist.
=== Trajectory Cross-Correlation Constraint ===
Aiming to solve the problem of low data utilization and weak privacy protection, researcher Hu proposes a personalized differential privacy protection method based on cross-correlation constraints. By protecting sensitive location points on the trajectory, this extended differential privacy protection model combines the sensitivity of the user's trajectory locations with the user's privacy protection requirements and privacy budget. Using an autocorrelation Laplace transform, specific white noise is transformed into noise that is correlated with the user's real trajectory sequence in both time and space; this noise data is used to derive the cross-correlation constraint mechanics of the trajectory sequence in the model. The method thereby addresses the problem that adding independent, uncorrelated noise with the same degree of scrambling results in low privacy protection and poor data availability.
== ε-local differential privacy ==
=== Definition of ε-local differential privacy ===
Let $\varepsilon$ be a positive real number and $\mathcal{A}$ be a randomized algorithm that takes a user's private data as input. Let $\textrm{im}\,\mathcal{A}$ denote the image of $\mathcal{A}$. The algorithm $\mathcal{A}$ is said to provide $\varepsilon$-local differential privacy if, for all pairs of users' possible private data $x$ and $x'$ and all subsets $S$ of $\textrm{im}\,\mathcal{A}$:
$$\Pr[\mathcal{A}(x) \in S] \leq e^{\varepsilon} \cdot \Pr[\mathcal{A}(x') \in S],$$
where the probability is taken over the random measure implicit in the algorithm.
The main difference between this definition of local differential privacy and the definition of standard (global) differential privacy is that in standard differential privacy the probabilities are of the outputs of an algorithm that takes all users' data and here it is on an algorithm that takes a single user's data.
Other formal definitions of local differential privacy concern algorithms that take all users' data as input and output a collection of all responses (such as the definition in Raef Bassily, Kobbi Nissim, Uri Stemmer and Abhradeep Guha Thakurta's 2017 paper).
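As a concrete illustration, Warner's randomized response from the history section satisfies this definition. The following minimal Python sketch assumes each user holds a single private bit and reports the truth with probability $e^{\varepsilon}/(e^{\varepsilon}+1)$; the helper names and the debiasing step are illustrative.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Report the true bit with probability p = e^eps / (e^eps + 1),
    otherwise report its flip. The ratio of output probabilities under
    the two possible inputs is exactly e^eps, so this is eps-LDP."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_true_mean(reports, epsilon):
    """Debias the aggregated noisy reports: if the true mean is m, the
    observed mean is m*p + (1-m)*(1-p), which this function inverts."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```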
== Deployment ==
Algorithms guaranteeing local differential privacy have been deployed by several internet companies.
== References == | Wikipedia/Local_differential_privacy |
Differential privacy composition theorems are mathematical tools used in differential privacy to analyze and bound the accumulated privacy loss when multiple differentially private mechanisms are applied to the same dataset. They quantify how privacy guarantees degrade as more queries or analyses are performed, and are essential for designing complex differentially private systems and algorithms.
For example, if a user submits multiple queries to a differentially private database, each query might individually satisfy ε-differential privacy, but the repeated interaction can cumulatively leak more information than intended. Composition theorems address this by providing a way to calculate the overall privacy loss after multiple mechanisms have been applied.
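As a sketch of two standard bounds (basic sequential composition, and the advanced composition theorem of Dwork, Rothblum, and Vadhan), the Python snippet below computes the accumulated privacy parameter for k mechanisms run on the same data; the function names are illustrative.

```python
import math

def basic_composition(epsilons):
    """Basic sequential composition: running mechanisms with privacy
    parameters eps_1, ..., eps_k on the same data is (sum eps_i)-DP."""
    return sum(epsilons)

def advanced_composition(epsilon, k, delta_prime):
    """Advanced composition: k adaptive runs of an epsilon-DP mechanism are
    (eps_total, delta_prime)-DP, with eps_total growing roughly like
    sqrt(k) rather than k when epsilon is small."""
    return (epsilon * math.sqrt(2 * k * math.log(1.0 / delta_prime))
            + k * epsilon * (math.exp(epsilon) - 1.0))
```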
== References == | Wikipedia/Differential_privacy_composition_theorems |
Differentially private analysis of graphs studies algorithms for computing accurate graph statistics while preserving differential privacy. Such algorithms are used for data represented in the form of a graph where nodes correspond to individuals and edges correspond to relationships between them. For example, edges could correspond to friendships, sexual relationships, or communication patterns.
A party that collected sensitive graph data can process it using a differentially private algorithm and publish the output of the algorithm. The goal of differentially private analysis of graphs is to design algorithms that compute accurate global information about graphs while preserving privacy of individuals whose data is stored in the graph.
== Variants ==
Differential privacy imposes a restriction on the algorithm. Intuitively, it requires that the algorithm has roughly the same output distribution on neighboring inputs. If the input is a graph, there are two natural notions of neighboring inputs, edge neighbors and node neighbors, which yield two natural variants of differential privacy for graph data.
Let $\varepsilon$ be a positive real number and $\mathcal{A}$ be a randomized algorithm that takes a graph as input and returns an output from a set $\mathcal{O}$. The algorithm $\mathcal{A}$ is $\epsilon$-differentially private if, for all neighboring graphs $G_1$ and $G_2$ and all subsets $S$ of $\mathcal{O}$,
$$\Pr[\mathcal{A}(G_1) \in S] \leq e^{\epsilon} \cdot \Pr[\mathcal{A}(G_2) \in S],$$
where the probability is taken over the randomness used by the algorithm.
=== Edge differential privacy ===
Two graphs are edge neighbors if they differ in one edge. An algorithm is $\epsilon$-edge-differentially private if, in the definition above, the notion of edge neighbors is used. Intuitively, an edge differentially private algorithm has similar output distributions on any pair of graphs that differ in one edge, thus protecting changes to graph edges.
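For instance, the number of edges in a graph changes by exactly one between edge neighbors, so adding Laplace noise scaled to $1/\epsilon$ releases it with $\epsilon$-edge-differential privacy. A minimal sketch, assuming the graph is given as a collection of edges (the function name is illustrative):

```python
import numpy as np

def edge_count_edge_dp(edges, epsilon, rng=None):
    """Release |E| with epsilon-edge-differential privacy. The count has
    sensitivity 1 under edge neighbors, so Laplace(1/epsilon) noise suffices."""
    rng = rng or np.random.default_rng()
    return len(edges) + rng.laplace(scale=1.0 / epsilon)
```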
=== Node differential privacy ===
Two graphs are node neighbors if one can be obtained from the other by deleting a node and its adjacent edges. An algorithm is $\epsilon$-node-differentially private if, in the definition above, the notion of node neighbors is used. Intuitively, a node differentially private algorithm has similar output distributions on any pair of graphs that differ in one node and the edges adjacent to it, thus protecting information pertaining to each individual. Node differential privacy gives a stronger privacy protection than edge differential privacy.
== Research history ==
The first edge differentially private algorithm was designed by Nissim, Raskhodnikova, and Smith. The distinction between edge and node differential privacy was first discussed by Hay, Miklau, and Jensen. However, it took several years before the first node differentially private algorithms were published, by Blocki et al., Kasiviswanathan et al., and Chen and Zhou. In all three papers, the algorithms release a single statistic, such as a triangle count or counts of other subgraphs. Raskhodnikova and Smith gave the first node differentially private algorithm for releasing a vector, specifically, the degree count and the degree distribution.
== References == | Wikipedia/Differentially_private_analysis_of_graphs |
Since the advent of differential privacy, a number of systems supporting differentially private data analyses have been implemented and deployed. This article tracks real-world deployments, production software packages, and research prototypes.
== Real-world deployments ==
== Production software packages ==
These software packages purport to be usable in production systems. They are split in two categories: those focused on answering statistical queries with differential privacy, and those focused on training machine learning models with differential privacy.
=== Statistical analyses ===
=== Machine learning ===
== Research projects and prototypes ==
== See also ==
Differential Privacy
Secure multi-party computation
== References == | Wikipedia/Implementations_of_differentially_private_analyses |
IEEE Intelligent Systems is a bimonthly peer-reviewed academic journal published by the IEEE Computer Society and sponsored by the Association for the Advancement of Artificial Intelligence (AAAI), British Computer Society (BCS), and European Association for Artificial Intelligence (EurAI).
== History ==
The journal was established in 1986 as the quarterly IEEE Expert and became bimonthly in 1990. Its name was changed to IEEE Intelligent Systems & Their Applications in 1997 (already in 1996, the journal's title had become IEEE Expert - Intelligent Systems & Their Applications, with a marked emphasis put on the words Intelligent Systems). It received its current name, IEEE Intelligent Systems, in 2001.
The current editor-in-chief is Longbing Cao (University of Technology Sydney). Editors-in-chief emeriti include James Hendler (Rensselaer Polytechnic Institute), Fei-Yue Wang (Chinese Academy of Sciences), Daniel Zeng (University of Arizona), and V.S. Subrahmanian (Northwestern University).
== Abstracting and indexing ==
According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.744, ranking in the first quartile of journals in the category of artificial intelligence.
== Hall of Fame ==
For its 25th anniversary, the journal composed a "Hall of Fame", and the 10 recipients were announced in 2011.
== References ==
== External links ==
Official website of IEEE Intelligent Systems.
IEEE Expert archive on 'IEEE Xplore', 1986-1997 (incl.)
The past IS issues: https://www.obren359.com/ieeeis/index.html | Wikipedia/IEEE_Intelligent_Systems |
The Advanced Simulation and Computing Program (ASC) is a supercomputing program run by the National Nuclear Security Administration in order to simulate, test, and maintain the United States nuclear stockpile. The program was created in 1995 to support the Stockpile Stewardship Program (SSP). The goal of the initiative is to extend the lifetime of the current aging stockpile.
== History ==
After the United States' 1992 moratorium on live nuclear testing, the Stockpile Stewardship Program was created to find a way to test and maintain the nuclear stockpile without live testing. In response, the National Nuclear Security Administration began to simulate nuclear warheads using supercomputers. As the stockpile ages, the simulations have become more complex, and maintenance of the stockpile requires more computing power. Over the years, aided by advances consistent with Moore's Law, the ASC program has fielded a series of supercomputers of increasing power to run these simulations and computations.
In celebration of 25 years of accomplishments, the Advanced Simulation and Computing Program published a commemorative report.
== Research ==
The majority of ASC's research is done on supercomputers in three different laboratories. The calculations are verified by human calculations.
=== Laboratories ===
The ASC program has three laboratories:
Sandia National Laboratories
Los Alamos National Laboratory
Lawrence Livermore National Laboratory
=== Computing ===
==== Current supercomputers ====
The ASC program currently houses numerous supercomputers on the TOP500 list for computing power; the list changes every six months, and the latest ranking of NNSA machines is available at https://top500.org/lists/top500/. Although these computers may be in separate laboratories, remote computing has been established between the three main laboratories.
==== Previous supercomputers ====
ASCI Purple
Red Storm
Blue Gene/L: World's fastest supercomputer, November 2004 – November 2007
Blue Gene Q (aka, Sequoia)
ASCI Q: Installed in 2003, it was an AlphaServer SC45/GS Cluster that reached 7.727 teraflops. ASCI Q used DEC Alpha 1250 MHz (2.5 GFlops) processors and a Quadrics interconnect, and placed as the 2nd fastest supercomputer in the world in 2003.
ASCI White: World's fastest supercomputer, November 2000 – November 2001
ASCI Blue Mountain
ASCI Blue Pacific
ASCI Red: World's fastest supercomputer, June 1997 – June 2000
=== Newsletter ===
The ASC program publishes a quarterly newsletter describing many of its research accomplishments and hardware milestones.
== Elements ==
Within the ASC program, there are six subdivisions, each having their own role in the extension of the life of the stockpile.
=== Facility Operations and User Support ===
The Facility Operations and User Support subdivision is responsible for the physical computers, facilities, and computing network within ASC, making sure the tri-lab network, computing storage space, power usage, and customer computing resources all remain in working order.
=== Computational Systems and Software Environment ===
The Computational Systems and Software Environment subdivision is responsible for maintaining and creating the supercomputer software according to NNSA's standards. It also deals with data, networking, and software tools.
The ASCI Path Forward project substantially funded the initial development of the Lustre parallel file system from 2001 to 2004.
=== Verification and Validation ===
The Verification and Validation subdivision is responsible for mathematically verifying the simulations and outcomes. They also help software engineers write more precise codes in order to decrease the margin of error when the computations are run.
=== Physics and Engineering Models ===
The Physics and Engineering Models subdivision is responsible for deciphering the mathematical and physical analysis of nuclear weapons. They integrate physics models into the codes in order to gain a more accurate simulation. They deal with the way that the nuclear weapon will act under certain conditions based on physics. They also study nuclear properties, vibrations, high explosives, advanced hydrodynamics, material strength and damage, thermal and fluid response, and radiation and electrical responses.
=== Integrated Codes ===
The Integrated Codes subdivision is responsible for the mathematical codes that are produced by the supercomputers. They take these mathematical codes and present them in a way that is understandable to humans. These codes are then used by the National Nuclear Security Administration, the Stockpile Stewardship Program, the Life Extension Program, and Significant Finding Investigations to decide the next steps that need to be taken to secure and lengthen the life of the nuclear stockpile.
=== Advanced Technology Development and Mitigation ===
The Advanced Technology Development and Mitigation subdivision is responsible for researching developments in high performance computing. Once information is found on the next generation of high performance computing, they decide what software and hardware needs to be adapted in order to prepare for the next generation of computers.
== References == | Wikipedia/Accelerated_Strategic_Computing_Initiative |
Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how, when, or why the characteristics occurred; rather, it addresses the "what" question: what are the characteristics of the population or situation being studied? The characteristics used to describe the situation or population are usually some kind of categorical scheme, also known as descriptive categories. For example, the periodic table categorizes the elements. Scientists used knowledge about the nature of electrons, protons, and neutrons to devise this categorical scheme. We now take the periodic table for granted, yet it took descriptive research to devise it. Descriptive research generally precedes explanatory research. For example, over time the periodic table's description of the elements allowed scientists to explain chemical reactions and make sound predictions when elements were combined.
Hence, descriptive research cannot describe what caused a situation. Thus, descriptive research cannot be used as the basis of a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.
The description is used for frequencies, averages, and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description and researchers may follow up with examinations of why the observations exist and what the implications of the findings are.
== Social science research ==
In addition, the conceptualizing of descriptive research (categorization or taxonomy) precedes the hypotheses of explanatory research. (For a discussion of how the underlying conceptualization of exploratory research, descriptive research and explanatory research fit together, see: Conceptual framework.)
Descriptive research can be statistical research. The main objective of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not gather the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic; that is, it analyzes the past rather than the future. Descriptive research explores existing phenomena whose details are not yet fully known.
== Descriptive science ==
Descriptive science is a category of science that involves descriptive research; that is, observing, recording, describing, and classifying phenomena. Descriptive research is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation.
David A. Grimaldi and Michael S. Engel suggest that descriptive science in biology is currently undervalued and misunderstood:
"Descriptive" in science is a pejorative, almost always preceded by "merely," and typically applied to the array of classical -ologies and -omies: anatomy, archaeology, astronomy, embryology, morphology, paleontology, taxonomy, botany, cartography, stratigraphy, and the various disciplines of zoology, to name a few. [...] First, an organism, object, or substance is not described in a vacuum, but rather in comparison with other organisms, objects, and substances. [...] Second, descriptive science is not necessarily low-tech science, and high tech is not necessarily better. [...] Finally, a theory is only as good as what it explains and the evidence (i.e., descriptions) that supports it.
A negative attitude by scientists toward descriptive science is not limited to biological disciplines: Lord Rutherford's notorious quote, "All science is either physics or stamp collecting," displays a clear negative attitude about descriptive science, and it is known that he was dismissive of astronomy, which at the beginning of the 20th century was still gathering largely descriptive data about stars, nebulae, and galaxies, and was only beginning to develop a satisfactory integration of these observations within the framework of physical law, a cornerstone of the philosophy of physics.
== Descriptive versus design sciences ==
Ilkka Niiniluoto has used the terms "descriptive sciences" and "design sciences" as an updated version of the distinction between basic and applied science. According to Niiniluoto, descriptive sciences are those that seek to describe reality, while design sciences seek useful knowledge for human activities.
== See also ==
Methodology
Normative science
Procedural knowledge
Scientific method
== References ==
== External links ==
Descriptive Research from BYU linguistics department | Wikipedia/Descriptive_science |
Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena.
There are applied natural sciences, as well as applied formal and social sciences. Applied science examples include genetic epidemiology which applies statistics and probability theory, and applied psychology, including criminology.
== Applied research ==
Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasting it with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. Instead, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow.
Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered.
Moreover, this type of research method applies natural sciences to human conditions:
Action research: aids firms in identifying workable solutions to issues influencing them.
Evaluation research: researchers examine available data to assist clients in making wise judgments.
Industrial research: create new goods/services that will satisfy the demands of a target market. (Industrial development, by contrast, would be scaling up production of the new goods/services for mass consumption while maximizing the ratio of output rate to resource input rate, the ratio of revenue to material and energy costs, and the quality of the good/service; such development is considered engineering and falls outside the scope of applied research.)
Gauging research: A type of evaluation research that uses a logic of rating to assess a process or program. It is a type of normative assessment and used in accreditation, hiring decisions and process evaluation. It uses standards or the practical ideal type and is associated with deductive qualitative research.
Since applied research has a provisional, close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research and experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines.
== Branches ==
Applied research is a method of problem-solving that is also practiced in areas of science such as applied psychology. Applied psychology draws on the study of human behavior to identify a main area of focus that can contribute to finding a resolution. More specifically, this approach is applied in the area of criminal psychology, where knowledge obtained from applied research informs studies of criminals and their behavior in order to apprehend them. The research also extends to criminal investigations: research methods in this category demonstrate an understanding of the scientific method and of the social research designs used in criminological research, and they branch further into investigative procedure, law, policy, and criminological theory.
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics.
Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine.
== In education ==
In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This usage reflects the history of older schools, where applied science once included boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree.
In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere.
In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance. University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the consortium of Cornell-Technion $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island.
== See also ==
Applied mathematics
Basic research
Exact sciences
Hard and soft science
Invention
Secondary research
== References ==
== External links ==
Media related to Applied sciences at Wikimedia Commons | Wikipedia/Applied_sciences |
Feminist theory in composition studies examines how gender, language, and cultural studies affect the teaching and practice of writing. It challenges the traditional assumptions and methods of composition studies and proposes alternative approaches informed by feminist perspectives. The field covers a range of topics, such as the history and development of women's writing, the role of gender in rhetorical situations, the representation and identity of writers, and the pedagogical implications of feminist theory for writing instruction. It also explores how writing can be used as a tool for empowerment, resistance, and social change. Feminist theory in composition studies emerged in the late 1960s and early 1970s as a response to the male-dominated field of composition and rhetoric, and has been influenced by various feminist movements and disciplines, such as second-wave feminism, poststructuralism, psychoanalysis, critical race theory, and queer theory. It has contributed to the revision of traditional rhetorical concepts, the recognition of diverse voices and genres, the promotion of collaborative and ethical communication, and the integration of personal and political issues in writing.
== Overview ==
In composition studies, feminism is generally focused on giving feedback while taking into account gender difference. Thus, an instructor with a feminist pedagogy is unlikely to favor an androcentric method of teaching. A feminist approach in composition "would focus on questions of difference and dominance in written language".
Feminist theory and composition studies co-exist when academic scholars look more closely at marginalized writers. Feminism was introduced into the field of composition through the collaboration of educational institutions and writing teachers, and it was also influenced by different academic and social disciplines. This helped to change the way that compositionists viewed the expectations and standards of good writing. Laura R. Micciche's writing on feminist rhetoric in regard to teaching writing pedagogies suggested that it can be used to create and re-create ideas about how writing is taught. The theory behind feminist rhetoric is to intentionally use ideology and politics when writing. Micciche states, "feminist rhetors—have revised traditional rhetorical concepts through an explicitly gendered and, quite frequently, feminist lens." Micciche also notes that Sonja Foss and Cindy Griffin identify a primary goal of feminist rhetoric to be the creation of spaces for rhetors to "develop models for cooperative, nonadversarial, and ethical communication." Feminist scholars have used strategic arguments to obtain support for changing power dynamics within society. Feminist theory argues that the empowerment of women can improve society. In collaboration with composition studies, feminist theory helps to create diverse educational standards in regard to the teaching of writing. Feminist scholars look at how patriarchal perspectives have shaped societies and cultures, and they bring awareness to voices that have been neglected or ignored.
This is also evident in an article by Patricia Fancher and Ellen O'Connell: in social group settings such as classrooms, men may use misogyny to intimidate educators, feeling empowered to undermine women in any setting. These actions make it difficult for women to work as educators, a dynamic reflected in student evaluations that show gender bias against women faculty. The article notes that women who hold greater institutional knowledge than their male colleagues may still be perceived as unserious or insufficiently knowledgeable simply because of their gender.
As Shari J. Stenberg explains, women's voices have not necessarily been absent in writing; they have sometimes just not been looked for or valued. Stenberg explains the dangers of only focusing on traditionally privileged voices, saying that doing this often removes important types of rhetorics, such as journal entries, letters, or other more feminine forms, from academic discourse or conversations. Adding these types of rhetoric back into conversations can help to redefine what are considered acceptable forms of writing for study and can also push to include traditionally marginalized groups, such as queer women and women of color. Allowing for these inclusions can also empower women to claim their multi-faceted identities, which further allows for the use of "the personal as a site of knowledge." These ideas not only can expand the range of issues women can write about, but have also helped to frame how some women teach composition. In the 1960s, the second wave of feminism began, and one major goal was to raise society's consciousness of the struggles of women. The goals of feminists were largely carried out in university classrooms. Specifically, in the composition classroom, Faye Spencer Moar claimed that the way writing was taught largely favored male writers. Mary P. Hiatt claimed that women implicitly write differently than men, and that men tended to write in the dominant, most often taught style, with men's concerns treated as more important than respect for women. Dotson explains that men, especially within institutional policy structures, assume the power to use women in authority positions, pursuing their own issues through women's issues. In this account, men perceive women's intelligence as a constant threat, and the power they feel entitled to is used to justify their actions; men push their issues onto women while maintaining that they themselves are not a problem in the workplace.
Hiatt argues that the terms "masculine" and "feminine" are applied to styles of writing–that of men and women, respectively–but, instead of describing the style, what is actually described is the male views on both men and women. Her examples include "strong", "rational", and "logical" for men, and "emotional", "hysterical", and "silly" for women. Thus, the aim of feminism in composition studies was to create a classroom in which women perceived themselves intellectually and in which their voices were relevant in what some feminists perceive to be an androcentric world.
A significant goal is to unmask patriarchal structures and to clarify the difference between misogyny and sexism, educating individuals on the meanings of feminisms through literature. This includes the ability to restructure what society presents as gender-based culture and to pass that understanding on to younger generations, since such norms are inherited from past generations and shape how women are respected. Lori Chamberlain emphasizes the marginalization of women produced by Western social tendencies, arguing that women's domestic duties do not define what women are for and calling for a re-examination of the value placed on productivity over reproductivity in masculine and feminine terms. As Jenkins notes, ameliorative analyses of gender concepts seek to define women by reference to subordination, a definition that risks excluding or further marginalizing women, typically those in oppressed social groups such as women of color or working-class women. These marginalizing attributes have largely benefited men, shaping how men conceptually perceive women and how women are treated within society.
== Pedagogy ==
Early feminist theory's inflections on composition and pedagogy aimed to challenge the cultural conventions and expectations of the feminine gender role. Women were encouraged to write independently, without relying on external validation. At the time, this process worked in conversation with the Expressivist-Process movement in composition, which valued self-expression, to enable women to grow conscious of themselves.
Finding an authentic female voice in composition can be a challenge amidst a context that does not value what women have to say. Through the lens of feminism and composition, writers and students are encouraged to boldly express women's experience in both content and form.
Elizabeth Flynn's article "Composing as a Woman" is the most cited example of the relationship between composition studies and feminism. She writes that feminist theory "emphasize[s] that males and females differ in their developmental processes and in their interactions with others". Thus, a feminist instructor will take into account the implicit differences between male and female writers and teach appropriately, without favoring or focusing on androcentric or gynocentric studies. Feminist pedagogy involves reading texts written by women, and taking care to understand that those texts are not simply appropriations of texts written by men, without any sort of critique of androcentrism.
One style of feminist theory that is being utilized in the composition classroom is the theory of Invitational Rhetoric. Sonja K. Foss and Cindy L. Griffin first proposed the idea of Invitational Rhetoric as "grounded in the feminist principles of equality, immanent value and self-determination" (5). Originally this was considered a communication theory. More recently, it has grown across curriculums, including use in English composition classrooms. As a newer philosophy in English composition, invitational rhetoric is used as a way to make students feel comfortable in the classroom setting. By using Foss and Griffin's Invitational Rhetoric theory as a guide in conducting classes, instructors are able to encourage their students to share their beliefs and learn to respect others' opinions, without having to feel like opposing views are being force-fed to them in a way that would cause them to turn away from debate or discussions that could foster critical thinking. According to Foss and Griffin, Invitational Rhetoric works through the use of debate and discussion as a way to learn about various viewpoints, with the freedom to ultimately make up one's own mind about the topic. Abby Knoblauch describes the use of Invitational Rhetoric as a way to make sure conservative students are not put on the defensive by more liberal teachers and their ideals. By using Invitational Rhetoric as a guide in presenting material, an instructor can in turn foster a student's creativity and encourage them to write about what is important to them.
Shari J. Stenberg further suggests that women must be able to define their roles in the composition classroom in order to have successful interactions with their students. She proposes the use of metaphors for women to do this, going on to say that they must be willing to constantly reinvent themselves in order to allow for their own shifting personalities and for how their personal identities influence their interactions with their students. Female composition teachers have previously been viewed as disciplinarians and caretakers. Stenberg supposes that, in order for women to be able to define themselves in their classrooms, they must take control of their unique and multi-dimensional identities in order to create their own space as educators.
== Research ==
Flynn researched the narratives of her first-year composition students for their disparities. She says, "The narratives of the female students are stories of interaction, of connection, or of frustrated connection. The narratives of the male students are stories of achievement, of separation, or of frustrated achievement". Feminist research "tries to arrive at hypotheses that are free of gender loyalties," says Patricia A. Sullivan.
Sandra Harding lists three characteristics of feminist research in her book Feminism and Methodology that Sullivan deems appropriate for consideration in feminist studies of composition, not just the social sciences, which are Harding's concern. These characteristics are, first, using women's experiences as an "indicator of the reality against which hypotheses are tested." Second, the research is "designed for women" and provides "social phenomena that [women] want or need." Third, it "insists that the inquirer her/himself be placed in the same critical plane as the overt subject matter".
Sullivan believes these three characteristics are relevant to composition studies because of the common practice of conducting research from a standpoint that is gender-neutral (favoring neither men over women, nor vice versa) and gender-inclusive (considering both male and female perspectives, processes, and styles, not just those of females), and that is marked by researcher disinterestedness (the common practice of keeping one's self out of the research process in order to allow for an unbiased analysis).
== References == | Wikipedia/Feminist_theory_in_composition_studies |
Feminist film theory is a theoretical film criticism derived from feminist politics and feminist theory, influenced by second-wave feminism and brought about around the 1970s in the United States. As film has advanced over the years, feminist film theory has developed and changed to analyze contemporary filmmaking and to revisit films of the past. Feminists take many approaches to cinema analysis, regarding the film elements analyzed and their theoretical underpinnings.
== History ==
The development of feminist film theory was influenced by second wave feminism and women's studies in the 1960s and 1970s. Initially, in the United States in the early 1970s, feminist film theory was generally based on sociological theory and focused on the function of female characters in film narratives or genres. Works of feminist film theory, such as Marjorie Rosen's Popcorn Venus: Women, Movies, and the American Dream (1973) and Molly Haskell's From Reverence to Rape: The Treatment of Women in Movies (1974), analyze the ways in which women are portrayed in film and how this relates to a broader historical context. Additionally, feminist critiques examine common stereotypes depicted in film, the extent to which women were shown as active or passive, and the amount of screen time given to women.
In contrast, film theoreticians in England concerned themselves with critical theory, psychoanalysis, semiotics, and Marxism. Eventually, these ideas gained hold within the American scholarly community in the 1980s. Analysis generally focused on the meaning within a film's text and the way in which the text constructs a viewing subject. It also examined how the process of cinematic production affects how women are represented and reinforces sexism.
The British feminist film theorist Laura Mulvey, best known for her essay "Visual Pleasure and Narrative Cinema" (written in 1973 and published in 1975 in the influential British film theory journal Screen), was influenced by the theories of Sigmund Freud and Jacques Lacan. "Visual Pleasure" is one of the first major essays that helped shift the orientation of film theory towards a psychoanalytic framework. Prior to Mulvey, film theorists such as Jean-Louis Baudry and Christian Metz had used psychoanalytic ideas in their theoretical accounts of cinema; Mulvey's contribution, however, initiated the intersection of film theory, psychoanalysis and feminism.
In 1976, the journal Camera Obscura was first published by graduate students Janet Bergstrom, Sandy Flitterman, Elisabeth Lyon, and Constance Penley. They discussed how women were portrayed in films yet excluded from the development process. Camera Obscura is still published to this day by Duke University Press and has broadened its scope from film theory to media studies.
Other key influences come from Metz's essay The Imaginary Signifier, "Identification, Mirror," where he argues that viewing film is only possible through scopophilia (pleasure from looking, related to voyeurism), which is best exemplified in silent film. Also, according to Cynthia A. Freeland in "Feminist Frameworks for Horror Films," feminist studies of horror films have focused on psychodynamics where the chief interest is "on viewers' motives and interests in watching horror films".
Beginning in the early 1980s, feminist film theory began to look at film through a more intersectional lens. The film journal Jump Cut published a special issue titled "Lesbians and Film" in 1981, which examined the lack of lesbian identities in film. Jane Gaines's essay "White Privilege and Looking Relations: Race and Gender in Feminist Film Theory" examined the erasure of black women in cinema by white male filmmakers, while Lola Young argues that filmmakers of all races fail to break away from the use of tired stereotypes when depicting black women. Other theorists who wrote about feminist film theory and race include bell hooks and Michele Wallace.
From 1985 onward the Matrixial theory of artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory.
Her concept, developed in her book The Matrixial Gaze, establishes a feminine gaze, articulates its differences from the phallic gaze, and relates it to feminine and maternal specificities and the potentialities of "coemergence", offering a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis. It is extensively used in the analysis of films by female directors, such as Chantal Akerman, as well as by male directors, such as Pedro Almodóvar. The matrixial gaze offers the female the position of a subject, not an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma.
Recently, scholars have expanded their work to include analysis of television and digital media. Additionally, they have begun to explore notions of difference, engaging in dialogue about the differences among women (part of a movement away from essentialism in feminist work more generally), the various methodologies and perspectives contained under the umbrella of feminist film theory, and the multiplicity of methods and intended effects that influence the development of films. Scholars are also taking increasingly global perspectives, responding to postcolonialist criticisms of perceived Anglo- and Eurocentrism in the academy more generally. Increased focus has been given to "disparate feminisms, nationalisms, and media in various locations and across class, racial, and ethnic groups throughout the world". Scholars in recent years have also turned their attention towards women in the silent film industry and their erasure from its history, and towards women's bodies and how they are portrayed in film. Jane Gaines's Women's Film Pioneer Project (WFPP), a database of women who worked in the silent-era film industry, has been cited by scholars such as Rachel Schaff as a major achievement in recognizing pioneering women in the field of silent and non-silent film.
In recent years, some have come to see feminist film theory as a fading area of feminism, given the extensive coverage now devoted to media studies and theory more broadly. As these areas have grown, the framework created in feminist film theory has been adapted to fit the analysis of other forms of media.
== Key themes ==
=== The gaze and the female spectator ===
Considering the way that films are put together, many feminist film critics have pointed to what they argue is the "male gaze" that predominates classical Hollywood filmmaking. Budd Boetticher summarizes the view:
"What counts is what the heroine provokes, or rather what she represents. She is the one, or rather the love or fear she inspires in the hero, or else the concern he feels for her, who makes him act the way he does. In herself, the woman has not the slightest importance.": 28
Laura Mulvey expands on this conception to argue that in cinema, women are typically depicted in a passive role that provides visual pleasure through scopophilia and identification with the on-screen male actor. She asserts: "In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact so that they can be said to connote to-be-looked-at-ness," and as a result contends that in film a woman is the "bearer of meaning, not maker of meaning." Mulvey argues that the psychoanalytic theory of Jacques Lacan is the key to understanding how film creates such a space for female sexual objectification and exploitation through the combination of the patriarchal order of society and 'looking' in itself as a pleasurable act of scopophilia, as "the cinema satisfies a primordial wish for pleasurable looking."
While Laura Mulvey's paper has a particular place in feminist film theory, her ideas regarding ways of watching the cinema (from the voyeuristic element to the feelings of identification) have been central to how some feminist film theorists define spectatorship from the psychoanalytical viewpoint.
Mulvey identifies three "looks" or perspectives that occur in film which, she argues, serve to sexually objectify women. The first is the perspective of the male character and how he perceives the female character. The second is the perspective of the spectator as they see the female character on screen. The third "look" joins the first two looks together: it is the male audience member's perspective of the male character in the film. This third perspective allows the male audience to take the female character as his own personal sex object because he can relate himself, through looking, to the male character in the film.
In the paper, Mulvey calls for a destruction of modern film structure as the only way to free women from their sexual objectification in film. She argues for a removal of the voyeurism encoded into film by creating distance between the male spectator and the female character. The only way to do so, Mulvey argues, is by destroying the element of voyeurism and "the invisible guest". Mulvey also asserts that the dominance men embody is only so because women exist, as without a woman for comparison, a man and his supremacy as the controller of visual pleasure are insignificant. For Mulvey, it is the presence of the female that defines the patriarchal order of society as well as the male psychology of thought.
Mulvey's argument is likely influenced by the time period in which she was writing. "Visual Pleasure and Narrative Cinema" was composed during the period of second-wave feminism, which was concerned with achieving equality for women in the workplace, and with exploring the psychological implications of sexual stereotypes. Mulvey calls for an eradication of female sexual objectivity, aligning herself with second-wave feminism. She argues that in order for women to be equally represented in the workplace, women must be portrayed as men are: as lacking sexual objectification.
Mulvey proposes in her notes to the Criterion Collection DVD of Michael Powell's controversial film Peeping Tom (a film about a homicidal voyeur who films the deaths of his victims) that the cinema spectator's own voyeurism is made shockingly obvious, and, even more shockingly, that the spectator identifies with the perverted protagonist. The inference is that she includes female spectators in this identification with the male observer rather than with the female object of the gaze.
=== Realism and counter cinema ===
The early work of Marjorie Rosen and Molly Haskell on the representation of women in film was part of a movement to depict women more realistically, both in documentaries and narrative cinema. The growing female presence in the film industry was seen as a positive step toward realizing this goal, by drawing attention to feminist issues and putting forth an alternative, true-to-life view of women. However, Rosen and Haskell argue that these images are still mediated by the same factors as traditional film, such as the "moving camera, composition, editing, lighting, and all varieties of sound." While acknowledging the value in inserting positive representations of women in film, some critics asserted that real change would only come about from reconsidering the role of film in society, often from a semiotic point of view.
Claire Johnston put forth the idea that women's cinema can function as "counter cinema." Through consciousness of the means of production and opposition to sexist ideologies, films made by women have the potential to posit an alternative to traditional Hollywood films. Initially, the attempt to show "real" women was praised; eventually, critics such as Eileen McGarry claimed that the "real" women being shown on screen were still contrived depictions. In reaction to this critique, many women filmmakers integrated "alternative forms and experimental techniques" to "encourage audiences to critique the seemingly transparent images on the screen and to question the manipulative techniques of filming and editing".
=== Additional theories ===
B. Ruby Rich argues that feminist film theory should shift to look at films in a broader sense. Rich's essay "In the Name of Feminist Film Criticism" claims that films by women often receive praise for certain elements while feminist undertones are ignored. Rich goes on to say that, because of this, feminist theory needs to focus on how films by women are received.
Coming from a black feminist perspective, the American scholar bell hooks put forth the notion of the "oppositional gaze," encouraging black women not to accept stereotypical representations in film, but rather to actively critique them. The "oppositional gaze" is a response to Mulvey's visual pleasure and states that just as women do not identify with female characters that are not "real," women of color should respond similarly to the one-dimensional caricatures of black women.
Janet Bergstrom's article "Enunciation and Sexual Difference" (1979) uses Sigmund Freud's ideas of bisexual responses, arguing that women are capable of identifying with male characters and men with women characters, either successively or simultaneously. Miriam Hansen, in "Pleasure, Ambivalence, Identification: Valentino and Female Spectatorship" (1984), put forth the idea that women are also able to view male characters as erotic objects of desire. In "The Master's Dollhouse: Rear Window," Tania Modleski argues that Hitchcock's film Rear Window is an example of the power of the male gazer and the position of the female as a prisoner of the "master's dollhouse".
Carol Clover, in her popular and influential book Men, Women, and Chainsaws: Gender in the Modern Horror Film (Princeton University Press, 1992), argues that young male viewers of the horror genre (young males being the primary demographic) are quite prepared to identify with the female-in-jeopardy, a key component of the horror narrative, and to identify on an unexpectedly profound level. Clover further argues that the "final girl" in the psychosexual subgenre of exploitation horror invariably triumphs through her own resourcefulness, and is not by any means a passive, or inevitable, victim. Laura Mulvey, in response to these and other criticisms, revisited the topic in "Afterthoughts on 'Visual Pleasure and Narrative Cinema' inspired by Duel in the Sun" (1981). In addressing the heterosexual female spectator, she revised her stance to argue that women can take two possible roles in relation to film: a masochistic identification with the female object of desire that is ultimately self-defeating, or an identification with men as the active viewers of the text. A new version of the gaze was offered in the early 1990s by Bracha Ettinger, who proposed the notion of the "matrixial gaze".
== List of select feminist film theorists and critics ==
== See also ==
Bechdel test
Cinesexuality
Film theory
List of female film and television directors
List of lesbian filmmakers
List of LGBT films directed by women
Misogyny in horror films
Women's cinema
Camera Obscura (journal)
== References ==
== Further reading ==
Sue Thornham (ed.), Feminist Film Theory. A Reader, Edinburgh University Press 1999
Multiple Voices in Feminist Film Criticism, edited by Diane Carson, Janice R. Welsch, Linda Dittmar, University of Minnesota Press 1994
Kjell R. Soleim (ed.), Fatal Women. Journal of the Center for Women's and Gender Research, Bergen Univ., Vol. 11: 115–128, 1999.
Bracha L. Ettinger (1999), "Matrixial Gaze and Screen: Other than Phallic and Beyond the Late Lacan." In: Laura Doyle (ed.) Bodies of Resistance. Evanston, Illinois: Northwestern University Press, 2001.
Beyond the Gaze: Recent Approaches to Film Feminisms. Signs Vol. 30, no. 1 (Autumn 2004).
Mulvey, Laura (Autumn 1975). "Visual pleasure and narrative cinema". Screen. 16 (3): 6–18. doi:10.1093/screen/16.3.6.
Griselda Pollock, Differencing the Canon. Routledge, London & N.Y., 1999.
Griselda Pollock (ed.), Psychoanalysis and the Image. Oxford: Blackwell, 2006.
Raberger, Ursula: New Queer Oz: Feministische Filmtheorie und weibliche Homosexualität in zwei Filmen von Samantha Lang. VDM Verlag Dr. Müller: 2009, 128 p. (German)
Feminist philosophy of science is a branch of feminist philosophy that seeks to understand how the acquisition of knowledge through scientific means has been influenced by notions of gender identity and gender roles in society. Feminist philosophers of science question how scientific research and scientific knowledge itself may be influenced and possibly compromised by the social and professional framework within which that research and knowledge is established and exists. The intersection of gender and science allows feminist philosophers to reexamine fundamental questions and truths in the field of science to reveal how gender biases may influence scientific outcomes. The feminist philosophy of science has been described as being located "at the intersections of the philosophy of science and feminist science scholarship" and has attracted considerable attention since the 1980s.
Feminist philosophers of science use feminist epistemology as a lens through which to analyze scientific methods, results, and analysis. This epistemology emphasizes "situated knowledge" that hinges on one's individual perspectives on a subject; feminist philosophers often highlight the under-representation of female scientists in academia and the resulting androcentric biases that exist in science. Feminist philosophers suggest that integrating feminine modes of thought and logic that are undervalued by current scientific theory will enable improvement and broadening of scientific perspectives. Advocates assert that inclusive epistemology via applying a feminist philosophy of science will allow for a field of science that is more accessible to the public. Practitioners of feminist philosophy of science also seek to promote gender equality in scientific fields and greater recognition of the achievements of female scientists.
Critics have argued that the political commitments of advocates of feminist philosophy of science are incompatible with modern-day scientific objectivity, emphasizing the success of the scientific method due to its lauded objectivity and "value-free" methods of knowledge-making.
== History ==
The feminist philosophy of science was born out of feminist science studies in the 1960s, when female primatologists began to reevaluate stereotypes of male and female behavior in animals. However, feminist reform born from this branch of philosophy did not receive formal backing from the federal government until the late 1980s, after which its prominence as a philosophy of science grew. In 1986, the National Institutes of Health (NIH) instituted a requirement for both male and female subjects in medical and clinical research. In the early 1990s, the NIH Office of Research on Women's Health and $625 million in funding for the Women's Health Initiative represented drastic support for applications of the feminist philosophy of science in the public sphere.
These reforms coincided with the growth of the feminist philosophy of science in the academic realm. In August 1978, Catharine R. Stimpson and Joan Burstyn published an editorial in a special volume of Signs titled "Women, Science, and Society" highlighting the lack of female scholarship in science and its effects. Their article introduced three areas of scholarship: critiques of gender bias in science, a history of women in science, and social science data and public policy considerations on the status of women in the science.
In the 1980s, feminist science studies had become more philosophical, corresponding to a shift in many fields of academic feminism. Two main fields of thought emerged, creating a divide between scholarship on "women in science" and "feminist critiques of science". While both agreed on the existence of an androcentric bias in science, the former focused on an increase in funding and hiring of female scientists, while the latter called for an interrogation of the underlying assumptions and biases present in scientific theory and methods. The latter became the primary focus of feminist philosophers of science moving forward, and conflict arose between women who were actually involved in scientific research and those attempting a feminist critique of gender roles in science.
By the late nineties, feminist science studies had become well-established and had many prominent scholars within its field of study. Philosopher John Searle characterized feminism in 1993 as a "cause to be advanced" more so than a "domain to be studied", signaling the rise in the use of feminist philosophy as a lens through which to perform science.
== Feminist philosophy of science ==
=== Objectivity and values ===
Feminist philosophers of science state that, rather than being purely objective, science is necessarily biased and not value-free. This branch of feminist philosophy argues that a full understanding and interpretation of scientific results requires an interrogation of how gender inequities influence the credibility of research methods.
Feminist philosophers of science argue that equity and inclusion can help create more robust research methods to alleviate gender bias and produce more thorough results. For example, a lack of female research subjects and perspectives in academic research undermines the "contextual empiricism" required by true neutrality. Thus, because science is affected by social, cultural, and political agendas via funding, feminist philosophers of science believe equitable funding is a critical first step in removing biases from research and increasing the autonomy of science.
The values and criticisms of the feminist philosophy of science are more broadly categorized under the idea of "Socially Responsible Science (SRS)". Socially responsible science argues for an impartial evaluation that makes a distinction between facts and values, which is necessary for the creation of "good science". In "The Source and Status of Values for Socially Responsible Science," Matthew Brown discusses the lens of being socially engaged in science as a means of "craft[ing] better ethics codes for their professional societies." He believes this is done by emphasizing "Ethics and social and political philosophy at least as much as epistemology and metaphysics." Valuing the study of ethics, politics, and social studies in understanding the basis upon which research is performed, Brown argues that a new, impartial agenda for science can be developed.
=== Standpoint and knowledge ===
The feminist philosophy of science has traditionally been highly critical of the lack of access and opportunities for women in science, resulting in scientific results that have been "distorted by sexist values." Sharon Crasnow highlights how the "exclusion of women as researchers and subjects" in scientific research, studies and projects can lead to incomplete methods and methodologies and ultimately unreliable or inaccurate results. Some feminist philosophies of science question whether science can lay claim to "impartiality, neutrality, autonomy, and indifference to political positions and the values" when the "neutral" position is benchmarked against the values held by one culture (i.e. western patriarchy) among the multitude of cultures participating in modern science.
A complete standpoint theory contains seven parts that together account for the location of power a knower occupies, their "epistemic privilege". Anderson lays these out in her article "Feminist Epistemology and Philosophy of Science". First, the theory must state the social location of the authority. Second, it must state the scope of that authority: what it claims privilege over. Third, it must identify the aspect of the social location that confers the authority. Fourth, it must give the grounds of the authority: what justifies the privilege. Fifth, it must specify the type of epistemic privilege being claimed. Sixth, it must relate the view to other perspectives similar to its own. Lastly, it must explain access to the privilege: whether occupying the social location is sufficient to gain access to the perspective.
Relating to objectivity, epistemology can give a fuller understanding of the nature of scientific knowledge. Feminist epistemology is one of a group of approaches in science studies that urges us to recognize the role of the social in the production of knowledge. Feminist epistemology directs people, as knowers, to consider features of themselves and their culture that had previously been regarded as outside the bounds of legitimate consideration. The goals of researchers, and the values that shape the choice of goals, are relevant to the knowledge we arrive at. This has implications both for how we train scientists and for how we educate everyone about science. If science is seen as more connected to application, more related to human needs and desires, traditionally underrepresented groups will have greater motivation to succeed and persist in their science courses or pursue scientific careers. Motivation will be greater as members of underrepresented groups see how science can produce knowledge that has value for their concerns in ways that are consistent with good scientific methodology. Feminist epistemology urges a continued exploration of science in this way and so has much to offer science education.
=== Criticisms of feminist epistemology in science ===
External critics of the feminist philosophy of science find several flaws in its logic and values. Because feminist philosophers argue that scientific "facts" are necessarily biased by values, one major criticism is that scientists under this epistemological constraint will "impos[e] political constraints on the conclusions it will accept" and that "truths inconvenient to a feminist perspective will be censored." Moreover, some critics contend that while values are important in the interpretation of scientific results, attention to the values present in scientific inquiry does not displace the importance of scientific evidence. Some further argue that because of the "corrosive cynicism about science" suggested by feminist critique, feminist philosophers of science may support a wholly anti-science movement.
Another criticism commonly levied at the feminist philosophy of science is that it suggests all women have the same perspectives and that objective truths can be revealed by performing science in a "feminine" way, which creates multiple issues. By homogenizing the perspectives of women into one monolithic viewpoint, the feminist philosophy of science may valorize a certain female mode of thinking that can be used to diminish individual female perspectives. Furthermore, some critics worry that promoting a feminist epistemological lens through which to perform research will result in an intellectual ghetto for female scientists, who will be pigeonholed into particular fields where feminist theory is deemed more relevant.
=== Applications of the feminist philosophy of science ===
Many applications of the feminist philosophy of science exist in recent work, with feminist epistemology applied to research in a variety of scientific fields.
Feminist epistemology is particularly relevant in the area of reproductive biology. Emily Martin describes how stereotypes of male and female behavior have affected descriptions of the human fertilization process. She argues that, due to various perceptions of women throughout history, biologists have mischaracterized the interaction between egg and sperm; Martin applies the feminist philosophy of science to call for an objective model of fertilization unbiased by societal gender roles and harmful perceptions of female behavior.
Further work has explored the application of the feminist philosophy of science in evolutionary biology. Historically, evolutionary biologists assumed that the female orgasm assisted in reproduction, since it was analogous to the male orgasm, despite clear evidence to the contrary. Recent accounts describe how these assumptions were largely incorrect. Elisabeth A. Lloyd's findings from extended case studies of the female orgasm illustrate that core beliefs developed solely through assumptions predicated on gender result in major flaws in scientific research, illustrating the importance of applying feminist philosophy in academic work.
Supporters also argue that the feminist philosophy of science should be applied to primary and secondary schooling. To combat the underrepresentation of women in science, technology, engineering, and math, reforms should be implemented through a feminist philosophical viewpoint. Rather than combating gender biases in science by implementing feminist viewpoints in research and analysis, some suggest that encouraging girls to pursue STEM via educational reforms will intrinsically counteract gender biases in scientific research.
== See also ==
Feminist technoscience
== References == | Wikipedia/Feminist_philosophy_of_science |
The Florida Public Archaeology Network, or FPAN, is a state supported organization of regional centers dedicated to public outreach and assisting Florida municipalities and the Florida Division of Historical Resources "to promote the stewardship and protection of Florida's archaeological resources." FPAN was established in 2004, upon legislation that sought to establish a "Florida network of public archaeology centers to help stem the rapid deterioration of this state's buried past and to expand public interest in archaeology."
== Regions ==
The Florida Public Archaeology Network is divided into eight regions:
Northwest Region – Counties of Escambia, Santa Rosa, Okaloosa, Walton, Holmes, Washington, Bay, Jackson, Calhoun, and Gulf.
North Central Region – Counties of Gadsden, Liberty, Franklin, Leon, Wakulla, Jefferson, Madison, Taylor, Hamilton, Suwannee, Lafayette, Dixie, Columbia, Baker, and Union.
Northeast Region – Counties of Nassau, Duval, Clay, St. Johns, Putnam, Flagler, and Volusia.
Central Region – Counties of Gilchrist, Levy, Bradford, Alachua, Marion, Citrus, Hernando, Sumter, and Lake.
East Central Region – Counties of Seminole, Orange, Osceola, Brevard, Indian River, Okeechobee, St. Lucie, and Martin.
West Central Region – Counties of Pasco, Pinellas, Hillsborough, Polk, Manatee, Sarasota, Hardee, DeSoto, and Highlands.
Southwest Region – Counties of Charlotte, Glades, Lee, Hendry, and Collier.
Southeast Region – Counties of Palm Beach, Broward, Miami-Dade, and Monroe.
== Projects ==
The Florida Panhandle Shipwreck Trail features 12 shipwrecks including artificial reefs and a variety of sea life for diving, snorkeling and fishing offshore of Pensacola, Destin, Panama City and Port St. Joe, Florida. The "trail offers an adventurous opportunity for heritage, recreational, and ecological tourism."
=== USS Oriskany ===
Pensacola:
The largest artificial reef in the world, this wreck was named a Top 25 U.S. Dive Site in 2014 by Scuba Diving magazine.
Depth: 80–212 feet
Sink Date: May 17, 2006
Nicknamed the "Great Carrier Reef," the USS Oriskany, also known as the "Mighty O," was sunk after serving in the Pacific and earning battle stars for service in both the Korean and Vietnam Wars. Located 22 miles off the coast of Pensacola and submerged in more than 200 feet of water, this shipwreck offers exploration for divers of all skills and a myriad of pelagic and sedentary marine life.
=== YDT-14 ===
Pensacola:
After years of training US Navy divers, this shipwreck is now a dive destination itself.
Depth: 90 feet
Sink Date: April 2000
Gulf storms have buried this diving tender to her decks, but the upper structure around 65 feet below sea level offers boundless exploration for divers.
=== San Pablo ===
Pensacola: This ship is steeped in a history of foreign spies, espionage and secret military operations.
Depth: 80 feet
Sink Date: August 11, 1944
Launched from Belfast, Ireland, in 1915, San Pablo started her life as a fruit transport running bananas from Central America to the United States. Early during World War II she was sunk by a U-boat in Costa Rica and refloated. In August 1944 amid rampant rumors of foreign spies and espionage, San Pablo exploded off Pensacola's coast, hence her local name "Russian Freighter." Recently declassified documents reveal that San Pablo was actually destroyed in a top-secret U.S. military operation testing an experimental weapon system. American agents sank the freighter with a radio-controlled boat carrying over 3,000 lbs. of explosives. Her wreckage is scattered across the seafloor where divers can explore boilers, refrigeration coils and huge sections of twisted metal, all home to an impressive array of marine life.
=== Pete Tide II ===
Pensacola: Three levels of decks offer boundless exploration in this upright underwater artificial reef.
Depth: 100 feet
Sink Date: 1993
An offshore oilfield supply vessel, this upright wreck boasts three levels of superstructure and an intact pilothouse, making it home to mesmerizing schools of fish.
=== Three Coal Barges ===
Pensacola: Shallow wreckage is ideal for beginning divers to practice.
Depth: 50 feet
Sink Date: 1974
Sunk in an emergency operation to avoid the barges running ashore, these three barges lie end-to-end in less than 50 feet of water, creating a thriving habitat for marine life and a shallow destination for diving and snorkeling.
=== Miss Louise ===
Destin: Shallow waters make for an ideal dive training site.
Depth: 60 feet
Sink Date: 1997
A push tug, Miss Louise sits upright in 60 feet of water, brimming with marine life and serving as a great site for novice and intermediate divers.
=== Black Bart ===
Panama City: Appliances (including the ship's toilet) remain intact in the head and galley, offering unique exploration for divers.
Depth: 85 feet
Sink Date: 1993
Sunk in memory of Navy Supervisor of Salvage Captain Charles "Black Bart" Bartholomew, this oilfield supply vessel remains largely intact with its wheelhouse (40-foot depth), deck (66-foot depth) and open cargo holds (80-foot depth).
=== Fami Tugs ===
Panama City Beach: Nature rearranged this artificial reef, picking up one of the two tugs and situating it on top of the other for a most unusual diving experience.
Depth: 100 feet
Sink Date: 2003
The tugs once rested bow-to-bow, joined by a 30-foot tether, until a storm picked up one of them and placed it directly atop the other, allowing divers to enjoy two shipwrecks at once and serving as a reminder to visitors of the power of the sea.
=== USS Accokeek ===
Panama City Beach: More than 50 years of service around the world.
Depth: 100 feet
Sink Date: July 9, 2000
After a global run serving as a fleet tug, the USS Accokeek was repeatedly sunk and refloated for salvage and ordnance training at the Navy Dive School in Panama City, Fla.
=== USS Strength ===
Panama City Beach: This ship originally lay on her side on the ocean floor before being righted by a hurricane in 1995.
Depth: 80 feet
Sink Date: May 19, 1987
A World War II minesweeper that survived a midget submarine attack and a kamikaze raid, the USS Strength offers divers a tour of a large artificial reef, including a large goliath grouper.
=== USS Chippewa ===
Panama City Beach: This tugboat is known for breaking speed records during her U.S. Navy tenure.
Depth: 100 feet
Sink Date: Feb. 8, 1990
Sunk as a Navy training platform for the Panama City Experimental Dive Unit in 1990, the USS Chippewa now sits upright on the bottom in 100 feet of water and offers good opportunities to observe marine life.
=== Vamar ===
Port St. Joe: Made famous as a support ship for Admiral Richard Byrd's 1928 Antarctic expedition, the Vamar sank under mysterious circumstances.
Depth: 25 feet
Sink Date: March 21, 1942
Sunk under mysterious circumstances during a trip to carry lumber to Cuba in 1942, the Vamar now lies in just 25 feet of water, offering a large steam engine, bilge keels and a wide variety of marine life for divers and snorkelers.
=== Mardi Gras Shipwreck ===
In May 2007, an expedition, led by Texas A&M University and funded by the Okeanos Gas Gathering Company (OGGC) under an agreement with the Minerals Management Service (now BOEM), was launched to undertake the deepest scientific archaeological excavation ever attempted at that time, to study a wreck site on the seafloor and recover artifacts for eventual public display in the Louisiana State Museum. The "Mardi Gras Shipwreck" sank some 200 years ago about 35 miles off the coast of Louisiana in the Gulf of Mexico in 4,000 feet (1,200 meters) of water. The shipwreck, whose real identity remains a mystery, lay forgotten at the bottom of the sea until it was discovered in 2002 by an oilfield inspection crew working for the OGGC. As part of the educational outreach, Nautilus Productions, in partnership with BOEM, Texas A&M University, the Florida Public Archaeology Network and Veolia Environmental, produced a one-hour HD documentary about the project and short videos for public viewing, and provided video updates during the expedition. The Center for Maritime Archaeology and Conservation was tasked with the conservation and analysis of the material recovered from the wreck site.
== References ==
== External links ==
Florida Panhandle Shipwreck Trail
Mardi Gras Shipwreck Project
Mystery Mardi Gras Shipwreck Short | Wikipedia/Florida_Public_Archaeology_Network |
A radio-controlled submarine is a scale model of a submarine that can be piloted by radio control. The most common forms are those operated by amateurs; these range from cheap toys to complex projects using custom electronics.
Oceanographers and the military also operate radio-controlled submersibles; these are known as remotely operated underwater vehicles.
== History ==
== Operation ==
=== Radio transmission by water ===
As the conductivity of a medium increases, a radio signal passing through it is attenuated more strongly; high frequencies are attenuated the most and tend to be reflected at the surface of the water. For this reason, communication with military submarines uses very low-frequency electromagnetic radiation. Military frequencies are well below the recreational radio control bands, but the lowest recreational bands - usually around 27 MHz/40 MHz - can penetrate several feet of water over short distances - usually less than 50 yards. Penetration at these frequencies is best in fresh water - a lake or a swimming pool - and difficult to impossible in seawater. Modern radio controls using the 2.4 GHz band penetrate water very poorly and are of no use to a model submariner who wishes to dive.
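The scale of the effect can be sketched with the standard skin-depth formula for a conducting medium. The figures below are a rough illustration using commonly assumed conductivities (roughly 4 S/m for seawater, on the order of 0.005 S/m for fresh water), not measurements from the model-submarine literature:

```latex
% Skin depth in a good conductor; mu ~ mu_0 for water
\delta = \sqrt{\frac{2}{\omega \mu \sigma}} \approx \frac{503}{\sqrt{f\,\sigma}}\ \mathrm{m}
% Seawater (\sigma \approx 4\ \mathrm{S/m}) at f = 27\ \mathrm{MHz}:
\delta \approx \frac{503}{\sqrt{(27\times 10^{6})(4)}} \approx 0.05\ \mathrm{m}
```

A 27 MHz signal thus loses most of its strength within a few centimetres of seawater, matching the experience described above. In fresh water the same expression gives depths on the order of a metre or more, and the good-conductor approximation itself begins to break down, so penetration is considerably better.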
In order for the radio to work underwater, even at these frequencies, the receiving antenna must be completely insulated from the surrounding water. Plastic-covered wire provides adequate insulation - the antenna does not need to be kept in an airtight container - but the cut end of the wire must be sealed to prevent water penetration. Depending on the water conditions, positive control can be maintained to a depth of about 10 feet.
Professional or military remotely operated diving equipment can be controlled using a tether or acoustic signals. Very often, such equipment carries on-board computers that allow autonomous operation along a predetermined path, so that continuous communication with the control base is not necessary. The advent of cheap small computers such as the Raspberry Pi and the Arduino has allowed scale-model submarines to imitate their professional counterparts and provide autonomous control in situations where radio transmission or adequate visibility is lacking.
=== Security system ===
Since radio control of submarine models is not always reliable, these models are usually equipped with a variety of devices intended to prevent their loss. Fail-safe systems can detect the loss of signal and return the submarine to the surface, and pressure sensors can limit the depth reached. Such specialized complexity generally makes a model submarine an expensive object compared to a model surface boat.
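A minimal sketch of such a fail-safe is shown below. The receiver, ballast and pressure-sensor interfaces are hypothetical (the names, thresholds and units are invented for illustration); a real build would wrap whatever hardware the model actually uses.

```python
import time

SIGNAL_TIMEOUT_S = 2.0  # surface if no valid frame arrives for this long (assumed value)
MAX_DEPTH_M = 3.0       # pressure-sensor depth limit (assumed value)

def failsafe_loop(receiver, ballast, pressure_sensor):
    """Blow the ballast whenever radio contact is lost or the depth limit is exceeded."""
    last_frame = time.monotonic()
    while True:
        if receiver.has_valid_frame():          # hypothetical RC receiver API
            last_frame = time.monotonic()
        lost_signal = time.monotonic() - last_frame > SIGNAL_TIMEOUT_S
        too_deep = pressure_sensor.depth_m() > MAX_DEPTH_M
        if lost_signal or too_deep:
            ballast.blow()                      # positive buoyancy brings the model up
        time.sleep(0.05)                        # ~20 Hz watchdog polling
```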
=== Diving systems ===
==== Dynamic diving ====
These models are positively buoyant and remain on the surface until sufficient flow is generated over their rudder and control surfaces to force them underwater. Dynamic diving models are both the cheapest and the simplest, since complex buoyancy control systems are replaced by dive planes or thrusters. Dynamic diving models also have the advantage of rising back to the surface in the event of loss of radio contact, thanks to their positive buoyancy. However, because they are positively buoyant, these models must maintain sufficient speed underwater to stay there and are unable to stop without coming to the surface.
==== Static diving ====
These models have the ability to modify their displacement by taking on or pumping out water. This can be done using a piston, a pneumatic bladder or a ballast tank. Boats that use a ballast tank usually fill it by opening a vent at the top, and force the water out using compressed gas. There are variants that use water pumps for both processes, and others in which liquefied gas is metered into the ballast tank to expel the water.
In gas-snort systems, the liquefied gas is reserved for bringing the boat to the surface in an emergency; in normal operation the boat rises to periscope depth with a full ballast tank, and the ballast is then blown through a snorkel tube once the snorkel clears the surface.
RCABS (recycled compressed air ballast system): originally developed by Darnell (UK) in the 1950s, this system uses a rubber bladder as the ballast tank, inflated with compressed air supplied by a small compressor. The air used to inflate the bladder is taken from the dry space of the watertight container (WTC), so the air is recycled rather than vented overboard.
Another system that is gaining popularity is the snort system. A ballast tank is allowed to flood by releasing a vent valve on its top, allowing the boat to submerge. To rise to the surface, a small "snort" pump pumps air from the conning tower into the ballast tank, expelling the water.
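The vent-to-dive, pump-to-rise cycle described above amounts to a small state machine. The sketch below illustrates it under the same caveat as before: the valve and pump objects are hypothetical stand-ins for whatever actuators a particular model uses.

```python
from enum import Enum, auto

class Ballast(Enum):
    SURFACED = auto()
    FLOODING = auto()   # vent open, tank filling, boat descending
    BLOWING = auto()    # pump running, tank emptying, boat rising

def step(state: Ballast, command: str, vent_valve, snort_pump) -> Ballast:
    """One control tick of a minimal snort-system ballast controller."""
    if command == "dive" and state is not Ballast.FLOODING:
        snort_pump.off()
        vent_valve.open()    # flooding the tank removes buoyancy
        return Ballast.FLOODING
    if command == "surface" and state is not Ballast.BLOWING:
        vent_valve.close()
        snort_pump.on()      # air pumped in displaces the water
        return Ballast.BLOWING
    return state
```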
== See also ==
Remotely operated underwater vehicle
== References == | Wikipedia/Radio-controlled_submarine |
The case method is a teaching approach that uses decision-forcing cases to put students in the role of people who were faced with difficult decisions at some point in the past. It developed during the course of the twentieth century from its origins in the casebook method of teaching law pioneered by Harvard legal scholar Christopher C. Langdell. In sharp contrast to many other teaching methods, the case method requires that instructors refrain from providing their own opinions about the decisions in question. Rather, the chief task of instructors who use the case method is asking students to devise, describe, and defend solutions to the problems presented by each case.
== Comparison with the casebook method of teaching law ==
The case method evolved from the casebook method, a mode of teaching based on Socratic principles pioneered at Harvard Law School by Christopher C. Langdell. Like the casebook method, the case method calls upon students to take on the role of an actual person faced with a difficult problem.
== Decision-forcing cases ==
A decision-forcing case is a kind of decision game. Like other kinds of decision games, a decision-forcing case puts students in the role of a person faced with a problem (often called the "protagonist") and asks them to devise, defend, discuss, and refine solutions to that problem. However, in sharp contrast to decision games that contain fictional elements, decision-forcing cases are based entirely upon reliable descriptions of real events.
A decision-forcing case is also a kind of case study. That is, it is an examination of an incident that took place at some time in the past. However, in contrast to a retrospective case study, which provides a complete description of the events in question, a decision-forcing case is based upon an "interrupted narrative." This is an account that stops whenever the protagonist finds himself faced with an important decision. In other words, while retrospective case studies ask students to analyze past decisions with the aid of hindsight, decision-forcing cases ask students to engage problems prospectively.
== Criticisms of decision-forcing cases ==
After corporate scandals and the 2008 financial crisis, the case method was criticized for contributing to a narrow, instrumental, amoral, managerial perspective on business where making decisions which maximise profit is all that matters, ignoring the social responsibilities of organisations. It is argued that the case method puts too much emphasis on taking action and not enough on thoughtful reflection to see things from different perspectives. It has been suggested that different approaches to case writing, that do not put students in the ‘shoes’ of a manager, be encouraged to address these concerns.
== Role play ==
Every decision-forcing case has a protagonist, the historical person who was faced with the problem or problems that students are asked to solve. Thus, in engaging these problems, students necessarily engage in some degree of role play.
Some case teachers, such as those of the Marine Corps University, place a great deal of emphasis on role play, to the point of addressing each student with the name and titles of the protagonist of the case. (A student playing the role of a king, for example, is asked "Your Majesty, what are your orders?") Other case teachers, such as those at the Harvard Business School, place less emphasis on role play, asking students "what would you do if you were the protagonist of the case."
== Historical solution ==
After discussing student solutions to the problem at the heart of a decision-forcing case, a case teacher will often provide a description of the historical solution, that is, the decision made by the protagonist of the case. Also known as "the rest of the story", "the epilogue", or (particularly at Harvard University) "the 'B' case", the description of the historical solution can take the form of a printed article, a video, a slide presentation, a short lecture, or even an appearance by the protagonist.
Whatever the form of the description of the historical solution, the case teacher must take care to avoid giving the impression that the historical solution is the "right answer." Rather, he should point out that the historical solution to the problem serves primarily to provide students with a baseline to which they can compare their own solutions.
Some case teachers will refrain from providing the historical solution to students. One reason for not providing the historical solution is to encourage students to do their own research about the outcome of the case. Another is to encourage students to think about the decision after the end of the class discussion. "Analytic and problem-solving learning," writes Kirsten Lundgren of Columbia University, "can be all the more powerful when the 'what happened' is left unanswered."
== Complex cases ==
A classic decision-forcing case asks students to solve a single problem faced by a single protagonist at a particular time. There are, however, decision-forcing cases in which students play the role of a single protagonist who is faced with a series of problems, two or more protagonists dealing with the same problem, or two or more protagonists dealing with two or more related problems.
== Decision-forcing staff rides ==
A decision-forcing case conducted in the place where the historical decisions at the heart of the case were made is called a "decision-forcing staff ride." Also known as an "on-site decision-forcing case", a decision-forcing staff ride should not be confused with the two very different exercises that are also known as "staff rides": retrospective battlefield tours of the type practiced by the United States Army in the twentieth century and the on-site contingency planning exercises (Stabsreisen, literally "staff journeys") introduced by Gerhard von Scharnhorst in 1801 and made famous by the elder Helmuth von Moltke in the middle years of the nineteenth century.
To avoid confusion between "decision-forcing staff rides" and staff rides of other sorts, the Case Method Project at the Marine Corps University in Quantico, Virginia, adopted the term "Russell Ride" to describe the decision-forcing staff rides that it conducts. The term is an homage to Major General John Henry Russell Jr., USMC, the 16th Commandant of the United States Marine Corps and an avid supporter of the applicatory method of instruction.
== Sandwich metaphors ==
Decision-forcing cases are sometimes described with a system of metaphors that compares them to various types of sandwiches. In this system, pieces of bread serve as a metaphor for narrative elements (i.e. the start, continuation, or end of an account) and the filling of the sandwich serves as a metaphor for a problem that students are asked to solve.
A decision-forcing case in which one protagonist is faced with two problems is thus a "triple-decker case." (The bottom piece of bread is the background to the first problem, the second piece of bread is both the historical solution to the first problem and the background to the second problem, and the third piece of bread is the historical solution to the second problem.) Similarly, a decision-forcing case for which the historical solution is not provided (and is thus a case with but one narrative element) is an "open-face" or "smørrebrød" case.
A decision-forcing case in which students are asked to play the role of a decision-maker who is faced with a series of decisions in a relatively short period of time is sometimes called a "White Castle" or "slider" case, or a "day in the life" case.
== Case materials ==
Case materials are any materials that are used to inform the decisions made by students in the course of a decision-forcing case. Commonly used case materials include articles that were composed for the explicit purpose of informing case discussion, secondary works initially produced for other purposes, historical documents, artifacts, video programs, and audio programs.
Case materials are made available to students at a variety of times in the course of a decision-forcing case. Materials that provide background are distributed at, or before, the beginning of the class meeting. Materials that describe the solution arrived at by the protagonist and the results of that solution are passed out at, or after, the end of the class meeting. (These are called "the B-case", "the rest of the story", or "the reveal.") Materials that provide information that became available to the protagonist in the course of solving the problem are given to students in the course of a class meeting. (These are often referred to as "handouts.")
Case materials may be either "refined" or "raw." Refined case materials are secondary works that were composed expressly for use as part of decision-forcing cases. (Most of the case materials that are available from case clearing houses and academic publishers are of the refined variety.) Raw case materials are those that were initially produced for reasons other than the informing of a case discussion. These include newspaper articles, video and audio news reports, historical documents, memoirs, interviews, and artifacts.
== Published case materials ==
A number of organizations, including case clearing houses, academic publishers, and professional schools, publish case materials. These organizations include:
Blavatnik School of Government
Harvard Business School
Stanford Graduate School of Business
Columbia Business School
IESE Business School
INSEAD
ICFAI Business School Hyderabad
Ivey Business School
Indian School of Business
Indian Institute of Management, Ahmedabad
Darden School of Business at the University of Virginia
Nagoya University of Commerce & Business
Asian Institute of Management
Asian Case Research Centre at the University of Hong Kong
Globalens at the University of Michigan
Centre for Management Practice at Singapore Management University
== The Case Centre ==
The Case Centre (formerly the European Case Clearing House), headquartered at Cranfield University, Cranfield, Bedford, United Kingdom, and with its US office at Babson College, Wellesley, Massachusetts, describes itself as the independent home of the case method. It is a membership-based, not-for-profit organisation and registered charity founded in 1973, with more than 500 members worldwide.
The Case Centre is the world's largest and most diverse repository of case studies used in management education, with cases from the world's top case-publishing schools, including Harvard Business School, ICFAI Business School Hyderabad, the Blavatnik School of Government, INSEAD, IMD, Ivey Business School, Darden School of Business, London Business School and Singapore Management University. Its stated aim is to promote the case method by sharing knowledge, skills, and expertise in this area among teachers and students, and to this end it engages in activities such as conducting case method workshops, offering case scholarships, publishing a journal, and organizing global case method awards.
The Case Centre Awards (known as the European Awards between 1991 and 2010) recognise outstanding case writers and teachers worldwide. These awards, popularly known as the case method community's annual "Oscars" or the "business education Oscars", celebrate worldwide excellence in case writing and teaching.
== The narrative fallacy ==
The presentation of a decision-forcing case necessarily takes the form of a story in which the protagonist is faced with a difficult problem. This can lead to "the narrative fallacy", a mistake that leads both case teachers and the developers of case materials to ignore information that, while important to the decision that students will be asked to make, complicates the telling of the story. This, in turn, can create a situation in which, rather than engaging the problem at the heart of the case, students "parse the case materials." That is, they make decisions on the basis of the literary structure of the case materials rather than the underlying reality.
Techniques for avoiding the narrative fallacy include the avoidance of standard formats for case materials; awareness of tropes and clichés; the use of case materials originally created for purposes other than case teaching; and the deliberate inclusion of "distractors" – information that is misleading, irrelevant, or at odds with other information presented in the case.
== Purpose of the case method ==
The case method gives students the ability to quickly make sense of a complex problem, rapidly arrive at a reasonable solution, and communicate that solution to others in a succinct and effective manner. In the course of doing this, the case method also accomplishes a number of other things, each of which is valuable in its own right. By exciting the interest of students, the case method fosters interest in professional matters. By placing such things in a lively context, the case method facilitates the learning of facts, nomenclature, conventions, techniques, and procedures. By providing both a forum for discussion and concrete topics to discuss, the case method encourages professional dialogue. By providing challenging practice in the art of decision-making, the case method refines professional judgement. By asking difficult questions, the case method empowers students to reflect upon the peculiar demands of their profession.
In his classic essay on the case method ("Because Wisdom Can't Be Told"), Charles I. Gragg of the Harvard Business School argued that "the case system, properly used, initiates students into the ways of independent thought and responsible judgement."
== Incompatible objectives ==
While the case method can be used to accomplish a wide variety of goals, certain objectives are at odds with its nature as an exercise in professional judgement. These incompatible objectives include attempts to use decision-forcing cases to:
provide an example to be emulated
paint a particular person as a hero or a villain
encourage (or discourage) a particular type of behavior
illustrate a pre-existing theory
Thomas W. Shreeve, who uses the case method to teach people in the field of military intelligence, argues that "Cases are not meant to illustrate either the effective or the ineffective handling of administrative, operational, logistic, ethical, or other problems, and the characters in cases should not be portrayed either as paragons of virtue or as archvillains. The instructor/casewriter must be careful not to tell the students what to think—they are not empty vessels waiting to be filled with wisdom. With this method of teaching, a major share of the responsibility for thinking critically about the issues under discussion is shifted to the students, where it belongs."
== Disclaimers ==
Case materials are often emblazoned with a disclaimer that warns both teachers and students to avoid the didactic, hortatory, and "best practices" fallacies.
== Use of the case method in professional schools ==
The case method is used in a variety of professional schools. These include the:
Harvard Business School
IESE Business School
Columbia Business School
Singapore Management University
Blavatnik School of Government
INCAE Business School
ICFAI Business School Hyderabad
The Acton School of Business
Hogeschool van Amsterdam
Asian Institute of Management
Indian Institute of Management, Ahmedabad
Richard Ivey School of Business
John F. Kennedy School of Government at Harvard University
NUCB Business School at the Nagoya University of Commerce & Business
Darden School of Business at the University of Virginia
Columbia School of Journalism
Mailman School of Public Health, Columbia University
School of International and Public Affairs, Columbia University
Yale School of Management
Marine Corps University
Cranfield School of Management
School of Advertising & Public Relations, University of Texas
Suleman Dawood School of Business at the Lahore University of Management Sciences
Institute of Business Administration, Karachi
Michael G. Foster School of Business
Institute for Financial Management and Research
Institute of Chartered Accountants in England and Wales
University of Fujairah - MBA Program
INALDE Business School in Bogota, Colombia
== See also ==
Business schools
Case competition
Case study
Casebook method (used by law schools)
Decision game
European Case Clearing House
Experiential learning
Harvard Business Publishing
Teaching method
== References ==
== Literature ==
Corey, Raymond (1998), Case Method Teaching, Harvard Business School 9-581-058, Rev. November 6, 1998.
Gudmundsson, Bruce Ivar (2020), Decision-Forcing Cases (PDF), Marine Corps University, Quantico, VA.
Hammond, J.S. (2002), Learning by the case method (PDF), HBS Publishing Division, Harvard Business School, Boston, MA
Herreid, Clyde Freeman (2005), "Because Wisdom Can't Be Told: Using Case Studies to Teach Science", Peer Review (Winter 2005), archived from the original on 2014-11-08, retrieved 2014-11-08.
Lundgren, Kirsten (2012), The Case Method: Art and Skill.
McNair, Malcolm P., ed. (1954), The Case Method at the Harvard Business School: Papers by Present and Past Members of the Faculty and Staff, New York: McGraw-Hill.
Siddiqui, Zehra (2013), How to write a case study (PDF), William Davidson Institute, University of Michigan, Ann Arbor, MI, archived from the original (PDF) on 2015-04-21, retrieved 2012-04-30. | Wikipedia/Case_method
Conversational Constraints Theory, developed by Min-Sun Kim, attempts to explain how and why certain conversational strategies differ across various cultures and the effects of these differences. It is embedded in the social science communication approach, which is based upon how culture influences communication. There are five universal conversational constraints: 1) clarity, 2) minimizing imposition, 3) consideration for the other's feelings, 4) risking negative evaluation by the receiver, and 5) effectiveness. These five constraints pivot on whether a culture is more social-relational (collectivistic cultures) or task-oriented (individualistic cultures).
The social relational approach focuses on having more concern for the receiver's feelings, placing more importance on saving face for the other person than on being concise. When constructing messages, speakers taking the social relational approach consider how their words and actions will affect the listener's feelings. The task-oriented approach emphasizes concern for clarity over feelings; it places higher value on the degree to which the message is communicated explicitly in its truest form. Cultures have specific manners and behaviors that pertain to conversational style, and behaviors preferred by some cultures can be offensive to others. Conversational Constraints Theory seeks to explain why certain tactics work in some cultures but not in others, as shaped by the customs, rules, and norms of each culture.
The central focus of Conversational Constraints Theory is not necessarily what is said, but how it is said. Conversations are typically goal-oriented and require coordination between both communicators, and messages are developed on the basis of various constraints, personal or cultural, in the pursuit of any kind of interaction. Kim uses the need for approval, the need for dominance, and gender roles to analyze conversational constraints. The more approval a person needs (associated in the theory with femininity), the more they view minimizing imposition and concern for the hearer's feelings as important; the more dominant a person is (associated with masculinity), the more they view message clarity and directness as important.
== Effectiveness ==
Concern for effectiveness is a constraint that is important across nearly all cultures. It is focused on the influence that the message has on the receiver, and to what extent. Effectiveness describes how well the content of the message is conveyed to the listener and whether the style of verbal delivery is soft or direct; it pertains to the potency of the message, whether strong or weak, powerful or ineffective, weighty or superficial. Collectivistic cultures tend to deliver messages in a more diffuse and watered-down form so as to lessen negativity and offense; this style has more ease and cushion in how the message is spoken and is structured in a way that will minimize dissonance at all costs. Individualistic cultures, on the other hand, maximize directness in delivering the message. The tone of their messages focuses on frankness and being straightforward with the listener, with the intent of being bluntly honest in order to be effective. Individualistic cultures are not generally concerned with the listener's feelings if attending to them would sacrifice the effectiveness of the message.
== Clarity ==
“Clarity is defined as the likelihood of an utterance making one’s intention clear and explicit.” Clarity is an important part of conversation because, in order for a conversation to flow properly, the communication needs to be clear and precise. A person trying to communicate a specific message explicitly uses direct imperatives to ensure that the proper message is carried to the hearer. If a person is attempting to use the hint strategy, the message will be less clear because the intent is not communicated explicitly and therefore is not derivable from the literal meaning of the utterances.
Kim proposes that task-oriented constraints emphasize a concern for clarity. For example, task-oriented constraints measure the degree to which the intentions of messages are communicated explicitly. Members of individualistic cultures consider clarity more significant in pursuing goals than members of collectivistic cultures do; they also have higher thresholds for clarity and pay more attention to it. Individuals with independent and interdependent self-construals hold different views on the importance of clarity. For instance, individuals with independent self-construals perceive clarity as more significant in pursuing goals than individuals with interdependent self-construals. Individuals who display both independent and interdependent self-construals focus on both relational and clarity constraints, while individuals who display neither do not view clarity or relational constraints as imperative. To help further explain conversational constraints, Kim uses the need for approval, the need for dominance, and gender roles; of these, the need for dominance and gender roles apply to clarity. For instance, the more dominance individuals possess, the greater the emphasis they place on clarity, and a greater emphasis on clarity is likewise associated with masculinity. These different displays of clarity provide evidence for Kim's conversational constraints.
== Consideration for Others’ Feelings ==
When communicating with another person, individuals take into account the listener's feelings. People acknowledge how their intended action will affect the feelings of the other person. The concern the speaker displays for the hearer relates to what the speaker feels is necessary to help the hearer maintain a positive self-image. Positive face, identity goals, and “concern with support” are three labels that help determine the degree to which a strategy shows consideration for the hearer's feelings. When a person requests an explicit action, there is a higher chance of hurting the listener's feelings; communicating with a hint sends a more implicit message and delivers it more gently. Compared to task-oriented constraints, social relational constraints stress concern for others by refraining from injuring the hearer's feelings. Speakers guided by these constraints are strongly concerned with how their communication may affect the hearer and reflect on how to accomplish their communicative goals without threatening the hearer's autonomy. It has been found that “collectivism influences the importance members of cultures place on relational concerns in conversation” (Kim, 1995). Members of collectivistic cultures therefore place more emphasis on face-supporting behaviors, such as avoiding harming the other's feelings, when pursuing goals. Compared to members of individualistic cultures, these members “have higher thresholds for face support and select strategies to maximize face support” (Kim, 1995). Individuals with interdependent self-construals want to avoid loss of face as much as possible and want to feel welcomed by particular social groups; they see avoiding hurt feelings as more significant than individuals with independent self-construals do. Individuals who are more feminine and need more approval tend to put more focus on concern for others than more dominant individuals do.
== Minimizing Imposition ==
An essential component of Conversational Constraints Theory is the role of minimizing imposition. The theory discusses cross-cultural differences observed when studying communicative strategies in different cultures. For instance, members of collectivistic cultures view face-supporting behavior as important, and one way this is expressed is through minimizing imposition when a member is in pursuit of a goal. There is ample reason to believe that individualistic cultures, on the other hand, do not consider face-supporting behaviors to be as important to goal-oriented behavior. Conversational Constraints Theory suggests that feminine individuals place more value on minimizing their imposition, while masculine individuals tend to place less value on it. In addition, the theory reports that the more approval an individual requires within a given context, the more importance they will place on minimizing their imposition. Minimizing imposition on hearers can operate as a social-relational conversational constraint or within a task-oriented one. Research on conversational constraints across different cultures has also noted specific concerns that remain to be addressed.
== Avoiding Negative Evaluation by the Hearer ==
Recent studies have raised concerns about conversational constraints across different cultures. Current research suggests that the concern for avoiding negative evaluation by the hearer is only one of three concerns observed in research studies. In most instances, this conversational constraint arises when a speaker attempts to avoid negative evaluation from the individual hearing the message. It offers a plausible explanation of why individuals conduct their behavior in ways intended to avoid devaluation by others within a conversation. For example, a person might try to make a good first impression in an interview by using strategies that avoid negative evaluation from the individual conducting the interview. Among the various cultures in which conversational constraints have been studied, individualistic cultures differ from other types: they are more focused on the amount of clarity within a conversational constraint and less concerned with avoiding negative evaluation from the hearer. In contrast, collectivistic cultures are more concerned with avoiding negative evaluation from the hearer and minimizing imposition, because these constraints are considered face-supporting behavior.
== Works cited ==
Gudykunst, William B. "Theories of Intercultural Communication I." China Media Research 1 (2005): 61-75.
Gudykunst, William B. Cross-cultural and Intercultural Communication. Thousand Oaks: SAGE, 2003.
Martin, J.M., and T. Nakayama. Intercultural Communication in Contexts. New York, New York.: McGraw-Hill, 2004.
Kim, Min-Sun, and Krystyna Aune. "The Effects of Psychological Gender Orientations on the Perceived Salience of Conversational Constraints." Sex Roles (1997).
Conversational Constraints Theory e-book | Wikipedia/Conversational_constraints_theory |
An instructional theory is "a theory that offers explicit guidance on how to better help people learn and develop." It provides insights about what is likely to happen and why with respect to different kinds of teaching and learning activities while helping indicate approaches for their evaluation. Instructional designers focus on how to best structure material and instructional behavior to facilitate learning.
== Development ==
Originating in the United States in the late 1970s, instructional theory is influenced by three basic theories in educational thought: behaviorism, the theory that helps us understand how people conform to predetermined standards; cognitivism, the theory that learning occurs through mental associations; and constructivism, the theory that explores the value of human activity as a critical function of gaining knowledge. Instructional theory is heavily influenced by the 1956 work of Benjamin Bloom, a University of Chicago professor, and the results of his Taxonomy of Educational Objectives, one of the first modern codifications of the learning process. One of the first instructional theorists was Robert M. Gagné, who in 1965 published Conditions of Learning for Florida State University's Department of Educational Research.
== Definition ==
Instructional theory is different from learning theory. A learning theory describes how learning takes place, while an instructional theory prescribes how to better help people learn. Learning theories often inform instructional theory, and three general theoretical stances take part in this influence: behaviorism (learning as response acquisition), cognitivism (learning as knowledge acquisition), and constructivism (learning as knowledge construction). Instructional theory helps create conditions that increase the probability of learning. Its goal is to understand the instructional system and to improve the process of instruction.
== Overview ==
Instructional theories identify what instruction or teaching should be like. They outline strategies that an educator may adopt to achieve the learning objectives, and are adapted based on the educational content and, more importantly, the learning styles of the students. They are used as teaching guidelines/tools by teachers and trainers to facilitate learning. Instructional theories encompass different instructional methods, models and strategies.
David Merrill's First Principles of Instruction discusses universal methods of instruction, situational methods and core ideas of the post-industrial paradigm of instruction.
Universal Methods of Instruction:
Task-Centered Principle - instruction should use a progression of increasingly complex whole tasks.
Demonstration Principle - instruction should guide learners through a skill and engage peer discussion/demonstration.
Application Principle - instruction should provide intrinsic or corrective feedback and engage peer-collaboration.
Activation Principle - instruction should build upon prior knowledge and encourage learners to acquire a structure for organizing new knowledge.
Integration Principle - instruction should engage learners in peer-critiques and synthesizing newly acquired knowledge.
Situational Methods:
based on different approaches to instruction
Role play
Synectics
Mastery learning
Direct instruction
Discussion
Conflict resolution
Peer learning
Experiential learning
Problem-based learning
Simulation-based learning
based on different learning outcomes:
Knowledge
Comprehension
Application
Analysis
Synthesis
Evaluation
Affective development
Integrated learning
Core ideas for the Post-industrial Paradigm of Instruction:
Learner centered vs. teacher centered instruction – with respect to the focus, instruction can be based on the capability and style of the learner or the teacher.
Learning by doing vs. teacher presenting – Students often learn more by doing than by simply listening to instructions given by the teacher.
Attainment based vs. time based progress – The instruction can either be based on the focus on the mastery of the concept or the time spent on learning the concept.
Customized vs. standardized instruction – The instruction can be different for different learners, or the instruction can be given in general to the entire classroom.
Criterion referenced vs. norm referenced instruction – Instruction related to different types of evaluations.
Collaborative vs. individual instruction – Instruction can be for a team of students or individual students.
Enjoyable vs. unpleasant instructions – Instructions can create a pleasant learning experience or a negative one (often to enforce discipline). Teachers must take care to ensure positive experiences.
Four tasks of Instructional theory:
Knowledge selection
Knowledge sequence
Interaction management
Setting of interaction environment
== Critiques ==
Paulo Freire's work appears to critique instructional approaches that adhere to the knowledge acquisition stance; his Pedagogy of the Oppressed has had a broad influence over a generation of American educators through its critique of various "banking" models of education and its analysis of the teacher-student relationship.
Freire explains, "Narration (with the teacher as narrator) leads the students to memorize mechanically the narrated content. Worse yet, it turns them into "containers", into "receptacles" to be "filled" by the teacher. The more completely she fills the receptacles, the better a teacher she is. The more meekly the receptacles permit themselves to be filled, the better students they are." In this way he explains educator creates an act of depositing knowledge in a student. The student thus becomes a repository of knowledge. Freire explains that this system that diminishes creativity and knowledge suffers. Knowledge, according to Freire, comes about only through the learner by inquiry and pursuing the subjects in the world and through interpersonal interaction.
Freire further states, "In the banking concept of education, knowledge is a gift bestowed by those who consider themselves knowledgeable upon those whom they consider to know nothing. Projecting an absolute ignorance onto others, a characteristic of the ideology of oppression, negates education and knowledge as processes of inquiry. The teacher presents himself to his students as their necessary opposite; by considering their ignorance absolute, he justifies his own existence. The students, alienated like the slave in the Hegelian dialectic, accept their ignorance as justifying the teacher's existence—but, unlike the slave, they never discover that they educate the teacher."
Freire then offered an alternative stance and wrote, "The raison d'etre of libertarian education, on the other hand, lies in its drive towards reconciliation. Education must begin with the solution of the teacher-student contradiction, by reconciling the poles of the contradiction so that both are simultaneously teachers and students."
In the article, "A process for the critical analysis of instructional theory", the authors use an ontology-building process to review and analyze concepts across different instructional theories. Here are their findings:
Concepts exist in theoretical writing that theorists do not address directly.
These tacit concepts, which supply the ontological categories, enable a more detailed comparison of theories beyond specific terminologies.
Divergences between theories can be concealed behind common terms used by different theorists.
A false sense of understanding often arises from a cursory, uncritical reading of the theories.
Discontinuities and gaps are revealed within the theoretical literature when the tacit concepts are elicited.
== See also ==
Educational technology – Use of technology in education to improve learning and teaching (the use of electronic educational technology is also called e-learning)
Edupunk – Teaching method
Instructional design – Process for design and development of learning resources
Teaching method – Principles and methods used by teachers to enable student learning
Training Within Industry – Service for on-the-job training in industry, developed during WWII and still in use around the world
Behaviorism – Systematic approach to understanding the behavior of humans and other animals
Cognitivism – Theoretical framework for understanding the mind
Constructivism – Theory of knowledge
== References ==
Linking Premise to Practice: An Instructional Theory-Strategy Model Approach By: Bowden, Randall. Journal of College Teaching & Learning, v5 n3 p69-76 Mar 2008
Paulo Freire, Pedagogy of the Oppressed. ISBN 0-8264-1276-9. | Wikipedia/Instructional_theory |
Curriculum theory (CT) is an academic discipline devoted to examining and shaping educational curricula. There are many interpretations of CT, ranging from the dynamics of the learning process of one child in a classroom to the lifelong learning path an individual takes. CT can be approached from the educational, philosophical, psychological and sociological perspectives. James MacDonald states "one central concern of theorists is identifying the fundamental unit of curriculum with which to build conceptual systems. Whether this be rational decisions, action processes, language patterns, or any other potential unit has not been agreed upon by the theorists." Curriculum theory is fundamentally concerned with values, the historical analysis of curriculum, ways of viewing current educational curriculum and policy decisions, and theorizing about the curricula of the future.
Pinar defines the contemporary field of curriculum theory as "the effort to understand curriculum as a symbolic representation".
The first mention of the word "curriculum" in university records was in 1582, at the University of Leiden, Holland: "having completed the curriculum of his studies". However, curriculum theory as a field of study is thought to have been initiated with the publication of The Yale Report on the Defense of the Classics in 1828, which promoted the study of a classical curriculum, including Latin and Greek, by rote memorization.
== Faculty psychology ==
The school of faculty psychology, which dominated the field from 1860 to 1890 in the United States, believed that the brain was a muscle that could be improved by the exercise of memorization (with comprehension a secondary consideration). This supports the classical theory, which previously emphasized a method of teaching school subjects using memorization and recitation as primary instructional tools. The theory itself claims three constituent faculties or powers:
the presence of will or volition, which enables human beings to act;
the emotions, which pertains to the affections and passions that enable human beings to experience pleasure, pain, love, and hate; and,
the intellect or understanding, which is the foundation of human rationality that enables him to make judgments and comprehend meanings.
The idea is that education should expand the faculty of the mind and this is achieved through the key concepts of discipline and furniture. The faculty theory, which steered curriculum policy for elementary, secondary, and high schools, was institutionalized by three committees appointed by the National Education Association (NEA) in the 1890s to follow faculty psychology principles: the Committee of Ten on Secondary School Studies (1893), the Committee of Fifteen on Elementary Education (1895) and the Committee on College Entrance Requirements.
== The Herbartians ==
Different schools of Curriculum Theory developed as a reaction to the classicism of faculty psychology, including the Herbartians, who organized the Herbart Club in 1892, and later the National Herbart Society (1895-1899). Their philosophy was based on the thoughts of Johann Friedrich Herbart, a German philosopher, psychologist and educator, who believed that "the mere memorizing of isolated facts, which had characterized school instruction for ages, had little value of either educational or moral ends".
== The social efficiency movement ==
The publication of John Franklin Bobbitt's The Curriculum in 1918 took the prevalent industrial-revolution concepts of experimental science and social efficiency and applied them to the classroom. He believed that "curriculum must directly and specifically prepare students for tasks in the adult world". He also believed that "human life...consists in the performance of specific activities. Education that prepares for life is one that prepares definitely and adequately for these specific activities." From this idea, he suggested that curriculum was a series of experiences that children have in order to meet "objectives," or abilities and habits that people need for particular activities.
Other famous theorists of this movement included Edward L. Thorndike (1874-1949), the father of experimental psychology in education; Frederick Winslow Taylor (1856-1915), with his theory of scientific management; David Snedden, an educational sociologist who promoted social efficiency and vocational education; and W.W. Charters (1875-1952), a teacher educator who felt that "curriculum was comprised of those methods by which objectives are determined". By using education as an efficiency tool, these theorists believed that society could be controlled. Students were scientifically evaluated by testing (such as IQ tests) and educated towards their predicted role in society. This involved the introduction of vocational and junior high schools to address the curriculum designed around specific life activities that correlated with each student's determined societal future. The socially efficient curriculum consisted of minute parts or tasks that together formed a bigger concept.
== The progressive reform movement ==
The progressive reform movement began in the late 1870s with the work of Colonel Francis Parker, but is most identified with John Dewey, and also Joseph Mayer Rice and Lester Frank Ward. Dewey's 1899 book The School and Society is often credited with starting the movement. These reformers felt that curriculum should be child-driven and pitched at the child's present capacity level. To aid in understanding the relationship of curriculum and child, Dewey described curriculum as "a map, a summary, an arranged and orderly view of previous experiences, serves as a guide to future experience; it gives direction; it facilitates control; it economizes effort, preventing useless wandering, and pointing out the paths which lead most quickly and most certainly to a desired result". He envisioned that "the child and the curriculum are simply two limits which define a single process".
The Social Efficiency and Progressive Reform movements were rivals throughout the 1920s in the United States, with the 1930s belonging to the Progressives, or a curriculum combining aspects of both. Ralph W. Tyler's Basic Principles of Curriculum and Instruction (1949) swung the pendulum of curriculum theory away from child centeredness toward more generalized behaviors.
Tyler's theory was based on four fundamental questions which became known as the Tyler Rationale:
What educational purposes should the school seek to attain?
What educational experiences can be provided that are likely to attain these purposes?
How can these educational experiences be effectively organized?
How can we determine whether these purposes are being attained?
== The multicultural education movement ==
There is a racial crisis in America, exacerbated by the widening gap between the rich and the poor. To address this gap, a body of knowledge within the multicultural education movement argues for the need to reconceptualise, re-envision, and rethink American schooling. Numerous authors advocate fundamental changes in the educational system that acknowledge a plurality within teaching and learning for students of diversity. Current research suggests that the existing educational structure is oppressive to students of diversity and is an obstacle to integration into society and to student achievement. Current multicultural education theory suggests that curriculum and institutional change is required to support the development of students from diverse ethnic and cultural backgrounds. This is a controversial view, but multicultural education theorists argue that traditional curricula do not adequately represent the history of the non-dominant group. Nieto (1999) shares this concern for students who do not belong to the dominant group and who seem to have challenging curriculum experiences that conflict with their personal cultural identity and their wider community reference groups.
== Sputnik and the National Defense Education Act ==
The launch of Sputnik in 1957 created a focus on science and math in the United States curriculum. Admiral Hyman Rickover accused the American public of indifference to intellectual achievement: "Our schools must return to the tradition of formal education in Western civilization-transmission of cultural heritage, and preparation for life through rigorous intellectual training of young minds to think clearly, logically, and independently". The result was a return to curricula similar to the classicists of the 1890s and the modern birth of the traditionalists, with massive federal funding for curriculum development provided by the National Defense Education Act of 1958.
== Reconceptualized curriculum ==
Joseph J. Schwab was instrumental in provoking curriculum developers to think beyond the traditionalist approach. In his 1969 paper "The Practical: A Language for Curriculum" he declared the curriculum field "moribund". This, plus the social unrest of the 1960s and '70s, stirred a new movement of "reconceptualization" of curricula. A group of theorists, including James Macdonald, Dwayne Huebner, Ross Mooney, Herbert M. Kliebard, Paul Klohr, Michael Apple, W.F. Pinar, and others, created ways of thinking about curriculum and its role in the academy, in schools, and in society in general. Their approach included perspectives from the social, racial, gender, phenomenological, political, autobiographical and theological points of view.
== Today ==
W.F. Pinar describes the present field as "balkanized...divided into relatively separate fiefdoms or sectors of scholarship, each usually ignoring the other except for occasional criticism." The top-down governmental control of educational curriculum in the Anglophone world, including the United States, has been criticized as being "ahistorical and atheoretical, and as a result prone to difficult problems in its implementation". But there are theorists who are looking beyond curriculum "simply as a collection of study plans, syllabi, and teaching subjects. Instead, the curriculum becomes the outcome of a process reflecting a political and societal agreement about the what, why, and how of education for the desired society of the future."
== See also ==
Curriculum studies
== References ==
== Further reading ==
Overcoming the crisis in curriculum theory: a knowledge-based approach
21st Century Standards and Curriculum: Current Research and Practice | Wikipedia/Curriculum_theory |
In logic, inference is the process of deriving logical conclusions from premises known or assumed to be true. When an inference is checked for formal validity, only the meaning of its logical vocabulary is considered; when it is checked for material validity, the meaning of its extra-logical vocabulary is considered as well.
== Examples ==
For example, the inference "Socrates is a human, and each human must eventually die, therefore Socrates must eventually die" is a formally valid inference; it remains valid if the nonlogical vocabulary "Socrates", "is human", and "must eventually die" is arbitrarily, but consistently, replaced.
In contrast, the inference "Montreal is north of New York, therefore New York is south of Montreal" is materially valid only; its validity relies on the extra-logical relations "is north of" and "is south of" being converse to each other.
== Material inferences vs. enthymemes ==
Classical formal logic considers the above "north/south" inference an enthymeme, that is, an incomplete inference; it can be made formally valid by making the tacitly used converse relationship explicit: "Montreal is north of New York, and whenever a location x is north of a location y, then y is south of x; therefore New York is south of Montreal".
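In first-order notation (the symbols here are chosen purely for illustration), let $N(x,y)$ abbreviate "$x$ is north of $y$", let $S(x,y)$ abbreviate "$x$ is south of $y$", and write $m$ for Montreal and $n$ for New York. The completed inference is then

\[
N(m,n),\quad \forall x\,\forall y\,\big(N(x,y)\rightarrow S(y,x)\big) \;\vdash\; S(n,m),
\]

and the conclusion follows by universal instantiation and modus ponens alone, so its validity no longer depends on what "north" and "south" mean.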
In contrast, the notion of a material inference has been developed by Wilfrid Sellars in order to emphasize his view that such supplements are not necessary to obtain a correct argument.
== Brandom on material inference ==
=== Non-monotonic inference ===
Robert Brandom adopted Sellars' view, arguing that everyday (practical) reasoning is usually non-monotonic, i.e. additional premises can turn a practically valid inference into an invalid one, e.g.
"If I rub this match along the striking surface, then it will ignite." (p→q)
"If p, but the match is inside a strong electromagnetic field, then it will not ignite." (p∧r→¬q)
"If p and r, but the match is in a Faraday cage, then it will ignite." (p∧r∧s→q)
"If p and r and s, but there is no oxygen in the room, then the match will not ignite." (p∧r∧s∧t→¬q)
...
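The defeasible pattern in this list can be mimicked by a toy evaluator in which each more specific rule overrides the conclusion of the more general one; adding a premise can flip the verdict, which is exactly what monotonic logic forbids. This is only an illustrative sketch with invented predicate names, not a model of Brandom's account:

```python
def will_ignite(facts: set) -> bool:
    """Toy non-monotonic evaluation of the match example: the most specific
    applicable exception decides the outcome."""
    if "no_oxygen" in facts:        # premise t
        return False
    if "faraday_cage" in facts:     # premise s
        return True
    if "em_field" in facts:         # premise r
        return False
    return "struck" in facts        # premise p: the default rule p -> q

# Each added premise reverses the previous conclusion:
assert will_ignite({"struck"})
assert not will_ignite({"struck", "em_field"})
assert will_ignite({"struck", "em_field", "faraday_cage"})
assert not will_ignite({"struck", "em_field", "faraday_cage", "no_oxygen"})
```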
Therefore, practically valid inference is different from formally valid inference (which is monotonic: the above argument that Socrates must eventually die cannot be challenged by any additional information), and is better modelled by materially valid inference. A classical logician could add a ceteris paribus clause to 1. to make it usable in formally valid inferences:
"If I rub this match along the striking surface, then, ceteris paribus, it will inflame."
However, Brandom doubts that the meaning of such a clause can be made explicit, and prefers to consider it a hint of non-monotonicity rather than a miracle drug to establish monotonicity.
Moreover, the "match" example shows that a typical everyday inference can hardly be ever made formally complete. In a similar way, Lewis Carroll's dialogue "What the Tortoise Said to Achilles" demonstrates that the attempt to make every inference fully complete can lead to an infinite regression.
== See also ==
Material inference should not be confused with the following concepts, which refer to formal, not material validity:
Material conditional — the logical connective "→" (i.e. "formally implies")
Material implication (rule of inference) — a rule for formally replacing "→" by "¬" (negation) and "∨" (disjunction)
== Notes ==
== Citations ==
== References ==
Stanford Encyclopedia of Philosophy on Sellars's view | Wikipedia/Material_inference
Soft science fiction, or soft SF, is a category of science fiction with two different definitions, in contrast to hard science fiction. It explores the "soft" sciences (e.g. psychology, political science, sociology), as opposed to the "hard" sciences (e.g. physics, astronomy, biology). It can also refer to science fiction which prioritizes human emotions over scientific accuracy or plausibility.
Soft science fiction of either type is often more concerned with depicting speculative societies and relationships between characters than with realistic portrayals of speculative science or engineering. The term first appeared in the mid-1970s and is attributed to Australian literary scholar Peter Nicholls.
== Definition ==
In The Encyclopedia of Science Fiction, Peter Nicholls writes that "soft SF" is a "not very precise item of SF terminology" and that the contrast between hard and soft is "sometimes illogical." In fact, the boundaries between "hard" and "soft" are neither definite nor universally agreed-upon, so there is no single standard of scientific "hardness" or "softness." Some readers might consider any deviation from the possible or probable (for example, including faster-than-light travel or paranormal powers) to be a mark of "softness." Others might see an emphasis on character or the social implications of technological change (however possible or probable) as a departure from the science-engineering-technology issues that in their view ought to be the focus of hard SF. Given this lack of objective and well-defined standards, "soft science fiction" does not indicate a genre or subgenre of SF but a tendency or quality—one pole of an axis that has "hard science fiction" at the other pole.
In Brave New Words, subtitled The Oxford Dictionary of Science Fiction, soft science fiction is given two definitions. The first definition is fiction primarily focused on advancements in, or extrapolations of, the soft sciences, that is, the social sciences rather than the natural sciences. The second definition is science fiction in which science is not important to the story.
== Etymology ==
The term soft science fiction was formed as the complement of the earlier term hard science fiction.
The earliest known citation for the term is in "1975: The Year in Science Fiction" by Peter Nicholls, in Nebula Award Stories 11 (1976). He wrote "The same list reveals that an already established shift from hard sf (chemistry, physics, astronomy, technology) to soft sf (psychology, biology, anthropology, sociology, and even [...] linguistics) is continuing more strongly than ever."
== History ==
Poul Anderson, in Ideas for SF Writers (Sep 1998), described H. G. Wells as the model for soft science fiction: "He concentrated on the characters, their emotions and interactions" rather than any of the science or technology behind, for example, invisible men or time machines. Jeffrey Wallmann suggests that soft science fiction grew out of the gothic fiction of Edgar Allan Poe and Mary Shelley.
Carol McGuirk, in Fiction 2000 (1992), states that the "soft school" of science fiction dominated the genre in the 1950s, with the beginning of the Cold War and an influx of new readers into the science fiction genre. The early members of the soft science fiction genre were Alfred Bester, Fritz Leiber, Ray Bradbury, and James Blish, who were the first to make a "radical" break from the hard science fiction tradition and "take extrapolation explicitly inward", emphasising the characters and their characterisation. In calling out specific examples from this period, McGuirk describes Ursula K. Le Guin's 1969 novel The Left Hand of Darkness as "a soft SF classic". The New Wave movement in science fiction developed out of soft science fiction in the 1960s and 70s. The conte cruel was the standard narrative form of soft science fiction by the 1980s. During the 1980s cyberpunk developed from soft science fiction.
McGuirk identifies two subgenres of soft science fiction: "Humanist science fiction" (in which human beings, rather than technology, are the cause of advancement or from which change can be extrapolated in the setting; often involving speculation on the human condition) and "Science fiction noir" (focusing on the negative aspects of human nature; often in a dystopian setting).
== Examples ==
George Orwell's Nineteen Eighty-Four might be described as soft science fiction, since it is concerned primarily with how society and interpersonal relationships are altered by a political force that uses technology mercilessly, even though it is the source of many ideas and tropes commonly explored in subsequent science fiction (even in hard science fiction), such as mind control and surveillance. Yet its style is uncompromisingly realistic and, despite its then-future setting, much closer to a spy novel or political thriller in its themes and treatment.
Karel Čapek's 1920 play R.U.R., which supplied the term robot (nearly replacing earlier terms such as automaton) and features a trope-defining climax in which artificial workers unite to overthrow human society, covers such issues as free will, a post-scarcity economy, robot rebellion, and post-apocalyptic culture. The play, subtitled "A Fantastic Melodrama", offers only a general description of the process for creating living workers out of artificial tissue, and thus can be compared to social comedy or literary fantasy.
George S. Elrick, in Science Fiction Handbook for Readers and Writers (1978), cited Brian Aldiss' 1959 short story collection The Canopy of Time (using the US title Galaxies Like Grains of Sand) as an example of soft science fiction based on the soft sciences.
Frank Herbert's Dune series is a landmark of soft science fiction. In it, he deliberately spent little time on the details of its futuristic technology so that he could devote the narrative chiefly to the politics of humanity, rather than the future of humanity's technology.
Linguistic relativity (also known as the Sapir–Whorf hypothesis), the theory that language influences thought and perception, is a subject explored in some soft science fiction works such as Jack Vance's The Languages of Pao (1958) and Samuel R. Delany's Babel-17 (1966). In these works artificial languages are used to control and change people and whole societies. Science fictional linguistics are also the subject of varied works from Ursula K. Le Guin's novel The Dispossessed (1974), to the Star Trek: The Next Generation episode "Darmok" (1991), to Neal Stephenson's novel Snow Crash (1992), to C. J. Cherryh's Foreigner books (1994– ), to the film Arrival (2016).
=== Films set in outer space ===
Soft science fiction filmmakers tend to extend to outer space certain physics that are associated with life on Earth's surface, primarily to make scenes more spectacular or recognizable to the audience. Examples of these artistic liberties are:
Presence of gravity without use of an artificial gravity system.
Radio communication without any speed-of-light time lag.
A spaceship's engines or an explosion generating sound despite the vacuum of space.
Spaceships changing directions without any visible thrusting activity.
Spaceship occupants enduring without any visible effort the enormous g-forces generated from a spaceship's extreme maneuvering (e.g. in a dogfight situation) or launch.
Astronauts instantly freezing to death or getting frostbite when exposed to outer space.
Spacecraft which suffer engine failures "falling" or coming to a stop, instead of continuing along their current trajectory or orbit as per inertia.
Hard science fiction films try to avoid such artistic license.
== Representative works ==
Arranged chronologically by publication year.
=== Short fiction ===
H. G. Wells, The Time Machine (1895) and The Invisible Man (1897)
Miles J. Breuer, "The Gostak and the Doshes" (1930)
Ray Bradbury, The Martian Chronicles (1950, short story collection)
James Blish, "Surface Tension" (1952)
Murray Leinster, "Exploration Team" (1956)
Brian Aldiss, The Canopy of Time (1959, short story collection)
Daniel Keyes, "Flowers for Algernon" (1959)
Sakyo Komatsu, "Shigatsu Juyokkakan" (1974)
=== Novels ===
Mary Shelley, Frankenstein (1818)
Alfred Bester, The Demolished Man (1953)
Ray Bradbury, Fahrenheit 451 (1953)
Theodore Sturgeon, More Than Human (1953)
Jack Vance, The Languages of Pao (1958)
Philip K. Dick, Time Out of Joint (1959) and Ubik (1969)
Walter M. Miller, Jr., A Canticle for Leibowitz (1960)
Robert A. Heinlein, Stranger in a Strange Land (1961)
Pierre Boulle, Monkey Planet (1963)
Frank Herbert, Dune (1965)
Samuel R. Delany, Babel-17 (1966)
Ursula K. Le Guin, The Left Hand of Darkness (1969) and The Dispossessed (1974)
Robert Silverberg, Dying Inside (1972)
Frederik Pohl, Man Plus (1976)
Michael Swanwick, In the Drift (1984)
Kim Stanley Robinson, The Wild Shore (1984; Book 1 of the Three Californias Trilogy)
David Brin, The Postman (1985)
Storm Constantine, The Wraeththu Chronicles (1987)
Audrey Niffenegger, The Time Traveler's Wife (2003)
Ben H. Winters, The Last Policeman (2012)
=== Film and television ===
In the sense of a basis in the soft sciences:
Episodes of Star Trek: The Next Generation (1987–1994) like the fifth season's "Darmok" (S5E02; September 30, 1991) are based on soft science concepts; in this case, linguistics.
Some prime examples of soft science fiction on film and television include:
The Stargate franchise
The Star Trek franchise
The Star Wars franchise
The Farscape franchise
The Planet of the Apes franchise
The Transformers franchise
The Terminator franchise
Frank Herbert's Dune and its direct sequel Frank Herbert's Children of Dune
The Firefly franchise
== See also ==
Social science fiction
Definitions of science fiction
Outline of science fiction
Time travel in fiction
Hard and soft magic systems
== Notes ==
== References ==
== External links ==
"Soft-core Science Fiction Movies". IMDb. | Wikipedia/Soft_science_fiction |
In sociology of science, the graphism thesis is a proposition of Bruno Latour that graphs are important in science.
Research has shown that hard science and soft science disciplines can be distinguished by their level of graph use, so it can be argued that there is a correlation between scientificity and visuality. Furthermore, natural science publications appear to make heavier use of graphs than mathematical and social science publications.
It has been claimed that an example of a discipline that uses graphs heavily but is not at all scientific is technical analysis.
== See also ==
Philosophy of science
Epistemology
Fields of science
List of academic disciplines
Graphism
== References ==
== External links ==
Best, L. A.; Smith, L. D.; Stubbs, D. A. (2001). "Graph use in psychology and other sciences". Behavioural Processes. 54 (1–3): 155–165. doi:10.1016/S0376-6357(01)00156-5. PMID 11369467.
Krohn, R. (1991). "Why are graphs so central in science?". Biology and Philosophy. 6 (2): 181–203. doi:10.1007/BF02426837.
Cleveland, W. S. (1984). "Graphs in Scientific Publications". The American Statistician. 38 (4): 261–9. doi:10.2307/2683400. JSTOR 2683400. | Wikipedia/Graphism_thesis |
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to innovations, basic research provides insight into nature and can build public support for it, possibly improving conservation efforts. Findings from basic research may also inform engineering concepts, as when the beak of the kingfisher influenced the design of high-speed bullet trains.
== Overview ==
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
== By country ==
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
== Basic versus applied science ==
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which described the motivation of the basic researcher as follows:

A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter.

The Foundation conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations, and that the number of basic research findings contributing to a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
== See also ==
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
== References ==
== Further reading ==
Levy, David M. (2002). "Research and Development". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563 | Wikipedia/Basic_Science |
Hard science fiction is a category of science fiction characterized by concern for scientific accuracy and logic. The term was first used in print in 1957 by P. Schuyler Miller in a review of John W. Campbell's Islands of Space in the November issue of Astounding Science Fiction. The complementary term soft science fiction, formed by analogy to the popular distinction between the "hard" (natural) and "soft" (social) sciences, first appeared in the late 1970s. Though there are examples generally considered as "hard" science fiction such as Isaac Asimov's Foundation series, built on mathematical sociology, science fiction critic Gary Westfahl argues that while neither term is part of a rigorous taxonomy, they are approximate ways of characterizing stories that reviewers and commentators have found useful.
== History ==
Stories revolving around scientific and technical consistency were written as early as the 1870s with the publication of Jules Verne's Twenty Thousand Leagues Under the Seas in 1870, among other stories. The attention to detail in Verne's work became an inspiration for many future scientists and explorers, although Verne himself denied writing as a scientist or seriously predicting machines and technology of the future.
Hugo Gernsback believed from the beginning of his involvement with science fiction in the 1920s that the stories should be instructive, although it was not long before he found it necessary to print fantastical and unscientific fiction in Amazing Stories to attract readers. During Gernsback's long absence from science fiction (SF) publishing, from 1936 to 1953, the field evolved away from his focus on facts and education. The Golden Age of Science Fiction is generally considered to have started in the late 1930s and lasted until the mid-1940s, bringing with it "a quantum jump in quality, perhaps the greatest in the history of the genre", according to science fiction historians Peter Nicholls and Mike Ashley.
However, Gernsback's views were unchanged. In his editorial in the first issue of Science-Fiction Plus, he gave his view of the modern SF story: "the fairy tale brand, the weird or fantastic type of what mistakenly masquerades under the name of Science-Fiction today!" and he stated his preference for "truly scientific, prophetic Science-Fiction with the full accent on SCIENCE". In the same editorial, Gernsback called for patent reform to give science fiction authors the right to create patents for ideas without having patent models because many of their ideas predated the technical progress needed to develop specifications for their ideas. The introduction referenced the numerous prescient technologies described throughout Ralph 124C 41+.
== Definition ==
The heart of the "hard science fiction" designation is the relationship of the science content and attitude to the rest of the narrative, and (for some readers, at least) the "hardness" or rigor of the science itself. One requirement for hard SF is procedural or intentional: a story should try to be accurate, logical, credible and rigorous in its use of current scientific and technical knowledge about which technologies, phenomena, scenarios and situations are practically or theoretically possible. For example, the development of concrete proposals for spaceships, space stations, space missions, and a US space program in the 1950s and 1960s influenced a widespread proliferation of "hard" space stories. Later discoveries do not necessarily invalidate the label of hard SF, as evidenced by P. Schuyler Miller, who called Arthur C. Clarke's 1961 novel A Fall of Moondust hard SF, and the designation remains valid even though a crucial plot element, the existence of deep pockets of "moondust" in lunar craters, is now known to be incorrect.
There is a degree of flexibility in how far from "real science" a story can stray before it ceases to be hard SF. Hard science fiction authors tend to include more speculative devices only when the ideas can be extrapolated from well-known scientific and mathematical principles. In contrast, authors writing softer SF use such devices without a scientific basis (sometimes referred to as "enabling devices", since they allow the story to take place).
Readers of "hard SF" often try to find inaccuracies in stories. For example, a group at MIT concluded that the planet Mesklin in Hal Clement's 1953 novel Mission of Gravity would have had a sharp edge at the equator, and a Florida high school class calculated that in Larry Niven's 1970 novel Ringworld the topsoil would have slid into the seas in a few thousand years. Niven fixed these errors in his sequel The Ringworld Engineers, and noted them in the foreword.
== Representative works ==
Arranged chronologically by publication year.
=== Anthologies ===
David G. Hartwell and Kathryn Cramer (eds.), The Ascent of Wonder: The Evolution of Hard SF (1994)
David G. Hartwell and Kathryn Cramer (eds.), The Hard SF Renaissance: An Anthology (2002)
Ben Bova and Eric Choi (eds.), Carbide-Tipped Pens: Seventeen Tales of Hard Science Fiction (2014)
Wade Roush (ed.) Twelve Tomorrows (MIT Press 2018)
=== Short stories ===
Robert Heinlein, The Past Through Tomorrow collection of stories (1939–1962)
Tom Godwin, "The Cold Equations" (1954)
Poul Anderson, "Kyrie" (1968)
Frederik Pohl, "Day Million" (1971)
Larry Niven, "Inconstant Moon" (1971) and "The Hole Man" (1974)
Greg Bear, "Tangents" (1986)
Geoffrey A. Landis, "A Walk in the Sun" (1991)
Vernor Vinge, "Fast Times at Fairmont High" (2001)
=== Novels ===
Aldous Huxley, Brave New World (1932)
Hal Clement, Mission of Gravity (1953)
Fred Hoyle, The Black Cloud (1957)
James Blish, A Case of Conscience (1958)
Jack Vance, The Languages of Pao (1958)
Arthur C. Clarke, A Fall of Moondust (1961)
Stanisław Lem, The Invincible (1963)
John Brunner, Stand on Zanzibar (1968), The Jagged Orbit (1969), The Sheep Look Up (1972), The Shockwave Rider (1975)
Philip K. Dick, Do Androids Dream of Electric Sheep? (1968)
Michael Crichton, The Andromeda Strain (1969), Jurassic Park (1990)
Larry Niven, Ringworld (1970)
Poul Anderson, Tau Zero (1970)
James Gunn, The Listeners (1972)
Larry Niven and Jerry Pournelle, The Mote in God's Eye (1974)
Bob Shaw, Orbitsville (1975)
James P. Hogan, The Two Faces of Tomorrow (1979)
Robert L. Forward, Dragon's Egg (1980) and its sequel Starquake (1985)
Steven Barnes and Larry Niven, The Descent of Anansi (1982)
Carl Sagan, Contact (1985)
Kim Stanley Robinson, The Mars trilogy (Red Mars (1992), Green Mars (1993), Blue Mars (1996))
Nancy Kress, Beggars in Spain (1993)
Charles R. Pellegrino & George Zebrowski, The Killing Star (1995)
Allen Steele, The Tranquillity Alternative (1996)
Greg Egan, Schild's Ladder (2002)
Alastair Reynolds, Pushing Ice (2005)
Cixin Liu, Remembrance of Earth's Past (trilogy, 2006–2016)
Andy Weir, The Martian (2011), Project Hail Mary (2021)
=== Films and TV shows ===
Destination Moon (1950)
2001: A Space Odyssey (1968)
Colossus: The Forbin Project (1970)
The Andromeda Strain (1971)
Silent Running (1972)
Blade Runner (1982)
The Abyss (1989)
Contact (1997)
Gattaca (1997)
Primer (2004)
Moon (2009)
Europa Report (2013)
Her (2013)
Gravity (2013)
Ex Machina (2014)
The Martian (2015)
The Expanse (2015–2022)
Arrival (2016)
Ad Astra (2019)
For All Mankind (2019–present)
Away (2020)
Pantheon (2022–2023)
3 Body Problem (2024–present)
=== Anime / manga ===
Patlabor (1988–present)
Ghost in the Shell (1989–present)
Planetes (1999, 2004)
Rocket Girls (2007)
Revisions (2018–2019)
Space Brothers/Uchuu Kyoudai (2007–present, 2012–2014)
=== Video games ===
Marathon (1994)
Policenauts (1994)
Sid Meier's Alpha Centauri (1999)
Kerbal Space Program (2015)
Terra Invicta (2022)
== See also ==
Hypothetical technology
Mundane science fiction
Techno-thriller
== References ==
== Further reading ==
On Hard Science Fiction: A Bibliography, originally published in Science Fiction Studies #60 (July 1993).
David G. Hartwell, "Hard Science Fiction", Introduction to The Ascent of Wonder: The Evolution of Hard Science Fiction, 1994, ISBN 0-312-85509-5
Kathryn Cramer's chapter on hard science fiction in The Cambridge Companion to SF, ed. Farah Mendlesohn & Edward James.
Westfahl, Gary (1996-02-28). Cosmic Engineers: A Study of Hard Science Fiction (Contributions to the Study of Science Fiction and Fantasy). Greenwood Press. ISBN 0-313-29727-4.
A Political History of SF by Eric Raymond
The Science in Science Fiction by Brian Stableford, David Langford, & Peter Nicholls (1982)
David N. Samuelson, "Hard SF", pp. 194–200, The Routledge Companion to Science Fiction, 2009.
== External links ==
Hard Science Fiction Exclusive Interviews
Science Fiction Stories with Good Astronomy & Physics: A Topical Index Archived 2021-02-25 at the Wayback Machine
The Ascent of Wonder by David G. Hartwell & Kathryn Cramer. Story notes and introductions.
The Ten Best Hard Science Fiction Books of all Time Archived 2012-04-12 at the Wayback Machine, selected by the editors of MIT's Technology Review, 2011
"Low-Level Science fiction: Sci-fi with hard science and a literary slant"
Hard SF at The Encyclopedia of Science Fiction | Wikipedia/Hard_science_fiction |
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph.
It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel (1955), but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm.
Negative edge weights are found in various applications of graphs; the ability to handle them is a principal reason for this algorithm's usefulness.
If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle.
== Algorithm ==
Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path.
However, Dijkstra's algorithm uses a priority queue to greedily select the closest vertex that has not yet been processed, and performs this relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes all the edges, and does this |V| − 1 times, where |V| is the number of vertices in the graph.
In each of these repetitions, the number of vertices with correctly calculated distances grows, from which it follows that eventually all vertices will have their correct distances. This method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm. The intermediate answers depend on the order of edges relaxed, but the final answer remains the same.
Bellman–Ford runs in O(|V| ⋅ |E|) time, where |V| and |E| are the number of vertices and edges respectively.
function BellmanFord(list vertices, list edges, vertex source) is
    // This implementation takes in a graph, represented as
    // lists of vertices (represented as integers [0..n-1]) and
    // edges, and fills two arrays (distance and predecessor)
    // holding the shortest path from the source to each vertex

    distance := list of size n
    predecessor := list of size n

    // Step 1: initialize graph
    for each vertex v in vertices do
        // Initialize the distance to all vertices to infinity
        distance[v] := inf
        // And having a null predecessor
        predecessor[v] := null

    // The distance from the source to itself is zero
    distance[source] := 0

    // Step 2: relax edges repeatedly
    repeat |V|−1 times:
        for each edge (u, v) with weight w in edges do
            if distance[u] + w < distance[v] then
                distance[v] := distance[u] + w
                predecessor[v] := u

    // Step 3: check for negative-weight cycles
    for each edge (u, v) with weight w in edges do
        if distance[u] + w < distance[v] then
            predecessor[v] := u
            // A negative cycle exists;
            // find a vertex on the cycle
            visited := list of size n initialized with false
            visited[v] := true
            while not visited[u] do
                visited[u] := true
                u := predecessor[u]
            // u is a vertex in a negative cycle,
            // find the cycle itself
            ncycle := [u]
            v := predecessor[u]
            while v != u do
                ncycle := concatenate([v], ncycle)
                v := predecessor[v]
            error "Graph contains a negative-weight cycle", ncycle

    return distance, predecessor
Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value.
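As a concrete illustration, here is a minimal Python sketch of the same three steps. The function name, the edge-list representation as (u, v, weight) triples, and the choice to raise an exception on a negative cycle are assumptions of this sketch, not part of the algorithm's specification.

import math

def bellman_ford(n, edges, source):
    # edges: list of (u, v, w) triples; vertices are integers 0..n-1
    distance = [math.inf] * n
    predecessor = [None] * n
    distance[source] = 0                      # Step 1: initialize

    for _ in range(n - 1):                    # Step 2: relax all edges |V| - 1 times
        updated = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
                updated = True
        if not updated:                       # early exit: nothing relaxed this round
            break

    for u, v, w in edges:                     # Step 3: detect negative cycles
        if distance[u] + w < distance[v]:
            raise ValueError("graph contains a negative-weight cycle")

    return distance, predecessor

For example, bellman_ford(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)], 0) returns ([0, 4, 2], [None, 0, 1]): the two-edge path of weight 2 beats the direct edge of weight 5. The updated flag implements the early-termination improvement discussed later in this article.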
The core of the algorithm is a loop that scans across all edges on every iteration. For every i ≤ |V| − 1, at the end of the i-th iteration, from any vertex v, following the predecessor trail recorded in predecessor yields a path that has a total weight that is at most distance[v], and further, distance[v] is a lower bound to the length of any path from source to v that uses at most i edges.
Since the longest possible path without a cycle can have |V| − 1 edges, the edges must be scanned |V| − 1 times to ensure that the shortest path has been found for all nodes. A final scan of all the edges is performed, and if any distance is updated, then a path of length |V| edges has been found, which can only occur if at least one negative cycle exists in the graph.
The edge (u, v) that is found in step 3 must be reachable from a negative cycle, but it isn't necessarily part of the cycle itself, which is why it's necessary to follow the path of predecessors backwards until a cycle is detected. The above pseudo-code uses a Boolean array (visited) to find a vertex on the cycle, but any cycle finding algorithm can be used to find a vertex on the cycle.
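This predecessor walk can be written out directly; the following Python sketch mirrors the pseudocode's bookkeeping (the function name and signature are invented here for illustration).

def extract_negative_cycle(u, v, predecessor, n):
    # (u, v): an edge that is still relaxable after |V| - 1 rounds of step 2,
    # with predecessor[v] already set to u
    visited = [False] * n
    visited[v] = True
    while not visited[u]:          # walk predecessors until a vertex repeats;
        visited[u] = True          # the repeated vertex lies on the cycle
        u = predecessor[u]
    cycle = [u]
    w = predecessor[u]
    while w != u:                  # collect the cycle itself
        cycle.insert(0, w)
        w = predecessor[w]
    return cycle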
A common improvement when implementing the algorithm is to return early when an iteration of step 2 fails to relax any edges, which implies all shortest paths have been found, and therefore there are no negative cycles. In that case, the complexity of the algorithm is reduced from O(|V| ⋅ |E|) to O(l ⋅ |E|), where l is the maximum length of a shortest path in the graph.
== Proof of correctness ==
The correctness of the algorithm can be shown by induction:
Lemma. After i repetitions of the for loop,
if Distance(u) is not infinity, it is equal to the length of some path from s to u; and
if there is a path from s to u with at most i edges, then Distance(u) is at most the length of the shortest path from s to u with at most i edges.
Proof. For the base case of the induction, consider i = 0 and the moment before the for loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices u, u.distance = infinity, which is also correct because there is no path from source to u with 0 edges.
For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by
v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from source to u. Then u.distance + uv.weight is the length of the path from source to v that follows the path from source to u and then goes to v.
For the second part, consider a shortest path P (there may be more than one) from source to v with at most i edges. Let u be the last vertex before v on this path. Then, the part of the path from source to u is a shortest path from source to u with at most i-1 edges, since if it were not, then there must be some strictly shorter path from source to u with at most i-1 edges, and we could then append the edge uv to this path to obtain a path with at most i edges that is strictly shorter than P—a contradiction. By inductive assumption, u.distance after i−1 iterations is at most the length of this path from source to u. Therefore, uv.weight + u.distance is at most the length of P. In the ith iteration, v.distance gets compared with uv.weight + u.distance, and is set equal to it if uv.weight + u.distance is smaller. Therefore, after i iterations, v.distance is at most the length of P, i.e., the length of the shortest path from source to v that uses at most i edges.
If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1],
v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight
Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving
0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight
I.e., every cycle has nonnegative weight.
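Written out as a worked inequality in LaTeX notation (a restatement of the cancellation just described, with $w(x, y)$ for the weight of edge $xy$):

\sum_{i=1}^{k} \mathrm{distance}(v[i]) \;\le\; \sum_{i=1}^{k} \Bigl( \mathrm{distance}(v[(i-1) \bmod k]) + w\bigl(v[(i-1) \bmod k],\, v[i]\bigr) \Bigr)

Each cycle vertex contributes exactly one distance term to each side, so those terms cancel, leaving

0 \;\le\; \sum_{i=1}^{k} w\bigl(v[(i-1) \bmod k],\, v[i]\bigr),

i.e. the cycle's total weight is nonnegative.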
== Finding negative cycles ==
When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis.
== Applications in routing ==
A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system (AS), a collection of IP networks typically owned by an ISP.
It consists of the following steps:
Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table.
Each node sends its table to all neighboring nodes.
When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes.
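A toy synchronous simulation of one round of this exchange, sketched in Python; the table and link-cost representations are invented for this illustration and do not reflect any particular protocol's message format.

import math

def distance_vector_round(tables, neighbors, link_cost):
    # tables[n][d]: node n's current estimate of its distance to d
    # neighbors[n]: nodes adjacent to n
    # link_cost[(n, m)]: cost of the link n-m (keys assumed in both orientations)
    new_tables = {}
    for n in tables:
        new_tables[n] = dict(tables[n])
        for m in neighbors[n]:                       # n receives m's table
            for dest, est in tables[m].items():
                candidate = link_cost[(n, m)] + est  # route to dest via m
                if candidate < new_tables[n].get(dest, math.inf):
                    new_tables[n][dest] = candidate
    return new_tables

Iterating distance_vector_round until no table changes is, in effect, the distributed form of the Bellman–Ford relaxation converging.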
The main disadvantages of the Bellman–Ford algorithm in this setting are as follows:
It does not scale well.
Changes in network topology are not reflected quickly since updates are spread node-by-node.
Count to infinity: if link or node failures render a node unreachable from some set of other nodes, those nodes may spend forever gradually increasing their estimates of the distance to it, and in the meantime there may be routing loops.
== Improvements ==
The Bellman–Ford algorithm may be improved in practice (although not in the worst case) by the observation that, if an iteration of the main loop of the algorithm terminates without making any changes, the algorithm can be immediately terminated, as subsequent iterations will not make any more changes. With this early termination condition, the main loop may in some cases use many fewer than |V| − 1 iterations, even though the worst case of the algorithm remains unchanged. The following improvements all maintain the
O(|V| ⋅ |E|) worst-case time complexity.
A variation of the Bellman–Ford algorithm described by Moore (1959), reduces the number of relaxation steps that need to be performed within each iteration of the algorithm. If a vertex v has a distance value that has not changed since the last time the edges out of v were relaxed, then there is no need to relax the edges out of v a second time. In this way, as the number of vertices with correct distance values grows, the number whose outgoing edges that need to be relaxed in each iteration shrinks, leading to a constant-factor savings in time for dense graphs. This variation can be implemented by keeping a collection of vertices whose outgoing edges need to be relaxed, removing a vertex from this collection when its edges are relaxed, and adding to the collection any vertex whose distance value is changed by a relaxation step. In China, this algorithm was popularized by Fanding Duan, who rediscovered it in 1994, as the "shortest path faster algorithm".
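A minimal Python sketch of this queue-based variation follows; the adjacency-list representation and the assumption that no negative cycle is reachable from the source are both assumptions of the sketch, not of the algorithm's published description.

import math
from collections import deque

def spfa(n, adj, source):
    # adj[u]: list of (v, w) pairs for edges leaving vertex u (0..n-1)
    distance = [math.inf] * n
    distance[source] = 0
    queue = deque([source])
    in_queue = [False] * n
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                # only a vertex whose distance just changed can improve
                # its neighbors, so only such vertices are re-queued
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return distance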
Yen (1970) described another improvement to the Bellman–Ford algorithm. His improvement first assigns some arbitrary linear order on all vertices and then partitions the set of all edges into two subsets. The first subset, Ef, contains all edges (vi, vj) such that i < j; the second, Eb, contains edges (vi, vj) such that i > j. Each vertex is visited in the order v1, v2, ..., v|V|, relaxing each outgoing edge from that vertex in Ef. Each vertex is then visited in the order v|V|, v|V|−1, ..., v1, relaxing each outgoing edge from that vertex in Eb. Each iteration of the main loop of the algorithm, after the first one, adds at least two edges to the set of edges whose relaxed distances match the correct shortest path distances: one from Ef and one from Eb. This modification reduces the worst-case number of iterations of the main loop of the algorithm from |V| − 1 to
|V|/2.
Another improvement, by Bannister & Eppstein (2012), replaces the arbitrary linear order of the vertices used in Yen's second improvement by a random permutation. This change makes the worst case for Yen's improvement (in which the edges of a shortest path strictly alternate between the two subsets Ef and Eb) very unlikely to happen. With a randomly permuted vertex ordering, the expected number of iterations needed in the main loop is at most
|V|/3.
Fineman (2024), at Georgetown University, created an improved algorithm that with high probability runs in Õ(|V|^{8/9} ⋅ |E|) time. Here, Õ is a variant of big O notation that hides logarithmic factors.
== Notes ==
== References ==
=== Original sources ===
Shimbel, A. (1955). Structure in communication nets. Proceedings of the Symposium on Information Networks. New York, New York: Polytechnic Press of the Polytechnic Institute of Brooklyn. pp. 199–203.
Bellman, Richard (1958). "On a routing problem". Quarterly of Applied Mathematics. 16: 87–90. doi:10.1090/qam/102435. MR 0102435.
Ford, Lester R. Jr. (August 14, 1956). Network Flow Theory. Paper P-923. Santa Monica, California: RAND Corporation.
Moore, Edward F. (1959). The shortest path through a maze. Proc. Internat. Sympos. Switching Theory 1957, Part II. Cambridge, Massachusetts: Harvard Univ. Press. pp. 285–292. MR 0114710.
Yen, Jin Y. (1970). "An algorithm for finding shortest routes from all source nodes to a given destination in general networks". Quarterly of Applied Mathematics. 27 (4): 526–530. doi:10.1090/qam/253822. MR 0253822.
Bannister, M. J.; Eppstein, D. (2012). "Randomized speedup of the Bellman–Ford algorithm". Analytic Algorithmics and Combinatorics (ANALCO12), Kyoto, Japan. pp. 41–47. arXiv:1111.5414. doi:10.1137/1.9781611973020.6.
Fineman, Jeremy T. (2024). "Single-source shortest paths with negative real weights in Õ(mn^{8/9}) time". In Mohar, Bojan; Shinkar, Igor; O'Donnell, Ryan (eds.). Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24–28, 2024. Association for Computing Machinery. pp. 3–14. arXiv:2311.02520. doi:10.1145/3618260.3649614.
=== Secondary sources ===
Ford, L. R. Jr.; Fulkerson, D. R. (1962). "A shortest chain algorithm". Flows in Networks. Princeton University Press. pp. 130–134.
Bang-Jensen, Jørgen; Gutin, Gregory (2000). "Section 2.3.4: The Bellman-Ford-Moore algorithm". Digraphs: Theory, Algorithms and Applications (First ed.). Springer. ISBN 978-1-84800-997-4.
Schrijver, Alexander (2005). "On the history of combinatorial optimization (till 1960)" (PDF). Handbook of Discrete Optimization. Elsevier: 1–68.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms (Fourth ed.). MIT Press, 2022. ISBN 978-0-262-04630-5. Section 22.1: The Bellman–Ford algorithm, pp. 612–616. Problem 22–1, p. 640.
Heineman, George T.; Pollice, Gary; Selkow, Stanley (2008). "Chapter 6: Graph Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 160–164. ISBN 978-0-596-51624-6.
Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design. New York: Pearson Education, Inc.
Sedgewick, Robert (2002). "Section 21.7: Negative Edge Weights". Algorithms in Java (3rd ed.). Addison-Wesley. ISBN 0-201-36121-3. Archived from the original on 2008-05-31. Retrieved 2007-05-28. | Wikipedia/Shortest_path_faster_algorithm |
In computer science, lexicographic breadth-first search or Lex-BFS is a linear time algorithm for ordering the vertices of a graph. The algorithm is different from a breadth-first search, but it produces an ordering that is consistent with breadth-first search.
The lexicographic breadth-first search algorithm is based on the idea of partition refinement and was first developed by Donald J. Rose, Robert E. Tarjan, and George S. Lueker (1976). A more detailed survey of the topic is presented by Corneil (2004).
It has been used as a subroutine in other graph algorithms including the recognition of chordal graphs, and optimal coloring of distance-hereditary graphs.
== Background ==
The breadth-first search algorithm is commonly defined by the following process:
Initialize a queue of graph vertices, with the starting vertex of the graph as the queue's only element.
While the queue is non-empty, remove (dequeue) a vertex v from the queue, and add to the queue (enqueue) all the other vertices that can be reached by an edge from v that have not already been added in earlier steps.
However, rather than defining the vertex to choose at each step in an imperative way as the one produced by the dequeue operation of a queue, one can define the same sequence of vertices declaratively by the properties of these vertices. That is, a standard breadth-first search is just the result of repeatedly applying this rule:
Repeatedly output a vertex v, choosing at each step a vertex v that has not already been chosen and that has a predecessor (a vertex that has an edge to v) as early in the output as possible.
In some cases, this ordering of vertices by the output positions of their predecessors may have ties — two different vertices have the same earliest predecessor. In this case, the order in which those two vertices are chosen may be arbitrary. The output of lexicographic breadth-first search differs from a standard breadth-first search in having a consistent rule for breaking such ties. In lexicographic breadth-first search, the output ordering is the order that would be produced by the rule:
Repeatedly output a vertex v, choosing at each step a vertex v that has not already been chosen and whose entire set of already-output predecessors is as small as possible in lexicographic order.
So, when two vertices v and w have the same earliest predecessor, earlier than any other unchosen vertices,
the standard breadth-first search algorithm will order them arbitrarily. Instead, in this case, the LexBFS algorithm would choose between v and w by the output ordering of their second-earliest predecessors.
If only one of them has a second-earliest predecessor that has already been output, that one is chosen.
If both v and w have the same second-earliest predecessor, then the tie is broken by considering their third-earliest predecessors, and so on.
Applying this rule directly by comparing vertices according to this rule would lead to an inefficient algorithm. Instead, the lexicographic breadth-first search uses a set partitioning data structure in order to produce the same ordering more efficiently, just as a standard breadth-first search uses a queue data structure to produce its ordering efficiently.
== Algorithm ==
The lexicographic breadth-first search algorithm replaces the queue of vertices of a standard breadth-first search with an ordered sequence of sets of vertices. The sets in the sequence form a partition of the remaining vertices. At each step, a vertex v from the first set in the sequence is removed from that set, and if that removal causes the set to become empty then the set is removed from the sequence. Then, each set in the sequence is replaced by two subsets: the neighbors of v and the non-neighbors of v. The subset of neighbors is placed earlier in the sequence than the subset of non-neighbors. In pseudocode, the algorithm can be expressed as follows:
Initialize a sequence Σ of sets, to contain a single set containing all vertices.
Initialize the output sequence of vertices to be empty.
While Σ is non-empty:
    Find and remove a vertex v from the first set in Σ
    If the first set in Σ is now empty, remove it from Σ
    Add v to the end of the output sequence.
    For each edge v-w such that w still belongs to a set S in Σ:
        If the set S containing w has not yet been replaced while processing v, create a new empty replacement set T and place it prior to S in the sequence; otherwise, let T be the set prior to S.
        Move w from S to T, and if this causes S to become empty remove S from Σ.
Each vertex is processed once, each edge is examined only when its two endpoints are processed, and (with an appropriate representation for the sets in Σ that allows items to be moved from one set to another in constant time) each iteration of the inner loop takes only constant time. Therefore, like simpler graph search algorithms such as breadth-first search and depth-first search, this algorithm takes linear time.
The algorithm is called lexicographic breadth-first search because the order it produces is an ordering that could also have been produced by a breadth-first search, and because if the ordering is used to index the rows and columns of an adjacency matrix of a graph then the algorithm sorts the rows and columns into lexicographical order.
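To make the set-sequence description concrete, here is a short Python sketch of the algorithm. It represents Σ as a plain list of lists, so it runs in quadratic rather than linear time; achieving the linear-time bound requires the constant-time set representation described above.

def lex_bfs(vertices, neighbors):
    # neighbors[v]: set of vertices adjacent to v
    sigma = [list(vertices)]               # the sequence Σ of sets
    output = []
    while sigma:
        v = sigma[0].pop(0)                # take a vertex from the first set
        if not sigma[0]:
            del sigma[0]                   # drop the set if now empty
        output.append(v)
        refined = []
        for s in sigma:
            inside = [w for w in s if w in neighbors[v]]
            outside = [w for w in s if w not in neighbors[v]]
            if inside:                     # neighbors of v are placed just
                refined.append(inside)     # before the rest of the same set
            if outside:
                refined.append(outside)
        sigma = refined
    return output

For the path graph a–b–c, lex_bfs(['a', 'b', 'c'], {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}) returns ['a', 'b', 'c'].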
== Applications ==
=== Chordal graphs ===
A graph G is defined to be chordal if its vertices have a perfect elimination ordering, an ordering such that for any vertex v the neighbors that occur later in the ordering form a clique. In a chordal graph, the reverse of a lexicographic ordering is always a perfect elimination ordering. Therefore, one can test whether a graph is chordal in linear time by the following algorithm:
Use lexicographic breadth-first search to find a lexicographic ordering of G
For each vertex v:
    Let w be the neighbor of v occurring prior to v, as close to v in the sequence as possible
    (Continue to the next vertex v if there is no such w)
    If the set of earlier neighbors of v (excluding w itself) is not a subset of the set of earlier neighbors of w, the graph is not chordal
If the loop terminates without showing that the graph is not chordal, then it is chordal.
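A direct Python transcription of this test, reusing the lex_bfs sketch above; the helper name is_chordal and the position bookkeeping are illustrative choices, not fixed by the algorithm.

def is_chordal(vertices, neighbors):
    order = lex_bfs(vertices, neighbors)
    position = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = [u for u in neighbors[v] if position[u] < position[v]]
        if not earlier:
            continue                                   # no such w: next vertex
        w = max(earlier, key=lambda u: position[u])    # earlier neighbor closest to v
        earlier_of_w = {u for u in neighbors[w] if position[u] < position[w]}
        if not set(earlier) - {w} <= earlier_of_w:
            return False                               # subset test fails: not chordal
    return True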
This application was the original motivation that led Rose, Tarjan & Lueker (1976) to develop the lexicographic breadth first search algorithm.
=== Graph coloring ===
A graph G is said to be perfectly orderable if there is a sequence of its vertices with the property that, for any induced subgraph of G, a greedy coloring algorithm that colors the vertices in the induced sequence ordering is guaranteed to produce an optimal coloring.
For a chordal graph, a perfect elimination ordering is a perfect ordering: the number of the color used for any vertex is the size of the clique formed by it and its earlier neighbors, so the maximum number of colors used is equal to the size of the largest clique in the graph, and no coloring can use fewer colors. An induced subgraph of a chordal graph is chordal and the induced subsequence of its perfect elimination ordering is a perfect elimination ordering on the subgraph, so chordal graphs are perfectly orderable, and lexicographic breadth-first search can be used to optimally color them.
The same property is true for a larger class of graphs, the distance-hereditary graphs: distance-hereditary graphs are perfectly orderable, with a perfect ordering given by the reverse of a lexicographic ordering, so lexicographic breadth-first search can be used in conjunction with greedy coloring algorithms to color them optimally in linear time.
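The coloring step itself is the standard greedy sweep. A minimal sketch for the chordal case, again reusing the lex_bfs sketch above: since the reverse of a LexBFS ordering of a chordal graph is a perfect elimination ordering, each vertex's earlier neighbors in the LexBFS order form a clique with it, so the greedy choice below uses the optimal number of colors (assuming a chordal input).

def greedy_color_chordal(vertices, neighbors):
    color = {}
    for v in lex_bfs(vertices, neighbors):    # earlier neighbors form a clique
        used = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in used:                      # smallest color absent among them
            c += 1
        color[v] = c
    return color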
=== Other applications ===
Bretscher et al. (2008) describe an extension of lexicographic breadth-first search that breaks any additional ties using the complement graph of the input graph. As they show, this can be used to recognize cographs in linear time. Habib et al. (2000) describe additional applications of lexicographic breadth-first search including the recognition of comparability graphs and interval graphs.
== LexBFS ordering ==
An enumeration of the vertices of a graph is said to be a LexBFS ordering if it is a possible output of the application of LexBFS to this graph.
Let G = (V, E) be a graph with n vertices. Recall that N(v) denotes the set of neighbors of v. Let σ = (v_1, …, v_n) be an enumeration of the vertices of V. The enumeration σ is a LexBFS ordering (with source v_1) if, for all 1 ≤ i < j < k ≤ n with v_i ∈ N(v_k) ∖ N(v_j), there exists m < i such that v_m ∈ N(v_j) ∖ N(v_k).
== Notes ==
== References ==
Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy (1999), Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, ISBN 0-89871-432-X.
Bretscher, Anna; Corneil, Derek; Habib, Michel; Paul, Christophe (2008), "A simple linear time LexBFS cograph recognition algorithm", SIAM Journal on Discrete Mathematics, 22 (4): 1277–1296, CiteSeerX 10.1.1.188.5016, doi:10.1137/060664690.
Corneil, Derek G. (2004), "Lexicographic breadth first search – a survey", Graph-Theoretic Methods in Computer Science: 30th International Workshop, WG 2004, Bad Honnef, Germany, June 21-23, 2004, Revised Papers, Lecture Notes in Computer Science, vol. 3353, Springer-Verlag, pp. 1–19, doi:10.1007/978-3-540-30559-0_1, ISBN 978-3-540-24132-4.
Habib, Michel; McConnell, Ross; Paul, Christophe; Viennot, Laurent (2000), "Lex-BFS and partition refinement, with applications to transitive orientation, interval graph recognition and consecutive ones testing", Theoretical Computer Science, 234 (1–2): 59–84, doi:10.1016/S0304-3975(97)00241-7.
Rose, D. J.; Tarjan, R. E.; Lueker, G. S. (1976), "Algorithmic aspects of vertex elimination on graphs", SIAM Journal on Computing, 5 (2): 266–283, doi:10.1137/0205021. | Wikipedia/Lexicographic_breadth-first_search |
In graph theory, Yen's algorithm computes single-source K-shortest loopless paths for a graph with non-negative edge cost. The algorithm was published by Jin Y. Yen in 1971 and employs any shortest path algorithm to find the best path, then proceeds to find K − 1 deviations of the best path.
== Algorithm ==
=== Terminology and notation ===
=== Description ===
The algorithm can be broken down into two parts: determining the first k-shortest path, A^1, and then determining all other k-shortest paths. It is assumed that the container A will hold the k-shortest paths, whereas the container B will hold the potential k-shortest paths. To determine A^1, the shortest path from the source to the sink, any efficient shortest path algorithm can be used.
To find the A^k, where k ranges from 2 to K, the algorithm assumes that all paths from A^1 to A^{k−1} have previously been found. The k-th iteration can be divided into two processes: finding all the deviations A^k_i and choosing a minimum length path to become A^k. Note that in this iteration, i ranges from 1 to Q^k_k.
The first process can be further subdivided into three operations: choosing the R^k_i, finding S^k_i, and then adding A^k_i to the container B. The root path, R^k_i, is chosen by finding the subpath in A^{k−1} that follows the first i nodes of A^j, where j ranges from 1 to k−1. Then, if a path is found, the cost of edge d_{i(i+1)} of A^j is set to infinity. Next, the spur path, S^k_i, is found by computing the shortest path from the spur node, node i, to the sink. The removal of previously used edges from (i) to (i+1) ensures that the spur path is different. A^k_i = R^k_i + S^k_i, the concatenation of the root path and the spur path, is added to B. Next, the edges that were removed, i.e. had their cost set to infinity, are restored to their initial values.
The second process determines a suitable path for A^k by finding the path in container B with the lowest cost. This path is removed from container B and inserted into container A, and the algorithm continues to the next iteration.
=== Pseudocode ===
The algorithm assumes that the Dijkstra algorithm is used to find the shortest path between two nodes, but any shortest path algorithm can be used in its place.
function YenKSP(Graph, source, sink, K):
    // Determine the shortest path from the source to the sink.
    A[0] = Dijkstra(Graph, source, sink);
    // Initialize the set to store the potential kth shortest path.
    B = [];

    for k from 1 to K:
        // The spur node ranges from the first node to the next to last node in the previous k-shortest path.
        for i from 0 to size(A[k − 1]) − 2:

            // Spur node is retrieved from the previous k-shortest path, k − 1.
            spurNode = A[k − 1].node(i);
            // The sequence of nodes from the source to the spur node of the previous k-shortest path.
            rootPath = A[k − 1].nodes(0, i);

            for each path p in A:
                if rootPath == p.nodes(0, i):
                    // Remove the links that are part of the previous shortest paths which share the same root path.
                    remove p.edge(i, i + 1) from Graph;

            for each node rootPathNode in rootPath except spurNode:
                remove rootPathNode from Graph;

            // Calculate the spur path from the spur node to the sink.
            // Consider also checking whether any spurPath was found at all.
            spurPath = Dijkstra(Graph, spurNode, sink);

            // Entire path is made up of the root path and spur path.
            totalPath = rootPath + spurPath;
            // Add the potential k-shortest path to the heap.
            if (totalPath not in B):
                B.append(totalPath);

            // Add back the edges and nodes that were removed from the graph.
            restore edges to Graph;
            restore nodes in rootPath to Graph;

        if B is empty:
            // This handles the case of there being no spur paths, or no spur paths left.
            // This could happen if the spur paths have already been exhausted (added to A),
            // or there are no spur paths at all, such as when both the source and sink vertices
            // lie along a "dead end".
            break;

        // Sort the potential k-shortest paths by cost.
        B.sort();
        // The lowest cost path becomes the k-shortest path.
        A[k] = B[0];
        // Remove that path from B; it now belongs to A.
        remove B[0] from B;

    return A;
=== Example ===
The example uses Yen's K-shortest path algorithm to compute three paths from (C) to (H). Dijkstra's algorithm is used to calculate the best path from (C) to (H), which is (C)-(E)-(F)-(H) with cost 5. This path is appended to container A and becomes the first k-shortest path, A^1.
Node (C) of A^1 becomes the spur node with a root path of itself, R^2_1 = (C). The edge (C)-(E) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm is used to compute the spur path S^2_1, which is (C)-(D)-(F)-(H), with a cost of 8. A^2_1 = R^2_1 + S^2_1 = (C)-(D)-(F)-(H) is added to container B as a potential k-shortest path.
Node (E) of A^1 becomes the spur node with R^2_2 = (C)-(E). The edge (E)-(F) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm is used to compute the spur path S^2_2, which is (E)-(G)-(H), with a cost of 7. A^2_2 = R^2_2 + S^2_2 = (C)-(E)-(G)-(H) is added to container B as a potential k-shortest path.
Node (F) of A^1 becomes the spur node with a root path R^2_3 = (C)-(E)-(F). The edge (F)-(H) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm is used to compute the spur path S^2_3, which is (F)-(G)-(H), with a cost of 8. A^2_3 = R^2_3 + S^2_3 = (C)-(E)-(F)-(G)-(H) is added to container B as a potential k-shortest path.
Of the three paths in container
B
{\displaystyle B}
,
A
2
2
{\displaystyle {A^{2}}_{2}}
is chosen to become
A
2
{\displaystyle A^{2}}
because it has the lowest cost of 7. This process is continued to the 3rd k-shortest path. However, within this 3rd iteration, note that some spur paths do not exist. And the path that is chosen to become
A
3
{\displaystyle A^{3}}
is
(
C
)
−
(
D
)
−
(
F
)
−
(
H
)
{\displaystyle (C)-(D)-(F)-(H)}
.
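For readers who want to reproduce this walkthrough with the Python sketch given after the pseudocode, the following edge weights are assumptions chosen to be consistent with the path costs quoted above (the graph is assumed to be directed):

graph = {
    "C": {"D": 3, "E": 2},
    "D": {"F": 4},
    "E": {"D": 1, "F": 2, "G": 3},
    "F": {"G": 2, "H": 1},
    "G": {"H": 2},
    "H": {},
}
print(yen_ksp(graph, "C", "H", 3))
# [['C', 'E', 'F', 'H'], ['C', 'E', 'G', 'H'], ['C', 'D', 'F', 'H']]
# with costs 5, 7, and 8, matching the walkthrough.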
== Features ==
=== Space complexity ===
To store the edges of the graph, the shortest path list A, and the potential shortest path list B, N^2 + KN memory addresses are required. In the worst case, every node in the graph has an edge to every other node, so N^2 addresses are needed for the edges. Only KN addresses are needed for lists A and B together, because at most K paths will be stored, and each path can have at most N nodes.
=== Time complexity ===
The time complexity of Yen's algorithm depends on the shortest path algorithm used to compute the spur paths; Dijkstra's algorithm is assumed here. Dijkstra's algorithm has a worst-case time complexity of O(N^2), but with a Fibonacci heap it becomes O(M + N log N), where M is the number of edges in the graph. Yen's algorithm makes Kl calls to Dijkstra's algorithm when computing the spur paths, where l is the length of the spur paths. In a condensed graph, the expected value of l is O(log N), while the worst case is N. The time complexity then becomes O(KN(M + N log N)).
== Improvements ==
Yen's algorithm can be improved by using a heap to store B, the set of potential k-shortest paths. Using a heap instead of a list improves the performance of the algorithm, but not its complexity. One method to slightly decrease complexity is to skip the nodes with non-existent spur paths; this case arises when all the spur paths from a spur node have already been used in a previous A^k. Also, if container B has K − k paths of minimum length with respect to those in container A, they can be extracted and inserted into container A, since no shorter paths will be found.
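The Python sketch given after the pseudocode already follows the first of these suggestions: its container B is maintained as a binary heap via heapq, so the cheapest candidate is extracted in logarithmic time instead of sorting the whole list on every iteration.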
=== Lawler's modification ===
Eugene Lawler proposed a modification to Yen's algorithm in which duplicate paths are not calculated, as opposed to the original algorithm, where they are calculated and then discarded when found to be duplicates. These duplicate paths result from calculating spur paths of nodes in the root of A^k. For instance, A^k deviates from A^{k−1} at some node (i). Any spur path S^k_j, where j = 0, …, i, that is calculated will be a duplicate, because it has already been calculated during the (k−1)th iteration. Therefore, only spur paths for nodes that were on the spur path of A^{k−1} must be calculated, i.e. only S^k_h where h ranges from (i+1)^{k−1} to (Q_k)^{k−1}. To perform this operation for A^k, a record is needed to identify the node where A^{k−1} branched from A^{k−2}.
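As a rough illustration of this bookkeeping, the sketch below layers Lawler's modification onto the yen_ksp sketch above, reusing its dijkstra and path_cost helpers. The name lawler_ksp and the dev list are hypothetical: each accepted path records the index at which it deviated from its parent, and the spur loop starts at that index (the spur at the deviation node itself is recomputed, since the set of removed edges there has changed).

import heapq

def lawler_ksp(graph, source, sink, K):
    # Yen's algorithm with Lawler's modification: spur paths are only
    # computed from the node where the current path branched off its
    # parent, avoiding the duplicate computations described above.
    first = dijkstra(graph, source, sink)
    if first is None:
        return []
    A = [first]
    dev = [0]      # deviation index of each accepted path
    B = []         # heap of (cost, spur index, path) candidates
    for k in range(1, K):
        prev_path = A[k - 1]
        for i in range(dev[k - 1], len(prev_path) - 1):
            spur_node = prev_path[i]
            root_path = prev_path[:i + 1]
            g = {u: dict(nbrs) for u, nbrs in graph.items()}
            for p in A:
                if p[:i + 1] == root_path and p[i + 1] in g.get(p[i], {}):
                    del g[p[i]][p[i + 1]]
            for node in root_path[:-1]:
                g.pop(node, None)
                for nbrs in g.values():
                    nbrs.pop(node, None)
            spur_path = dijkstra(g, spur_node, sink)
            if spur_path is not None:
                total = root_path[:-1] + spur_path
                entry = (path_cost(graph, total), i, total)
                if entry not in B:
                    heapq.heappush(B, entry)
        if not B:
            break
        cost, i, path = heapq.heappop(B)
        A.append(path)
        dev.append(i)
    return A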
== See also ==
Yen's improvement to the Bellman–Ford algorithm
== References ==
== External links ==
Open Source C++ Implementation
Open Source C++ Implementation using Boost Graph Library