{"text":"Critical thinking is the analysis of facts to form a judgment. The subject is complex, and several different definitions exist, which generally include the rational, skeptical, unbiased analysis, or evaluation of factual evidence. Critical thinking is self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem-solving abilities as well as a commitment to overcome native egocentrism and sociocentrism."} {"text":"The earliest records of critical thinking are the teachings of Socrates recorded by Plato. These included a part in Plato's early dialogues, where Socrates engages with one or more interlocutors on the issue of ethics such as question whether it was right for Socrates to escape from prison. The philosopher considered and reflected on this question and came to the conclusion that escape violates all the things that he holds higher than himself: the laws of Athens and the guiding voice that Socrates claims to hear."} {"text":"Socrates established the fact that one cannot depend upon those in \"authority\" to have sound knowledge and insight. He demonstrated that persons may have power and high position and yet be deeply confused and irrational. Socrates maintained that for an individual to have a good life or to have one that is worth living, he must be a critical questioner and possess an interrogative soul. 
He established the importance of asking deep questions that probe profoundly into thinking before we accept ideas as worthy of belief."} {"text":"Socrates set the agenda for the tradition of critical thinking, namely, to reflectively question common beliefs and explanations, carefully distinguishing beliefs that are reasonable and logical from those that\u2014however appealing to our native egocentrism, however much they serve our vested interests, however comfortable or comforting they may be\u2014lack adequate evidence or rational foundation to warrant belief."} {"text":"Critical thinking was described by Richard W. Paul as a movement in two waves (1994). The \"first wave\" of critical thinking is often referred to as"} {"text":"a 'critical analysis' that is clear, rational thinking involving critique. Its details vary amongst those who define it. According to Barry K. Beyer (1995), critical thinking means making clear, reasoned judgments. During the process of critical thinking, ideas should be reasoned, well thought out, and judged. The U.S. National Council for Excellence in Critical Thinking defines critical thinking as the \"intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.\""} {"text":"In the term \"critical thinking\", the word \"critical\", (Grk. \u03ba\u03c1\u03b9\u03c4\u03b9\u03ba\u03cc\u03c2 = \"kritikos\" = \"critic\") derives from the word \"critic\" and implies a critique; it identifies the intellectual capacity and the means \"of judging\", \"of judgement\", \"for judging\", and of being \"able to discern\". 
The intellectual roots of critical thinking are as ancient as its etymology, traceable, ultimately, to the teaching practice and vision of Socrates 2,500 years ago, who discovered, by a method of probing questioning, that people could not rationally justify their confident claims to knowledge."} {"text":"Traditionally, critical thinking has been variously defined as follows:"} {"text":"Contemporary critical thinking scholars have expanded these traditional definitions to include qualities, concepts, and processes such as creativity, imagination, discovery, reflection, empathy, connected knowing, feminist theory, subjectivity, ambiguity, and inconclusiveness. Some definitions of critical thinking exclude these subjective practices."} {"text":"The study of logical argumentation is relevant to the study of critical thinking. Logic is concerned with the analysis of arguments, including the appraisal of their correctness or incorrectness. In the field of epistemology, critical thinking is considered to be logically correct thinking, which allows for differentiation between logically true and logically false statements."} {"text":"In the 'second wave' of critical thinking, authors consciously moved away from the logocentric mode of critical thinking characteristic of the 'first wave'. Although many scholars began to take a less exclusive view of what constitutes critical thinking, rationality and logic remain widely accepted as essential bases for critical thinking. Walters argues that exclusive logicism in the first wave sense is based on \"the unwarranted assumption that good thinking is reducible to logical thinking\"."} {"text":"There are three types of logical reasoning. In addition to formal deduction, two informal kinds can be distinguished: induction and abduction."} {"text":"Kerry S. 
Walters, an emeritus philosophy professor from Gettysburg College, argues that rationality demands more than just logical or traditional methods of problem solving and analysis, or what he calls the \"calculus of justification\", but also considers \"cognitive acts such as imagination, conceptual creativity, intuition and insight\" (p.\u00a063). These \"functions\" are focused on discovery and on more abstract processes, instead of linear, rules-based approaches to problem-solving. The rational mind must engage both linear and non-sequential thinking."} {"text":"The ability to critically analyze an argument\u2014to dissect structure and components, thesis and reasons\u2014is essential. But so is the ability to be flexible and consider non-traditional alternatives and perspectives. These complementary functions allow critical thinking to be a practice encompassing imagination and intuition in cooperation with traditional modes of deductive inquiry."} {"text":"The list of core critical thinking skills includes observation, interpretation, analysis, inference, evaluation, explanation, and metacognition. According to Reynolds (2011), an individual or group engaged in a strong way of critical thinking gives due consideration to establish, for instance:"} {"text":"In addition to possessing strong critical-thinking skills, one must be disposed to engage problems and decisions using those skills. 
Critical thinking employs not only logic but broad intellectual criteria such as clarity, credibility, accuracy, precision, relevance, depth, breadth, significance, and fairness."} {"text":"Critical thinking calls for the ability to:"} {"text":"\"A persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports or refutes it and the further conclusions to which it tends.\""} {"text":"The habits of mind that characterize a person strongly disposed toward critical thinking include a desire to follow reason and evidence wherever they may lead, a systematic approach to problem solving, inquisitiveness, even-handedness, and confidence in reasoning."} {"text":"According to a definition analysis by Kompf & Bond (2001), critical thinking involves problem solving, decision making, metacognition, rationality, rational thinking, reasoning, knowledge, intelligence, and also a moral component such as reflective thinking. Critical thinkers therefore need to have reached a level of maturity in their development and to possess a certain attitude as well as a set of taught skills."} {"text":"Some writers postulate that these habits of mind should be thought of as intellectual virtues that demonstrate the characteristics of a critical thinker. These intellectual virtues are ethical qualities that encourage motivation to think in particular ways towards specific circumstances. However, skeptics have criticized this view, arguing that evidence is lacking for a specific mental basis that is causative of critical thinking."} {"text":"Edward M. 
Glaser proposed that the ability to think critically involves three elements:"} {"text":"Educational programs aimed at developing critical thinking in children and adult learners, individually or in group problem solving and decision making contexts, continue to address these same three central elements."} {"text":"The Critical Thinking project at Human Science Lab, London, is involved in the scientific study of all major educational systems prevalent today to assess how the systems are working to promote or impede critical thinking."} {"text":"Contemporary cognitive psychology regards human reasoning as a complex process that is both reactive and reflective, setting the critical mind in juxtaposition to sensory data and memory."} {"text":"Psychological theory rejects the notion of an absolutely rational mind, pointing to conditions, abstract problems, and discursive limitations. While the relationship between critical thinking skills and critical thinking dispositions is an empirical question, reasoning can be deployed merely to dominate an exchange, a practice Socrates was known to oppose as Sophistry. Accounting for a measure of \"critical thinking dispositions\" are the California Measure of Mental Motivation and the California Critical Thinking Dispositions Inventory. The Critical Thinking Toolkit is an alternative measure that examines student beliefs and attitudes about critical thinking."} {"text":"John Dewey is one of many educational leaders who recognized that a curriculum aimed at building thinking skills would benefit the individual learner, the community, and the entire democracy."} {"text":"Critical thinking is significant in the learning process of internalization, in the construction of basic ideas, principles, and theories inherent in content. 
And critical thinking is significant in the learning process of application, whereby those ideas, principles, and theories are implemented effectively as they become relevant in learners' lives."} {"text":"Each discipline adapts its use of critical thinking concepts and principles. The core concepts are always there, but they are embedded in subject-specific content. For students to learn content, intellectual engagement is crucial. All students must do their own thinking, their own construction of knowledge. Good teachers recognize this and therefore focus on the questions, readings, and activities that stimulate the mind to take ownership of key concepts and principles underlying the subject."} {"text":"Historically, the teaching of critical thinking focused only on logical procedures such as formal and informal logic. This emphasized to students that good thinking is equivalent to logical thinking. However, a second wave of critical thinking urges educators to value conventional techniques while expanding what it means to be a critical thinker. In 1994, Kerry Walters compiled a collection of sources that surpass this logical restriction, covering many different authors' research regarding connected knowing, empathy, gender-sensitive ideals, collaboration, world views, intellectual autonomy, morality, and enlightenment. These concepts invite students to incorporate their own perspectives and experiences into their thinking."} {"text":"An Advanced Extension Award in Critical Thinking was also formerly offered in the UK, open to any A-level student regardless of whether they had the Critical Thinking A-level. Cambridge International Examinations have an A-level in Thinking Skills."} {"text":"From 2008, the Assessment and Qualifications Alliance has also been offering an A-level Critical Thinking specification."} {"text":"The OCR exam board has also modified its specification for 2008. 
Many examinations for university entrance, set by universities on top of A-level examinations, also include a critical thinking component; examples include the LNAT, the UKCAT, the BioMedical Admissions Test, and the Thinking Skills Assessment."} {"text":"In Qatar, critical thinking was offered by AL-Bairaq\u2014an outreach, non-traditional educational program that targets high school students and focuses on a curriculum based on STEM fields. The idea behind AL-Bairaq is to offer high school students the opportunity to connect with the research environment in the Center for Advanced Materials (CAM) at Qatar University. Faculty members train and mentor the students and help develop and enhance their critical thinking, problem-solving, and teamwork skills."} {"text":"In 1995, a meta-analysis of the literature on teaching effectiveness in higher education was undertaken."} {"text":"The study noted concerns from higher education, politicians, and business that higher education was failing to meet society's requirements for well-educated citizens. It concluded that although faculty may aspire to develop students' thinking skills, in practice they have tended to aim at facts and concepts utilizing the lowest levels of cognition, rather than developing intellect or values."} {"text":"Scott Lilienfeld notes that there is some evidence to suggest that basic critical thinking skills might be successfully taught to children at a younger age than previously thought."} {"text":"Critical thinking is considered important in academic fields for enabling one to analyze, evaluate, explain, and restructure thinking, thereby reducing the risk of adopting or acting on a false belief. However, even with knowledge of the methods of logical inquiry and reasoning, mistakes occur due to a thinker's inability to apply the methods consistently and because of overruling character traits such as egocentrism. 
Critical thinking includes identification of prejudice, bias, propaganda, self-deception, distortion, misinformation, etc. Given research in cognitive psychology, some educators believe that schools should focus on teaching their students critical thinking skills and the cultivation of intellectual traits."} {"text":"In nursing, critical thinking requires practitioners to engage in reflective practice and keep records of this continued professional development for possible review by the college."} {"text":"Critical thinking is also considered important for human rights education for toleration. The Declaration of Principles on Tolerance adopted by UNESCO in 1995 affirms that \"education for tolerance could aim at countering factors that lead to fear and exclusion of others, and could help young people to develop capacities for independent judgement, \"critical thinking\" and ethical reasoning\"."} {"text":"Researchers assessing critical thinking in online discussion forums often employ a technique called content analysis, where the text of online discourse (or the transcription of face-to-face discourse) is systematically coded for different kinds of statements relating to critical thinking. For example, a statement might be coded as \"Discuss ambiguities to clear them up\" or \"Welcoming outside knowledge\" as positive indicators of critical thinking. Conversely, statements reflecting poor critical thinking may be labeled as \"Sticking to prejudice or assumptions\" or \"Squashing attempts to bring in outside knowledge\". The frequency of these codes in computer-mediated communication (CMC) and face-to-face discourse can be compared to draw conclusions about the quality of critical thinking."} {"text":"Parallel thinking is a term coined by Edward de Bono. Parallel thinking is described as a constructive alternative to \"adversarial thinking\", debate, and the approaches exemplified by Socrates, Plato, and Aristotle (whom de Bono refers to as the \"Greek gang of three\" (GG3)). 
In general parallel thinking is a further development of the well known lateral thinking processes, focusing even more on explorations\u2014looking for \"what can be\" rather than for \"what is\"."} {"text":"Parallel thinking is defined as a thinking process where focus is split in specific directions. When done in a group it effectively avoids the consequences of the adversarial approach (as used in courts)."} {"text":"In adversarial debate, the objective is to prove or disprove statements put forward by the parties (normally two). This is also known as the dialectic approach. In Parallel Thinking, practitioners put forward as many statements as possible in several (preferably more than two) parallel tracks. This leads to \"exploration\" of a subject where all participants can contribute, in parallel, with knowledge, facts, feelings, etc."} {"text":"Crucial to the method is that the process is done in a disciplined manner, and that all participants play along and contribute \"in parallel\". Thus each participant must stick to the specific track."} {"text":"The constructive developmental framework (CDF) is a theoretical framework for epistemological and psychological assessment of adults. The framework is based on empirical developmental research showing that an individual's perception of reality is an actively constructed \"world of their own\", unique to them and which they continue to develop over their lifespan."} {"text":"CDF was developed by Otto Laske based on the work of Robert Kegan and Michael Basseches, Laske's teachers at Harvard University. The CDF methodology involves three separate instruments that respectively measure a person's social\u2013emotional stage, cognitive level of development, and psychological profile. It provides three epistemological perspectives on individual clients as well as teams. 
These constructs are designed to probe how an individual and\/or group constructs the real world conceptually, and how closely an individual's present thinking approaches the complexity of the real world."} {"text":"The methodology of CDF is grounded in empirical research on positive adult development, which began under Lawrence Kohlberg in the 1960s and was continued by Robert Kegan (1982, 1994), Michael Basseches (1984), and Otto Laske (1998, 2006, 2009, 2015, 2018). Laske (1998, 2009) introduced concepts from Georg Wilhelm Friedrich Hegel's philosophy and the Frankfurt School into the framework, making a strict differentiation between social\u2013emotional and cognitive development."} {"text":"In CDF, social\u2013emotional, cognitive, and psychological assessments are arrived at separately, as follows:"} {"text":"In CDF, each of these profiles by itself is considered a pure abstraction, since it is only in their togetherness that the \"hidden dimensions of a person's consciousness\" can be empirically understood and made the basis of an intervention. Importantly, a CDF intervention requires dialectical thinking, in contrast to the purely logical thinking used in positivistic research. For this reason, CDF is a model of dialogical, not monological, research."} {"text":"According to the developmental psychologist Robert Kegan, a person's self-concept evolves in a series of stages through their lifetime. Such evolution is driven alternately by two main motivations: that of being autonomous and that of belonging to a group. Human beings are \"controlled\" by these motivations in the sense that they do not have influence on them but are rather defined by them. 
Additionally, these motivations are in conflict, and their relationship develops over a lifespan."} {"text":"Kegan describes five stages of development, of which the latter four are progressively attained in adulthood, although only a small proportion of adults reach the fourth stage and beyond:"} {"text":"CDF refers to such stages as \"social\u2013emotional\" in that they relate to the way a person makes meaning of their experience in the social world. CDF holds that people are rarely precisely at a single stage but more accurately are distributed over a range where they are subject to the conflicting influences of a higher and a lower stage."} {"text":"Assessing the social\u2013emotional profile of a person."} {"text":"The social\u2013emotional profile of a person is assessed by means of an interview, referred to as the \"subject\u2013object\" interview. In the interview, the interviewer offers prompts such as \"success\", \"change\", \"control\", \"limits\", \"frustration\", and \"risk\" and invites the interviewee to describe meaningful experiences under those headings. The interviewer serves as a listener, whose role is to focus the attention of the interviewee onto their own thoughts and feelings."} {"text":"The interview is scored by identifying excerpts of speech that indicate a particular stage or sub-stage. Relevant sections are chosen from the transcript of the interview and analyzed for indications of the stage of development. The most frequent sub-stage revealed by the scoring is described as the interviewee's \"center of gravity\". Stages scored below the center of gravity are described as \"risk\" (of regression), while stages scored above the center of gravity are described as \"potential\" (for development). 
The distribution of scores is summarized by a \"risk\u2013clarity\u2013potential\" index (RCP) that can be used to characterize the nature of the developmental challenges facing a person."} {"text":"According to Jean Piaget, thinking develops in four stages from childhood to young adulthood. Piaget named these stages sensory-motor, pre-operational, concrete-operational, and formal-operational. Development of formal-operational thinking is considered to continue until approximately the 25th year of life. Subsequent researchers have concentrated on Kohlberg's now-famous question: \"Is there a life after 25?\" In CDF, the development of post-formal-operational thinking in an adult is indicated primarily by the strength of dialectical thinking, measured by fluidity in the use of thought forms."} {"text":"Dialectical thinking has its roots in Greek classical philosophy but is also found in ancient Hindu and Buddhist philosophy, and relates to the search for truth through reasoned argument. It finds its foremost expression in the work of the German philosopher Georg Hegel. Essentially, dialectics is viewed as the system by which human thought attempts to capture the nature of reality. Building on Bhaskar and Basseches, CDF uses a framework for dialectical thinking based on the idea that everything in reality is transient and composed of contradictions, part of a larger whole, related in some way to everything else, and subject to sudden transformation. 
This framework therefore distinguishes dialectical thinking in terms of four classes of dialectical thought forms that can be said to define reality:"} {"text":"In addition, CDF distinguishes seven individual thought forms for every class, making a total of 28 thought forms, representing a re-formulation of Basseches' 24 schematas."} {"text":"The cognitive profile describes the thinking tools at a person's disposal and shows the degree to which a person's thinking has developed as indicated by their use of dialectical thought forms in the four classes. The profile is derived by means of a semi-structured interview where the interviewer has the task of eliciting the interviewee's use of thought forms in a conversation about the interviewee's work and workplace. The text of the interview is subsequently analyzed and scored to give a series of mathematical indicators."} {"text":"According to CDF thinking that is highly developed is represented by the following features:"} {"text":"Link between social\u2013emotional development and cognitive development."} {"text":"Social\u2013emotional and cognitive development are often seen as separate lines of development but Laske (2008) proposed that they are linked by \"stages of reflective judgment\" or \"epistemic position\", described as the view taken by a person on what constitutes \"knowledge\" and \"truth\". Epistemic position defines a person's ability to deal with uncertainty and insecurity in their knowledge of the world and, together with the stage of social\u2013emotional development, reflects the \"stance\" that a person takes towards the world. 
Whilst cognitive development provides a person with \"tools\" for thinking consisting of thought forms derived from both logic and dialectics, the \"stance\" that a person takes determines whether they apply the thinking tools at their disposal."} {"text":"CDF employs the theory put forward by psychologist Henry Murray that much of human behavior is determined by the effort to satisfy certain psychological (or \"psychogenic\") needs, most of which are unconscious. Personality is thus seen as characteristic behavior emerging from the dynamic between a person's pattern of psychogenic needs and the environmental forces acting on that person\u2014termed \"press\"."} {"text":"The need\u2013press analysis draws on Sigmund Freud's model of the human psyche divided into the components of Id, Ego and Super-ego. In living, a person is subject to the unconscious yearnings of the Id, whilst consciously aspiring to certain ideals imposed by the Super-ego, which itself is influenced by the social context. It is the dynamic balance between the forces of Id and Super-ego and the work environment that determines a person's capacity for work. Imbalances between the social reality of work and a person's ideals lead to frustration, and imbalances between a person's unconscious needs and their ideals lead to a waste of energy or \"energy sink\"."} {"text":"CDF assessment methodology uses a self-report psychometric questionnaire originated by Henry Murray's student Morris Aderman, called the need\u2013press (NP) inventory."} {"text":"The assessment methodology employed by CDF was created to measure people's capability and capacity for work. The theory of work used by CDF is derived from the work of Elliott Jaques. According to Jaques, work is defined as the application of reflective judgment in order to pursue certain goals within certain time limits. 
This definition stresses the importance of how decisions are made in a complex world and the time-span within which decisions are carried out. While Jaques offers a strictly cognitive definition of work, CDF views the social\u2013emotional aspects of work as equally important, also including the person's (manager's, CEO's) NP profile."} {"text":"CDF distinguishes between two kinds of work capability, applied and potential. Applied capability refers to the resources that an individual can already apply in order to carry out work. Potential capability refers to the resources that an individual may be capable of applying in the future. An individual can decide at any time not to apply their potential work capability. Equally circumstances may impede a person from applying their potential capability. Work capability is therefore not the same as the capacity to deliver work but rather defines and limits it."} {"text":"In CDF work capacity is measured in terms of the need\u2013press personality profile, whilst applied capability is measured in terms of the thinking tools shown up by the cognitive profile, and potential capability is measured in terms of the risk\u2013clarity\u2013potential score taken from the social\u2013emotional profile."} {"text":"For Elliot Jaques, human organizations are structured managerially according to levels of accountability. Each level of accountability entails a higher level of complexity in the work required of the role-holder, termed \"size of role\". Jaques defined the notion of requisite organization, where roles in an organization are hierarchically organized at specific levels of increasing complexity."} {"text":"The application of CDF as an assessment methodology to measure the \"size of person\" in terms of their work capability and capacity provides a way forward for talent management systems to match the \"size of person\" to the \"size of role\". 
Progressively more complex roles require progressively higher levels of social\u2013emotional development and cognitive development in the role-holder. In this way requisite organizations can align their human capability architecture with their managerial accountability architecture and design \"growth assignments\" that facilitate the development of capability for more complex roles."} {"text":"CDF provides a platform for professional coaching such as in leadership development and management development in a variety of ways. Firstly it provides assessment tools from which the coach can construct an integrated model of the coachee complete with the developmental challenges of the client who is to be helped. Secondly, and in the sense used by Edgar Schein the use of the assessment tools and the feedback of results by the coach is an act of \"process consultation\" by which the client may come to understand better the assumptions, values, attitudes and behaviors that are helping or hindering their success. Thirdly, CDF provides tools for deeper and more sophisticated thinking, thereby enabling the client to explore and expand their conceptual landscape of a problem."} {"text":"CDF distinguishes between behavioral and developmental coaching. The goal of behavioral coaching is to improve the client's actual performance at work, described in CDF terms as their applied capability. 
In contrast, the goal of developmental coaching is to illuminate and develop the client's current and emergent capabilities for work in the context of their cognitive and social\u2013emotional development."} {"text":"As shown in the book \"Dynamic Collaboration: Strengthening Self Organization and Collaborative Intelligence in Teams\", by Jan De Visch and Otto Laske (2018), CDF can be a tool for building in organizations a \"dialogical culture\" by which distributed leadership in organizations can be realized."} {"text":"Covert facial recognition is the unconscious recognition of familiar faces by people with prosopagnosia. The individuals who express this phenomenon are unaware that they are recognizing the faces of people they have seen before."} {"text":"Joachim Bodamer created the term prosopagnosia in 1947. Individuals with this disorder do not have the ability to overtly recognize faces, but discoveries have been made showing that people with this disorder have the ability to covertly recognize faces."} {"text":"There are two types of prosopagnosia, congenital and acquired. Congenital prosopagnosia is an inability to recognize faces without a history of brain damage; while acquired prosopagnosia is caused by damage to the right occipital-temporal region of the brain. In the 1950s it was theorized that the right cerebral hemisphere was involved in facial recognition and in the 1960s this theory was supported by many experiments."} {"text":"Although the ability for overt facial recognition is inhibited in patients with prosopagnosia, there have been many studies done which show that some of these individuals may have the ability to recognize familiar faces covertly. These experiments have used behavioral and physiological measures in order to demonstrate covert facial recognition. 
A common physiological measure is autonomic activity, recorded as skin-conductance responses (SCR), which are larger when individuals with prosopagnosia are shown pictures of familiar faces than when they are shown pictures of unfamiliar faces."} {"text":"Several theories address covert facial recognition. The first concerns the apparent contradiction between prosopagnosia and covert recognition: prosopagnosia, the inability to recognize faces, is believed to stem from damage to the ventral route of the visual system, whereas covert recognition in people who have lost the ability to recognize faces implies an intact ventral limbic structure projecting to the amygdala."} {"text":"The second theory, proposed by Grueter, states that covert recognition cannot be observed in developmental cases of prosopagnosia. Developmental prosopagnosia is a severe face processing impairment without brain damage and without visual or thinking dysfunction, but it can sometimes run in families (an indication that there may be a genetic basis for the disorder). This theory is thought to rely on the activation of face representations created during a period of normal processing."} {"text":"Contradicting the last theory, the affective valence theory of developmental prosopagnosia states that individuals may be processing faces on affective dimensions, feelings and emotions, rather than on familiarity dimensions, such as previous occasions on which they met."} {"text":"Next is the dual-route model theory, proposed by Bauer, which states that covert recognition can be seen in people who experienced a period of normal face processing before acquiring the condition. Within this model, there are two different types of covert recognition: behavioral and physiological. Behavioral covert recognition is measured by reaction time and occurs within a cognitive pathway consisting of face recognition units (FRUs), person identity nodes (PINs) and semantic information units. 
Physiological covert recognition is measured by SCR and reflects a second route that mediates reactions to familiar faces. The dissociation can be explained by a disconnection of the FRUs, or the face recognition system may be intact but disconnected from a higher system that enables conscious awareness."} {"text":"Parallel distributed processing is a theory which proposes that it is easier to relearn previously known faces than to learn new ones. It rests on three principles: information is represented in a distributed fashion; memory and knowledge are not stored explicitly but in the connections between nodes; and learning occurs through gradual changes to those connections. Damage makes a network less effective by driving connection weights toward zero, but each connection is still faintly embedded, making relearning easier than learning from scratch."} {"text":"Another theory, also proposed by Bauer, states that neurological routes mediate overt recognition. It follows Bruce and Young\u2019s account of three sequential stages, each of which affects the next under overt mediation. The three stages are familiarity, occupation and name retrieval."} {"text":"Several conditions may damage the ability to properly perceive faces, but many of them do not affect both covert and overt recognition: they impair only the overt recognition of faces and leave covert recognition intact."} {"text":"Global precedence was first studied using the Navon figure, in which many small letters are arranged to form a larger letter that either does or does not match them. Variations of the original Navon figure include both shapes and objects."} {"text":"Individuals presented with a Navon figure will be given one of two tasks. 
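A Navon-style stimulus is simple to construct: a grid pattern for the large letter is filled in with copies of a small letter. The following sketch (a hypothetical text rendering, not taken from the experimental literature) builds a large "E" out of small "H"s, so that the global and local letters conflict:

```python
# Text rendering of a Navon-style figure: a large "E" whose strokes
# are drawn with small "H"s (the global letter differs from the local one).
GRID_E = [
    "#####",
    "#....",
    "####.",
    "#....",
    "#####",
]

def navon(grid, local_letter):
    """Replace each filled grid cell with the local letter."""
    return "\n".join(
        "".join(local_letter if cell == "#" else " " for cell in row)
        for row in grid
    )

print(navon(GRID_E, "H"))
```

Swapping `local_letter` to "E" would make the global and local levels consistent, the other stimulus condition used in such tasks.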
In one type of task, participants are told before the presentation of the stimulus whether to focus on the global or the local level, and their accuracy and reaction times are recorded."} {"text":"In another type of task, participants are first presented with a target stimulus and later presented with two different visuals. One of the visuals matches the target stimulus on the global level, while the other matches it on the local level. In this condition, experimenters note which of the two visuals, global or local, is chosen as matching the target stimulus."} {"text":"In general, reaction time for identifying the larger letter is faster than for the smaller letters that make up the shape. Navon directed participants to attend either globally or locally to stimuli that were consistent, neutral, or conflicting on the global and local levels. Reaction time for global identification was much faster than for local identification, showing global precedence. Additionally, the global interference effect, in which the global aspect is automatically processed even when attention is directed locally, slows reaction time. Navon's global precedence paradigm and his stimuli, or variations of them, are still used in nearly all global precedence experiments."} {"text":"When presented with a Navon figure, Caucasians show a slight local preference, while East Asians show an obvious global preference and are faster and more accurate at global processing. The inclination towards global precedence is also evident in second-generation Asian-Australians, but the correlation is weaker than for recent immigrants. This could stem from the physical environment of East Asian versus Western cities, as the level of visual complexity varies across these environments. 
The tendency of Caucasians to process information \"analytically\" and Asians \"holistically\" has also been attributed to differences in brain structure."} {"text":"For some cognitive scientists, the stark contrast in cognitive processing trends across cultures and races suggests that all studies on cognitive perception should report participants\u2019 races to ensure valid theoretical conclusions. Especially in experiments involving spatially distributed stimuli, neglected racial or cultural differences in visual perception could skew results."} {"text":"Global precedence is not a universal phenomenon."} {"text":"When Navon figure stimuli are presented to participants from a remote African culture, the Himba, local precedence is observed although the Himba show the capabilities for both global and local processing."} {"text":"This difference in precedence for Navon figure stimuli can be attributed to cultural differences in occupations, or in the practice of reading and writing. This finding dispels the idea that local precedence is a consequence or symptom of disorders, since the Himba is a normally functioning society capable of both global and local processing."} {"text":"Stimuli are either meaningful or meaningless. For example, letters and familiar objects, like a cup, are meaningful, while unidentifiable and non-geometric forms are not. In both types of stimuli, the global advantage is observed, but the global interference effect only occurs with meaningful stimuli. In other words, when the global object is meaningful, the reaction time for identification of the local feature increases."} {"text":"This supports the theory that within global precedence, global advantage and global interference rely on two separate mechanisms. Global-local interference occurs as a result of automatic processing of global objects. 
The theory is that the global precedence effect has a sensory mechanism active in global advantage, whereas automatic and semantic processes are active in the interference effect."} {"text":"Cognitive processing varies across different age groups, and several studies have been done using Navon-like figures to examine the correlation between precedence and age."} {"text":"When presented with a global-local task, children and adolescents exhibit a local bias. Younger children respond more slowly to different types of stimuli than older children, and thus local precedence seems more prevalent than global precedence in perceptual organization, at least until adolescence, when the transition to globally oriented visual perception begins. The ability to encode a global shape, which is necessary for efficiently recognizing and identifying objects, increases with age. However, it has also been found that there is a bias towards global information during infancy, which may be based upon high spatial frequency information, as well as limited vision. Therefore, global precedence during the early years of life may follow not a steady upward trend but a U-shaped development."} {"text":"There is a decline of global precedence in older subjects. When presented with Navon-like figures, young adults demonstrate global precedence enhancement in that when the number of local letters forming the global letter increases, their global precedence increases. On the other hand, there is no precedence effect or enhancement for older subjects when presented with the same task. This links global precedence to the Gestalt principles of Proximity and Continuity, and suggests that Gestalt-related deficiencies, such as decline in perceptual grouping, may underlie the decline of global precedence in older subjects."} {"text":"Global precedence decline may also relate to hemispheric specialization. 
The spatial frequency theory proposes that global versus local information is processed through two \u201cchannels\u201d of low (global) versus high (local) spatial frequencies. Spatial frequency measures how rapidly a stimulus varies across space. Building on this theory, the double frequency theory links the right hemisphere with low spatial frequencies, leading to a global precedence effect, and the left hemisphere with high spatial frequencies, leading to a local precedence effect. This suggests a neuropsychological factor behind global precedence decline: there may be faster aging in the right than in the left hemisphere."} {"text":"Studies regarding mood have shown that positive and negative cues can influence global versus local attention during image-based tasks."} {"text":"Some studies have shown that positive priming decreases local response time, demonstrating a lessening effect of global precedence, while negative priming increases local response time. Mood thus shapes one's preference for processing type."} {"text":"The finding that negative priming reduces flexibility accords with Psi theory, which states that negative emotion inhibits one\u2019s access to extension memory, reducing cognitive flexibility. This also supports the theory that positive affect increases cognitive flexibility."} {"text":"Positive mood priming also increases cognitive flexibility when prime words do not have individualistic specificity and when primes are visual. Positive affect does not simply promote local processing, but rather improves one\u2019s abilities in the non-preferred dimension. For example, one preferring the local aspect of stimuli would show increased performance in identifying the global aspect and vice versa. This further supports the cognitive flexibility theory."} {"text":"Priming with Navon figures aids the recognition of faces, a holistic task, when the response elicited from the figure matches the precedence of the figure. 
For example, if the stimulus has local precedence and the participant is cued to respond with the local feature identification, their accuracy in facial recognition improves. The same occurs when global responses are asked of global stimuli."} {"text":"When a facial task requires local processing for identification, participants\u2019 facial recognition improves when they must respond to global precedence stimuli with local responses and vice versa. They are forced to show cognitive flexibility in their responses to the Navon figure primes."} {"text":"One theory explains that normal facial recognition requires automatic processes, whereas special facial recognition requires controlled processes. Automatic processes are aided by correlative stimuli and responses, while controlled processes are aided by stimuli and responses that do not correlate. This indicates that facial recognition depends on the type of attention, automatic or controlled, rather than focus on global or local features."} {"text":"When identifying inverted faces, those showing a stronger global precedence have a greater deficit in identification abilities; their identification abilities decrease more from upright identification to inverted identification than those of weak global precedence individuals."} {"text":"This accords with the theory that upright faces are processed holistically, or with a special mechanism. Those with stronger global precedence should perform better at holistically processing an upright face. Individuals with stronger global precedence should show a greater decrease in accuracy of identification of inverted faces because the task relies on local processing."} {"text":"The degree of global precedence one demonstrates has been found to vary with an individual's field dependence."} {"text":"Field dependency is the degree to which one relies on Gestalt laws of perceptual organization. 
High field dependency corresponds to a greater bias toward the global level, while field independence corresponds to a lesser dependency on the global level."} {"text":"This indicates that individual characteristics have an effect on the prevalence of global precedence and that global and local processing exist on a continuum."} {"text":"Neuropsychological evidence based on PET scans suggests that the global aspect of visual situations activates and is processed preferentially by the right hemisphere, whereas the local aspect of visual situations activates and is processed preferentially by the left hemisphere. The classical view of Gestalt psychology also suggests the right hemisphere is involved in the perception of wholes and thus plays a stronger role in global processing, whereas the left hemisphere processes separate local elements and therefore plays a stronger role in local processing."} {"text":"However, hemispheric specialization is relative because it depends on the experimental setting as well as the individual\u2019s \u201cattentional set.\u201d In addition, stimulus type may influence the neural structures underlying hemispheric specialization. Global processing is the default strategy for most individuals, but local stimuli are often more perceptually demanding to recognize and identify, showing the effect of stimuli on visual processing."} {"text":"The Navon figure has been used to relate processing theories to the assessment of conditions such as developmental dyslexia, dyscalculia, obsessive-compulsive personality disorder, and autism."} {"text":"When given a Navon figure test, people with dyslexia have difficulty automatically identifying graphemes with phonemes, but not with identifying numbers with magnitudes. On the other hand, people with dyscalculia have difficulty automatically identifying numbers with magnitudes, but not graphemes with phonemes. 
This suggests a dissociation between subjects with dyslexia and dyscalculia. These developmental learning disabilities do not cause general problems with matching symbols to their mental representations, but rather create specific challenges."} {"text":"Obsessive-compulsive personality disorder (OCPD) subjects are prone to be distracted by the local aspects of stimuli when asked to identify global aspects of figures such as the Navon figure. This is likely because individuals with OCPD characteristically have sharp, detail-oriented attention, and tend to focus on specifics rather than the larger context."} {"text":"There are correlations between global or local performance on a task and the abilities of autistic children to identify emotion and canine age. In both cases, global responses correlate with better identification. In general, autistic children demonstrate much weaker global precedence than those without the disorder. Within the group of autistic children, those who respond more globally to a discrimination task perform better on emotion and canine age tasks."} {"text":"One explanation is a possible biological dysfunction in the brain region where facial processing occurs. Research indicates that global processing, facial recognition, and emotional expression recognition are all linked to the right hemisphere. A defect in that area would explain the characteristics of autism. For further information on facial recognition and processing in individuals with autism see the autism and facial recognition section of face perception."} {"text":"A contrast effect is the enhancement or diminishment, relative to normal, of perception, cognition or related performance as a result of successive (immediately previous) or simultaneous exposure to a stimulus of lesser or greater value in the same dimension. 
(Here, normal perception, cognition or performance is that which would be obtained in the absence of the comparison stimulus\u2014i.e., one based on all previous experience.)"} {"text":"Perception example: A neutral gray target will appear lighter or darker than it does in isolation when immediately preceded by, or simultaneously compared to, respectively, a dark gray or light gray target."} {"text":"Cognition example: A person will appear more or less attractive than that person does in isolation when immediately preceded by, or simultaneously compared to, respectively, a less or more attractive person."} {"text":"Performance example: A laboratory rat will work faster, or slower, during a stimulus predicting a given amount of reward when that stimulus and reward are immediately preceded by, or alternated with, respectively, different stimuli associated with either a lesser or greater amount of reward."} {"text":"The oldest reference to simultaneous contrast in the scientific literature comes from the 11th-century physicist Ibn al-Haytham, who describes spots of paint on a white background appearing almost black, and conversely paler than their true colour on black."} {"text":"He also describes that a leaf-green paint may appear clearer and younger on dark blue, and darker and older on yellow."} {"text":"Johann Wolfgang von Goethe wrote in 1810 that a grey image on a black background appears much brighter than the same on white. Johannes Peter M\u00fcller noted the same in 1838, and also that a strip of grey on a brightly coloured field appears to be tinted ever so slightly in the contrasting colour."} {"text":"The impact of the surrounding field on colour perception has been a subject of ongoing research ever since. 
It has been found that the size of the surrounding field has an impact, as does the separation between colour and surround, similarity of chromaticity, luminance difference and the structure of the surround."} {"text":"There has been some debate over the degree to which simultaneous contrast is a physiological process caused by the connections of neurons in the visual cortex, or whether it is a psychological effect. Both appear to play some role. A possible source of the effect is neurons in the V4 area that have inhibitory connections to neighboring cells. The most likely evolutionary rationale for this effect is that it enhances edges in the visual field, thus facilitating the recognition of shapes and objects."} {"text":"Successive contrast occurs when the perception of currently viewed stimuli is modulated by previously viewed stimuli. In a classic demonstration, a red disk and a green disk are shown above two identical orange disks. Staring at the dot in the centre of one of the top two coloured disks and then looking at the dot in the centre of the corresponding lower disk makes the two lower disks briefly appear to have different colours, though in reality their colour is identical."} {"text":"Metacontrast and paracontrast involve both time and space. When one half of a circle is lit for 10 milliseconds (ms), it is at its maximal intensity. If the other half is displayed 20\u201350 ms later, there is a mutual inhibition: the left side is darkened by the right half (\"metacontrast\"), and the center may be completely obliterated. 
At the same time, there is a slight darkening of the right side due to the first stimulus (\"paracontrast\")."} {"text":"The contrast effect was noted by the 17th century philosopher John Locke, who observed that lukewarm water can feel hot or cold depending on whether the hand touching it was previously in hot or cold water."} {"text":"Imagination is the ability to produce and simulate novel objects, sensations, and ideas in the mind without any immediate input of the senses. It is also described as the forming of experiences in one's mind, which can be re-creations of past experiences such as vivid memories with imagined changes, or they can be completely invented and possibly fantastic scenes. Imagination helps make knowledge applicable in solving problems and is fundamental to integrating experience and the learning process. A basic training for imagination is listening to storytelling (narrative), in which the exactness of the chosen words is the fundamental factor to \"evoke worlds\"."} {"text":"Imagination, however, is not considered to be exclusively a cognitive activity because it is also linked to the body and place, in that it also involves setting up relationships with materials and people, precluding the sense that imagination is locked away in the head."} {"text":"Imagination can also be expressed through stories such as fairy tales or fantasies. Children often use such narratives and pretend play in order to exercise their imaginations. 
When children develop fantasy they play at two levels: first, they use role playing to act out what they have developed with their imagination, and at the second level they play again with their make-believe situation by acting as if what they have developed is an actual reality."} {"text":"The notion of a \"mind's eye\" goes back at least to Cicero's reference to mentis oculi during his discussion of the orator's appropriate use of simile."} {"text":"In this discussion, Cicero observed that allusions to \"the Syrtis of his patrimony\" and \"the Charybdis of his possessions\" involved similes that were \"too far-fetched\"; and he advised the orator to, instead, just speak of \"the rock\" and \"the gulf\" (respectively) \u2014 on the grounds that \"the eyes of the mind are more easily directed to those objects which we have seen, than to those which we have only heard\"."} {"text":"The concept first appeared in English in Chaucer's (c.1387) Man of Law's Tale in his Canterbury Tales, where he tells us that one of the three men dwelling in a castle was blind, and could only see with \"the eyes of his mind\"; namely, those eyes \"with which all men see after they have become blind\"."} {"text":"The condition of not being able to internally visualize (the lack of a \u201cmind\u2019s eye\u201d) is called aphantasia."} {"text":"The common use of the term is for the process of forming new images in the mind that have not been previously experienced, with the help of what has been seen, heard, or felt before, or at least only partially or in different combinations. This could also involve thinking out possible or impossible outcomes of something or someone in life's abundant situations and experiences. Some typical examples follow:"} {"text":"Imagination, not being limited to the acquisition of exact knowledge by the requirements of practical necessity, is largely free from objective restraints. 
The ability to imagine oneself in another person's place is very important to social relations and understanding. Albert Einstein said, \"Imagination ... is more important than knowledge. Knowledge is limited. Imagination encircles the world.\""} {"text":"Limitations do, however, beset imagination in the field of scientific hypothesis. Progress in scientific research is due largely to provisional explanations which are developed by imagination, but such hypotheses must be framed in relation to previously ascertained facts and in accordance with the principles of the particular science."} {"text":"Regarding voluntary effort, imagination can be classified as:"} {"text":"Psychologists have studied imaginative thought, not only in its exotic form of creativity and artistic expression but also in its mundane form of everyday imagination. Ruth M.J. Byrne has proposed that everyday imaginative thoughts about counterfactual alternatives to reality may be based on the same cognitive processes on which rational thoughts are also based. Children can engage in the creation of imaginative alternatives to reality from their very early years. 
Cultural psychology is currently elaborating a view of imagination as a higher mental function involved in a number of everyday activities both at the individual and collective level that enables people to manipulate complex meanings of both linguistic and iconic forms in the process of experiencing."} {"text":"The phenomenology of imagination is discussed in \"The Imaginary: A Phenomenological Psychology of the Imagination\", also published under the title \"The Psychology of the Imagination\", a 1940 book by Jean-Paul Sartre, in which he propounds his concept of the imagination and discusses what the existence of imagination shows about the nature of human consciousness."} {"text":"The imagination is also active in our perception of photographic images in order to make them appear real."} {"text":"Piaget posited that perceptions depend on the world view of a person. The world view is the result of arranging perceptions into existing imagery by imagination. Piaget cites the example of a child saying that the moon is following her when she walks around the village at night. Like this, perceptions are integrated into the world view to make sense. Imagination is needed to make sense of perceptions."} {"text":"A study using fMRI while subjects were asked to imagine precise visual figures, to mentally disassemble them, or to mentally blend them, showed activity in the occipital, frontoparietal, posterior parietal, precuneus, and dorsolateral prefrontal regions of the subjects' brains."} {"text":"Three philosophers for whom imagination is a central concept are Kendall Walton, John Sallis and Richard Kearney. See in particular:"} {"text":"In education, computational thinking (CT) is a set of problem-solving methods that involve expressing problems and their solutions in ways that a computer could also execute. 
It involves the mental skills and practices for designing computations that get computers to do jobs for people, and for explaining and interpreting the world as a complex of information processes. Those ideas range from \"basic CT for beginners\" to \"advanced CT for experts\", and CT includes both \"CT-in-the-small\" (related to how single people design small programs and algorithms) and \"CT-in-the-large\" (related to how to design multi-version programs consisting of millions of lines of code written in team effort, ported to numerous platforms, and compatible with a range of different system setups)."} {"text":"The history of computational thinking dates back at least to the 1950s, but most of the ideas are much older. Computational thinking involves ideas like abstraction, data representation, and logically organizing data, which are also prevalent in other kinds of thinking, such as scientific thinking, engineering thinking, systems thinking, design thinking, model-based thinking, and the like. Neither the idea nor the term is recent: preceded by terms like algorithmizing, procedural thinking, algorithmic thinking, and computational literacy used by computing pioneers like Alan Perlis and Donald Knuth, the term \"computational thinking\" was first used by Seymour Papert in 1980 and again in 1996. Computational thinking can be used to algorithmically solve complicated problems of scale, and is often used to realize large improvements in efficiency."} {"text":"For the first ten years computational thinking was a US-centered movement, and that early focus is still seen in the field's research today. The field's most cited articles and most cited people were active in the early US CT wave, and the field's most active researcher networks are US-based. 
Since the field is dominated by US and European researchers, it is unclear to what extent its predominantly Western body of research literature can cater to the needs of students in other cultural groups."} {"text":"The characteristics that define computational thinking are decomposition, pattern recognition \/ data representation, generalization\/abstraction, and algorithms. By decomposing a problem, identifying the variables involved using data representation, and creating algorithms, a generic solution results. The generic solution is a generalization or abstraction that can be used to solve a multitude of variations of the initial problem."} {"text":"Another characterization of computational thinking is the \"three As\" iterative process based on three stages:"} {"text":"The four Cs of 21st century learning are communication, critical thinking, collaboration, and creativity. The fifth C could be computational thinking, which entails the capability to resolve problems algorithmically and logically. It includes tools that produce models and visualise data. Grover describes how computational thinking is applicable across subjects beyond science, technology, engineering, and mathematics (STEM), including the social sciences and language arts. Students can engage in activities where they identify patterns in grammar as well as sentence structure and use models for studying relationships."} {"text":"Similar to Seymour Papert, Alan Perlis, and Marvin Minsky before, Jeannette Wing envisioned computational thinking becoming an essential part of every child's education. However, integrating computational thinking into the K\u201312 curriculum and computer science education has faced several challenges, including agreement on the definition of computational thinking, how to assess children's development in it, and how to distinguish it from other similar \"thinking\", like systems thinking, design thinking, and engineering thinking. 
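The four defining characteristics named earlier (decomposition, data representation, generalization/abstraction, and algorithms) can be made concrete with a small worked example. The problem and names below are illustrative, not drawn from the CT literature:

```python
# A toy walk through the four characteristics, applied to the problem
# "which element occurs most often?":
#  - decomposition: split the problem into counting and selecting steps
#  - data representation: tallies stored in a dictionary
#  - generalization/abstraction: works for any sequence of hashable items
#  - algorithm: a precise, repeatable procedure

def count_items(items):
    """Decomposed step 1: tally how often each item occurs."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

def most_frequent(items):
    """Decomposed step 2: select the item with the highest tally."""
    counts = count_items(items)
    return max(counts, key=counts.get)

print(most_frequent("abracadabra"))  # prints "a"
```

Because the solution is abstracted over any sequence, the same generic procedure answers many variations of the initial problem (letters in a string, words in a document, votes in a poll).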
Currently, computational thinking is broadly defined as a set of cognitive skills and problem-solving processes that include (but are not limited to) the following characteristics, though there are arguments that few, if any, of them belong to computing specifically rather than being principles in many fields of science and engineering."} {"text":"Current integration of computational thinking into the K\u201312 curriculum comes in two forms: directly in computer science classes, or through the use and measurement of computational thinking techniques in other subjects. Teachers in Science, Technology, Engineering, and Mathematics (STEM) focused classrooms that include computational thinking allow students to practice problem-solving skills such as trial and error. Valerie Barr and Chris Stephenson describe computational thinking patterns across disciplines in a 2011 ACM Inroads article. However, Conrad Wolfram has argued that computational thinking should be taught as a distinct subject."} {"text":"There are online institutions that provide a curriculum and other related resources to help pre-college students build and strengthen computational thinking, analysis and problem-solving skills."} {"text":"A textbook, `From Computing to Computational Thinking' by Paul S. Wang, has been used at the high school and college levels to introduce the topic to non-computer science students through understanding of computing and applying its concepts as a way of thinking in other areas, including everyday life. The textbook, written in English, has been translated into other languages and used in many parts of the world. The textbook also introduced a new word, `computize', a verb defined as `to apply computational thinking to analyze and solve problems.'"} {"text":"Carnegie Mellon University in Pittsburgh has a Center for Computational Thinking. The Center's major activity is conducting PROBEs, or PROBlem-oriented Explorations. 
These PROBEs are experiments that apply novel computing concepts to problems to show the value of computational thinking. A PROBE experiment is generally a collaboration between a computer scientist and an expert in the field to be studied. The experiment typically runs for a year. In general, a PROBE will seek to find a solution for a broadly applicable problem and avoid narrowly focused issues. Some examples of PROBE experiments are optimal kidney transplant logistics and how to create drugs that do not breed drug-resistant viruses."} {"text":"Cognitive holding power is a concept measured by John C. Stevenson in 1994 using a questionnaire, the Cognitive Holding Power Questionnaire (CHPQ). This tool assesses first- or second-order cognitive processing preferences."} {"text":"Studies using holding power have suggested improvements to mathematical education."} {"text":"Cognitive styles analysis (CSA) was developed by Richard J. Riding and is the most frequently used computerized measure of cognitive styles. Although CSA is not well known in North American institutions, it is quite popular among European universities and organizations."} {"text":"Unlike many other cognitive style measures, CSA has been the subject of much empirical investigation. Three reported experiments showed the reliability of CSA to be low. Considering the theoretical strength of CSA, and unsuccessful earlier attempts to create a more reliable parallel form of it, a revised version was made to improve its validity and reliability."} {"text":"In psychology and cognitive neuroscience, pattern recognition describes a cognitive process that matches information from a stimulus with information retrieved from memory."} {"text":"Pattern recognition is not only crucial to humans, but to other animals as well. Even koalas, which possess less-developed thinking abilities, use pattern recognition to find and consume eucalyptus leaves. 
The human brain has developed more, but holds similarities to the brains of birds and lower mammals. The development of neural networks in the outer layer of the brain in humans has allowed for better processing of visual and auditory patterns. Spatial positioning in the environment, remembering findings, and detecting hazards and resources to increase chances of survival are examples of the application of pattern recognition for humans and animals."} {"text":"There are six main theories of pattern recognition: template matching, prototype-matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. The application of these theories in everyday life is not mutually exclusive. Pattern recognition allows us to read words, understand language, recognize friends, and even appreciate music. Each of the theories applies to various activities and domains where pattern recognition is observed. Facial, music and language recognition, and seriation are a few of such domains. Facial recognition and seriation occur through encoding visual patterns, while music and language recognition use the encoding of auditory patterns."} {"text":"Template and feature analysis approaches to the recognition of objects (and situations) have been merged, reconciled, and in part overtaken by multiple discrimination theory. This states that the amounts in a test stimulus of each salient feature of a template are recognized in any perceptual judgment as being at a distance, in the universal unit of 50% discrimination (the objective performance 'JND'), from the amount of that feature in the template."} {"text":"The RBC principles of visual object recognition can be applied to auditory language recognition as well. In place of geons, language researchers propose that spoken language can be broken down into basic components called phonemes. 
For example, there are 44 phonemes in the English language."} {"text":"In psychologist Jean Piaget's theory of cognitive development, the third stage is called the Concrete Operational Stage. It is during this stage that the abstract principle of thinking called \"seriation\" is naturally developed in a child. Seriation is the ability to arrange items in a logical order along a quantitative dimension such as length, weight, or age. It is a general cognitive skill which is not fully mastered until after the nursery years. To seriate means to understand that objects can be ordered along a dimension, and to do so effectively, the child needs to be able to answer the question \"What comes next?\" Seriation skills also help to develop problem-solving skills, which are useful in recognizing and completing patterning tasks."} {"text":"To help build up math skills in children, teachers and parents can help them learn seriation and patterning. Young children who understand seriation can put numbers in order from lowest to highest. Eventually, they will come to understand that 6 is higher than 5, and 20 is higher than 10. Similarly, having children copy patterns or create patterns of their own, like ABAB patterns, is a great way to help them recognize order and prepare for later math skills, such as multiplication. Child care providers can begin exposing children to patterns at a very young age by having them make groups and count the total number of objects."} {"text":"Recognizing faces is one of the most common forms of pattern recognition. Humans are extremely effective at remembering faces, but this ease and automaticity belies a very challenging problem. All faces are physically similar. Faces have two eyes, one mouth, and one nose, all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions."} {"text":"Neuroscientists posit that recognizing faces takes place in three phases. 
The first phase starts with visually focusing on the physical features. The facial recognition system then needs to reconstruct the identity of the person from previous experiences. This provides us with the signal that this might be a person we know. The final phase of recognition is complete when the face elicits the name of the person."} {"text":"Although humans are great at recognizing faces under normal viewing angles, upside-down faces are tremendously difficult to recognize. This demonstrates not only the challenges of facial recognition but also how humans have specialized procedures and capacities for recognizing faces under normal upright viewing conditions."} {"text":"Scientists agree that there is a certain area in the brain specifically devoted to processing faces. This structure is called the fusiform gyrus, and brain imaging studies have shown that it becomes highly active when a subject is viewing a face."} {"text":"Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus' role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces."} {"text":"Recent research reveals that infant language acquisition is linked to cognitive pattern recognition. Unlike classical nativist and behavioral theories of language development, scientists now believe that language is a learned skill. 
Studies at the Hebrew University and the University of Sydney both show a strong correlation between the ability to identify visual patterns and to learn a new language. Children with high shape recognition showed better grammar knowledge, even when controlling for the effects of intelligence and memory capacity. This is supported by the theory that language learning is based on statistical learning, the process by which infants perceive common combinations of sounds and words in language and use them to inform future speech production."} {"text":"The first step in infant language acquisition is to distinguish between the most basic sound units of their native language. This includes every consonant, every short and long vowel sound, and any additional letter combinations like \"th\" and \"ph\" in English. These units, called phonemes, are detected through exposure and pattern recognition. Infants use their \"innate feature detector\" capabilities to distinguish between the sounds of words. They split them into phonemes through a mechanism of categorical perception. Then they extract statistical information by recognizing which combinations of sounds are most likely to occur together, like \"qu\" or \"h\" plus a vowel. In this way, their ability to learn words is based directly on the accuracy of their earlier phonetic patterning."} {"text":"The transition from phonemic differentiation into higher-order word production is only the first step in the hierarchical acquisition of language. Pattern recognition is furthermore utilized in the detection of prosody cues, the stress and intonation patterns among words. Then it is applied to sentence structure and the understanding of typical clause boundaries. This entire process is reflected in reading as well. First, a child recognizes patterns of individual letters, then words, then groups of words together, then paragraphs, and finally entire chapters in books. 
Learning to read and learning to speak a language are based on the \"stepwise refinement of patterns\" in perceptual pattern recognition."} {"text":"Music provides deep and emotional experiences for the listener. These experiences become contents in long-term memory, and every time we hear the same tunes, those contents are activated. Recognizing such content through the pattern of the music affects our emotions. The mechanism that forms the pattern recognition of music and the experience has been studied by multiple researchers. The sensation felt when listening to our favorite music is evidenced by the dilation of the pupils, the increase in pulse and blood pressure, the streaming of blood to the leg muscles, and the activation of the cerebellum, the brain region associated with physical movement."} {"text":"The medial prefrontal cortex \u2013 one of the last areas affected by Alzheimer\u2019s disease \u2013 is the region activated by music."} {"text":"MIT researchers conducted a study to examine this notion. The results showed six neural clusters in the auditory cortex responding to the sounds. Four were triggered when hearing standard acoustic features, one specifically responded to speech, and the last exclusively responded to music. Researchers who studied the correlation between the temporal evolution of timbral, tonal, and rhythmic features of music came to the conclusion that music engages the brain regions connected to motor actions, emotions, and creativity. The research indicates that the whole brain \"lights up\" when listening to music. This amount of activity boosts memory preservation, and hence pattern recognition."} {"text":"Recognizing patterns of music is different for a musician and a listener. Although a musician may play the same notes every time, the details of the frequency will always be different. The listener will recognize the musical pattern and its type despite the variations. 
These musical types are conceptual and learned, meaning they might vary culturally. While listeners are involved with recognizing (implicit) musical material, musicians are involved with recalling it (explicit)."} {"text":"A UCLA study found that when watching or hearing music being played, neurons associated with the muscles needed for playing the instrument fire. Mirror neurons light up when musicians and non-musicians listen to a piece."} {"text":"A study at the University of California, Davis, mapped the brains of participants while they listened to music. The results showed links between brain regions tied to autobiographical memories and emotions that are activated by familiar music. This study may explain the strong response of patients with Alzheimer\u2019s disease to music. This research can help such patients with pattern recognition-enhancing tasks."} {"text":"The human tendency to see patterns that do not actually exist is called apophenia. Examples include the Man in the Moon, faces or figures in shadows, in clouds, and in patterns with no deliberate design, such as the swirls on a baked confection, and the perception of causal relationships between events which are, in fact, unrelated. Apophenia figures prominently in conspiracy theories, gambling, misinterpretation of statistics and scientific data, and some kinds of religious and paranormal experiences. Misperception of patterns in random data is called pareidolia."} {"text":"Introspection is the examination of one's own conscious thoughts and feelings. In psychology, the process of introspection relies on the observation of one's mental state, while in a spiritual context it may refer to the examination of one's soul. 
Introspection is closely related to human self-reflection and self-discovery and is contrasted with external observation."} {"text":"Introspection generally provides privileged access to one's own mental states, not mediated by other sources of knowledge, so that individual experience of the mind is unique. Introspection can concern any number of mental states, including sensory, bodily, cognitive, and emotional ones."} {"text":"Introspection has been a subject of philosophical discussion for thousands of years. The philosopher Plato asked, \"\u2026why should we not calmly and patiently review our own thoughts, and thoroughly examine and see what these appearances in us really are?\" While introspection is applicable to many facets of philosophical thought, it is perhaps best known for its role in epistemology; in this context introspection is often compared with perception, reason, memory, and testimony as a source of knowledge."} {"text":"Partly as a result of Titchener's misrepresentation, the use of introspection diminished after his death and the subsequent decline of structuralism. Later psychological movements, such as functionalism and behaviorism, rejected introspection for its lack of scientific reliability, among other factors. Functionalism originally arose in direct opposition to structuralism, opposing its narrow focus on the elements of consciousness and emphasizing the purpose of consciousness and other psychological behavior. Behaviorism's objection to introspection focused much more on its unreliability and subjectivity, which conflicted with behaviorism's focus on measurable behavior."} {"text":"The more recently established cognitive psychology movement has to some extent accepted introspection's usefulness in the study of psychological phenomena, though generally only in experiments pertaining to internal thought conducted under experimental conditions. 
For example, in the \"think aloud protocol\", investigators cue participants to speak their thoughts aloud in order to study an active thought process without forcing an individual to comment on the process itself."} {"text":"Indeed, it is questionable how confident researchers can be in their own introspections."} {"text":"Another question regarding the accuracy of introspection is this: if researchers lack confidence in their own introspections and those of their participants, how can introspection gain legitimacy? Three strategies are available: identifying behaviors that establish credibility, finding common ground that enables mutual understanding, and developing a trust that allows one to know when to give the benefit of the doubt."} {"text":"That is to say, words are only meaningful if validated by one's actions: when people report strategies, feelings, or beliefs, their behaviors must correspond with these statements if they are to be believed."} {"text":"One experiment tried to give their subjects access to others' introspections. They made audio recordings of subjects who had been told to say whatever came into their heads as they answered a question about their own bias. Although subjects persuaded themselves they were unlikely to be biased, their introspective reports did not sway the assessments of observers. When subjects were explicitly told to avoid relying on introspection, their assessments of their own bias became more realistic."} {"text":"In Eastern Christianity some concepts addressing human needs, such as sober introspection (\"nepsis\"), require watchfulness of the human heart and the conflicts of the human \"nous\", heart or mind. Noetic understanding cannot be achieved by rational or discursive thought (i.e. systemization)."} {"text":"Jains practise \"pratikraman\" (Sanskrit: \"introspection\"), a process of repentance for wrongdoings during their daily life, and remind themselves to refrain from doing so again. 
Devout Jains often do Pratikraman at least twice a day."} {"text":"Introspection is encouraged in schools such as Advaita Vedanta; in order for one to know their own true nature, they need to reflect and introspect on their true nature\u2014which is what meditation is. In particular, Swami Chinmayananda emphasised the role of introspection in five stages, outlined in his book \"Self Unfoldment.\""} {"text":"Introspection (also referred to as internal dialogue, interior monologue, or self-talk) is the fiction-writing mode used to convey a character's thoughts. As explained by Renni Browne and Dave King, \"One of the great gifts of literature is that it allows for the expression of unexpressed thoughts\u2026\""} {"text":"According to Nancy Kress, a character's thoughts can greatly enhance a story: deepening characterization, increasing tension, and widening the scope of a story. As outlined by Jack M. Bickham, thought plays a critical role in both scene and sequel."} {"text":"In the field of psychology, cognitive dissonance occurs when a person holds contradictory beliefs, ideas, or values, and it is typically experienced as psychological stress when they participate in an action that goes against one or more of them. According to this theory, when two actions or ideas are not psychologically consistent with each other, people do all in their power to change them until they become consistent. The discomfort is triggered by the person's belief clashing with newly perceived information, wherein they try to find a way to resolve the contradiction to reduce their discomfort."} {"text":"In \"A Theory of Cognitive Dissonance\" (1957), Leon Festinger proposed that human beings strive for internal psychological consistency to function mentally in the real world. A person who experiences internal inconsistency tends to become psychologically uncomfortable and is motivated to reduce the cognitive dissonance. 
They tend to make changes to justify the stressful behavior, either by adding new parts to the cognition causing the psychological dissonance (rationalization) or by avoiding circumstances and contradictory information likely to increase the magnitude of the cognitive dissonance (confirmation bias)."} {"text":"Coping with the nuances of contradictory ideas or experiences is mentally stressful. It requires energy and effort to sit with those seemingly opposite things that all seem true. Festinger argued that some people would inevitably resolve dissonance by blindly believing whatever they wanted to believe."} {"text":"To function in the reality of society, human beings continually adjust the correspondence of their mental attitudes and personal actions; such continual adjustments, between cognition and action, result in one of three relationships with reality:"} {"text":"The term \"magnitude of dissonance\" refers to the level of discomfort caused to the person. This can be caused by the relationship between two differing internal beliefs, or by an action that is incompatible with the beliefs of the person. Two factors determine the degree of psychological dissonance caused by two conflicting cognitions or by two conflicting actions:"} {"text":"There is always some degree of dissonance within a person as they go about making decisions, due to the changing quantity and quality of knowledge and wisdom that they gain. The magnitude itself is a subjective measurement, since the reports are self-reported, and there is as yet no objective way to get a clear measurement of the level of discomfort."} {"text":"Cognitive dissonance theory proposes that people seek psychological consistency between their expectations of life and the existential reality of the world. 
To function by that expectation of existential consistency, people continually reduce their cognitive dissonance in order to align their cognitions (perceptions of the world) with their actions."} {"text":"The creation and establishment of psychological consistency allows the person afflicted with cognitive dissonance to lessen mental stress by actions that reduce the magnitude of the dissonance, realized either by changing with, by justifying against, or by being indifferent to the existential contradiction that is inducing the mental stress. In practice, people reduce the magnitude of their cognitive dissonance in four ways:"} {"text":"Three cognitive biases are components of dissonance theory: the bias that one does not have any biases, the bias that one is \"better, kinder, smarter, more moral and nicer than average\", and confirmation bias."} {"text":"That a consistent psychology is required for functioning in the real world was also indicated in the results of \"The Psychology of Prejudice\" (2006), wherein people facilitate their functioning in the real world by employing human categories (i.e. sex and gender, age and race, etc.) with which they manage their social interactions with other people."} {"text":"Based on a brief overview of models and theories related to cognitive consistency from many different scientific fields, such as social psychology, perception, neurocognition, learning, motor control, system control, ethology, and stress, it has even been proposed that \"all behaviour involving cognitive processing is caused by the activation of inconsistent cognitions and functions to increase perceived consistency\"; that is, all behaviour functions to reduce cognitive inconsistency at some level of information processing. 
Indeed, the involvement of cognitive inconsistency has long been suggested for behaviors related to, for instance, curiosity, aggression, and fear, while it has also been suggested that the inability to satisfactorily reduce cognitive inconsistency may, depending on the type and size of the inconsistency, result in stress."} {"text":"Another method to reduce cognitive dissonance is through selective exposure theory. This theory has been discussed since the early days of Festinger's discovery of cognitive dissonance. He noticed that people would selectively expose themselves to some media over others; specifically, they would avoid dissonant messages and prefer consonant messages. Through selective exposure, people actively (and selectively) choose what to watch, view, or read that fits their current state of mind, mood, or beliefs. In other words, consumers select attitude-consistent information and avoid attitude-challenging information. This can be applied to media, news, music, and any other messaging channel. The idea is that choosing something in opposition to how one feels or what one believes will produce cognitive dissonance."} {"text":"Another example to note is how people mostly consume media that aligns with their political views. In a study done in 2015, participants were shown \u201cattitudinally consistent, challenging, or politically balanced online news.\u201d Results showed that the participants trusted attitude-consistent news the most out of all the others, regardless of the source. It is evident that the participants actively selected media that aligned with their beliefs rather than opposing media."} {"text":"In fact, recent research has suggested that while a discrepancy between cognitions drives individuals to crave attitude-consistent information, the experience of negative emotions drives individuals to avoid counterattitudinal information. 
In other words, it is the psychological discomfort which activates selective exposure as a dissonance-reduction strategy."} {"text":"There are four theoretic paradigms of cognitive dissonance, the mental stress people suffer when exposed to information that is inconsistent with their beliefs, ideals, or values: Belief Disconfirmation, Induced Compliance, Free Choice, and Effort Justification. These respectively explain what happens after a person's beliefs are contradicted; what happens after a person acts inconsistently with their intellectual perspectives; what happens after a person makes decisions; and what the effects are upon a person who has expended much effort to achieve a goal. Common to each paradigm of cognitive-dissonance theory is the tenet: People invested in a given perspective shall\u2014when confronted with contrary evidence\u2014expend great effort to justify retaining the challenged perspective."} {"text":"The contradiction of a belief, ideal, or system of values causes cognitive dissonance that can be resolved by changing the challenged belief; yet, instead of effecting change, the resultant mental stress restores psychological consonance to the person through misperception, rejection, or refutation of the contradiction, through seeking moral support from people who share the contradicted beliefs, or through acting to persuade other people that the contradiction is unreal."} {"text":"The study of \"The Rebbe, the Messiah, and the Scandal of Orthodox Indifference\" (2008) reported the belief contradiction that occurred in the \"Chabad\" Orthodox Jewish congregation, who believed that their Rebbe (Menachem Mendel Schneerson) was the Messiah. 
When he died of a stroke in 1994, instead of accepting that their Rebbe was not the Messiah, some of the congregation proved indifferent to that contradictory fact and continued claiming that Schneerson was the Messiah and that he would soon return from the dead."} {"text":"In the \"Cognitive Consequences of Forced Compliance\" (1959), the investigators Leon Festinger and Merrill Carlsmith asked students to spend an hour doing tedious tasks, e.g. turning pegs a quarter-turn at fixed intervals. The tasks were designed to induce a strong, negative, mental attitude in the subjects. Once the subjects had done the tasks, the experimenters asked one group of subjects to speak with another subject (an actor) and persuade that impostor-subject that the tedious tasks were interesting and engaging. Subjects of one group were paid twenty dollars ($20); those in a second group were paid one dollar ($1); and those in the control group were not asked to speak with the impostor-subject."} {"text":"In the \"Effect of the Severity of Threat on the Devaluation of Forbidden Behavior\" (1963), a variant of the induced-compliance paradigm, Elliot Aronson and Carlsmith examined self-justification in children. Children were left in a room with toys, including a greatly desirable steam shovel, the forbidden toy. Upon leaving the room, the experimenter told one half of the group of children that there would be severe punishment if they played with the steam-shovel toy and told the second half of the group that there would be a mild punishment for playing with the forbidden toy. All of the children refrained from playing with the forbidden toy (the steam shovel)."} {"text":"Later, when the children were told that they could freely play with any toy they wanted, the children in the mild-punishment group were less likely to play with the steam shovel (the forbidden toy), despite the removal of the threat of mild punishment. 
The children threatened with mild punishment had to justify, to themselves, why they did not play with the forbidden toy. The degree of punishment was insufficiently strong to resolve their cognitive dissonance; the children had to convince themselves that playing with the forbidden toy was not worth the effort."} {"text":"\"The Efficacy of Musical Emotions Provoked by Mozart's Music for the Reconciliation of Cognitive Dissonance\" (2012), a variant of the forbidden-toy paradigm, indicated that listening to music reduces the development of cognitive dissonance. Without music in the background, the control group of four-year-old children were told to avoid playing with a forbidden toy. After playing alone, the control-group children later devalued the importance of the forbidden toy. In the variable group, classical music played in the background while the children played alone. The children in this group did not later devalue the forbidden toy. The researchers, Nobuo Masataka and Leonid Perlovsky, concluded that music might inhibit cognitions that induce cognitive dissonance."} {"text":"Music is a stimulus that can diminish post-decisional dissonance; in an earlier experiment, \"Washing Away Postdecisional Dissonance\" (2010), the researchers indicated that the actions of hand-washing might inhibit the cognitions that induce cognitive dissonance."} {"text":"In the study \"Post-decision Changes in Desirability of Alternatives\" (1956), 225 female students rated domestic appliances and then were asked to choose one of two appliances as a gift. The results of a second round of ratings indicated that the women students increased their ratings of the domestic appliance they had selected as a gift and decreased their ratings of the appliances they rejected."} {"text":"This type of cognitive dissonance occurs in a person faced with a difficult decision, when there always exist aspects of the rejected object that appeal to the chooser. 
The action of deciding provokes the psychological dissonance consequent to choosing X instead of Y, despite little difference between X and Y; the decision \"I chose X\" is dissonant with the cognition that \"There are some aspects of Y that I like\". The study \"Choice-induced Preferences in the Absence of Choice: Evidence from a Blind Two-choice Paradigm with Young Children and Capuchin Monkeys\" (2010) reports similar results in the occurrence of cognitive dissonance in human beings and in animals."} {"text":"\"Peer Effects in Pro-Social Behavior: Social Norms or Social Preferences?\" (2013) indicated that with internal deliberation, the structuring of decisions among people can influence how a person acts, and that social preferences and social norms are related and function with wage-giving among three persons. The actions of the first person influenced the wage-giving actions of the second person, and inequity aversion was the paramount concern of the participants."} {"text":"Cognitive dissonance occurs in a person who voluntarily engages in (physically or ethically) unpleasant activities to achieve a goal. The mental stress caused by the dissonance can be reduced by the person exaggerating the desirability of the goal. In \"The Effect of Severity of Initiation on Liking for a Group\" (1956), to qualify for admission to a discussion group, two groups of people underwent an embarrassing initiation of varied psychological severity. The first group of subjects were to read aloud twelve sexual words considered obscene; the second group of subjects were to read aloud twelve sexual words not considered obscene."} {"text":"Both groups were given headphones to unknowingly listen to a recorded discussion about animal sexual behaviour, which the researchers designed to be dull and banal. As the subjects of the experiment, the groups of people were told that the animal-sexuality discussion actually was occurring in the next room. 
The subjects whose strong initiation required reading aloud obscene words evaluated the people of their group as more interesting persons than did the people of the group who underwent the mild initiation to the discussion group."} {"text":"In \"Washing Away Your Sins: Threatened Morality and Physical Cleansing\" (2006), the results indicated that washing one's hands is an action that helps resolve post-decisional cognitive dissonance because the mental stress was usually caused by the person's ethical\u2013moral self-disgust, which is an emotion related to the physical disgust caused by a dirty environment."} {"text":"The study \"The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making\" (2011) had participants rate 80 names and 80 paintings based on how much they liked the names and paintings. To give meaning to the decisions, the participants were asked to select names that they might give to their children. For rating the paintings, the participants were asked to base their ratings on whether or not they would display such art at home."} {"text":"The extent of cognitive dissonance with regard to meat eating can vary depending on the attitudes and values of the individual involved, because these can affect whether or not they see any moral conflict between their values and what they eat. 
For example, individuals who are more dominance-minded and who value having a masculine identity are less likely to experience cognitive dissonance because they are less likely to believe eating meat is morally wrong."} {"text":"The study \"Patterns of Cognitive Dissonance-reducing Beliefs Among Smokers: A Longitudinal Analysis from the International Tobacco Control (ITC) Four Country Survey\" (2012) indicated that smokers use justification beliefs to reduce their cognitive dissonance about smoking tobacco and the negative consequences of smoking it."} {"text":"To reduce cognitive dissonance, the participant smokers adjusted their beliefs to correspond with their actions:"} {"text":"If a contradiction occurs between how a person feels and how a person acts, one's perceptions and emotions align to alleviate the stress. The Ben Franklin effect refers to that statesman's observation that the act of performing a favor for a rival leads to increased positive feelings toward that individual. It is also possible that one's emotions may be altered to minimize the regret of irrevocable choices. At a hippodrome, bettors had more confidence in their horses after betting than before."} {"text":"The management of cognitive dissonance readily influences the apparent motivation of a student to pursue education. 
The study \"Turning Play into Work: Effects of Adult Surveillance and Extrinsic Rewards on Children's Intrinsic Motivation\" (1975) indicated that the offer of an external reward can diminish a student's intrinsic enthusiasm: students in pre-school who completed puzzles based upon an adult promise of reward were later less interested in the puzzles than were students who completed the puzzle-tasks without the promise of a reward."} {"text":"The incorporation of cognitive dissonance into models of basic learning-processes to foster the students\u2019 self-awareness of psychological conflicts among their personal beliefs, ideals, and values and the reality of contradictory facts and information requires the students to defend their personal beliefs. Afterwards, the students are trained to objectively perceive new facts and information to resolve the psychological stress of the conflict between reality and the student's value system. Moreover, educational software that applies the derived principles facilitates the students\u2019 ability to successfully handle the questions posed in a complex subject. Meta-analysis of studies indicates that psychological interventions that provoke cognitive dissonance in order to achieve a directed conceptual change do increase students\u2019 learning in reading skills and in science."} {"text":"The general effectiveness of psychotherapy and psychological intervention is partly explained by the theory of cognitive dissonance. In that vein, social psychology proposed that the mental health of the patient is positively influenced by his or her action in freely choosing a specific therapy and in exerting the required therapeutic effort to overcome cognitive dissonance. 
That phenomenon was indicated in the results of the study \"Effects of Choice on Behavioral Treatment of Overweight Children\" (1983), wherein children who believed that they had freely chosen the type of therapy they received lost a greater amount of excess body weight."} {"text":"In the study \"Reducing Fears and Increasing Attentiveness: The Role of Dissonance Reduction\" (1980), people afflicted with ophidiophobia (fear of snakes) who invested much effort in activities of little therapeutic value for them (experimentally represented as legitimate and relevant) showed improved alleviation of the symptoms of their phobia. Likewise, the results of \"Cognitive Dissonance and Psychotherapy: The Role of Effort Justification in Inducing Weight Loss\" (1985) indicated that patients felt better when justifying their efforts and therapeutic choices towards effectively losing weight, and that the effort expended in therapy can predict long-term change in the patient's perceptions."} {"text":"Cognitive dissonance is used to promote positive social behaviours, such as increased condom use; other studies indicate that cognitive dissonance can be used to encourage people to act pro-socially, such as campaigns against public littering, campaigns against racial prejudice, and compliance with anti-speeding campaigns. The theory can also be used to explain reasons for donating to charity."} {"text":"Three main conditions exist for provoking cognitive dissonance when buying: (i) the decision to purchase must be important, for example because of the sum of money involved; (ii) the purchase must carry a psychological cost; and (iii) the purchase must be personally relevant to the consumer. 
The consumer is free to select from the alternatives, and the decision to buy is irreversible."} {"text":"Cognitive dissonance theory might suggest that since votes are an expression of preference or beliefs, even the act of voting might cause someone to defend the actions of the candidate for whom they voted, and if the decision was close then the effects of cognitive dissonance should be greater."} {"text":"This effect was studied over the six presidential elections of the United States between 1972 and 1996, and it was found that the opinion differential between the candidates changed more from before to after the election for voters than for non-voters. In addition, in elections where the voter had a favorable attitude toward both candidates, making the choice more difficult, the opinion differential between the candidates changed more dramatically than for voters who had a favorable opinion of only one candidate. What was not studied were the cognitive dissonance effects in cases where the person had unfavorable attitudes toward both candidates. The 2016 U.S. election held historically high unfavorable ratings for both candidates."} {"text":"Cognitive dissonance theory of communication was initially advanced by American psychologist Leon Festinger in the 1960s. Festinger theorized that cognitive dissonance usually arises when a person holds two or more incompatible beliefs simultaneously. This is a normal occurrence since people encounter different situations that invoke conflicting thought sequences. This conflict results in psychological discomfort. According to Festinger, people experiencing a thought conflict try to reduce the psychological discomfort by attempting to achieve an emotional equilibrium. This equilibrium is achieved in three main ways. First, the person may downplay the importance of the dissonant thought. Second, the person may attempt to outweigh the dissonant thought with consonant thoughts. 
Lastly, the person may incorporate the dissonant thought into their current belief system."} {"text":"Dissonance plays an important role in persuasion. To persuade people, you must cause them to experience dissonance, and then offer your proposal as a way to resolve the discomfort. Although there is no guarantee your audience will change their minds, the theory maintains that without dissonance, there can be no persuasion. Without a feeling of discomfort, people are not motivated to change. Similarly, it is the feeling of discomfort which motivates people to perform selective exposure (i.e., avoiding disconfirming information) as a dissonance-reduction strategy."} {"text":"It is hypothesized that introducing cognitive dissonance into machine learning may assist in the long-term aim of developing 'creative autonomy' on the part of agents, including in multi-agent systems (such as games), and ultimately in the development of 'strong' forms of artificial intelligence, including artificial general intelligence."} {"text":"In \"Self-perception: An alternative interpretation of cognitive dissonance phenomena\" (1967), the social psychologist Daryl Bem proposed the self-perception theory whereby people do not think much about their attitudes, even when engaged in a conflict with another person. The theory of self-perception proposes that people develop attitudes by observing their own behaviour, concluding that their attitudes caused the behaviour observed; this is especially true when internal cues are either ambiguous or weak. Therefore, the person is in the same position as an observer who must rely upon external cues to infer their inner state of mind. 
Self-perception theory proposes that people adopt attitudes without access to their states of mood and cognition."} {"text":"As such, the experimental subjects of the Festinger and Carlsmith study (\"Cognitive Consequences of Forced Compliance\", 1959) inferred their mental attitudes from their own behaviour. When the subject-participants were asked: \"Did you find the task interesting?\", the participants decided that they must have found the task interesting, because that is what they told the questioner. Their replies suggested that the participants who were paid twenty dollars had an external incentive to adopt that positive attitude, and likely perceived the twenty dollars as the reason for saying the task was interesting, rather than saying the task actually was interesting."} {"text":"The theory of self-perception (Bem) and the theory of cognitive dissonance (Festinger) make identical predictions, but only the theory of cognitive dissonance predicts the presence of unpleasant arousal, of psychological distress, which was verified in laboratory experiments."} {"text":"In \"The Theory of Cognitive Dissonance: A Current Perspective\" (Aronson, Berkowitz, 1969), Elliot Aronson linked cognitive dissonance to the self-concept: mental stress arises when conflicts among cognitions threaten the person's positive self-image. This reinterpretation of the original Festinger and Carlsmith study, using the induced-compliance paradigm, proposed that the dissonance was between the cognitions \"I am an honest person.\" and \"I lied about finding the task interesting.\""} {"text":"The study \"Cognitive Dissonance: Private Ratiocination or Public Spectacle?\" (Tedeschi, Schlenker, et al., 1971) reported that maintaining cognitive consistency, rather than protecting a private self-concept, is how a person protects their public self-image. 
Moreover, the results reported in the study \"I'm No Longer Torn After Choice: How Explicit Choices Implicitly Shape Preferences of Odors\" (2010) contradict such an explanation by showing the revaluation of material items after the person chose and decided, even after having forgotten the choice."} {"text":"Fritz Heider proposed a motivational theory of attitudinal change that derives from the idea that humans are driven to establish and maintain psychological balance. The driving force for this balance is known as the \"consistency motive\", an urge to keep one's values and beliefs consistent over time. Heider's conception of psychological balance has been used in theoretical models measuring cognitive dissonance."} {"text":"According to balance theory, there are three interacting elements: (1) the self (P), (2) another person (O), and (3) an element (X). These are each positioned at one vertex of a triangle and share two relations:"} {"text":"Under balance theory, human beings seek a balanced state of relations among the three positions. This can take the form of three positives or two negatives and one positive:"} {"text":"People also avoid unbalanced states of relations, such as three negatives or two positives and one negative:"} {"text":"In the study \"On the Measurement of the Utility of Public Works\" (1969), Jules Dupuit reported that behaviors and cognitions can be understood from an economic perspective, wherein people engage in the systematic processing of comparing the costs and benefits of a decision. The psychological process of cost-benefit comparisons helps the person to assess and justify the feasibility (spending money) of an economic decision, and is the basis for determining if the benefit outweighs the cost, and to what extent. 
Moreover, although the method of cost-benefit analysis functions in economic circumstances, people remain psychologically inefficient at comparing the costs against the benefits of their economic decisions."} {"text":"E. Tory Higgins proposed that people have three selves, to which they compare themselves:"} {"text":"When these self-guides are contradictory, psychological distress (cognitive dissonance) results. People are motivated to reduce self-discrepancy (the gap between two self-guides)."} {"text":"During the 1980s, Cooper and Fazio argued that dissonance was caused by aversive consequences, rather than inconsistency. According to this interpretation, the belief that lying is wrong and hurtful, not the inconsistency between cognitions, is what makes people feel bad. Subsequent research, however, found that people experience dissonance even when they feel they have not done anything wrong. For example, Harmon-Jones and colleagues showed that people experience dissonance even when the consequences of their statements are beneficial\u2014as when they convince sexually active students to use condoms, when they themselves are not using condoms."} {"text":"In the study \"How Choice Affects and Reflects Preferences: Revisiting the Free-choice Paradigm\" (Chen, Risen, 2010), the researchers criticized the free-choice paradigm as invalid, because the rank-choice-rank method is inaccurate for the study of cognitive dissonance. The design of such research models relies upon the assumption that, if the experimental subject rates options differently in the second survey, then the attitudes of the subject towards the options have changed; yet there are other reasons why an experimental subject might produce different rankings in the second survey, such as indifference between choices."} {"text":"Although the results of some follow-up studies (e.g. \"Do Choices Affect Preferences? 
Some Doubts and New Evidence\", 2013) presented evidence of the unreliability of the rank-choice-rank method, the results of studies such as \"Neural Correlates of Cognitive Dissonance and Choice-induced Preference Change\" (2010) have not found the rank-choice-rank method to be invalid, and indicate that making a choice can change the preferences of a person."} {"text":"Festinger's original theory did not seek to explain how dissonance works. Why is inconsistency so aversive? The action\u2013motivation model seeks to answer this question. It proposes that inconsistencies in a person's cognition cause mental stress, because psychological inconsistency interferes with the person's functioning in the real world. Among the ways for coping, the person can choose to exercise a behavior that is inconsistent with their current attitude (a belief, an ideal, a value system), but later try to alter that belief to be consonant with the current behavior; the cognitive dissonance occurs when the person's cognition does not match the action taken. If the person changes the current attitude after the dissonance occurs, he or she is then obligated to commit to that course of behavior."} {"text":"Cognitive dissonance produces a state of negative affect, which motivates the person to reconsider the causative behavior in order to resolve the psychological inconsistency that caused the mental stress. As the afflicted person works towards a behavioral commitment, the motivational process is then activated in the left frontal cortex of the brain."} {"text":"Technological advances are allowing psychologists to study the neural mechanisms of cognitive dissonance."} {"text":"The study \"Neural Activity Predicts Attitude Change in Cognitive Dissonance\" (Van Veen, Krug, et al., 2009) identified the neural bases of cognitive dissonance with functional magnetic resonance imaging (fMRI); the neural scans of the participants replicated the basic findings of the induced-compliance paradigm. 
While in the fMRI scanner, some of the study participants argued that the uncomfortable, mechanical environment of the MRI machine nevertheless was a pleasant experience; participants from the experimental group said they enjoyed the mechanical environment of the fMRI scanner more than did the control-group participants (paid actors) who argued that the experimental environment was uncomfortable."} {"text":"The results of the neural scan experiment support the original theory of cognitive dissonance proposed by Festinger in 1957, and also support the psychological conflict theory, whereby counter-attitudinal response activates the dorsal anterior cingulate cortex and the anterior insular cortex; the degree of activation of those regions of the brain is predicted by the degree of change in the psychological attitude of the person."} {"text":"As an application of the free-choice paradigm, the study \"How Choice Reveals and Shapes Expected Hedonic Outcome\" (2009) indicates that after making a choice, neural activity in the striatum changes to reflect the person's new evaluation of the choice-object; neural activity increased if the object was chosen and decreased if the object was rejected. Moreover, studies such as \"The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making\" (2010) and \"How Choice Modifies Preference: Neural Correlates of Choice Justification\" (2011) confirm the neural bases of the psychology of cognitive dissonance."} {"text":"\"The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making\" (Jarcho, Berkman, Lieberman, 2010) applied the free-choice paradigm to fMRI examination of the brain's decision-making process whilst the study participant actively tried to reduce cognitive dissonance. 
The results indicated that the active reduction of psychological dissonance increased neural activity in the right-inferior frontal gyrus, in the medial fronto-parietal region, and in the ventral striatum, and that neural activity decreased in the anterior insula. The neural activity of rationalization occurs within seconds, without conscious deliberation on the part of the person, and the brain engages in emotional responses whilst making decisions."} {"text":"The results reported in \"The Origins of Cognitive Dissonance: Evidence from Children and Monkeys\" (Egan, Santos, Bloom, 2007) indicated that there might be an evolutionary force behind the reduction of cognitive dissonance in the actions of pre-school-age children and Capuchin monkeys when offered a choice between two like options, decals and candies. The groups then were offered a new choice, between the choice-object not chosen and a novel choice-object that was as attractive as the first object. The resulting choices of the human and simian subjects accorded with the theory of cognitive dissonance when the children and the monkeys each chose the novel choice-object instead of the choice-object not chosen in the first selection, despite every object having the same value."} {"text":"The hypothesis of \"An Action-based Model of Cognitive-dissonance Processes\" (Harmon-Jones, Levy, 2015) proposed that psychological dissonance occurs consequent to the stimulation of thoughts that interfere with a goal-driven behavior. Researchers mapped the neural activity of participants performing tasks that provoked psychological stress when engaged in contradictory behaviors. A participant read aloud the printed name of a color. To test for the occurrence of cognitive dissonance, the name of the color was printed in a color different from the word read aloud by the participant. 
As a result, the participants experienced increased neural activity in the anterior cingulate cortex when the experimental exercises provoked psychological dissonance."} {"text":"Artificial neural network models of cognition provide methods for integrating the results of empirical research about cognitive dissonance and attitudes into a single model that explains the formation of psychological attitudes and the mechanisms to change such attitudes. Among the artificial neural-network models that predict how cognitive dissonance might influence a person's attitudes and behavior are:"} {"text":"Some researchers remain skeptical of the theory. Charles G. Lord wrote a paper questioning whether the theory of cognitive dissonance had been tested sufficiently and whether accepting it had been a mistake, claiming that theorists had not taken all the relevant factors into account and had reached a conclusion without examining the problem from all angles."} {"text":"In typography, a bouma is the shape of a cluster of letters, often a whole word. It is a reduction of \"Bouma-shape\", which was probably first used in Paul Saenger's 1997 book \"Space between Words: The Origins of Silent Reading\", although Saenger himself attributes it to Insup & Maurice Martin Taylor. Its origin is in reference to hypotheses by the prominent vision researcher Herman Bouma, who studied the shapes and confusability of letters and letter strings."} {"text":"Some typographers believe that, when reading, people can recognize words by deciphering boumas, not just individual letters, or that the shape of the word is related to readability and\/or legibility. The claim is that this is a natural strategy for increasing reading efficiency. However, considerable study and experimentation by cognitive psychologists led to their general acceptance of a different, and largely contradictory, theory by the end of the 1980s: parallel letterwise recognition. 
Since 2000, parallel letterwise recognition has been evangelized to typographers by Microsoft's Dr Kevin Larson, via conference presentations and a widely read article. Nonetheless, ongoing research since 2009 has often supported the bouma model of reading."} {"text":"Intuition's effect on decision-making is distinct from insight, which requires time to mature. A month spent pondering a math problem may lead to a gradual understanding of the answer, even if one does not know where that understanding came from. Intuition, in contrast, is a more instantaneous, immediate understanding upon first being confronted with the math problem. Intuition is also distinct from implicit knowledge and learning, which inform intuition but are separate concepts. Intuition is the mechanism by which implicit knowledge is made available during an instance of decision-making."} {"text":"Traditional research often points to the role of heuristics in helping people make \u201cintuitive\u201d decisions. Those following the heuristics-and-biases school of thought developed by Amos Tversky and Daniel Kahneman believe that intuitive judgments are derived from an \u201cinformal and unstructured mode of reasoning\u201d that ultimately does not include any methodical calculation. Tversky and Kahneman identify availability, representativeness, and anchoring\/adjustment as three heuristics that influence many intuitive judgments made under uncertain conditions."} {"text":"The heuristics-and-biases approach looks at patterns of biased judgments to distinguish heuristics from normative reasoning processes. Early studies supporting this approach associated each heuristic with a set of biases. These biases were \u201cdepartures from the normative rational theory\u201d and helped identify the underlying heuristics. Use of the availability heuristic, for example, leads to error whenever the memory retrieved is a biased recollection of actual frequency. 
This can be attributed to an individual's tendency to remember dramatic cases. Heuristic processes are quick intuitive responses to basic questions, such as those about frequency."} {"text":"Intuitive decision-making can be contrasted with deliberative decision-making, which is based on cognitive factors like beliefs, arguments, and reasons, commonly referred to as one's explicit knowledge. Intuitive decision-making is based on implicit knowledge relayed to the conscious mind at the point of decision through affect or unconscious cognition. Some studies also suggest that intuitive decision-making relies more on the mind's parallel processing functions, while deliberative decision-making relies more on sequential processing."} {"text":"Prevalence of intuitive judgment and measurement of use."} {"text":"Although people use intuitive and deliberative decision-making modes interchangeably, individuals value the decisions they make more when they are allowed to make them using their preferred style. This specific kind of regulatory fit is referred to as decisional fit. The emotions people experience after a decision is made tend to be more pleasant when the preferred style is used, regardless of the decision outcome. Some studies suggest that the mood with which the subject enters the decision-making process can also affect the style they choose to employ: sad people tend to be more deliberative, while people in a happy mood rely more on intuition."} {"text":"The Preference for Intuition and Deliberation Scale developed by Cornelia Betsch in 2004 measures propensity toward intuitiveness. The scale defines preference for intuition as a tendency to use affect (\u201cgut-feel\u201d) as a basis for decision-making instead of cognition. 
The Myers-Briggs Type Indicator is also sometimes used."} {"text":"Researchers have also explored the efficacy of intuitive judgments and the debate on the function of intuition versus analysis in decisions that require specific expertise, as in management of organizations. In this context, intuition is interpreted as an \u201cunconscious expertise\u201d rather than a traditionally purely heuristic response. Research suggests that this kind of intuition is based on a \u201cbroad constellation of past experiences, knowledge, skills, perceptions and feelings.\u201d The efficacy of intuitive decision-making in the management environment is largely dependent on the decision context and the decision maker's expertise."} {"text":"Traditional literature attributes the role of judgment processes in risk perception and decision-making to cognition rather than emotion. However, more recent studies suggest a link between emotion and cognition as it relates to decision-making in high-risk environments. Studies of decision-making in high-risk environments suggest that individuals who self-identify as intuitive decision-makers tend to make faster decisions that imply greater deviation from risk neutrality than those who prefer the deliberative style. For example, risk-averse intuitive decision-makers will choose not to participate in a dangerous event more quickly than deliberative decision-makers, but will choose not to participate in more instances than their deliberative counterparts."} {"text":"Strategic decisions are usually made by top management in organizations, and they usually affect the future of the organization. Rationality has been the guiding and justified way to make such decisions because rational decisions are based on facts. 
Intuition in strategic decision-making is less well examined; depending on the case, it can be described as a manager's know-how, expertise, or simply a gut feeling or hunch."} {"text":"In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education, though implicit processes or attitudes usually take a long time to change, with the forming of new habits. The approach has also been linked with economics via prospect theory and behavioral economics, and increasingly in sociology through cultural analysis. Dual process theories can be found in social, personality, cognitive, and clinical psychology."} {"text":"The foundations of dual process theory likely come from William James. He believed that there were two different kinds of thinking: associative and true reasoning. James theorized that empirical thought was used for things like art and design work. For James, images and thoughts of past experiences would come to mind, providing ideas of comparison or abstractions. He claimed that associative knowledge was only from past experiences, describing it as \"only reproductive\". James believed that true reasoning could enable overcoming \u201cunprecedented situations\u201d just as a map could enable navigating past obstacles."} {"text":"There are various dual process theories that were produced after William James's work. Dual process models are very common in the study of social psychological variables, such as attitude change. Examples include Petty and Cacioppo's elaboration likelihood model (explained below) and Chaiken's heuristic systematic model. According to these models, persuasion may occur after either intense scrutiny or extremely superficial thinking. 
In cognitive psychology, attention and working memory have also been conceptualized as relying on two distinct processes. Whether the focus is on social psychology or cognitive psychology, many dual process theories have been produced; the following offer a glimpse of the variety that can be found."} {"text":"Peter Wason and Jonathan Evans suggested dual process theory in 1974. In Evans' later theory, there are two distinct types of processes: heuristic processes and analytic processes. He suggested that during heuristic processes, an individual chooses which information is relevant to the current situation. Relevant information is then processed further whereas irrelevant information is not. Following the heuristic processes come analytic processes. During analytic processes, the relevant information that is chosen during the heuristic processes is then used to make judgments about the situation."} {"text":"Richard E. Petty and John Cacioppo proposed a dual process theory in the field of social psychology in 1986. Their theory is called the elaboration likelihood model of persuasion. In their theory, there are two different routes to persuasion in making decisions. The first route is known as the central route and this takes place when a person is thinking carefully about a situation, elaborating on the information they are given, and creating an argument. This route occurs when an individual's motivation and ability are high. The second route is known as the peripheral route and this takes place when a person is not thinking carefully about a situation and uses shortcuts to make judgments. This route occurs when an individual's motivation or ability is low."} {"text":"Steven Sloman produced another interpretation of dual processing in 1996. He believed that associative reasoning takes stimuli and divides them into logical clusters of information based on statistical regularity. 
He proposed that associations are formed in proportion to the similarity of past experiences, relying on temporal and similarity relations to determine reasoning rather than on an underlying mechanical structure. The other reasoning process, in Sloman's opinion, was the rule-based system. That system functioned on logical structure and variables based upon rule systems to come to conclusions different from those of the associative system. He also believed that the rule-based system had control over the associative system, though it could only suppress it. This interpretation corresponds well to earlier work on computational models of dual processes of reasoning."} {"text":"Daniel Kahneman provided further interpretation in 2003 by differentiating the two styles of processing more, calling them intuition and reasoning. Intuition (or System 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or System 2) was slower and much more volatile, being subject to conscious judgments and attitudes."} {"text":"Fritz Strack and Roland Deutsch proposed another dual process theory in the field of social psychology in 2004. According to their model, there are two separate systems: the reflective system and the impulsive system. In the reflective system, decisions are made using knowledge and the information coming in from the situation is processed. On the other hand, in the impulsive system, decisions are made using schemes and there is little or no thought required."} {"text":"Ron Sun proposed a dual-process model of learning (both implicit learning and explicit learning). The model (named CLARION) re-interpreted voluminous behavioral data in psychological studies of implicit learning and skill acquisition in general. 
The resulting theory is two-level and interactive, based on the idea of the interaction of one-shot explicit rule learning (i.e., explicit learning) and gradual implicit tuning through reinforcement (i.e. implicit learning), and it accounts for many previously unexplained cognitive data and phenomena based on the interaction of implicit and explicit learning."} {"text":"Using a somewhat different approach, Allan Paivio has developed a dual-coding theory of information processing. According to this model, cognition involves the coordinated activity of two independent, but connected systems, a nonverbal system and a verbal system that is specialized to deal with language. The nonverbal system is hypothesized to have developed earlier in evolution. Both systems rely on different areas of the brain. Paivio has reported evidence that nonverbal, visual images are processed more efficiently and are approximately twice as memorable. Additionally, the verbal and nonverbal systems are additive, so one can improve memory by using both types of information during learning."} {"text":"Dual-process accounts of reasoning postulate that there are two systems or minds in one brain. A current theory is that there are two cognitive systems underlying thinking and reasoning and that these different systems were developed through evolution. These systems are often referred to as \"implicit\" and \"explicit\" or by the more neutral \"System 1\" and \"System 2\", as coined by Keith Stanovich and Richard West."} {"text":"The systems have multiple names by which they can be called, as well as many different properties."} {"text":"One takeaway from the psychological research on dual process theory is that our System 1 (intuition) is more accurate in areas where we\u2019ve gathered a lot of data with reliable and fast feedback, like social dynamics."} {"text":"System 2 is evolutionarily recent and specific to humans. 
It is also known as the \"explicit\" system, the \"rule-based\" system, the \"rational\" system, or the \"analytic\" system. It performs slower, sequential thinking. It is domain-general, performed in the central working memory system. Because of this, it has a limited capacity and is slower than System 1, and its capacity correlates with general intelligence. It is known as the rational system because it reasons according to logical standards. Some overall properties associated with System 2 are that it is rule-based, analytic, controlled, demanding of cognitive capacity, and slow."} {"text":"Unconscious thought theory is the counterintuitive and contested view that the unconscious mind is adapted to highly complex decision making. Where most dual system models define complex reasoning as the domain of effortful conscious thought, UTT argues that complex issues are best dealt with unconsciously."} {"text":"Terror management theory and the dual process model."} {"text":"According to psychologists Pyszczynski, Greenberg, and Solomon, the dual process model, in relation to terror management theory, identifies two systems by which the brain manages fear of death: distal and proximal. Distal defenses fall under the System 1 category because they are unconscious, whereas proximal defenses fall under the System 2 category because they operate with conscious thought."} {"text":"Habituation can be described as decreased response to a repeated stimulus. According to Groves and Thompson, the process of habituation also mimics a dual process. The dual process theory of behavioral habituation relies on two underlying (non-behavioral) processes: depression and facilitation, with the relative strength of one over the other determining whether habituation or sensitization is seen in the behavior. Habituation subconsciously weakens the intensity of a repeated stimulus over time. As a result, a person will give the stimulus less conscious attention over time. 
Conversely, sensitization subconsciously strengthens a stimulus over time, giving the stimulus more conscious attention. Though these two systems are not both conscious, they interact to help people understand their surroundings by strengthening some stimuli and diminishing others."} {"text":"A belief bias is the tendency to judge the strength of arguments based on the plausibility of their conclusion rather than on how strongly they support that conclusion. Some evidence suggests that this bias results from competition between logical (System 2) and belief-based (System 1) processes during evaluation of arguments."} {"text":"Studies on the belief-bias effect were first designed by Jonathan Evans to create a conflict between logical reasoning and prior knowledge about the truth of conclusions. Participants are asked to evaluate syllogisms that are: valid arguments with believable conclusions, valid arguments with unbelievable conclusions, invalid arguments with believable conclusions, and invalid arguments with unbelievable conclusions. Participants are told to agree only with conclusions that logically follow from the premises given. The results suggest that when the conclusion is believable, people erroneously accept invalid conclusions as valid more often than they accept invalid arguments supporting unpalatable conclusions. This is taken to suggest that System 1 beliefs are interfering with the logic of System 2."} {"text":"Vinod Goel and others produced neuropsychological evidence for dual-process accounts of reasoning using fMRI studies. They provided evidence that anatomically distinct parts of the brain were responsible for the two different kinds of reasoning. They found that content-based reasoning caused left temporal hemisphere activation, whereas abstract formal problem reasoning activated the parietal system. 
They concluded that different kinds of reasoning, depending on the semantic content, activated one of two different systems in the brain."} {"text":"A similar study incorporated fMRI during a belief-bias test. They found that different mental processes were competing for control of the response to the problems given in the belief-bias test. The prefrontal cortex was critical in detecting and resolving conflicts, which are characteristic of System 2, and had already been associated with System 2. The ventral medial prefrontal cortex, known to be associated with the more intuitive or heuristic responses of System 1, was the area in competition with the prefrontal cortex."} {"text":"Matching bias is a non-logical heuristic. It is described as a tendency to treat information that lexically matches the content of the statement being reasoned about as relevant, and, conversely, to ignore relevant information that does not match. It mostly affects problems with abstract content. It does not involve prior knowledge and beliefs, but it is still seen as a System 1 heuristic that competes with the logical System 2."} {"text":"Studies have shown that people can be trained to inhibit matching bias, which provides neuropsychological evidence for the dual-process theory of reasoning. Comparing trials before and after the training shows evidence of a forward shift in the activated brain area. Pre-test results showed activation in locations along the ventral pathway, and post-test results showed activation around the ventro-medial prefrontal cortex and anterior cingulate. Matching bias has also been shown to generalise to syllogistic reasoning."} {"text":"Dual-process theorists claim that System 2, a general-purpose reasoning system, evolved late and worked alongside the older autonomous sub-systems of System 1. The success of \"Homo sapiens\" is taken as evidence of their higher cognitive abilities relative to other hominids. 
Mithen theorizes that the increase in cognitive ability occurred 50,000 years ago, when representational art, imagery, and the design of tools and artefacts are first documented. He hypothesizes that this change was due to the adaptation of System 2."} {"text":"Most evolutionary psychologists do not agree with dual-process theorists. They claim that the mind is modular and domain-specific, and thus they disagree with the theory of the general reasoning ability of System 2. They have difficulty agreeing that there are two distinct ways of reasoning and that one is evolutionarily old and the other is new. To ease this discomfort, it is proposed that once System 2 evolved, it became a 'long leash' system without much genetic control, which allowed humans to pursue their individual goals."} {"text":"Issues with the dual-process account of reasoning."} {"text":"The dual-process account of reasoning is an old theory, as noted above. According to Evans, however, it has adapted itself from the old logicist paradigm to new theories that apply to other kinds of reasoning as well. The theory also seems more influential now than in the past, which is questionable. Evans outlined five \"fallacies\":"} {"text":"Another argument against dual-process accounts of reasoning, outlined by Osman, is that the proposed dichotomy of System 1 and System 2 does not adequately accommodate the range of processes accomplished. Moshman proposed that there should be four possible types of processing as opposed to two: implicit heuristic processing, implicit rule-based processing, explicit heuristic processing, and explicit rule-based processing. 
Another fine-grained division is as follows: implicit action-centered processes, implicit non-action-centered processes, explicit action-centered processes, and explicit non-action-centered processes (that is, a four-way division reflecting both the implicit-explicit distinction and the procedural-declarative distinction)."} {"text":"In response to the question of whether there are dichotomous processing types, many have instead proposed a single-system framework which incorporates a continuum between implicit and explicit processes."} {"text":"According to Charles Brainerd and Valerie Reyna's fuzzy-trace theory of memory and reasoning, people have two memory representations: verbatim and gist. Verbatim is memory for surface information (e.g., the words in this sentence), whereas gist is memory for semantic information (e.g., the meaning of this sentence)."} {"text":"This dual process theory posits that we encode, store, retrieve, and forget the information in these two traces of memory separately and completely independently of each other. Furthermore, the two memory traces decay at different rates: verbatim decays quickly, while gist lasts longer."} {"text":"In terms of reasoning, fuzzy-trace theory posits that as we mature, we increasingly rely on gist information over verbatim information. Evidence for this lies in framing experiments, where framing effects become stronger when verbatim information (percentages) is replaced with gist descriptions. Other experiments rule out predictions of prospect theory (extended and original) as well as other current theories of judgment and decision making."} {"text":"Semantic memory builds schemas and scripts. Semantic memory is the knowledge that people gain from experiencing events in the everyday world; this information is then organized into concepts that people can understand in their own way. 
Semantic memory relates to scripts because scripts are made through the knowledge that one gains through these everyday experiences and habituation."} {"text":"Behavioral scripts that people are taught allow them to make realistic assumptions about situations, places, and people. These assumptions stem from what are known as schemas. Schemas make our environments easier to understand, and therefore people are able to familiarize themselves with what is around them. When people become comfortable with what they find familiar, they are more likely to remember events, people, or places that deviate from their initial thought or script."} {"text":"Some people may have a tendency to habituate behavioral scripts in a manner that subliminally limits consciousness. This can negatively influence the subconscious mind and, subsequently, can negatively affect perceptions, judgments, values, beliefs, cognition, and behavior. For example, over-reliance upon behavioral scripts, combined with social norms that encourage an individual to use those scripts, may lead one to stereotype and develop a prejudiced attitude toward others based on socioeconomic status, ethnicity, race, etc."} {"text":"Some applied behavior analysts even use scripts to train new skills, and 20 years of research supports script use as an effective way to build new language, social, and activity routines for adults and children with developmental disabilities. As language scripts are faded, efforts are made to help the scripts recombine in order to approximate more natural language."} {"text":"Much of the development of scripts first addresses language and how it influences what we know and understand. Many psychologists have used the study of language to develop theories about concepts and scripts. In particular, researchers recognize that semantic memory development is mostly possible through verbal-linguistic stimuli. 
People constantly use language and memory to interpret what experiences or other people mean to them or how they relate to them. Here, language influences the scripts people use because of its relationship to semantic memory."} {"text":"Depressive realism is the hypothesis, developed by Lauren Alloy and Lyn Yvonne Abramson, that depressed individuals make more realistic inferences than non-depressed individuals. Although depressed individuals are thought to have a negative cognitive bias that results in recurrent, negative automatic thoughts, maladaptive behaviors, and dysfunctional world beliefs, depressive realism argues not only that this negativity may reflect a more accurate appraisal of the world but also that non-depressed individuals' appraisals are positively biased."} {"text":"When participants were asked to press a button and rate the control they perceived they had over whether or not a light turned on, depressed individuals made more accurate ratings of control than non-depressed individuals. Among participants asked to complete a task and rate their performance without any feedback, depressed individuals made more accurate self-ratings than non-depressed individuals. For participants asked to complete a series of tasks, given feedback on their performance after each task, and who self-rated their overall performance after completing all the tasks, depressed individuals were again more likely to give an accurate self-rating than non-depressed individuals. When asked to evaluate their performance both immediately and some time after completing a task, depressed individuals made accurate appraisals both immediately and after time had passed."} {"text":"In a functional magnetic resonance imaging study of the brain, depressed patients were shown to be more accurate in their causal attributions of positive and negative social events than non-depressed participants, who demonstrated a positive bias. 
This difference was also reflected in the differential activation of the fronto-temporal network, with higher activation for non-self-serving attributions in non-depressed participants and for self-serving attributions in depressed patients, and in reduced coupling of the dorsomedial prefrontal cortex seed region and the limbic areas when depressed patients made self-serving attributions."} {"text":"When asked to rate both their performance and the performance of others, non-depressed individuals demonstrated a positive bias when rating themselves but no bias when rating others. Depressed individuals, conversely, showed no bias when rating themselves but a positive bias when rating others."} {"text":"When assessing participant thoughts in public versus private settings, the thoughts of non-depressed individuals were more optimistic in public than in private, while depressed individuals were less optimistic in public."} {"text":"When asked to rate their performance immediately after a task and after some time had passed, depressed individuals were more accurate when they rated themselves immediately after the task but were more negative after time had passed, whereas non-depressed individuals were positive both immediately after and some time after."} {"text":"Although depressed individuals make accurate judgments about having no control in situations where they in fact have no control, this appraisal also carries over to situations where they do have control, suggesting that the depressed perspective is not more accurate overall."} {"text":"One study suggested that in real-world settings, depressed individuals are actually less accurate and more overconfident in their predictions than their non-depressed peers. 
Participants' attributional accuracy may also be related more to their overall attributional style than to the presence and severity of their depressive symptoms."} {"text":"Some have argued that the evidence is not conclusive because no standard for reality exists, the diagnoses are dubious, and the results may not apply to the real world. Because many studies rely on self-report of depressive symptoms, and self-reports are known to be biased, the diagnosis of depression in these studies may not be valid, necessitating the use of other objective measures. Because most of these studies use designs that do not necessarily approximate real-world phenomena, the external validity of the depressive realism hypothesis is unclear. There is also concern that the depressive realism effect is merely a byproduct of the depressed person being in a situation that agrees with their negative bias."} {"text":"An intrusive thought is an unwelcome, involuntary thought, image, or unpleasant idea that may become an obsession, is upsetting or distressing, and can feel difficult to manage or eliminate. When such thoughts are associated with obsessive-compulsive disorder (OCD), depression, body dysmorphic disorder (BDD), and sometimes attention-deficit hyperactivity disorder (ADHD), the thoughts may become paralyzing, anxiety-provoking, or persistent. Intrusive thoughts may also be associated with episodic memory, unwanted worries or memories from OCD, post-traumatic stress disorder, other anxiety disorders, eating disorders, or psychosis. Intrusive thoughts, urges, and images are of inappropriate things at inappropriate times, and generally have aggressive, sexual, or blasphemous themes."} {"text":"Many people experience the type of bad or unwanted thoughts that people with more troubling intrusive thoughts have, but most people can dismiss these thoughts. For most people, intrusive thoughts are a \"fleeting annoyance\". 
Psychologist Stanley Rachman presented a questionnaire to healthy college students and found that virtually all said they had these thoughts from time to time, including thoughts of sexual violence, sexual punishment, \"unnatural\" sex acts, painful sexual practices, blasphemous or obscene images, thoughts of harming elderly people or someone close to them, violence against animals or towards children, and impulsive or abusive outbursts or utterances. Such thoughts are universal among humans, and have \"almost certainly always been a part of the human condition\"."} {"text":"When intrusive thoughts occur with obsessive-compulsive disorder (OCD), patients are less able to ignore the unpleasant thoughts and may pay undue attention to them, causing the thoughts to become more frequent and distressing. The suppression of intrusive thoughts often causes these thoughts to become more intense and persistent. The thoughts may become obsessions that are paralyzing, severe, and constantly present; these might involve topics such as violence, sex, or religious blasphemy, to name a few examples. Distinguishing them from the normal intrusive thoughts experienced by many people, the intrusive thoughts associated with OCD may be anxiety-provoking, irrepressible, and persistent."} {"text":"Intrusive thoughts may involve violent obsessions about hurting others or oneself. They can be related to primarily obsessional obsessive-compulsive disorder. These thoughts can include harming a child; jumping from a bridge, mountain, or the top of a tall building; urges to jump in front of a train or automobile; and urges to push another in front of a train or automobile. Rachman's survey of healthy college students found that virtually all of them had intrusive thoughts from time to time, including:"} {"text":"These thoughts are part of being human, and need not ruin the quality of life. 
Treatment is available when the thoughts are associated with OCD and become persistent, severe, or distressing."} {"text":"A variant of aggressive intrusive thoughts is L'appel du vide, or the call of the void. Sufferers of \"L'appel du vide\" generally describe the condition as manifesting in certain situations, normally as a wish or brief desire to jump from a high location."} {"text":"Sexual obsession involves intrusive thoughts or images of \"kissing, touching, fondling, oral sex, anal sex, intercourse, and rape\" with \"strangers, acquaintances, parents, children, family members, friends, coworkers, animals and religious figures\", involving \"heterosexual or homosexual content\" with persons of any age."} {"text":"Common sexual themes for intrusive thoughts in men involve \u201c(a) having sex in a public place, (b) people I come in contact with being naked, and (c) engaging in a sexual act with someone who is unacceptable to me because they have authority over me.\u201d Common sexual intrusive thoughts for women are (a) having sex in a public place, (b) engaging in a sexual act with someone who is unacceptable to me because they have authority over me, and (c) being sexually victimized."} {"text":"As with other unwanted intrusive thoughts or images, most people have some inappropriate sexual thoughts at times, but people with OCD may attach significance to the unwanted sexual thoughts, generating anxiety and distress. The doubt that accompanies OCD leads to uncertainty regarding whether one might act on the intrusive thoughts, resulting in self-criticism or loathing."} {"text":"One of the more common sexual intrusive thoughts occurs when an obsessive person doubts their sexual identity. 
As in the case of most sexual obsessions, sufferers may feel shame and live in isolation, finding it hard to discuss their fears, doubts, and concerns about their sexual identity."} {"text":"According to Fred Penzel, a New York psychologist, some common religious obsessions and intrusive thoughts are:"} {"text":"Suffering can be greater and treatment complicated when intrusive thoughts involve religious implications; patients may believe the thoughts are inspired by Satan, and may fear punishment from God or have magnified shame because they perceive themselves as sinful. Symptoms can be more distressing for sufferers with strong religious convictions or beliefs."} {"text":"Baer believes that blasphemous thoughts are more common in Catholics and evangelical Protestants than in other religions, whereas Jews or Muslims tend to have obsessions related more to complying with the laws and rituals of their faith, and performing the rituals perfectly. He hypothesizes that this is because what is considered inappropriate varies among cultures and religions, and intrusive thoughts torment their sufferers with whatever is considered most inappropriate in the surrounding culture."} {"text":"Adults under the age of 40 seem to be the most affected by intrusive thoughts. Individuals in this age range tend to be less experienced at coping with these thoughts and with the stress and negative affect induced by them. Younger adults also tend to have stressors specific to that period of life that can be particularly challenging, especially in the face of intrusive thoughts. When presented with an intrusive thought, however, both age groups immediately look for ways to reduce its recurrence."} {"text":"Intrusive thoughts appear to occur at the same rate across the lifespan; however, older adults seem to be less negatively affected than younger adults. 
Older adults have more experience in ignoring or suppressing strong negative reactions to stress."} {"text":"Intrusive thoughts are associated with OCD or OCPD, but may also occur with other conditions such as post-traumatic stress disorder, clinical depression, postpartum depression, and anxiety. One of these conditions is almost always present in people whose intrusive thoughts reach a clinical level of severity. A large study published in 2005 found that aggressive, sexual, and religious obsessions were broadly associated with comorbid anxiety disorders and depression. The intrusive thoughts that occur in a schizophrenic episode differ from the obsessional thoughts that occur with OCD or depression in that the intrusive thoughts of schizophrenics are false or delusional beliefs (i.e., held by the schizophrenic individual to be real and not doubted, as is typically the case with intrusive thoughts)."} {"text":"The key difference between OCD and post-traumatic stress disorder (PTSD) is that the intrusive thoughts of PTSD sufferers are of content relating to traumatic events that actually happened to them, whereas OCD sufferers have thoughts of imagined catastrophes. PTSD patients with intrusive thoughts have to sort out violent, sexual, or blasphemous thoughts from memories of traumatic experiences. When patients with intrusive thoughts do not respond to treatment, physicians may suspect past physical, emotional, or sexual abuse. If a person who has experienced trauma practices benefit finding, looking for the positive outcomes, it is suggested that they will experience less depression and higher well-being. While a person may experience less depression from benefit finding, they may also experience an increased amount of intrusive and\/or avoidant thoughts."} {"text":"One study looking at women with PTSD found that intrusive thoughts were more persistent when the individual tried to cope by using avoidance-based thought regulation strategies. 
Their findings further support the view that not all coping strategies are helpful in diminishing the frequency of intrusive thoughts."} {"text":"People who are clinically depressed may experience intrusive thoughts more intensely, and view them as evidence that they are worthless or sinful people. The suicidal thoughts that are common in depression must be distinguished from intrusive thoughts, because suicidal thoughts\u2014unlike harmless sexual, aggressive, or religious thoughts\u2014can be dangerous."} {"text":"Non-depressed individuals have been shown to have higher activation in the dorsolateral prefrontal cortex, the area of the brain that primarily functions in cognition, working memory, and planning, while attempting to suppress intrusive thoughts. This activation decreases in people at risk of or currently diagnosed with depression. When the intrusive thoughts re-emerge, non-depressed individuals also show higher activation levels in the anterior cingulate cortex, which functions in error detection, motivation, and emotional regulation, than their depressed counterparts."} {"text":"Roughly 60% of depressed individuals report experiencing bodily, visual, or auditory perceptions along with their intrusive thoughts. Experiencing these sensations alongside intrusive thoughts is correlated with more intense depressive symptoms and with a need for more intensive treatment."} {"text":"Unwanted thoughts by mothers about harming infants are common in postpartum depression. A 1999 study of 65 women with postpartum major depression by Katherine Wisner \"et al.\" found that the most frequent aggressive thought for women with postpartum depression was causing harm to their newborn infants. A study of 85 new parents found that 89% experienced intrusive images, for example, of the baby suffocating, having an accident, being harmed, or being kidnapped."} {"text":"Some women may develop symptoms of OCD during pregnancy or the postpartum period. 
Postpartum OCD occurs mainly in women who may already have OCD, perhaps in a mild or undiagnosed form. Postpartum depression and OCD may be comorbid (often occurring together). Though physicians may focus more on the depressive symptoms, one study found that obsessive thoughts accompanied postpartum depression in 57% of new mothers."} {"text":"Wisner found that common obsessions about harming babies in mothers experiencing postpartum depression include images of the baby lying dead in a casket or being eaten by sharks; stabbing the baby; throwing the baby down the stairs; or drowning or burning the baby (as by submerging it in the bathtub in the former case, or throwing it in the fire or putting it in the microwave in the latter). Baer estimates that up to 200,000 new mothers with postpartum depression each year may develop these obsessional thoughts about their babies; and because they may be reluctant to share these thoughts with a physician or family member, or may suffer in silence out of fear they could be \"crazy\", their depression can worsen."} {"text":"Intrusive fears of harming one's children can last longer than the postpartum period. A study of 100 clinically depressed women found that 41% had obsessive fears that they might harm their child, and some were afraid to care for their children. Among non-depressed mothers, the study found that 7% had thoughts of harming their child\u2014a rate that yields an additional 280,000 non-depressed mothers in the United States with intrusive thoughts about harming their children."} {"text":"Treatment for intrusive thoughts is similar to treatment for OCD. Exposure and response prevention therapy\u2014also referred to as habituation or desensitization\u2014is useful in treating intrusive thoughts. 
Mild cases can also be treated with cognitive behavioral therapy, which helps patients identify and manage the unwanted thoughts."} {"text":"Exposure therapy (or exposure and response prevention) is the practice of staying in an anxiety-provoking or feared situation until the distress or anxiety diminishes. The goal is to reduce the fear reaction, learning not to react to the bad thoughts. This is the most effective way to reduce the frequency and severity of the intrusive thoughts. The aim is to be able to \"expose yourself to the thing that most triggers your fear or discomfort for one to two hours at a time, without leaving the situation, or doing anything else to distract or comfort you.\" Exposure therapy will not completely eliminate intrusive thoughts\u2014everyone has bad thoughts\u2014but most patients find that it can decrease their thoughts sufficiently that intrusive thoughts no longer interfere with their lives."} {"text":"Cognitive behavioral therapy (CBT) is a newer therapy than exposure therapy, available for those unable or unwilling to undergo exposure therapy. Cognitive therapy has been shown to be useful in reducing intrusive thoughts, but developing a conceptualization of the obsessions and compulsions with the patient is important. One of the strategies sometimes used in cognitive behavioral therapy is mindfulness exercises. These include practices such as being aware of the thoughts, accepting the thoughts without judgement, and \u201cbeing larger than your thoughts.\u201d"} {"text":"Antidepressants or antipsychotic medications may be used for more severe cases if intrusive thoughts do not respond to cognitive behavioral or exposure therapy alone. Whether the cause of intrusive thoughts is OCD, depression, or post-traumatic stress disorder, the selective serotonin reuptake inhibitor (SSRI) drugs (a class of antidepressants) are the most commonly prescribed. 
Intrusive thoughts may occur in persons with Tourette syndrome (TS) who also have OCD; the obsessions in TS-related OCD are thought to respond to SSRI drugs as well."} {"text":"Patients with intense intrusive thoughts that do not respond to SSRIs or other antidepressants may be prescribed typical and atypical neuroleptics, including risperidone (trade name Risperdal), ziprasidone (Geodon), haloperidol (Haldol), and pimozide (Orap)."} {"text":"Studies suggest that therapeutic doses of inositol may be useful in the treatment of obsessive thoughts."} {"text":"A 2007 study found that 78% of a clinical sample of OCD patients had intrusive images. Most people who suffer from intrusive thoughts have not identified themselves as having OCD, because they may not have what they believe to be classic symptoms of OCD, such as handwashing. Yet epidemiological studies suggest that intrusive thoughts are the most common kind of OCD worldwide; if the people in the United States with intrusive thoughts gathered, they would form the fourth-largest city in the US, following New York City, Los Angeles, and Chicago."} {"text":"The prevalence of OCD in every culture studied is at least 2% of the population, and the majority of those have obsessions, or bad thoughts, only; this results in a conservative estimate of more than 2 million sufferers in the United States alone (as of 2000). One author estimates that one in 50 adults has OCD and that about 10\u201320% of these have sexual obsessions. A recent study found that 25% of 293 patients with a primary diagnosis of OCD had a history of sexual obsessions."} {"text":"Human-computer interaction plays a large part in cognitive ergonomics because much of modern life is digitalized, which has created both new problems and new solutions. Studies show that most of the problems that arise are due to the digitalization of dynamic systems, which has driven a rise in the diversity of methods for processing many streams of information. 
The changes in our socio-technical contexts adds to the stress of methods of visualization and analysis, along with the capabilities regarding cognitive perceptions by the user."} {"text":"A proposed way of expanding a users effectiveness with cognitive ergonomics is to expand the interdisciplinary connects related to normal dynamics. The method behind this is to transfer the pre-existing knowledge of the various mechanics in computers into structural patterns of the cognitive space that could be used. This will work with human factors in 1.) developing an intellectual learning support system 2.) apply a interdisciplinary methodology of training. This will help the effective interaction between the person and the computer with the strengthening of critical thinking and intuition."} {"text":"Some of the best practices for accessible content include:"} {"text":"\"Cognitive task analysis\" is a general term for the set of methods used to identify the mental demands and cognitive skills needed to complete a task. Frameworks like GOMS provide a formal set of methods for identifying the mental activities required by a task and an artifact, such as a desktop computer system. By identifying the sequence of mental activities of a user engaged in a task, cognitive ergonomics engineers can identify bottlenecks and critical paths that may present opportunities for improvement or risks (such as human error) that merit changes in training or system behavior. It is the whole study of what we know, how we think, and how we organize new information."} {"text":"As a design philosophy, cognitive ergonomics can be applied to any area where humans interact with technology. 
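The kind of cognitive task analysis that GOMS supports can be made concrete with a small sketch of the keystroke-level model (KLM), the simplest member of the GOMS family. This is only an illustrative sketch: the operator times are approximate textbook values from Card, Moran and Newell, and the two task sequences compared are hypothetical.

```python
# Keystroke-level model (KLM) sketch, the simplest member of the GOMS
# family. Operator times are approximate textbook values; the two task
# sequences below are invented for illustration.
KLM_SECONDS = {
    "K": 0.20,   # press a key or mouse button
    "P": 1.10,   # point at a target with a mouse
    "H": 0.40,   # move hands between keyboard and mouse
    "M": 1.35,   # mental preparation for the next step
}

def estimate(ops):
    """Predicted execution time (seconds) for a sequence of KLM operators."""
    return sum(KLM_SECONDS[op] for op in ops)

# Hypothetical comparison of two ways to invoke "save as" in an editor:
menu_route = ["M", "H", "P", "K", "P", "K"]  # think, reach for mouse, two aimed clicks
shortcut   = ["M", "K", "K"]                 # think, then a two-key chord
```

Comparing `estimate(menu_route)` with `estimate(shortcut)` is the kind of bottleneck analysis the text describes: the model predicts the menu route takes roughly two and a half times as long, which might justify exposing a shortcut in the interface.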
Applications include aviation (e.g., cockpit layouts), transportation (e.g., collision avoidance), the health care system (e.g., drug bottle labelling), mobile devices, appliance interface design, product design, and nuclear power plants."} {"text":"The focus of cognitive ergonomics is on designs that are simple, clear, \"easy to use\", and accessible to everyone. Software is designed to help make better use of these principles; the aim is to design icons and visual cues that are \"easy\" to use and understand by all."} {"text":"Inhibitory control, also known as response inhibition, is a cognitive process \u2013 and, more specifically, an executive function \u2013 that permits an individual to inhibit their impulses and natural, habitual, or dominant behavioral responses to stimuli (prepotent responses) in order to select a more appropriate behavior that is consistent with completing their goals. Self-control is an important aspect of inhibitory control. For example, successfully suppressing the natural behavioral response to eat cake when one is craving it while dieting requires the use of inhibitory control."} {"text":"The prefrontal cortex, caudate nucleus, and subthalamic nucleus are known to regulate inhibitory control cognition. Inhibitory control is impaired in both addiction and attention deficit hyperactivity disorder. In healthy adults and ADHD individuals, inhibitory control improves over the short term with low (therapeutic) doses of methylphenidate or amphetamine. Inhibitory control may also be improved over the long term via consistent aerobic exercise."} {"text":"An inhibitory control test is a neuropsychological test that measures an individual's ability to override their natural, habitual, or dominant behavioral response to a stimulus in order to implement more adaptive behaviors. 
Some of the neuropsychological tests that measure inhibitory control include the Stroop task, go\/no-go task, Simon task, Flanker task, antisaccade tasks, delay of gratification tasks, and stop-signal tasks."} {"text":"Females tend to have a greater basal capacity to exert inhibitory control over undesired or habitual behaviors and respond differently to modulatory environmental contextual factors relative to males. For example, listening to music tends to significantly improve the rate of response inhibition in females, but reduces the rate of response inhibition in males."} {"text":"Cultural learning is made possible by a deep understanding of social cognition. Humans have the unique capacity to identify and relate to others and view them as intentional beings. Humans are able to understand that others have intentions, goals, desires, and beliefs. It is this deep understanding, this cognitive adaptation, that allows humans to learn from and with others through cultural transmission (Tomasello, 1999)."} {"text":"Dogs have also shown some interesting but limited abilities at social cognition in a series of studies by Hare and Tomasello (2005). Dogs have the ability to read human social cues, even to a greater extent than chimpanzees. Dogs are able to respond to human pointing, the human gaze, and subtle human nods without training. Researchers now believe that these abilities are the result of convergent evolution between humans and dogs through domestication. Research with domesticated foxes has shown that the likely mechanism for this convergent evolution was the selection of tame behavior in dogs. This finding suggests that perhaps humans had to evolve a propensity to cooperate before cultural evolution was able to take place (Hare & Tomasello, 2005)."} {"text":"Sociogenesis refers to collaborative inventiveness. 
It is the process by which two or more humans collectively interact and invent something new which could not have been developed by one individual alone, such as language and mathematics (Tomasello, 1999). Sociogenesis can occur across time, or simultaneously (Tomasello, 1999). Sociogenesis across time occurs through the ratchet effect, when one individual modifies something they had previously learned through others. Over time, ideas, tools, and language advance. Simultaneous sociogenesis occurs when two or more individuals work together at the same time and develop something new."} {"text":"Hare, B., & Tomasello, M. (2005). Human-like social skills in dogs? Trends in Cognitive Sciences, Vol. 9 (9), 439-444."} {"text":"Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states \u2013 the question is which ones and to what extent. Trends in Cognitive Sciences, Vol. 7 (4), 153-156. Tomasello (1999). The cultural origins of human cognition. Cambridge, Massachusetts: Harvard University Press (Chapters 1 & 2, pp.\u00a01\u201355)."} {"text":"Lexicalization is the process of adding words, set phrases, or word patterns to a language's lexicon."} {"text":"Whether or not word formation and lexicalization refer to the same process is a source of controversy within the field of linguistics. Most linguists assert that there is a distinction, but there are many ideas of what the distinction is. Lexicalization may be simple, for example borrowing a word from another language, or more involved, as in calque or loan translation, wherein a foreign phrase is translated literally, as in \"march\u00e9 aux puces\", or in English, flea market."} {"text":"Other mechanisms include compounding, abbreviation, and blending. Particularly interesting from the perspective of historical linguistics is the process by which \"ad hoc\" phrases become set in the language, and eventually become new words (see lexicon). 
Lexicalization contrasts with grammaticalization, and the relationship between the two processes is subject to some debate."} {"text":"In psycholinguistics, lexicalization is the process of going from meaning to sound in speech production. The most widely accepted model of speech production holds that this process, in which an underlying concept is converted into a word, involves at least two stages."} {"text":"First, the semantic form (which is specified for meaning) is converted into a lemma, which is an abstract form specified for semantic and syntactic information (how a word can be used in a sentence), but not for phonological information (how a word is pronounced). The next stage is the lexeme, which is phonologically specified."} {"text":"Some recent work has challenged this model, suggesting for example that there is no lemma stage, and that syntactic information is retrieved in the semantic and phonological stages."} {"text":"Psychoanalytic conceptions of language refers to the intersection of psychoanalytic theory with linguistics and psycholinguistics. Language has been an integral component of the psychoanalytic framework since its inception, as evidenced by the fact that Anna O. (pseud. for Bertha Pappenheim), whose treatment via the cathartic method influenced the later development of psychoanalytic therapy, referred to her method of treatment as the \"talking cure\" (Freud & Breuer, 1895; de Mijolla, 2005)."} {"text":"Language is relevant to psychoanalysis in two key respects. First, it is important with respect to the therapeutic process, serving as the principal means by which unconscious mental processes are given expression through the verbal exchange between analyst and patient (e.g., free association, dream analysis, transference-countertransference dynamics). Secondly, psychoanalytic theory is linked in many ways to linguistic phenomena, such as parapraxes and the telling of jokes. 
According to Freud (1915, 1923), the essential difference between modes of thought characterized by \"primary\" (irrational, governed by the id) as opposed to \"secondary\" (logical, governed by the ego and external reality) thought processes is one of preverbal vs. verbal ways of conceptualizing the world."} {"text":"According to Freud (1940), \"...the function of speech\u2026brings material in the ego into a firm connection with the residues of visual, but more particularly of auditory, perceptions\" (p.\u00a035). In other words, the mind is able to assimilate perceptual information through language - we are able to make sense of our perceptions by thinking about them in the form of words."} {"text":"One of Freud's earliest papers, \"On Aphasia\" (1891), was concerned with speech disorders, the neurological mechanisms of which had been investigated earlier in the century by Paul Broca and Carl Wernicke. Freud was skeptical of Wernicke's findings, citing a paucity of clinical observation as his reason. Although he conceded that language is linked to neurological processes, Freud repudiated a model of localization of brain function, according to which specific regions of the brain are responsible for certain cognitive functions. In contrast to most of his contemporaries, Freud rejected the notion that in most cases pathological phenomena are manifestations of physiological dysfunctions (Lanteri-Laura, 2005a)."} {"text":"In this joke, we see multiple uses of the same phrase with words in a different order, as well as the double meaning of the words \"lay\" and \"lain.\" Ostensibly about a couple's financial status, this joke is effective because it allows for the overcoming of inhibition and the indirect expression of sexual impulses through the double meaning of words."} {"text":"The new journal \"Language and Psychoanalysis\" is devoted to research at the intersection between psychoanalysis and linguistics."} {"text":"2. 
http:\/\/criminalisticassociation.org\/Dokumenti\/KTIP_12_20201219201241.pdf#page=8 (SCAN revisited through linguistic psychoanalysis)"} {"text":"A garden-path sentence is a grammatically correct sentence that starts in such a way that a reader's most likely interpretation will be incorrect; the reader is lured into a parse that turns out to be a dead end or yields a clearly unintended meaning. \"Garden path\" refers to the saying \"to be led down [or up] the garden path\", meaning to be deceived, tricked, or seduced. In \"A Dictionary of Modern English Usage\", Fowler describes such sentences as unwittingly laying a \"false scent\"."} {"text":"Such a sentence leads the reader toward a seemingly familiar meaning that is actually not the one intended. It is a special type of sentence that creates a momentarily ambiguous interpretation because it contains a word or phrase that can be interpreted in multiple ways, causing the reader to begin to believe that a phrase will mean one thing when in reality it means something else. When read, the sentence seems ungrammatical, makes almost no sense, and often requires rereading so that its meaning may be fully understood after careful parsing."} {"text":"\"The complex houses married and single soldiers and their families.\""} {"text":"This is another commonly cited example. Like the previous sentence, the initial parse is to read \"the complex houses\" as a noun phrase, but \"the complex houses married\" does not make semantic sense (houses do not marry) and \"the complex houses married and single\" makes no sense at all (after \"married and...\", the expectation is another verb to form a compound predicate). The correct parsing is \"The complex\" [noun phrase] \"houses\" [verb] \"married and single soldiers\" [noun phrase] \"and their families\" [noun phrase]. 
Rephrased, the sentence could be rewritten as \"The complex provides housing for the soldiers, married or single, as well as their families.\""} {"text":"\"The horse raced past the barn fell.\""} {"text":"This example turns on the two meanings in German of \"modern\", which can be either the adjective \"modern\" as in English, or the verb \"modern\" meaning \"to become moldy\", \"to rot\"."} {"text":"The theme of the \"picture exhibition\" in the first clause lends itself to interpreting \"modern\" as an adjective meaning \"contemporary\", until the last two words of the sentence:"} {"text":"This causes dissonance at the end of the sentence, and forces back-tracking to recover the proper usage and sense (and different pronunciation) of the first word of the sentence, not as the adjective meaning \"contemporary\", but as the verb meaning \"going moldy\":"} {"text":"This example makes use of the ambiguity between the verb \"suspeita\" and the adjective \"suspeita\", which is also captured by the English word \"suspect\". It also makes use of a misreading in which the word is passed over by the parser, which leads to two different meanings."} {"text":"Various strategies can be used when parsing a sentence, and there is much debate over which parsing strategy humans use. Differences in parsing strategies can be seen from the effects of a reader attempting to parse a part of a sentence that is ambiguous in its syntax or meaning. For this reason, garden-path sentences are often studied as a way to test which strategy humans use. Two debated parsing strategies that humans are thought to use are serial and parallel parsing."} {"text":"Serial parsing is where the reader makes one interpretation of the ambiguity and continues to parse the sentence in the context of that interpretation. 
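The contrast between the serial and parallel strategies can be sketched with a small toy program. This is purely illustrative (the word sets, reading names, and data structures below are invented for the example, and real human parsing is far more complex); it applies both strategies to the garden-path sentence "The horse raced past the barn fell."

```python
# Toy contrast between serial and parallel parsing of a local ambiguity.
# Each "interpretation" is faked as a name plus the set of words it can
# accommodate; this stands in for a real grammar.

def serial_parse(words, interpretations):
    """Serial strategy: commit to one reading; reanalyse only when forced."""
    choice = interpretations[0]            # commit to the first reading
    steps = []
    for w in words:
        if w not in choice["consistent"]:  # a disambiguating word arrives
            steps.append(f"reanalyse at '{w}'")
            choice = next(i for i in interpretations
                          if w in i["consistent"])
        steps.append((w, choice["name"]))
    return choice["name"], steps

def parallel_parse(words, interpretations):
    """Parallel strategy: carry every reading; the input prunes the rest."""
    alive = list(interpretations)
    for w in words:
        alive = [i for i in alive if w in i["consistent"]]
    return alive[0]["name"]

words = ["the", "horse", "raced", "past", "the", "barn", "fell"]
# Reading 1: "raced" as main verb -- cannot accommodate the final "fell".
main_verb = {"name": "main-verb reading",
             "consistent": {"the", "horse", "raced", "past", "barn"}}
# Reading 2: "raced past the barn" as a reduced relative clause.
reduced_rel = {"name": "reduced-relative reading",
               "consistent": {"the", "horse", "raced", "past", "barn", "fell"}}
```

The serial parser commits to the main-verb reading and is forced into a costly reanalysis when it reaches "fell", mirroring the garden-path effect; the parallel parser arrives at the reduced-relative reading without backtracking, at the cost of carrying both readings along the way.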
The reader will continue to use the initial interpretation as reference for future parsing until disambiguating information is given."} {"text":"Parallel parsing is where the reader recognizes and generates multiple interpretations of the sentence and stores them until disambiguating information is given, at which point only the correct interpretation is maintained."} {"text":"When ambiguous nouns appear, they can function as either the object of the first item or the subject of the second item. In that case, the former use is preferred. It is also found that the reanalysis of a garden-path sentence becomes more and more difficult with the length of the ambiguous phrase."} {"text":"A research paper published by Meseguer, Carreiras and Clifton (2002) stated that intense eye movements are observed when people are recovering from a mild garden-path sentence. They proposed that people use two strategies, both of which are consistent with the selective reanalysis process described by Frazier and Rayner in 1982. According to them, the readers predominantly use two alternative strategies to recover from mild garden-path sentences."} {"text":"Partial re-analysis occurs when analysis is not complete. Frequently, when people can make even a little bit of sense of the later part of the sentence, they stop analysing further, so the earlier part of the sentence remains in memory and does not get discarded from it."} {"text":"Therefore, the original misinterpretation of the sentence remains even after the re-analysis is done; hence participants' final interpretations are often incorrect."} {"text":"Sotaro Kita is a professor in the Department of Psychology at The University of Warwick. Professor Kita's work focuses on the psycholinguistic properties of co-speech gesture, the relationship between spatial language and cognition, developmental psychology, and sound symbolism. Kita received his PhD from the University of Chicago, working in the lab of David McNeill. 
From 1993 to 2003 he led the Gesture Project at the Max Planck Institute for Psycholinguistics, one of the research foci of the MPI."} {"text":"Since April 2017 he has been the editor of \"GESTURE\" (published by John Benjamins of Amsterdam). From 2012 to 2014 he was the president of the International Society for Gesture Studies, and vice-president from 2010 to 2012."} {"text":"Poverty of the stimulus (POS) is the controversial argument from linguistics that children are not exposed to rich enough data within their linguistic environments to acquire every feature of their language. This is considered evidence contrary to the empiricist idea that language is learned solely through experience. The claim is that the sentences children hear while learning a language do not contain the information needed to develop a thorough understanding of the grammar of the language."} {"text":"The POS is often used as evidence for universal grammar. This is the idea that all languages conform to the same structural principles, which define the space of possible languages. Both poverty of the stimulus and universal grammar are terms that can be credited to Noam Chomsky, the main proponent of generative grammar. Chomsky coined the term \"poverty of the stimulus\" in 1980. However, he had argued for the idea since his 1959 review of B.F. Skinner's \"Verbal Behavior\"."} {"text":"There was much research based on generative grammar in language development during the latter half of the twentieth century. This approach was abandoned by mainstream researchers as a result of what many scientists perceived as problems with the poverty of the stimulus argument."} {"text":"An argument from the poverty of the stimulus generally takes the following structure:"} {"text":"Chomsky coined the term \"poverty of the stimulus\" in 1980. This idea is closely related to what Chomsky calls \"Plato's Problem\". He outlined this philosophical approach in the first chapter of \"Knowledge of Language\" in 1986. 
Plato's Problem traces back to \"Meno\", a Socratic dialogue. In Meno, Socrates unearths knowledge of geometry concepts from a slave who was never explicitly taught them. Plato's Problem directly parallels the idea of the innateness of language, universal grammar, and more specifically the poverty of the stimulus argument because it reveals that people's knowledge is richer than what they are exposed to. Chomsky illustrates that humans are not exposed to all structures of their language, yet they fully achieve knowledge of these structures."} {"text":"Linguistic nativism is the theory that humans are born with some knowledge of language. On this view, a language is not acquired entirely through experience. According to Noam Chomsky, \"The speed and precision of vocabulary acquisition leaves no real alternative to the conclusion that the child somehow has the concepts available before experience with language and is basically learning labels for concepts that are already a part of his or her conceptual apparatus.\" One of the most significant arguments generative grammarians have for linguistic nativism is the poverty of the stimulus argument."} {"text":"However, the argument that the poverty of the stimulus supports the innateness hypothesis remains controversial. For example, Fiona Cowie claims that the Poverty of Stimulus argument fails \"on both empirical and conceptual grounds to support nativism\"."} {"text":"Generative grammarians have extensively studied the hypothesised innate effects on language in order to provide evidence for the poverty of the stimulus. An overarching theme in these examples is that children acquire grammatical rules based on evidence that is consistent with multiple generalizations. Since children are not instructed in the grammar of their language, the gap must be filled by properties of the learner."} {"text":"In general, pronouns can refer to any prominent individual in the discourse context. 
However, a pronoun cannot find its antecedent in certain structural positions, as defined by the Binding Theory. For example, the pronoun \"he\" can refer to the Ninja Turtle in (1) but not (2), above. Given that speech to children does not indicate what interpretations are impossible, the input is equally consistent with a grammar that allows coreference between \"he\" and \"the Ninja Turtle\" in (2) and one that does not. But, since all speakers of English recognize that (2) does not allow this coreference, this aspect of the grammar must come from some property internal to the learner."} {"text":"The English word \"one\" can refer back to a previously mentioned property in the discourse. For example in (1), \"one\" can mean \"ball\"."} {"text":"In Wh-questions, the Wh-word at the beginning of the sentence (the filler) is related to a position later in the sentence (the gap). This relation can hold over an unbounded distance, as in (1). However, there are restrictions on the gap positions that a filler can be related to. These restrictions are called syntactic islands (2). Because questions with islands are ungrammatical, they are not included in the speech that children hear\u2014but neither are grammatical Wh-questions that span multiple clauses. Because the speech children are exposed to is consistent with grammars which have island constraints and grammars which don't, something internal to the child must contribute this knowledge."} {"text":"The poverty of the stimulus also applies in the domain of word learning. When learning a new word, children are exposed to examples of the word's referent, but not to the full extent of the category. For example, in learning the word \"dog\", a child might see a German Shepherd, a Great Dane and a Poodle. How do they know to extend this category to include Dachshunds and Bulldogs? The situations in which the word is used cannot provide the relevant information. 
Thus, something internal to learners must shape the way that they generalize. This problem is closely related to Quine's gavagai problem."} {"text":"Critics claimed in the 1980s and 1990s that Chomsky's purported linguistic evidence for poverty of the stimulus may have been false. Around the same time there was research in applied linguistics and neuroscience that rejected the idea of significant aspects of languages being innate and not learned. Some scholars working on language acquisition in fields like psychology and applied linguistics reject most claims of nativism and consider that decades of research have been wasted since 1964 owing to the assumption of the poverty of the stimulus, an enterprise which has failed to make a lasting impact. However, those working in the framework of generative grammar consider nativism to be a logical necessity and to be supported by the existence of deep parallels among the languages of the world."} {"text":"Hypocognition, in cognitive linguistics, refers to the lack of a cognitive or linguistic representation for a concept, which leaves people unable to communicate it because no word for it exists."} {"text":"The word hypocognition (and its opposite, hypercognition) was coined by American psychiatrist and anthropologist Robert Levy in his 1973 book \"Tahitians: Mind and Experience in the Society Islands\". After 26 months of studying the Tahitians, Levy described them as having no words for sorrow or guilt, resulting in people who had suffered personal losses describing themselves as feeling sick or strange instead of sad. Levy believed the Tahitians' lack of frames for thinking about and expressing grief contributed to their high suicide rate. He believed that a balance between hypercognition and hypocognition was culturally most desirable."} {"text":"Hypocognition is a term commonly used in linguistics. 
In 2004 George Lakoff used it to describe political progressives in the United States, saying that relative to conservatives they suffer from \"massive hypocognition,\" which he described as the lack of a progressive philosophy framed around the progressive core values of empathy and responsibility such as \"effective government\" versus \"less government\" or \"broader prosperity\" versus \"free markets.\""} {"text":"Hypocognition has been blamed for preventing the practical application of evidence-based medicine in areas where frames (contextual and presentational influences on perceptions of reality) obscure facts. More generally, experts often overuse their own expertise: e.g., a cardiologist may diagnose a heart problem when the actual problem is something else."} {"text":"Frame-based terminology is a cognitive approach to terminology developed by Pamela Faber and colleagues at the University of Granada. One of its basic premises is that the conceptualization of any specialized domain is goal-oriented, and depends to a certain degree on the task to be accomplished. Since a major problem in modeling any domain is the fact that languages can reflect different conceptualizations and construals, texts as well as specialized knowledge resources are used to extract a set of domain concepts. Language structure is also analyzed to obtain an inventory of conceptual relations to structure these concepts."} {"text":"As its name implies, frame-based terminology uses certain aspects of frame semantics to structure specialized domains and create non-language-specific representations. 
Such configurations are the conceptual meaning underlying specialized texts in different languages, and thus facilitate specialized knowledge acquisition."} {"text":"In frame-based terminology, conceptual networks are based on an underlying domain event, which generates templates for the actions and processes that take place in the specialized field as well as the entities that participate in them."} {"text":"As a result, knowledge extraction is largely text-based. The terminological entries are composed of information from specialized texts as well as specialized language resources. Knowledge is configured and represented in a dynamic conceptual network that is capable of adapting to new contexts. At the most general level, generic roles of agent, patient, result, and instrument are activated by basic predicate meanings such as make, do, affect, use, become, etc. which structure the basic meanings in specialized texts. From a linguistic perspective, Aktionsart distinctions in texts are based on Van Valin's classification of predicate types. At the more specific levels of the network, the qualia structure of the generative lexicon is used as a basis for the systematic classification and relation of nominal entities."} {"text":"The methodology of frame-based terminology derives the conceptual system of the domain by means of an integrated top-down and bottom-up approach. The bottom-up approach consists of extracting information from a corpus of texts in various languages, specifically related to the domain. The top-down approach includes the information provided by specialized dictionaries and other reference material, complemented by the help of experts in the field."} {"text":"In a parallel way, the underlying conceptual framework of a knowledge-domain event is specified. The most generic or base-level categories of a domain are configured in a prototypical domain event or action-environment interface. 
This provides a template applicable to all levels of information structuring. In this way a structure is obtained which facilitates and enhances knowledge acquisition since the information in term entries is internally as well as externally coherent."} {"text":"Reading is the process of taking in the sense or meaning of letters, symbols, etc., especially by sight or touch."} {"text":"For educators and researchers, reading is a multifaceted process involving such areas as word recognition, orthography (spelling), alphabetics, phonics, phonemic awareness, vocabulary, comprehension, fluency, and motivation."} {"text":"Other types of reading and writing, such as pictograms (e.g., a hazard symbol and an emoji), are not based on speech-based writing systems. The common link is the interpretation of symbols to extract the meaning from the visual notations or tactile signals (as in the case of Braille)."} {"text":"Reading is typically an individual activity, done silently, although on occasion a person reads out loud for other listeners, or reads aloud for one's own use, for better comprehension. Before the reintroduction of separated text in the late Middle Ages, the ability to read silently was considered rather remarkable."} {"text":"Major predictors of an individual's ability to read both alphabetic and non-alphabetic scripts are oral language skills, phonological awareness, rapid automatized naming and verbal IQ."} {"text":"As a leisure activity, children and adults read because it is pleasant and interesting. In the US, about half of all adults read one or more books for pleasure each year. About 5% read more than 50 books per year. Americans read more if they: have more education, read fluently and easily, are female, live in cities, and have higher socioeconomic status. 
Children become better readers when they know more about the world in general, and when they perceive reading as fun rather than another chore to be performed."} {"text":"Reading is an essential part of literacy, yet from a historical perspective literacy is about having the ability to both read and write."} {"text":"Since the 1990s, some organizations have defined literacy in a wide variety of ways that may go beyond the traditional ability to read and write. The following are some examples:"} {"text":"In the academic field, some view literacy in a more philosophical manner and propose the concept of \"multiliteracies\". For example, they say, \"this huge shift from traditional print-based literacy to 21st century multiliteracies reflects the impact of communication technologies and multimedia on the evolving nature of texts, as well as the skills and dispositions associated with the consumption, production, evaluation, and distribution of those texts (Borsheim, Meritt, & Reed, 2008, p. 87)\". According to cognitive neuroscientist Mark Seidenberg these \"multiple literacies\" have allowed educators to change the topic from reading and writing to \"Literacy\". He goes on to say that some educators, when faced with criticisms of how reading is taught, \"didn't alter their practices, they changed the subject\"."} {"text":"Also, some organizations might include numeracy skills and technology skills separately but alongside literacy skills."} {"text":"In addition, since the 1940s the term literacy is often used to mean having knowledge or skill in a particular field (e.g., computer literacy, ecological literacy, health literacy, media literacy, quantitative literacy (numeracy) and visual literacy)."} {"text":"In order to understand a text, it is usually necessary to understand the spoken language associated with that text. In this way, writing systems are distinguished from many other symbolic communication systems. 
Once established, writing systems on the whole change more slowly than their spoken counterparts, and often preserve features and expressions which are no longer current in the spoken language. The great benefit of writing systems is their ability to maintain a persistent record of information expressed in a language, which can be retrieved independently of the initial act of formulation."} {"text":"Reading for pleasure has been linked to increased cognitive progress in vocabulary and mathematics during adolescence."} {"text":"Sustained high-volume lifetime reading has been associated with high levels of academic attainment."} {"text":"Reading has also been shown to improve stress management, memory, focus, writing skills, and imagination."} {"text":"The cognitive benefits of reading continue into mid-life and the senior years."} {"text":"Reading books and writing are among brain-stimulating activities shown to slow down cognitive decline in seniors."} {"text":"Reading has been the subject of considerable research and reporting for decades. Many organizations measure and report on reading proficiency for children and adults (e.g., NAEP, PIRLS, PISA and PIAAC)."} {"text":"Researchers have concluded that 95% of students can be taught to read by the end of first grade, yet in many countries 20% or more do not meet that expectation."} {"text":"According to the 2019 Nation's Report card, 35% of grade four students in the USA failed to perform at or above the \"Basic level\" (partial mastery of the proficient level skills). There was a significant difference by race and ethnicity (e.g., black students at 53% and white students at 24%)."} {"text":"The Progress in International Reading Literacy Study (PIRLS) publishes reading achievement for fourth graders in 50 countries. The five countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland and Finland. 
Some others are: England 10th, United States 15th, Australia 21st, Canada 23rd, and New Zealand 33rd."} {"text":"The Programme for International Student Assessment (PISA) measures 15-year-old school pupils' scholastic performance on mathematics, science, and reading."} {"text":"The reading levels of adults, ages 16 \u2013 65, in 39 countries are reported by The Programme for the International Assessment of Adult Competencies (PIAAC). Between 2011 and 2018, PIAAC reports the percentage of adults reading \"at-or-below level one\" (the lowest of five levels). Some examples are Japan 4.9%, Finland 10.6%, Netherlands 11.7%, Australia 12.6%, Sweden 13.3%, Canada 16.4%, England (UK) 16.4%, and the USA 16.9%."} {"text":"According to the World Bank, 53% of all children in low- and middle-income countries suffer from 'learning poverty'. In 2019, using data from the UNESCO Institute for Statistics, they published a report entitled \"Ending Learning Poverty: What will it take?\". Learning poverty is defined as being unable to read and understand a simple text by age 10."} {"text":"Although they say that all foundational skills are important \u2013 including reading, numeracy, basic reasoning ability, socio-emotional skills, and others \u2013 they focus specifically on reading. Their reasoning is that reading proficiency is an easily understood metric of learning, reading is a student's gateway to learning in every other area, and reading proficiency can serve as a proxy for foundational learning in other subjects."} {"text":"They suggest five pillars to reduce learning poverty: 1) learners are prepared and motivated to learn, 2) teachers at all levels are effective and valued, 3) classrooms are equipped for learning, 4) schools are safe and inclusive spaces, and 5) education systems are well-managed."} {"text":"Learning to read (or, reading skills acquisition) is the acquisition and practice of the skills necessary to understand the meaning behind printed words. 
For a skilled reader, the act of reading feels simple, effortless, and automatic. However, the process of learning to read is complex and builds on cognitive, linguistic, and social skills developed from a very early age. As one of the four core language skills (listening, speaking, reading and writing), reading is vital to gaining a command of the written language."} {"text":"In the United States and elsewhere, it is widely believed that students who lack proficiency in reading by the end of grade three may face obstacles for the rest of their academic career. For example, it is estimated that they would not be able to read half of the material they will encounter in grade four."} {"text":"In 2019, with respect to the reading skills of grade-four US public school students, only 44% of white students and 18% of black students performed at or above the \"proficient level\". Also, in 2012 it was reported that 15-year-old students in the United Kingdom were reading at the level of 12-year-olds."} {"text":"As a result, many governments put practices in place to ensure that students are reading at grade level by the end of grade three. An example of this is the Third Grade Reading Guarantee created by the State of Ohio in 2017. This is a program to identify students from kindergarten through grade three who are behind in reading, and provide support to make sure they are on track for reading success by the end of grade three. This is also known as remedial education. Another example is the policy in England whereby any pupil who is struggling to decode words properly by year three must \"urgently\" receive help through a \"rigorous and systematic phonics programme\"."} {"text":"In 2016, out of 50 countries, the United States achieved the 15th highest score in grade-four reading ability. 
The ten countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland, Finland, Poland, Northern Ireland, Norway, Chinese Taipei and England (UK). Some others are: Australia 21st, Canada 23rd, New Zealand 33rd, France 34th, Saudi Arabia 44th, and South Africa 50th."} {"text":"Spoken language is the foundation of learning to read (long before children see any letters) and children\u2019s knowledge of the phonological structure of language is a good predictor of early reading ability. Spoken language is dominant for most of childhood; however, reading ultimately catches up and surpasses speech."} {"text":"By their first birthday most children have learned all the sounds in their spoken language. However, it takes longer for them to learn the phonological form of words and to begin developing a spoken vocabulary."} {"text":"Children acquire a spoken language in a few years. Five-to-six-year-old English learners have vocabularies of 2,500 to 5,000 words, and add 5,000 words per year for the first several years of schooling. This rapid learning rate cannot be accounted for by the instruction they receive. Instead, children learn that the meaning of a new word can be inferred because it occurs in the same context as familiar words (e.g., \"lion\" is often seen with \"cowardly\" and \"king\"). As British linguist John Rupert Firth says, \"You shall know a word by the company it keeps\"."} {"text":"The environment in which children live may also impact their ability to acquire reading skills. Children who are regularly exposed to chronic environmental noise pollution, such as highway traffic noise, have been known to show decreased ability to discriminate between phonemes (oral language sounds) as well as lower reading scores on standardized tests."} {"text":"Reading to children: necessary but not sufficient."} {"text":"Children learn to speak naturally \u2014 by listening to other people speak. 
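Firth's principle above, inferring a new word's meaning from the company it keeps, can be illustrated with a small sketch. The toy corpus, the target word, and the simple sentence-level co-occurrence counting are invented assumptions for illustration; they are not a model of how children actually learn.

```python
# A minimal sketch of "you shall know a word by the company it keeps":
# count which familiar words co-occur with an unfamiliar target word.
# The tiny corpus and the target word "lion" are invented for this demo.
from collections import Counter

corpus = [
    "the cowardly lion spoke to the king",
    "the lion is the king of beasts",
    "a brave king and a cowardly servant",
]

def cooccurrences(target, sentences):
    """Count words appearing in the same sentence as `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

# "lion" keeps company with "king" and "cowardly", hinting at its usage.
print(cooccurrences("lion", corpus).most_common(3))
```

Scaled up to millions of sentences, this kind of distributional counting is the intuition behind modern word-embedding methods.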
However, reading is not a natural process, and many children need to learn to read through a process that involves \"systematic guidance and feedback\"."} {"text":"So, \"reading to children is not the same as teaching children to read\". Nonetheless, reading to children is important because it socializes them to the activity of reading; it engages them; it expands their knowledge of spoken language; and it enriches their linguistic ability by exposing them to novel words and grammatical structures."} {"text":"However, there is some evidence that \"shared reading\" with children does help to improve reading if the children's attention is directed to the words on the page as they are being read to."} {"text":"The path to skilled reading involves learning the alphabetic principle, phonemic awareness, phonics, fluency, vocabulary and comprehension."} {"text":"British psychologist Uta Frith introduced a three-stage model of skilled reading acquisition. Stage one is the \"logographic or pictorial stage\" where students attempt to grasp words as objects, an artificial form of reading. Stage two is the \"phonological stage\" where students learn the relationship between the graphemes (letters) and the phonemes (sounds). Stage three is the \"orthographic stage\" where students read familiar words more quickly than unfamiliar words, and word length gradually ceases to play a role."} {"text":"There is some debate as to the optimum age to teach children to read."} {"text":"The Common Core State Standards Initiative (CCSS) in the USA has standards for foundational reading skills in kindergarten and grade one that include instruction in print concepts, phonological awareness, phonics, word recognition and fluency. 
However, some critics of CCSS say that \"To achieve reading standards usually calls for long hours of drill and worksheets \u2014 and reduces other vital areas of learning such as math, science, social studies, art, music and creative play.\""} {"text":"The PISA 2007 OECD data from 54 countries demonstrates \"no association between school entry age ... and reading achievement at age 15\". Also, a German study of 50 kindergartens compared children who, at age 5, had spent a year either \"academically focused\" or \"play-arts focused\" and found that in time the two groups became indistinguishable in reading skill. The authors conclude that the effects of early reading are like \"watering a garden before a rainstorm; the earlier watering is rendered undetectable by the rainstorm, the watering wastes precious water, and the watering distracts the gardener from other important preparatory groundwork.\""} {"text":"Other researchers and educators favor limited amounts of literacy instruction at ages four and five, in addition to non-academic, intellectually stimulating activities. Some parents teach their children to read as babies. Some say that babies learn to read differently and more easily than children who learn to read in school from formal instruction. They also suggest that the most important aspect of early (baby) reading is interaction with loving parents and bonding."} {"text":"Reviews of the academic literature by the Education Endowment Foundation in the UK have found that starting literacy teaching in preschool has \"been consistently found to have a positive effect on early learning outcomes\" and that \"beginning early years education at a younger age appears to have a high positive impact on learning outcomes\". 
This supports current standard practice in the UK which includes developing children's phonemic awareness in preschool and teaching reading from age four."} {"text":"There does not appear to be any definitive research about the \"magic window\" to begin reading instruction. However, there is also no definitive research to suggest that starting early causes any harm. Researcher Timothy Shanahan suggests, \"Start teaching reading from the time you have kids available to teach, and pay attention to how they respond to this instruction\u2014both in terms of how well they are learning what you are teaching, and how happy and invested they seem to be. If you haven't started yet, don't feel guilty, just get going.\""} {"text":"Some education researchers suggest the teaching of the various reading components by specific grade levels. The following is one example from Carol Tolman, Ed.D., and Louisa Moats, Ed.D., that corresponds in many respects with the USA Common Core State Standards Initiative:"} {"text":"According to some researchers, learners (children and adults) progress through several stages while first learning to read in English, and then refining their reading skills. One of the recognized experts in this area is Harvard professor Jeanne Sternlicht Chall. In 1983 she published a book entitled \"Stages of Reading Development\" that proposed six stages."} {"text":"Subsequently, in 2008 Maryanne Wolf, of the UCLA Graduate School of Education and Information Studies, published a book entitled \"Proust and the Squid\" in which she describes her view of the following five stages of reading development. It is normal that children will move through these stages at different rates; however, typical ages for children in the United States are shown below."} {"text":"Emerging pre-reader: 6 months to 6 years old."} {"text":"The emerging pre-reader stage, also known as reading readiness, usually lasts for the first five years of a child's life. 
Children typically speak their first few words before their first birthday. Educators and parents help learners to develop their skills in listening, speaking, reading and writing."} {"text":"Reading to children helps them to develop their vocabulary, a love of reading, and phonemic awareness (the ability to hear and manipulate the individual sounds, or phonemes, of oral language). And children will often \"read\" stories they have memorized. However, in the late 1990s, researchers in the United States found that the traditional way of reading to children made little difference in their later ability to read because children spend relatively little time actually looking at the text. Yet, in a shared reading program with four-year-old children, teachers found that directing children's attention to the letters and words (e.g. verbally or pointing to the words) made a significant difference in early reading, spelling and comprehension."} {"text":"Novice reader: 6 to 7 years old."} {"text":"Novice readers continue to develop their phonemic awareness, and come to realise that the letters (graphemes) connect to the sounds (phonemes) of the language; known as decoding, phonics, and the alphabetic principle. They may also memorize the most common letter patterns and some of the high-frequency words that do not necessarily follow basic phonological rules (e.g. \"have\" and \"who\"). However, it is a mistake to assume a reader understands the meaning of a text merely because they can decode it. Vocabulary and oral language comprehension are also important parts of text comprehension as described in the Simple view of reading and Scarborough's Reading Rope. 
Reading and speech are codependent: reading promotes vocabulary development and a richer vocabulary facilitates skilled reading."} {"text":"Decoding reader: 7 to 9 years old."} {"text":"The transition from the novice reader stage to the decoding stage is marked by a reduction of painful pronunciations and, in their place, the sounds of a smoother, more confident reader. In this phase the reader adds at least 3,000 words to what they can decode. For example, in the English language, readers now learn the variations of the vowel-based rimes (e.g. the rime \"at\" in \"sat\", \"mat\" and \"cat\") and vowel pairs (also digraphs) (e.g. \"ai\" in \"rain\", \"ay\" in \"play\", and \"oa\" in \"boat\")."} {"text":"As readers move forward, they learn the makeup of morphemes (i.e. stems, roots, prefixes and suffixes). They learn the common morphemes such as \"s\" and \"ed\" and see them as \"sight chunks\". The faster a child can see that \"beheaded\" is \"be + head + ed\", the faster they will become a more fluent reader."} {"text":"At the beginning of this stage a child will often be devoting so much mental capacity to the process of decoding that they will have no understanding of the words being read. It is nevertheless an important stage, allowing the child to achieve their ultimate goal of becoming fluent and automatic."} {"text":"It is in the decoding phase that the child will learn to attend to what the story is really about, and to re-read a passage when necessary so as to truly understand it."} {"text":"Fluent, comprehending reader: 9 to 15 years old."} {"text":"The goal of this stage is to \"go below the surface of the text\", and in the process the reader will build their knowledge of spelling substantially."} {"text":"Teachers and parents may be tricked by fluent-sounding reading into thinking that a child understands everything that they are reading. 
As the content of what they are able to read becomes more demanding, good readers will develop knowledge of figurative language and irony which helps them to discover new meanings in the text."} {"text":"Children improve their comprehension when they use a variety of tools such as connecting prior knowledge, predicting outcomes, drawing inferences, and monitoring gaps in their understanding. One of the most powerful moments is when fluent comprehending readers learn to enter into the lives of imagined heroes and heroines."} {"text":"The educational psychologist G. Michael Pressley concluded there are two important aids to fluent comprehension: explicit instruction in major content areas by a child's teacher, and the child's own desire to read."} {"text":"At the end of this stage many processes are starting to become automatic, allowing the reader to focus on meaning. With the decoding process almost automatic by this point, the brain learns to integrate more metaphorical, inferential, analogical, background and experiential knowledge. This stage in learning to read will often last until early adulthood."} {"text":"At the expert stage it will usually only take a reader one-half second to read almost any word. The degree to which expert reading will change over the course of an adult's life depends on what they read and how much they read."} {"text":"There is no single definition of the science of reading (SOR). Foundational skills such as phonics (decoding) and phonemic awareness are considered to be important parts of the science of reading, but they are not the only ingredients. SOR includes any research and evidence about how humans learn to read, and how reading should be taught. 
This includes areas such as oral reading fluency, vocabulary, morphology, reading comprehension, text, spelling and pronunciation, thinking strategies, oral language proficiency, working memory training, and written language performance (e.g., cohesion, sentence combining\/reducing)."} {"text":"In addition, some educators feel that SOR should include digital literacy; background knowledge; content-rich instruction; infrastructural pillars (curriculum, reimagined teacher preparation, and leadership); adaptive teaching (recognizing the student's individual, cultural and linguistic strengths); bi-literacy development; equity, social justice and supporting underserved populations (e.g., students from low-income backgrounds)."} {"text":"Some researchers suggest there is a need for more studies on the relationship between theory and practice. They say \"we know more about the science of reading than about the science of teaching based on the science of reading\", and \"there are many layers between basic science findings and teacher implementation that must be traversed\"."} {"text":"Many researchers are concerned that low reading levels are due to the manner in which reading is taught. They point to three areas: a) contemporary reading science has had very little impact on educational practice mainly because of a \"two-cultures problem separating science and education\", b) current teaching practices rest on outdated assumptions that make learning to read harder than it needs to be, and c) connecting evidence-based practice to educational practice would be beneficial but is extremely difficult to achieve because many teachers are not properly trained in the science of reading."} {"text":"\"The simple view of reading\" is a scientific theory about reading comprehension. According to the theory, in order to comprehend what they are reading students need both \"decoding skills\" and \"oral language (listening) comprehension ability\". Neither is enough on their own. 
In other words, they need the ability to recognize and process (e.g., sound out) the text, and the ability to understand the language in which the text is written (i.e., vocabulary, grammar and background knowledge). Students are not reading if they can decode words but do not understand their meaning. Similarly, students are not reading if they cannot decode words that they would ordinarily recognize and understand if they heard them spoken out loud."} {"text":"Decoding \u00d7 Oral Language Comprehension = Reading Comprehension."} {"text":"As shown in the graphic, the Simple View of Reading proposes four broad categories of developing readers: typical readers; poor readers (general reading disability); dyslexics; and hyperlexics."} {"text":"Hollis Scarborough, the creator of the Reading Rope and senior scientist at Haskins Laboratories, is a leading researcher of early language development and its connection to later literacy."} {"text":"Scarborough published the Reading Rope infographic in 2001 using strands of rope to illustrate the many ingredients that are involved in becoming a skilled reader. The upper strands represent \"language-comprehension\" and reinforce one another. The lower strands represent \"word-recognition\" and work together as the reader becomes accurate, fluent, and automatic through practice. The upper and lower strands all weave together to produce a skilled reader."} {"text":"More recent research by Laurie E. Cutting and Hollis S. Scarborough has highlighted the importance of executive function processes (e.g. working memory, planning, organization, self-monitoring, and similar abilities) to reading comprehension. Easy texts do not require much executive function; however, more difficult texts require more \"focus on the ideas\". Reading comprehension strategies, such as summarizing, may help."} {"text":"Several researchers and neuroscientists have attempted to explain how the brain reads. 
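The Simple View of Reading's multiplicative formula, and its four reader categories described above, can be sketched in code. The 0-to-1 scores and the 0.5 cut-off here are illustrative assumptions for the demo, not part of the theory's formal statement.

```python
# A minimal sketch of the Simple View of Reading: comprehension as the
# product of decoding skill and oral language comprehension, each scored
# on an illustrative 0.0-1.0 scale. The multiplication captures the
# theory's key claim: if either factor is zero, no reading occurs.
def reading_comprehension(decoding: float, language: float) -> float:
    return decoding * language

# The 0.5 cutoff separating "adequate" from "weak" is an assumption
# made only so the four broad profiles can be labeled.
def reader_profile(decoding: float, language: float, cutoff: float = 0.5) -> str:
    if decoding >= cutoff and language >= cutoff:
        return "typical reader"
    if decoding < cutoff and language < cutoff:
        return "general reading disability"
    if decoding < cutoff:
        return "dyslexic profile (weak decoding, adequate language)"
    return "hyperlexic profile (adequate decoding, weak language)"

print(reader_profile(0.3, 0.9))  # weak decoding despite strong language
print(reader_profile(0.9, 0.3))  # fluent decoding without understanding
```

Note how the model distinguishes dyslexic from hyperlexic profiles even though both yield the same low comprehension product.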
They have written articles and books, and created websites and YouTube videos to help the average consumer."} {"text":"Neuroscientist Stanislas Dehaene says that a few simple truths should be accepted by all, namely: a) all children have similar brains, are well tuned to systematic grapheme-phoneme correspondences, \"and have everything to gain from phonics \u2014 the only method that will give them the freedom to read any text\", b) classroom size is largely irrelevant if the proper teaching methods are used, c) it is essential to have standardized screening tests for dyslexia, followed by appropriate specialized training, and d) while decoding is essential, vocabulary enrichment is equally important."} {"text":"Reading is an intensive process in which the eye quickly moves to assimilate the text \u2014 seeing just accurately enough to interpret groups of symbols. It is necessary to understand visual perception and eye movement in reading to understand the reading process."} {"text":"When reading, the eye does not move continuously along a line of text; instead, it makes short rapid movements (saccades) intermingled with short stops (fixations). There is considerable variability in fixations (the point to which a saccade jumps) and saccades between readers, and even for the same person reading a single passage of text. When reading, the eye has a perceptual span of about 20 slots. In the best-case scenario and reading English, when the eye is fixated on a letter, four to five letters to the right and three to four letters to the left can be clearly identified. 
Beyond that, only the general shape of some letters can be identified."} {"text":"Research published in 2019 concluded that the silent reading rate of adults in English for \"non-fiction\" is in the range of 175 to 300 words per minute (wpm); and for \"fiction\" the range is 200 to 320 wpm."} {"text":"In the early 1970s the dual-route hypothesis of reading aloud was proposed, according to which there are two separate mental mechanisms involved in reading aloud, with output from both contributing to the pronunciation of written words. One mechanism is the lexical route whereby skilled readers can recognize a word as part of their sight vocabulary. The other is the nonlexical or sublexical route, in which the reader \"sounds out\" (decodes) written words."} {"text":"Evidence-based reading instruction refers to practices having research evidence showing their success in improving reading achievement. It is related to evidence-based education."} {"text":"Several organizations report on research about reading instruction, for example:"} {"text":"A systematic review and meta\u2010analysis was conducted on the advantages of reading from paper vs. screens. It found no difference in reading times; however, reading from paper has a small advantage in reading performance and metacognition."} {"text":"Apart from that, depending on the circumstances, some people prefer one medium over the other and each appears to have its own unique advantages."} {"text":"Some teachers, even after obtaining a master's degree in education, don't feel they have the necessary knowledge and skills to teach all students how to read."} {"text":"A survey in the USA reported that 70% of teachers believe in a balanced literacy approach to teaching reading \u2013 however balanced literacy \"is not systematic, explicit instruction\". 
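The dual-route hypothesis described above can be sketched as a lexical lookup with a sublexical fallback. The tiny sight vocabulary and one-sound-per-letter rules below are invented, highly simplified assumptions for illustration; real grapheme-phoneme rules are context-sensitive and far richer.

```python
# A toy sketch of the dual-route hypothesis: a lexical route (whole-word
# lookup in a sight vocabulary) and a sublexical route (letter-by-letter
# "sounding out"). Both tables are invented for this demonstration.
SIGHT_VOCABULARY = {          # lexical route: known word -> pronunciation
    "yacht": "/jɒt/",         # irregular; letter-by-letter rules fail here
    "have": "/hæv/",
}
LETTER_SOUNDS = {             # sublexical route: one sound per letter
    "c": "k", "a": "æ", "t": "t", "s": "s", "i": "ɪ", "p": "p",
}

def read_aloud(word: str) -> str:
    # Lexical route: a skilled reader recognizes the word on sight.
    if word in SIGHT_VOCABULARY:
        return SIGHT_VOCABULARY[word]
    # Sublexical route: decode the word letter by letter.
    sounds = [LETTER_SOUNDS.get(letter, "?") for letter in word]
    return "/" + "".join(sounds) + "/"

print(read_aloud("yacht"))  # handled by the lexical route
print(read_aloud("cat"))    # handled by the sublexical route
```

The sketch shows why irregular words like "yacht" must travel the lexical route, while novel or regular words can be decoded sublexically.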
Teacher, researcher and author Louisa Moats, in a video about teachers and the science of reading, says that sometimes, when teachers talk about their \"philosophy\" of teaching reading, she responds by saying, \"But your 'philosophy' doesn't work\". She says this is evidenced by the fact that so many children are struggling with reading."} {"text":"In an Education Week Research Center survey of more than 530 professors of reading instruction, just 22 percent said their philosophy of teaching early reading centered on explicit, systematic phonics with comprehension as a separate focus."} {"text":"However, at least one state, Arkansas, is requiring every elementary and special education teacher to be proficient in the scientific research on reading by 2021, causing Amy Murdoch, an associate professor and the director of the reading science program at Mount St. Joseph University in Cincinnati, to say \u201cWe still have a long way to go \u2013 but I do see some hope.\u201d"} {"text":"Timothy Shanahan (educator) acknowledges that comprehensive research does not always exist for specific aspects of reading instruction. However, \"the lack of evidence doesn\u2019t mean something doesn\u2019t work, only that we don\u2019t know\". He suggests that teachers make use of the research that is available in such places as Journal of Educational Psychology, Reading Research Quarterly, Reading & Writing Quarterly, Review of Educational Research, and Scientific Studies of Reading. If a practice lacks supporting evidence, it can be used with the understanding that it is based upon a claim, not science."} {"text":"Educators have debated for years about which method is best to teach reading for the English language. There are three main methods: phonics, whole language and balanced literacy. 
There are also a variety of other areas and practices such as phonemic awareness, fluency, reading comprehension, sight words and sight vocabulary, the three-cueing system (the searchlights model in England), guided reading, shared reading, and leveled reading. Each practice is employed in different manners depending on the country and the specific school division."} {"text":"In 2001, some researchers reached two conclusions: 1) \"mastering the alphabetic principle is essential\" and 2) \"instructional techniques (namely, phonics) that teach this principle directly are more effective than those that do not\". However, while they make it clear they have some fundamental disagreements with some of the claims made by whole-language advocates, they acknowledge that some principles of whole language have value, such as the need to ensure that students are enthusiastic about books and eager to learn to read."} {"text":"Phonics emphasizes the alphabetic principle \u2013 the idea that letters (graphemes) represent the sounds of speech (phonemes). It is taught in a variety of ways; some are systematic and others are unsystematic. Unsystematic phonics teaches phonics on a \"when needed\" basis and in no particular sequence. \"Systematic\" phonics uses a planned, sequential introduction of a set of phonic elements along with \"explicit\" teaching and practice of those elements. 
The National Reading Panel (NRP) concluded that systematic phonics instruction is more effective than unsystematic phonics or non-phonics instruction."} {"text":"Phonics approaches include analogy phonics, analytic phonics, embedded phonics with mini-lessons, phonics through spelling, and synthetic phonics."} {"text":"According to a 2018 review of research related to \"English speaking poor readers\", phonics training is effective for improving literacy-related skills, particularly the fluent reading of words and non-words, and the accurate reading of irregular words."} {"text":"In addition, phonics produces higher achievement for all beginning readers, and the greatest improvement is experienced by students who are at risk of failing to learn to read. While some children are able to infer these rules on their own, some need explicit instruction on phonics rules. Some phonics instruction has marked benefits such as expansion of a student's vocabulary. Overall, children who are directly taught phonics are better at reading, spelling and comprehension."} {"text":"A challenge in teaching phonics is that in some languages, such as English, complex letter-sound correspondences can cause confusion for beginning readers. For this reason, it is recommended that teachers of English-reading begin by introducing the \"most frequent sounds\" and the \"common spellings\", and save the less frequent sounds and complex spellings for later (e.g. the sounds \/s\/ and \/t\/ before \/v\/ and \/w\/; and the spelling \"a\" as in \"cake\" before \"eigh\" as in \"eight\", and \"c\" as in \"cat\" before \"ck\" as in \"duck\")."} {"text":"Phonics is taught in many different ways and it is often taught together with some of the following: oral language skills, concepts about print, phonological awareness, phonemic awareness, phonology, oral reading fluency, vocabulary, syllables, reading comprehension, spelling, word study, cooperative learning, multisensory learning, and guided reading. 
And, phonics is often featured in discussions about the science of reading and evidence-based practices."} {"text":"The National Reading Panel (U.S.A. 2000) is clear that \"systematic phonics instruction should be integrated with other reading instruction to create a balanced reading program\". It suggests that phonics be taught together with phonemic awareness, oral fluency, vocabulary and comprehension. Timothy Shanahan (educator), a member of that panel, recommends that primary students receive 60\u201390 minutes per day of explicit, systematic, literacy instruction time; and that it be divided equally between a) words and word parts (e.g. letters, sounds, decoding and phonemic awareness), b) oral reading fluency, c) reading comprehension, and d) writing. Furthermore, he states that the phonemic awareness skills found to give the greatest reading advantage to kindergarten and first-grade children are \"segmenting and blending\"."} {"text":"The Ontario Association of Deans of Education (Canada) published Research Monograph #37, entitled \"Supporting early language and literacy\", with suggestions for parents and teachers in helping children prior to grade one. It covers the areas of letter names and letter-sound correspondence (phonics), as well as conversation, play-based learning, print, phonological awareness, shared reading, and vocabulary."} {"text":"Interest in evidence-based education appears to be growing. In 2021, the Best Evidence Encyclopedia (BEE) released a review of research on 51 different programs for struggling readers in elementary schools. 
Many of the programs used phonics-based teaching and\/or one or more of the following: cooperative learning, technology-supported adaptive instruction (see Educational technology), metacognitive skills, phonemic awareness, word reading, fluency, vocabulary, multisensory learning, spelling, guided reading, reading comprehension, word analysis, structured curriculum, and balanced literacy (non-phonetic approach)."} {"text":"The BEE review concludes that a) outcomes were positive for one-to-one tutoring, b) outcomes were positive, but not as large, for one-to-small group tutoring, c) there were no differences in outcomes between teachers and teaching assistants as tutors, d) technology-supported adaptive instruction did not have positive outcomes, e) whole-class approaches (mostly cooperative learning) and whole-school approaches incorporating tutoring obtained outcomes for struggling readers as large as those found for one-to-one tutoring, and benefitted many more students, and f) approaches mixing classroom and school improvements, with tutoring for the most at-risk students, have the greatest potential for the largest numbers of struggling readers."} {"text":"Robert Slavin, of BEE, goes so far as to suggest that states should \"hire thousands of tutors\" to support students scoring far below grade level\u2014particularly in elementary school reading. Research, he says, shows \"only tutoring, both one-to-one and one-to-small group, in reading and mathematics, had an effect size larger than +0.10\u00a0... averages are around +0.30\", and \"well-trained teaching assistants using structured tutoring materials or software can obtain outcomes as good as those obtained by certified teachers as tutors\"."} {"text":"The What Works Clearinghouse allows you to see the effectiveness of specific programs. For example, as of 2020 they have data on 231 literacy programs. 
If you filter them by grade 1 only, all class types, all school types, all delivery methods, all program types, and all outcomes, you receive 22 programs. You can then view the program details and, if you wish, compare one with another."} {"text":"Evidence for ESSA (Center for Research and Reform in Education) offers free up-to-date information on current PK-12 programs in reading, writing, math, science, and others that meet the standards of the Every Student Succeeds Act (U.S.A.)."} {"text":"\"Systematic phonics\" is not one specific method of teaching phonics; it is a term used to describe phonics approaches that are taught \"explicitly\" and in a structured, systematic manner. They are \"systematic\" because the letters and the sounds they relate to are taught in a specific sequence, as opposed to incidentally or on a \"when needed\" basis."} {"text":"The National Reading Panel (NRP) concluded that systematic phonics instruction is more effective than unsystematic phonics or non-phonics instruction. The NRP also found that systematic phonics instruction is effective (to varying degrees) when delivered through one-to-one tutoring, small groups, and teaching classes of students; and is effective from kindergarten onward, the earlier the better. It helps significantly with word-reading skills and reading comprehension for kindergartners and 1st graders as well as for older struggling readers and reading disabled students. Benefits to spelling were positive for kindergartners and 1st graders but not for older students."} {"text":"Systematic phonics is sometimes mischaracterized as \"skill and drill\" with little attention to meaning. However, researchers point out that this impression is false. 
Teachers can use engaging games or materials to teach letter-sound connections, and phonics can also be incorporated with the reading of meaningful text."} {"text":"Phonics can be taught systematically in a variety of ways, such as: analogy phonics, analytic phonics, phonics through spelling, and synthetic phonics. However, their effectiveness varies considerably because the methods differ in such areas as the range of letter-sound coverage, the structure of the lesson plans, and the time devoted to specific instruction."} {"text":"Systematic phonics has gained increased acceptance in different parts of the world since the completion of three major studies into teaching reading: one in the US in 2000, another in Australia in 2005, and the other in the UK in 2006."} {"text":"In 2009, the UK Department of Education published a curriculum review that added support for systematic phonics. In the UK, systematic phonics is generally known as synthetic phonics."} {"text":"Beginning as early as 2014, several States in the USA have changed their curricula to include systematic phonics instruction in elementary school."} {"text":"In 2018, the State Government of Victoria, Australia, published a website containing a comprehensive Literacy Teaching Toolkit including Effective Reading Instruction, Phonics, and Sample Phonics Lessons."} {"text":"\"Analogy phonics\" is a particular type of \"analytic phonics\" in which the teacher has students analyze phonic elements according to the speech sounds (phonograms) in the word. For example, a type of phonogram (known in linguistics as a rime) is composed of the vowel and the consonant sounds that follow it (e.g. in the words \"cat, mat and sat,\" the rime is \"at\".) Teachers using the analogy method may have students memorize a bank of phonograms, such as \"-at\" or \"-am\", or use \"word families\" (e.g. 
c\"an\", r\"an\", m\"an\", or m\"ay\", pl\"ay\", s\"ay\")."} {"text":"\"Analytic phonics\" does not involve pronouncing individual sounds (phonemes) in isolation and blending the sounds, as is done in synthetic phonics. Rather, it is taught at the word level and students learn to analyze letter-sound relationships once the word is identified. For example, students analyze letter-sound correspondences such as the \"ou\" spelling in shr\"ou\"ds. Also, students might be asked to practice saying words with similar sounds such as \"b\"all, \"b\"at and \"b\"ite. Furthermore, students are taught consonant blends (separate, adjacent consonants) as units, such as \"br\"eak or \"shr\"ouds."} {"text":"Typically, the instruction starts with sounds that have only one letter and simple CVC words such as \"sat\" and \"pin\". Then it progresses to longer words, and sounds with more than one letter (e.g. h\"ea\"r and d\"ay\"), and perhaps even syllables (e.g. wa-ter). Sometimes the student practices saying (or sounding out) cards that contain entire words."} {"text":"The 2005 Rose Report from the UK concluded that systematic synthetic phonics was the most effective method for teaching reading. It also suggested that the \"best teaching\" included a brisk pace, engaging children's interest with multi-sensory activities and stimulating resources, praise for effort and achievement, and, above all, the full backing of the headteacher."} {"text":"It also has considerable support in some States in the U.S.A. and some support from expert panels in Canada."} {"text":"In the US, a pilot program using the Core Knowledge Early Literacy program, which uses this type of phonics approach, showed significantly higher results in K-3 reading than comparison schools. 
In addition, several States such as California, Ohio, New York and Arkansas are promoting the principles of synthetic phonics (see synthetic phonics in the USA)."} {"text":"A critical aspect of reading comprehension is vocabulary development. When a reader encounters an unfamiliar word in print and decodes it to derive its spoken pronunciation, the reader understands the word if it is in the reader's spoken vocabulary. Otherwise, the reader must derive the meaning of the word using another strategy, such as context. If the development of the child's vocabulary is impeded by conditions such as ear infections that prevent the child from hearing new words consistently, then the development of reading will also be impaired."} {"text":"Teaching sight words (i.e. high-frequency or common words), sometimes called the \"look-say\" or whole-word method, is \"not\" a part of the phonics method. It is usually associated with whole language and balanced literacy, where students are expected to memorize common words such as those on the Dolch word list and the Fry word list (e.g. a, be, call, do, eat, fall, gave, etc.). The supposition (in whole language and balanced literacy) is that students will learn to read more easily if they memorize the most common words they will encounter, especially words that are not easily decoded (i.e. exceptions)."} {"text":"On the other hand, using sight words as a method of teaching reading in English is seen as being at odds with the alphabetic principle and treating English as though it were a logographic language (e.g. Chinese or Japanese)."} {"text":"In addition, according to research, whole-word memorisation is \"labor-intensive\", requiring on average about 35 trials per word. Also, phonics advocates say that most words are decodable, so comparatively few words have to be memorized. 
And because a child will over time encounter many low-frequency words, \"the phonological recoding mechanism is a very powerful, indeed essential, mechanism throughout reading development\". Furthermore, researchers suggest that teachers who withhold phonics instruction to make it easier on children \"are having the opposite effect\" by making it harder for children to gain basic word-recognition skills. They suggest that learners should focus on understanding the principles of phonics so they can recognize the phonemic overlaps among words (e.g. have, had, has, having, haven't, etc.), making it easier to decode them all."} {"text":"Fluency is the ability to read orally with speed, accuracy, and vocal expression. The ability to read fluently is one of several critical factors necessary for reading comprehension. If readers are not fluent, it may be difficult for them to remember what has been read and to relate the ideas expressed in the text to their background knowledge. This accuracy and automaticity of reading serves as a bridge between decoding and comprehension."} {"text":"The NRP describes reading comprehension as a complex cognitive process in which a reader intentionally and interactively engages with the text. The science of reading says that reading comprehension is heavily dependent on word recognition (i.e., phonological awareness, decoding, etc.) and oral language comprehension (i.e., background knowledge, vocabulary, etc.). Phonological awareness and rapid naming predict reading comprehension in second grade, but oral language skills account for an additional 13.8% of the variance."} {"text":"Whole language has the reputation of being a meaning-based method of teaching reading that emphasizes literature and text comprehension. It discourages any significant use of phonics. Instead, it trains students to focus on words, sentences and paragraphs as a whole rather than letters and sounds. 
Students are taught to use context and pictures to \"guess\" words they do not recognize, or even just skip them and read on. It aims to make reading fun, yet many students struggle to figure out the specific rules of the language on their own, which causes their decoding and spelling to suffer."} {"text":"The following are some features of the whole language philosophy:"} {"text":"Balanced literacy is not well defined; however, it is intended as a method that combines elements of both phonics and whole language. According to a survey in 2010, 68% of elementary school teachers in the USA profess to use balanced literacy. However, only 52% of teachers in the USA include \"phonics\" in their definition of \"balanced literacy\"."} {"text":"The National Reading Panel concluded that phonics must be integrated with instruction in phonemic awareness, vocabulary, fluency, and comprehension. And some studies indicate that \"the addition of language activities and tutoring to phonics produced larger effects than any of these components in isolation\". They suggest that this may be a constructive way to view balanced reading instruction."} {"text":"However, balanced literacy has received criticism from researchers and others suggesting that, in many instances, it is merely \"whole language\" by another name."} {"text":"According to phonics advocate and cognitive neuroscientist Mark Seidenberg, balanced literacy allows educators to defuse the reading wars while not making specific recommendations for change. He goes on to say that, in his opinion, the high number of struggling readers in the USA is the result of the manner in which teachers are taught to teach reading. He also says that struggling readers should not be encouraged to skip a challenging word, nor rely on pictures or semantic and syntactic cues to \"guess at\" a challenging word. 
Instead, they should use evidence-based decoding methods such as systematic phonics."} {"text":"Structured literacy has many of the elements of systematic phonics and few of the elements of balanced literacy. It is defined as explicit, systematic teaching that focuses on phonological awareness, word recognition, phonics and decoding, spelling, and syntax at the sentence and paragraph levels. It is considered to be beneficial for all early literacy learners, especially those with dyslexia."} {"text":"According to the International Dyslexia Association, structured literacy contains the elements of phonology and phonemic awareness, sound-symbol association (the alphabetic principle and phonics), syllables, morphology, syntax, and semantics. The elements are taught using methods that are systematic, cumulative, explicit, multisensory, and use diagnostic assessment."} {"text":"According to some, three-cueing is not the most effective way for beginning readers to learn how to decode printed text. While a cueing system does help students to \"make better guesses\", it does not help when the words become more sophisticated; and it reduces the amount of practice time available to learn essential decoding skills. They also say that students should first decode the word, \"then they can use context to figure out the meaning of any word they don\u2019t understand\"."} {"text":"Consequently, researchers such as cognitive neuroscientist Mark Seidenberg and professor Timothy Shanahan do not support the theory. They say the three-cueing system's value in reading instruction \"is a magnificent work of the imagination\", and it developed not because teachers lack integrity, commitment, motivation, sincerity, or intelligence, but because they \"were poorly trained and advised\" about the science of reading. In England, the simple view of reading and synthetic phonics are intended to replace \"the searchlights multi-cueing model\". 
On the other hand, some researchers suggest that \"context\" can be useful, not to guess a word, but to confirm a word after it has been phonetically decoded."} {"text":"Three Ps (3Ps) \u2013 Pause, Prompt, Praise."} {"text":"The three Ps approach is used by teachers, tutors and parents to guide oral reading practice with a struggling reader. For some, it is merely a variation of the above-mentioned \"three-cueing system\"."} {"text":"However, for others it is very different. For example: when a student encounters a word they do not know or gets it wrong, the three steps are: 1) pause to see if they can fix it themselves, even letting them read on a little, 2) prompt them with strategies to find the correct pronunciation, and 3) praise them directly and genuinely. In the \"prompt\" step, the tutor does not suggest the student skip the word or guess the word based on the pictures or the first sound. Instead, they encourage the student to use their decoding training to sound out the word, and to use the context (meaning) to confirm they have found the correct word."} {"text":"Guided reading, shared reading, leveled reading, silent reading (and self-teaching)."} {"text":"\"Guided reading\" is small group reading instruction that is intended to allow for the differences in students' reading abilities. While they are reading, students are encouraged to use strategies from the three-cueing system, the searchlights model, or MSV."} {"text":"It is no longer supported by the Primary National Strategy in England, as synthetic phonics is the officially recognized method for teaching reading."} {"text":"In the United States, Guided Reading is part of the Reading Workshop model of reading instruction."} {"text":"Shared (oral) reading is an activity whereby the teacher and students read from a shared text that is determined to be at the students' reading level."} {"text":"Leveled reading involves students reading from \"leveled books\" at an appropriate reading level. 
A student who struggles with a word is encouraged to use a cueing system (e.g. three-cueing, the searchlights model or MSV) to guess the word. There are many systems that purport to gauge the students' reading levels using scales incorporating numbers, letters, colors and Lexile readability scores."} {"text":"Silent reading (and self-teaching) is a common practice in elementary schools. A 2007 study in the USA found that, on average, only 37% of class time was spent on active reading instruction or practice, and the most frequent activity was students reading silently. Based on the limited available studies on silent reading, the NRP concluded that independent silent reading did not prove an effective practice when used as the only type of reading instruction to develop fluency and other reading skills \u2013 particularly with students who have not yet developed critical alphabetic and word reading skills."} {"text":"Other studies indicate that unlike silent reading, \"oral reading increases phonological effects\"."} {"text":"According to some, the classroom method called DEAR (Drop Everything and Read) is not the best use of classroom time for students who are not yet fluent. However, according to the \"self-teaching hypothesis\", when fluent readers practice decoding words while reading silently, they learn what whole words look like (spelling), leading to improved fluency and comprehension."} {"text":"The suggestion is: \"if some students are fluent readers, they could read silently while the teacher works with the struggling readers\"."} {"text":"Languages such as Chinese and Japanese are normally written (fully or partly) in logograms (hanzi and kanji, respectively), which represent a whole word or morpheme with a single character. There are a large number of characters, and the sound that each makes must be learned directly or from other characters which contain \"hints\" in them. 
For example, in Japanese, the On-reading of the kanji \u6c11 is \"min\" and the related kanji \u7720 shares the same On-reading, \"min\": the right-hand part shows the character's pronunciation. However, this is not true for all characters. Kun readings, on the other hand, have to be learned and memorized, as the character itself gives no indication of its pronunciation."} {"text":"Ruby characters are used in textbooks to help children learn the sounds that each logogram makes. These are written in a smaller size, using an alphabetic or syllabic script. For example, hiragana is typically used in Japanese, and the pinyin romanization into Latin alphabet characters is used in Chinese."} {"text":"As an example, the word \"kanji\" is made up of two kanji characters: \u6f22 (\"kan\", written in hiragana as \u304b\u3093), and \u5b57 (\"ji\", written in hiragana as \u3058)."} {"text":"Textbooks are sometimes edited as a cohesive set across grades so that children will not encounter characters they are not yet expected to have learned."} {"text":"The Reading Wars: phonics vs. whole language."} {"text":"A debate has been going on for decades about the merits of phonics vs. whole language. It is sometimes referred to as the \"Reading Wars\"."} {"text":"Until the mid-1800s, phonics was the accepted method in the United States to teach children to read. Then, in 1841, Horace Mann, the Secretary of the Massachusetts Board of Education, advocated for a whole-word method of teaching reading to replace phonics. A century later, Rudolf Flesch advocated for a return to phonics in his book \"Why Johnny Can't Read\" (1955). The whole-word method received support from Kenneth J. Goodman, who wrote an article in 1967 entitled \"Reading: A psycholinguistic guessing game\". Although not supported by scientific studies, the theory became very influential as the whole language method. 
Since the 1970s, some whole language supporters, such as the psycholinguist Frank Smith, have been unyielding in arguing that phonics should be taught little, if at all."} {"text":"Yet other researchers say that instruction in phonics and phonemic awareness is \"critically important\" and \"essential\" to develop early reading skills. In 2000, the National Reading Panel (U.S.A.) identified five ingredients of effective reading instruction, of which phonics is one; the other four are phonemic awareness, fluency, vocabulary and comprehension. Reports from other countries, such as the Australian report on \"Teaching reading\" (2005) and the U.K. Independent review of the teaching of early reading (Rose Report 2006), have also supported the use of phonics."} {"text":"Some notable researchers such as Stanislas Dehaene and Mark Seidenberg have clearly stated their disapproval of \"whole language\"."} {"text":"Furthermore, a 2017 study in the UK that compared teaching with phonics vs. teaching whole written words concluded that phonics is more effective, saying \"our findings suggest that interventions aiming to improve the accuracy of reading aloud and\/or comprehension in the early stages of learning should focus on the systematicities present in print-to-sound relationships, rather than attempting to teach direct access to the meanings of whole written words\"."} {"text":"More recently, some educators have advocated for the theory of balanced literacy, which purports to combine phonics and whole language, yet not necessarily in a consistent or systematic manner. It may include elements such as word study and phonics mini-lessons, differentiated learning, cueing, leveled reading, shared reading, guided reading, independent reading and sight words. According to a survey in 2010, 68% of K-2 teachers in the USA practice balanced literacy; however, only 52% of teachers included \"phonics\" in their definition of \"balanced literacy\". 
In addition, 75% of teachers teach the three-cueing system (i.e., meaning\/structure\/visual or semantic\/syntactic\/graphophonic), which has its roots in whole language."} {"text":"In addition, some phonics supporters assert that \"balanced literacy\" is merely \"whole language\" by another name. And critics of whole language and sceptics of balanced literacy, such as neuroscientist Mark Seidenberg, state that struggling readers should \"not\" be encouraged to skip words they find puzzling or rely on semantic and syntactic cues to guess words."} {"text":"Over time a growing number of countries and states have put greater emphasis on phonics and other evidence-based practices (see Phonics practices by country or region)."} {"text":"According to the report by the US National Reading Panel (NRP) in 2000, the elements required for proficient reading of alphabetic languages are phonemic awareness, phonics, fluency, vocabulary, and text comprehension. In non-Latin languages, proficient reading does not necessarily require phonemic awareness, but rather an awareness of the individual parts of the language, which may include the whole word (as in Chinese characters) or syllables (as in Japanese) as well as others, depending on the writing system being employed."} {"text":"The Rose Report, from the Department for Education in England, makes it clear that, in their view, systematic phonics, specifically synthetic phonics, is the best way to ensure that children learn to read; so much so that it is now required by law. In 2005 the government of Australia published a report stating \"The evidence is clear ... 
that direct systematic instruction in phonics during the early years of schooling is an essential foundation for teaching children to read.\" Phonics has been gaining acceptance in many other countries, as can be seen in Phonics practices by country or region."} {"text":"Other important elements are: rapid automatized naming (RAN), a general understanding of the orthography of the language, and practice."} {"text":"Difficulties in reading typically involve difficulty with one or more of the following: decoding, reading rate, reading fluency, or reading comprehension."} {"text":"Brain activity in young and older children can be used to predict future reading skill. Cross-modal mapping between the orthographic and phonologic areas in the brain is critical in reading. Thus, the amount of activation in the left dorsal inferior frontal gyrus while performing reading tasks can be used to predict later reading ability and advancement. Young children with higher phonological word characteristic processing have significantly better reading skills later on than older children who focus on whole-word orthographic representation."} {"text":"Difficulty with decoding is marked by having not acquired the phoneme-grapheme mapping concept. One specific disability characterized by poor decoding is dyslexia, defined as a brain-based type of learning disability that specifically impairs a person's ability to read. These individuals typically read at levels significantly lower than expected despite having normal intelligence. It can also be inherited in some families, and recent studies have identified a number of genes that may predispose an individual to developing dyslexia. Although the symptoms vary from person to person, common characteristics among people with dyslexia are difficulty with spelling, phonological processing (the manipulation of sounds), and\/or rapid visual-verbal responding. 
Adults can have either developmental dyslexia or acquired dyslexia, which occurs after a brain injury, stroke or dementia."} {"text":"Individuals with reading rate difficulties tend to have accurate word recognition and normal comprehension abilities, but their reading speed is below grade level. Strategies such as guided reading (guided, repeated oral-reading instruction) may help improve a reader's reading rate."} {"text":"Many studies show that increasing reading speed improves comprehension. Reading speed requires a long time to reach adult levels. According to Carver (1990), children's reading speed increases throughout the school years. On average, from grade 2 to college, reading rate increases 14 standard-length words per minute each year (where one standard-length word is defined as six characters in text, including punctuation and spaces)."} {"text":"Scientific studies have demonstrated that speed reading \u2014 defined here as capturing and decoding words faster than 900 wpm \u2014 is not feasible given the limits set by the anatomy of the eye."} {"text":"Individuals with reading fluency difficulties fail to maintain a fluid, smooth pace when reading. Strategies used for overcoming reading rate difficulties are also useful in addressing reading fluency issues."} {"text":"Individuals with reading comprehension difficulties are commonly described as poor comprehenders. They have normal decoding skills as well as a fluid rate of reading, but have difficulty comprehending text when reading. The simple view of reading holds that reading comprehension requires both \"decoding skills\" and \"oral language comprehension\" ability."} {"text":"Increasing vocabulary knowledge, listening skills and teaching basic comprehension techniques may help facilitate better reading comprehension. 
It is suggested that students receive brief, explicit instruction in reading comprehension strategies in the areas of vocabulary, monitoring understanding, and connecting ideas."} {"text":"Scarborough's Reading Rope also outlines some of the essential ingredients of reading comprehension."} {"text":"The following organizations measure and report on reading achievement in the United States and internationally:"} {"text":"In the United States, the National Assessment of Educational Progress or NAEP (\"The Nation's Report Card\") is the national assessment of what students know and can do in various subjects. Four of these subjects \u2013 reading, writing, mathematics and science \u2013 are assessed most frequently and reported at the state and district level, usually for grades 4 and 8."} {"text":"In 2019, with respect to the reading skills of the nation's grade-four public school students, 34% performed at or above the NAEP \"Proficient level\" (solid academic performance) and 65% performed at or above the NAEP \"Basic level\" (partial mastery of the proficient level skills). Results were also reported by race\/ethnicity."} {"text":"NAEP reading assessment results are reported as average scores on a 0\u2013500 scale. The Basic Level is 208 and the Proficient Level is 238. The average reading score for grade-four public school students was 219. Female students had an average score that was 7 points higher than male students. Students who were eligible for the National School Lunch Program (NSLP) had an average score that was 28 points lower than that for students who were not eligible."} {"text":"Reading scores for the individual States and Districts are available on the NAEP site. Between 2017 and 2019, Mississippi was the only State that had a grade-four reading score increase, while 17 States had a score decrease."} {"text":"The Progress in International Reading Literacy Study (PIRLS) is an international study of reading (comprehension) achievement in fourth graders. 
It is designed to measure children's reading literacy achievement, to provide a baseline for future studies of trends in achievement, and to gather information about children's home and school experiences in learning to read. The 2016 PIRLS report shows the 4th grade reading achievement by country in two categories (literary and informational). The ten countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland, Finland, Poland, Northern Ireland, Norway, Chinese Taipei and England (UK). Some others are: the United States 15th, Australia 21st, Canada 23rd, and New Zealand 33rd."} {"text":"The Programme for International Student Assessment (PISA) measures 15-year-old school pupils' scholastic performance in mathematics, science, and reading. In 2018, of the 79 participating countries\/economies, on average, students in Beijing, Shanghai, Jiangsu and Zhejiang (China) and Singapore outperformed students from all other countries in reading, mathematics and science. Twenty-one countries have reading scores above the OECD average, and many of the scores are not statistically different."} {"text":"The history of reading dates back to the invention of writing during the 4th millennium BC. Although reading print text is now an important way for the general population to access information, this has not always been the case. With some exceptions, only a small percentage of the population in many countries was considered literate before the Industrial Revolution. Some of the pre-modern societies with generally high literacy rates included classical Athens and the Islamic Caliphate."} {"text":"Scholars assume that reading aloud (Latin \"clare legere\") was the more common practice in antiquity, and that reading silently (\"legere tacite\" or \"legere sibi\") was unusual. 
In his \"Confessions\", Saint Augustine remarks on Saint Ambrose's unusual habit of reading silently in the 4th century AD."} {"text":"In 18th-century Europe, the then new practice of reading alone in bed was, for a time, considered dangerous and immoral. As reading became less a communal, oral practice and more a private, silent one, and as sleeping increasingly moved from communal sleeping areas to individual bedrooms, some raised concern that reading in bed presented various dangers, such as fires caused by bedside candles. Some modern critics, however, speculate that these concerns were based on the fear that readers\u2014especially women\u2014could escape familial and communal obligations and transgress moral boundaries through the private fantasy worlds in books."} {"text":"In 19th-century Russia, reading practices were highly varied, as people from a wide range of social statuses read Russian and foreign-language texts ranging from high literature to the peasant lubok. Provincial readers such as Andrei Chikhachev give evidence of the omnivorous appetite for fiction and non-fiction alike among middling landowners."} {"text":"The history of learning to read dates back to the invention of writing during the 4th millennium BC."} {"text":"With respect to the English language, the phonics principle of teaching reading was first presented in 1570 by John Hart, who suggested that the teaching of reading should focus on the relationship between what is now referred to as graphemes (letters) and phonemes (sounds)."} {"text":"In the colonial times of the USA, reading material was not written specifically for children, so instruction material consisted primarily of the Bible and some patriotic essays. The most influential early textbook was The New England Primer, published in 1687. 
There was little consideration given to the best ways to teach reading or assess reading comprehension."} {"text":"Phonics was a popular way to learn reading in the 1800s. William Holmes McGuffey (1800\u20131873), an American educator, author, and Presbyterian minister who had a lifelong interest in teaching children, compiled the first four of the McGuffey Readers in 1836."} {"text":"The whole-word method was invented by Thomas Hopkins Gallaudet, the director of the American Asylum at Hartford. It was designed to educate deaf people by placing a word alongside a picture. In 1830, Gallaudet described his method of teaching children to recognize a total of 50 sight words written on cards. Horace Mann, the Secretary of the Board of Education of Massachusetts, USA, favored the method for everyone, and by 1837 the method was adopted by the Boston Primary School Committee."} {"text":"By 1844 the defects of the whole-word method became so apparent to Boston schoolmasters that they urged the Board to return to phonics. In 1929, Samuel Orton, a neuropathologist in Iowa, concluded that the cause of children's reading problems was the new sight method of reading. His findings were published in the February 1929 issue of the Journal of Educational Psychology in the article \"The Sight Reading Method of Teaching Reading as a Source of Reading Disability\"."} {"text":"The meaning-based curriculum came to dominate reading instruction by the second quarter of the 20th century. In the 1930s and 1940s, reading programs became very focused on comprehension and taught children to read whole words by sight. Phonics was taught as a last resort."} {"text":"Edward William Dolch developed his list of sight words in 1936 by studying the most frequently occurring words in children's books of that era. Children are encouraged to memorize the words with the idea that it will help them read more fluently. 
Many teachers continue to use this list, although some researchers consider the theory of sight word reading to be a \"myth\". Researchers and literacy organizations suggest it would be more effective if students learned the words using a phonics approach."} {"text":"In 1955, Rudolf Flesch published a book entitled \"Why Johnny Can't Read\", a passionate argument in favor of teaching children to read using phonics, adding to the reading debate among educators, researchers, and parents."} {"text":"Government-funded research on reading instruction in the United States and elsewhere began in the 1960s. In the 1970s and 1980s, researchers began publishing studies with evidence on the effectiveness of different instructional approaches. During this time, researchers at the National Institutes of Health (NIH) conducted studies that showed early reading acquisition depends on the understanding of the connection between sounds and letters (i.e. phonics). However, this appears to have had little effect on educational practices in public schools."} {"text":"In the 1970s, the whole language method was introduced. This method de-emphasizes the teaching of phonics out of context (e.g. reading books), and is intended to help readers \"guess\" the right word. It teaches that guessing individual words should involve three systems (letter clues, meaning clues from context, and the syntactical structure of the sentence). It became the primary method of reading instruction in the 1980s and 1990s. However, it is falling out of favor. The neuroscientist Mark Seidenberg refers to it as a \"theoretical zombie\" because it persists in spite of a lack of supporting evidence. It is still widely practiced in related methods such as sight words, the three-cueing system and balanced literacy."} {"text":"In the 1980s the three-cueing system (the searchlights model in England) emerged. According to a 2010 survey, 75% of teachers in the USA teach the three-cueing system. 
It teaches children to guess a word by using \"meaning cues\" (semantic, syntactic and graphophonic). While the system does help students to \"make better guesses\", it does not help when the words become more sophisticated, and it reduces the amount of practice time available to learn essential decoding skills. Consequently, present-day researchers such as cognitive neuroscientist Mark Seidenberg and professor Timothy Shanahan do not support the theory. In England, synthetic phonics is intended to replace \"the searchlights multi-cueing model\"."} {"text":"In the 1990s balanced literacy arose. It is a theory of teaching reading and writing that is not clearly defined. It may include elements such as word study and phonics mini-lessons, differentiated learning, cueing, leveled reading, shared reading, guided reading, independent reading and sight words. For some, balanced literacy strikes a balance between whole language and phonics. Others say balanced literacy in practice usually means the \"whole language\" approach to reading. According to a survey in 2010, 68% of K-2 teachers in the USA practice balanced literacy. Furthermore, only 52% of teachers included \"phonics\" in their definition of \"balanced literacy\"."} {"text":"In 1996 the California Department of Education took an increased interest in using phonics in schools. In 1997 the department called for grade one teaching in concepts about print, phonemic awareness, decoding and word recognition, and vocabulary and concept development."} {"text":"By 1998 in the U.K. 
whole language instruction and the searchlights model were still the norm; however, there was some attention to teaching phonics in the early grades, as seen in the National Literacy Strategy."} {"text":"Beginning in 2000, several reading research reports were published:"} {"text":"In Australia the 2005 report, \"Teaching Reading\", recommends teaching reading based on evidence and teaching systematic, explicit phonics within an integrated approach. The executive summary says \"systematic phonics instruction is critical if children are to be taught to read well, whether or not they experience reading difficulties.\" As of October 5, 2018, the State Government of Victoria, Australia, publishes a website containing a comprehensive Literacy Teaching Toolkit including effective reading instruction, phonics, and sample phonics lessons."} {"text":"Until 2006, the English language syllabus of Singapore advocated \"a balance between decoding and meaning-based instruction \u2026 phonics and whole language\". However, a review in 2006 advocated for a \"systematic\" approach. Subsequently, the syllabus in 2010 had no mention of whole language and advocated for a balance between \"systematic and explicit instruction\" and \"a rich language environment\". It called for increased instruction in oral language skills together with phonemic awareness and the key decoding elements of synthetic phonics, analytic phonics and analogy phonics."} {"text":"In 2007 the Department of Education (DE) in Northern Ireland was required by law to teach children foundational skills in phonological awareness and the understanding that \"words are made up of sounds and syllables and that sounds are represented by letters (phoneme\/grapheme awareness)\". 
In 2010 the DE required that teachers receive support in using evidence-based practices to teach literacy and numeracy, including a \"systematic programme of high-quality phonics\" that is explicit, structured, well-paced, interactive, engaging, and applied in a meaningful context."} {"text":"In 2008, the National Center for Family Literacy, with the \"National Institute for Literacy\", published a report entitled \"Developing Early Literacy\". It is a synthesis of the scientific research on the development of early literacy skills in children ages zero to five as determined by the \"National Early Literacy Panel\" that was convened in 2002. Amongst other things, the report concluded that code-focused interventions yield a moderate to large effect on the early literacy and conventional literacy skills of young children, which are predictors of later reading and writing, irrespective of socioeconomic status, ethnicity, or population density."} {"text":"In 2010 the Common Core State Standards Initiative was introduced in the USA. The \"English Language Arts Standards for Reading: Foundational Skills in Grades 1\u20135\" include recommendations to teach print concepts, phonological awareness, phonics and word recognition, and fluency."} {"text":"In the United Kingdom a 2010 government white paper contained plans to train all primary school teachers in phonics. The 2013 curriculum has \"statutory requirements\" that, amongst other things, students in years one and two be capable of using systematic synthetic phonics with regard to word reading, reading comprehension, fluency, and writing. This includes having skills in \"sound to graphemes\", \"decoding\", and \"blending\"."} {"text":"In 2013, the National Commission for UNESCO launched the \"Leading for Literacy\" project to develop the literacy skills of grades 1 and 2 students. The project facilitates the training of primary school teachers in the use of a \"synthetic phonics\" program. 
From 2013 to 2015, the Trinidad and Tobago Ministry of Education appointed seven reading specialists to help primary and secondary school teachers improve their literacy instruction. From February 2014 to January 2016, literacy coaches were hired in selected primary schools to assist teachers of kindergarten, grades 1 and 2 with the pedagogy and content of early literacy instruction. Primary schools have been provided with literacy resources for instruction, including phonemic awareness, word recognition, vocabulary manipulatives, phonics and comprehension."} {"text":"In 2013 the State of Mississippi passed the Literacy-Based Promotion Act. The Mississippi Department of Education provided resources for teachers in the areas of phonemic awareness, phonics, vocabulary, fluency, comprehension and reading strategies."} {"text":"The school curriculum in Ireland focuses on ensuring children are literate in both the English language and the Irish language. The 2014 teachers' Professional Development guide covers the seven areas of attitude and motivation, fluency, comprehension, word identification, vocabulary, phonological awareness, phonics, and assessment. It recommends that phonics be taught in a systematic and structured way and be preceded by training in phonological awareness."} {"text":"In 2014 the California Department of Education said children should know how to decode regularly spelled one-syllable words by mid-first grade, and be phonemically aware (especially able to segment and blend phonemes). In grades two and three children receive explicit instruction in advanced phonic analysis and reading multi-syllabic and more complex words."} {"text":"In 2015 the New York State Public School system revised its English Language Arts learning standards, calling for teaching involving \"reading or literacy experiences\" as well as phonemic awareness from prekindergarten to grade 1 and phonics and word recognition for grades 1\u20134. 
That same year, the Ohio Legislature set minimum standards requiring the use of phonics including guidelines for teaching phonemic awareness, phonics, fluency, vocabulary and comprehension."} {"text":"In 2016 the What Works Clearinghouse and the Institute of Education Sciences published an Educator's Practice Guide on Foundational Skills to Support Reading for Understanding in Kindergarten Through 3rd Grade. It contains four recommendations to support reading: 1) teach students academic language skills, including the use of inferential and narrative language, and vocabulary knowledge, 2) develop awareness of the segments of sounds in speech and how they link to letters (phonemic awareness and phonics), 3) teach students to decode words, analyze word parts, and write and recognize words (phonics and synthetic phonics), and 4) ensure that each student reads connected text every day to support reading accuracy, fluency, and comprehension."} {"text":"In 2016 the Colorado Department of Education updated their \"Elementary Teacher Literacy Standards\" with standards for development in the areas of phonology, phonics and word recognition, fluent automatic reading, vocabulary, text comprehension, handwriting, spelling, and written expression."} {"text":"The European Literacy Policy Network (ELINET) reported in 2016 that Hungarian children in grades one and two receive explicit instruction in phonemic awareness and phonics \"as the route to decode words\". 
In grades three and four they continue to apply their knowledge of phonics; however, the emphasis shifts to the more meaning-focused technical aspects of reading and writing (i.e., vocabulary, types of texts, reading strategies, spelling, punctuation and grammar)."} {"text":"In 2017 the Ohio Department of Education adopted \"Reading Standards for Foundational Skills K\u201312\" laying out a systematic approach to teaching \"phonological awareness\" in kindergarten and grade one, and \"grade-level phonics and word analysis skills in decoding words\" (including fluency and comprehension) in grades 1\u20135."} {"text":"In 2018 the Arkansas Department of Education published a report about their new initiative known as R.I.S.E. (Reading Initiative for Student Excellence), which was the result of The Right to Read Act, passed in 2017. The first goal of this initiative is to provide educators with in-depth knowledge and skills of \"the science of reading\" and evidence-based instructional strategies. This includes a focus on research-based instruction in phonological awareness, phonics, vocabulary, fluency, and comprehension; specifically systematic and explicit instruction."} {"text":"As of 2018, the Ministry of Education in New Zealand has online information to help teachers to support their students in years 1\u20133 in relation to sounds, letters, and words. It states that phonics instruction \"is not an end in itself\" and it is \"not\" necessary to teach students \"every combination of letters and sounds\"."} {"text":"In 2018, ScienceDirect published the results of a study of early literacy and numeracy outcomes in developing countries entitled \"Identifying the essential ingredients to literacy and numeracy improvement: Teacher professional development and coaching, student textbooks, and structured teachers\u2019 guides\". 
It concluded that \"Including teachers\u2019 guides was by far the most cost-effective intervention\"."} {"text":"There has been a strong debate in France on the teaching of phonics (\"m\u00e9thode syllabique\") versus whole language (\"m\u00e9thode globale\"). After the 1990s, supporters of the latter started defending a so-called \"mixed method\" (also known as balanced literacy) in which approaches from both methods are used. Influential researchers in psycho-pedagogy, cognitive sciences and neurosciences, such as Stanislas Dehaene, have put their heavy scientific weight on the side of phonics. In 2018 the ministry created a science educational council that openly supported phonics. In April 2018, the minister issued a set of four guiding documents for early teaching of reading and mathematics and a booklet detailing phonics recommendations. Some have described his stance as \"traditionalist\", but he openly declared that the so-called mixed approach is no serious choice."} {"text":"In 2019 the Minnesota Department of Education introduced standards requiring school districts to \"develop a local literacy plan to ensure that all students have achieved early reading proficiency by no later than the end of third grade\" in accordance with a statute of the Minnesota Legislature requiring elementary teachers to be able to implement comprehensive, scientifically based reading and oral language instruction in the five reading areas of phonemic awareness, phonics, fluency, vocabulary, and comprehension."} {"text":"Also in 2019, 26% of grade 4 students in Louisiana were reading at the \"proficiency level\" according to the Nation's Report Card, as compared to the national average of 34%. 
In March 2019 the Louisiana Department of Education revised their curriculum for K-12 English Language Arts including requirements for instruction in the alphabetic principle, phonological awareness, phonics and word recognition, fluency and comprehension."} {"text":"And again in 2019, 30% of grade 4 students in Texas were reading at the \"proficiency level\" according to the Nation's Report Card. In June of that year the Texas Legislature passed a bill requiring all kindergarten through grade-three teachers and principals to \"'begin' a teacher literacy achievement academy before the 2022\u20132023 school year\". The required content of the academies' training includes the areas of \"The Science of Teaching Reading, Oral Language, Phonological Awareness, Decoding (i.e. Phonics), Fluency and Comprehension.\" The goal is to \"increase teacher knowledge and implementation of evidence-based practices to positively impact student literacy achievement\"."} {"text":"For more information on reading educational developments, see Phonics practices by country or region."} {"text":"The cohort model in psycholinguistics and neurolinguistics is a model of lexical retrieval first proposed by William Marslen-Wilson in the late 1970s. It attempts to describe how visual or auditory input (i.e., hearing or reading a word) is mapped onto a word in a hearer's lexicon. According to the model, when a person hears speech segments in real time, each speech segment \"activates\" every word in the lexicon that begins with that segment, and as more segments are added, more words are ruled out, until only one word is left that still matches the input."} {"text":"The cohort model relies on a number of concepts in the theory of lexical retrieval. The lexicon is the store of words in a person's mind; it contains a person's vocabulary and is similar to a mental dictionary. 
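The activation-and-elimination process described above can be sketched as a simple prefix filter over a toy lexicon; the words and phoneme symbols below are invented for illustration, not drawn from any real transcription system:

```python
def cohort(lexicon, segments):
    """Return the words still 'activated' after hearing the given
    initial segments, per the cohort model's winnowing idea."""
    prefix = tuple(segments)
    return [word for word, phonemes in lexicon.items()
            if tuple(phonemes[:len(prefix)]) == prefix]

# Hypothetical mini-lexicon mapping words to phoneme sequences.
lexicon = {
    "cat":     ["k", "ae", "t"],
    "captain": ["k", "ae", "p", "t", "ih", "n"],
    "candle":  ["k", "ae", "n", "d", "ax", "l"],
    "dog":     ["d", "ao", "g"],
}

print(cohort(lexicon, ["k"]))             # all /k/-initial words
print(cohort(lexicon, ["k", "ae", "p"]))  # narrowed to "captain"
```

Each additional segment shrinks the candidate set, mirroring how the cohort narrows until a single word matches the input.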
A lexical entry is all the information about a word and the lexical storage is the way the items are stored for peak retrieval. Lexical access is the way that an individual accesses the information in the mental lexicon. A word's cohort is composed of all the lexical items that share an initial sequence of phonemes, and is the set of words activated by the initial phonemes of the word."} {"text":"The cohort model is based on the concept that auditory or visual input to the brain stimulates neurons as it enters the brain, rather than at the end of a word. This fact was demonstrated in the 1980s through experiments with speech shadowing, in which subjects listened to recordings and were instructed to repeat aloud exactly what they heard, as quickly as possible; Marslen-Wilson found that the subjects often started to repeat a word before it had actually finished playing, which suggested that the word in the hearer's lexicon was activated before the entire word had been heard. Findings such as these led Marslen-Wilson to propose the cohort model in 1987."} {"text":"Since its original proposal, the model has been adjusted to allow for the role that context plays in helping the hearer rule out competitors, and the fact that activation is \"tolerant\" to minor acoustic mismatches that arise because of coarticulation (a property by which language sounds are slightly changed by the sounds preceding and following them)."} {"text":"Later experiments refined the model. 
For example, some studies showed that \"shadowers\" (subjects who listen to auditory stimuli and repeat them as quickly as possible) could not shadow as quickly when words were jumbled so that they did not mean anything; those results suggested that sentence structure and speech context also contribute to the process of activation and selection."} {"text":"Research in bilinguals has found that word recognition is influenced by the number of neighbors in both languages."} {"text":"Linguistic prediction is a phenomenon in psycholinguistics occurring whenever information about a word or other linguistic unit is activated before that unit is actually encountered. Evidence from eyetracking, event-related potentials, and other experimental methods indicates that in addition to integrating each subsequent word into the context formed by previously encountered words, language users may, under certain conditions, try to predict upcoming words."} {"text":"In particular, prediction seems to occur regularly when the context of a sentence greatly limits the possible words that have not yet been revealed. For instance, a person listening to a sentence like, \"In the summer it is hot, and in the winter it is...\" would be highly likely to predict the sentence completion \"cold\" in advance of actually hearing it. A form of prediction is also thought to occur in some types of lexical priming, a phenomenon whereby a word becomes easier to process if it is preceded by a related word. Linguistic prediction is an active area of research in psycholinguistics and cognitive neuroscience."} {"text":"In the eyetracking visual world paradigm, experimental subjects listen to a sentence while staring at an array of pictures on a computer monitor. Their eye movements are recorded, allowing the experimenter to understand how language influences eye movements toward pictures related to the content of the sentence. 
Experiments of this type have shown that while listening to the verb in a sentence, comprehenders anticipatorily move their eyes to the picture of the verb's likely direct object (e.g. \"cake\" rather than \"ball\" while hearing, \"The boy will eat...\"). Subsequent investigations using the same experimental setup showed that the verb's subject can also determine which object comprehenders anticipate (e.g., comprehenders look at the merry-go-round rather than the motorcycle while hearing, \"The little girl will ride...\")."} {"text":"In short, comprehenders use the information in the sentence context to predict the meanings of upcoming words. In these experiments, comprehenders used the verb and its subject to activate information about the verb's direct object before hearing that word. However, another experiment has shown that in a language with more flexible word order (German), comprehenders can also use context to predict the sentence's subject."} {"text":"Computational models of eye movements during reading, which model data related to word predictability, include Reichle and colleagues' E-Z Reader model and Engbert and colleagues' SWIFT model."} {"text":"The M100 discussed here is the magnetic equivalent of the visual N1 potential\u2014an event-related potential linked to visual processing and attention. The M100 was also linked to prediction in language comprehension in a series of event-related magnetoencephalography (MEG) experiments. In these experiments, participants read words whose visual forms were either predictable or unpredictable based on prior linguistic context or based on a recently seen picture. 
The predictability of the word's visual form (but not the predictability of its meaning) affected the amplitude of the M100."} {"text":"There is ongoing controversy about whether this M100 effect is related to the early left anterior negativity (eLAN), an event-related potential response to words that is theorized to reflect the brain's assignment of local phrase structure."} {"text":"The P2 component is generally thought to reflect higher-order perceptual processing and its modulation by attention. However, it has also been linked to prediction of visual word forms. The P2 response to words in highly constraining contexts is often larger than the P2 response to words in less constraining contexts. When experimental participants read words that are presented to the left or right of their visual fixation (stimulating the opposite hemisphere of the brain first), the larger P2 for words in highly constraining contexts is observed only for right visual field presentation (targeting the left hemisphere). This is consistent with the PARLO hypothesis, discussed below, that linguistic prediction is mainly a function of the left hemisphere."} {"text":"The N400 is part of the normal ERP response to potentially meaningful stimuli, whose amplitude is inversely correlated with the predictability of a stimulus in a particular context. In sentence processing, the predictability of a word is established by two related factors: 'cloze probability' and 'sentential constraint'. Cloze probability reflects the expectancy of a target word given the context of the sentence, which is determined by the percentage of individuals who supply the word when completing a sentence whose final word is missing. 
Kutas and colleagues found that the N400 to sentence-final words with a cloze probability of 90% was smaller (i.e., more positive) than the N400 for words with a cloze probability of 70%, which in turn was smaller than the N400 for words with a cloze probability of 30%."} {"text":"Closely related, sentential constraint reflects the degree to which the context of the sentence constrains the number of acceptable continuations. Whereas cloze probability is the percent of individuals who choose a particular word, constraint is the number of different words chosen by a representative sample of individuals. Although unpredicted words elicit a larger N400, unpredicted words that are semantically related to the predicted word elicit a smaller N400 than unpredicted words that are semantically unrelated. When the sentence context is highly constraining, semantically related words receive further facilitation in that the N400 to semantically related words is smaller in high constraint sentences than in low constraint sentences."} {"text":"Evidence for the prediction of specific words comes from a study by DeLong et al. DeLong and colleagues took advantage of the different indefinite articles 'A' and 'AN', used before English words that begin with a consonant or vowel respectively. They found that when the most probable sentence completion began with a consonant, the N400 was larger for 'AN' than for 'A' and vice versa, suggesting that prediction occurs at both a semantic and lexical level during language processing. The study has never been replicated, however: in the most recent multi-lab attempt, with 335 participants, no evidence for word-form prediction was found (Nieuwland et al., 2018).
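The two norming measures discussed above, cloze probability and sentential constraint, can be sketched as simple counts over a hypothetical sample of sentence completions:

```python
from collections import Counter

def cloze_and_constraint(completions):
    """Given the completions supplied by a norming sample for one
    sentence frame, return each word's cloze probability (share of
    respondents choosing it) and the frame's constraint (number of
    distinct words chosen)."""
    counts = Counter(word.lower() for word in completions)
    total = len(completions)
    cloze = {word: count / total for word, count in counts.items()}
    constraint = len(counts)
    return cloze, constraint

# Hypothetical responses to the frame "In the winter it is ..."
responses = ["cold"] * 9 + ["snowy"]
cloze, constraint = cloze_and_constraint(responses)
print(cloze["cold"])   # 0.9
print(constraint)      # 2
```

A frame like this one, where nearly everyone supplies the same word, is both high-cloze for that word and highly constraining (few distinct continuations).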
The P300 has been closely tied to context updating, which can be initiated by unexpected stimuli."} {"text":"The P600 is an ERP response to syntactic violations, as well as to complex but error-free language. A P600-like response is also observed for thematically implausible sentences: for example, \"For breakfast, the eggs would only EAT toast and jam\". Both P600 responses are generally attributed to the process of revising or continuing the analysis of the sentence. The syntactic P600 has been compared to the P300 in that both responses are sensitive to similar manipulations, most importantly the probability of the stimulus. The similarity between the two responses may suggest that the P300 significantly contributes to the syntactic P600 response."} {"text":"A late positivity is often observed subsequent to the N400. A recent meta-analysis of the ERP literature on language processing has identified two different Post-N400 Positivities. In comparing the Post-N400 Positivity (PNP) for congruent and incongruent sentence-final words, a parietal PNP is observed for incongruent words. This parietal PNP is similar to the typical P600 response, suggesting continued or revised analysis. Within the congruent condition, when comparing high- and low-cloze-probability sentence-final words, a PNP response (if it is observed) is generally distributed across the front of the scalp. A recent study has shown that the frontal PNP may reflect processing of an unexpected lexical item instead of an unexpected concept, suggesting that the frontal PNP reflects disconfirmed lexical predictions."} {"text":"Functional magnetic resonance imaging (fMRI) is a neuroimaging technology that uses nuclear magnetic resonance to measure blood oxygenation levels in the brain and spinal cord. Because neural activity affects blood flow, the pattern of the hemodynamic response is thought to correspond closely to the pattern of neural activity. 
The fine spatial resolution afforded by fMRI allows cognitive neuroscientists to see in detail which areas of the brain are activated in relation to an experimental task. However, the hemodynamic response is much slower than the neural activity measured by EEG and MEG. This poor sensitivity to timing information makes fMRI a less useful technique than EEG or eyetracking for studying linguistic prediction."} {"text":"One exception is an fMRI test of the differences in neural activation between strategic and automatic semantic priming. When the time between the prime and the target word is short (around 150 milliseconds), priming is theorized to rely on automatic neural processes. However, at longer time intervals (approaching 1 second), it is thought that experimental subjects strategically predict related upcoming words and suppress unrelated words, leading to a processing penalty in the event that an unrelated word actually occurs. An fMRI test of this hypothesis showed that at longer intervals, the processing penalty for an incorrect prediction is related to heightened activity in the anterior cingulate gyrus and Broca's area."} {"text":"The surprisal theory is a theory of sentence processing based on information theory. In the surprisal theory, the cost of processing a word is determined by its self-information, or how predictable the word is, given its context. A highly probable word carries a small amount of self-information and would therefore be processed easily, as measured by reduced reaction time, a smaller N400 response, or reduced fixation times in an eyetracking reading study. 
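The central quantity of surprisal theory is a word's self-information given its context, -log2 P(word | context). A minimal sketch, with illustrative probabilities rather than estimates from any real language model:

```python
import math

def surprisal(probability):
    """Self-information of a word given its context, in bits:
    -log2 P(word | context)."""
    return -math.log2(probability)

# A highly predictable word carries little information and is
# processed cheaply...
print(surprisal(0.9))    # ~0.15 bits
# ...while an unlikely continuation carries much more and is
# correspondingly costly to process.
print(surprisal(0.01))   # ~6.64 bits
```

The monotonic relationship is the point: as the conditional probability of a word falls, its surprisal (and, per the theory, its processing cost) rises.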
Empirical tests of this theory have shown a high degree of match between processing cost measures and the self-information values assigned to words."} {"text":"An acceptability judgment task, also called an acceptability rating task, is a common method in empirical linguistics to gather information about the internal grammar of speakers of a language."} {"text":"The goal of acceptability rating studies is to gather insights into the mental grammars of participants. As the grammaticality of a linguistic construction is an abstract construct that cannot be accessed directly, this type of task is usually called acceptability judgment rather than grammaticality judgment. This can be compared to intelligence. Intelligence is an abstract construct that cannot be measured directly. What can be measured are the outcomes of specific test items. The result of one item, however, is not very telling. Instead, IQ tests consist of several items that together build a score. Similarly, in acceptability rating studies, grammatical constructions are measured through several items, i.e., sentences to be rated. This is also done to ensure that participants do not rate the meaning of a particular sentence."} {"text":"The difference between acceptability and grammaticality is linked to the distinction between performance and competence in generative grammar."} {"text":"Several different types of acceptability rating tasks are used in linguistics. The most common tasks use Likert scales. Forced-choice and yes-no rating tasks are also common. Besides these classical test types, there are other methods, such as thermometer judgments or magnitude estimation, which have, however, been argued to be more difficult for participants to process.
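The multi-item logic described above, in which several rated sentences together build a score per construction, can be sketched as follows; the construction labels and the 1-7 Likert ratings are hypothetical:

```python
from statistics import mean

def construction_scores(ratings):
    """Average Likert ratings over the several items (sentences) that
    instantiate each construction, mirroring how multi-item rating
    tasks build a per-construction acceptability score."""
    return {construction: mean(item_ratings)
            for construction, item_ratings in ratings.items()}

# Hypothetical 1-7 Likert ratings, three items per construction.
ratings = {
    "subject_relative": [6, 7, 6],
    "object_relative":  [4, 5, 4],
    "island_violation": [2, 1, 2],
}
print(construction_scores(ratings))
```

Averaging over several items smooths out item-specific quirks (e.g., a participant disliking the meaning of one particular sentence), so the score reflects the construction rather than any single item.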
Verbal intelligence is one of the most \"g\"-loaded abilities."} {"text":"In order to understand linguistic intelligence, it is important to understand the mechanisms that control speech and language. These mechanisms can be broken down into four major groups: speech generation (talking), speech comprehension (hearing), writing generation (writing), and writing comprehension (reading)."} {"text":"In a practical sense, linguistic intelligence is the extent to which an individual can use language, both written and verbal, to achieve goals."} {"text":"Linguistic intelligence is a part of Howard Gardner's multiple intelligence theory that deals with individuals' ability to understand both spoken and written language, as well as their ability to speak and write themselves."} {"text":"In most cases, speech production is controlled by the left hemisphere. In a series of studies, Wilder Penfield, among others, probed the brains of both right-handed (generally left-hemisphere dominant) and left-handed (generally right-hemisphere dominant) patients. They discovered that, regardless of handedness, the left hemisphere was almost always the speech controlling side. However, it has been discovered that in cases of neural stress (hemorrhage, stroke, etc.) the right hemisphere has the ability to take control of speech functions."} {"text":"Verbal Comprehension is a fairly complex process, and it is not fully understood. From various studies and experiments, it has been found that the superior temporal sulcus activates when hearing human speech, and that speech processing seems to occur within Wernicke's area."} {"text":"Generation of written language is thought to be closely related to speech generation. Neurophysiologically speaking, it is believed that Broca's area is crucial for early linguistic processing, while the inferior frontal gyrus is critical in semantic processing. According to Penfield, writing differs in two major ways from verbal language. 
First, instead of relating the thought to sounds, the brain must relate the thought to symbols or letters, and second, the motor cortex activates a different set of muscles to write than when speaking."} {"text":"Written comprehension, similar to spoken comprehension, seems to occur primarily in Wernicke's area. However, instead of using the auditory system to gain language input, written comprehension relies on the visual system."} {"text":"While the capabilities of the physical structures used are large factors in determining linguistic intelligence, several genes have been linked to individual linguistic ability. The NRXN1 gene has been linked to general language ability, and mutations of this gene have been shown to cause major deficits in overall linguistic intelligence. The CNTNAP2 gene is believed to affect language development and performance, and mutations in this gene are thought to be involved in autism spectrum disorders. PCDH11 has been linked to language capacity, and it is believed to be one of the factors that accounts for the variation in linguistic intelligence."} {"text":"The Wechsler Adult Intelligence Scale III (WAIS-III) divides Verbal IQ (VIQ) into two categories:"} {"text":"In general, it is difficult to test for linguistic intelligence as a whole; therefore, various types of verbal fluency tests are often used."} {"text":"In one series of tests, it was shown that when children were given verbal fluency tests, a larger portion of their cortex activated than in adults, with activation in both the left and right hemispheres. This is most likely due to the high plasticity of newly developing brains."} {"text":"A recent study showed that verbal fluency test results can differ depending on the mental focus of the subject.
In this study, mental focus on physical speech production mechanisms caused speech production times to suffer, whereas mental focus on auditory feedback improved these times."} {"text":"Since linguistic intelligence is based on several complex skills, there are many disorders and injuries that can affect an individual's linguistic intelligence."} {"text":"There are several disorders that primarily affect only language skills. Three major pure language disorders are developmental verbal dyspraxia, specific language impairment, and stuttering. Developmental verbal dyspraxia (DVD) is a disorder where children have errors in consonant and vowel production. Specific language impairment (SLI) is a disorder where the patient lacks language acquisition skills, despite a seemingly normal intelligence level in other areas. Stuttering is a fairly common disorder where speech flow is interrupted by involuntary repetitions of syllables."} {"text":"Fictive motion is the metaphorical motion of an object or abstraction through space. Fictive motion has become a subject of study in psycholinguistics and cognitive linguistics. In fictive motion sentences, a motion verb applies to a subject that is not literally capable of movement in the physical world, as in the sentence, \"The fence runs along the perimeter of the house.\" Fictive motion is so called because it is attributed to material states, objects, or abstract concepts that cannot (sensibly) be said to move themselves through physical space. Fictive motion sentences are pervasive in English and other languages."} {"text":"Cognitive linguist Leonard Talmy discussed many of the spatial and linguistic properties of fictive motion in a book chapter called \"Fictive motion in language and 'ception'\" (Talmy 1996). He provided further insights in his seminal book, \"Toward a Cognitive Semantics Vol. 1\", in 2000.
Talmy began analyzing the semantics of fictive motion in the late 1970s and early 1980s but used the term \"virtual motion\" at that time (e.g. Talmy 1983)."} {"text":"Fictive motion has since been investigated by cognitive scientists interested in whether and how it evokes dynamic imagery. Methods of investigation have included reading tasks, eye-tracking tasks and drawing tasks."} {"text":"It appears that not only does thinking about actual motion influence people's judgments about time, but thinking about fictive motion has the same effect, suggesting that thinking about one abstract domain may influence people's understanding of another. This raises the question of whether the influence of fictive motion on people's understanding of time is rooted in a concrete, embodied conception of motion, such that both time and fictive motion are ultimately understood in terms of simulations of concrete experience, or whether the effects of fictive motion are a product of the way that language influences thought."} {"text":"Transderivational search (often abbreviated to TDS) is a term from psychology and cybernetics referring to a search for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory."} {"text":"Unlike usual searches, which look for literal (i.e. exact, logical, or regular expression) matches, a transderivational search is a search for a possible meaning or possible match as part of communication, without which an incoming communication could not be made sense of at all.
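The contrast between literal matching and fuzzy matching can be sketched in a few lines of Python using the standard library's difflib; the phrase store, the garbled query, and the similarity cutoff below are illustrative assumptions, not anything drawn from the psychological literature.

```python
import difflib

# Toy "memory" of known phrases to search across (an illustrative assumption).
memory = ["let it go", "let it be", "leave it alone", "look it up"]

def fuzzy_lookup(utterance, store, cutoff=0.6):
    """Return stored phrases ranked by surface similarity to the input,
    rather than requiring a literal (exact) match."""
    return difflib.get_close_matches(utterance, store, n=3, cutoff=cutoff)

# A garbled input has no exact match, but a fuzzy search still finds candidates.
print(fuzzy_lookup("lat it bee", memory))  # → ['let it be', 'let it go']
```

Raising the cutoff narrows the search toward near-exact matches; lowering it admits looser candidates, trading precision for recall.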
Transderivational search is thus an integral part of processing language, and of attaching meaning to communication."} {"text":"A psychological example of TDS is in Ericksonian hypnotherapy, where vague suggestions are used that the patient must process intensely in order to find their own meanings, thus ensuring that the practitioner does not intrude their own beliefs into the subject's inner world."} {"text":"Because TDS is a compelling, automatic and unconscious state of internal focus and processing (i.e. a type of everyday trance state), and often a state of internal lack of certainty, or openness to finding an answer (since something is being checked out at that moment), it can be utilized or interrupted in order to create or deepen trance."} {"text":"TDS is a fundamental part of human language and cognitive processing. Arguably, every word or utterance a person hears, for example, and everything they see or feel and take note of, results in a very brief trance while TDS is carried out to establish a contextual meaning for it."} {"text":"Although TDS is often associated with spoken language, it can be induced in any perceptual system. Thus Milton Erickson's \"hypnotic handshake\" is a technique that leaves the other person performing TDS in search of a meaning for a deliberately ambiguous use of touch."} {"text":"Crosslinguistic influence (CLI) refers to the different ways in which one language can affect another within an individual speaker.
It typically involves two languages that can affect one another in a bilingual speaker. An example of CLI is the influence of Korean on a Korean native speaker who is learning Japanese or French. Less typically, it could also refer to an interaction between different dialects in the mind of a monolingual speaker. CLI can be observed across subsystems of languages including pragmatics, semantics, syntax, morphology, phonology, phonetics, and orthography. Discussed further in this article are particular subcategories of CLI: transfer, attrition, the complementarity principle, and additional theories."} {"text":"The question of how languages influence one another within a bilingual individual can be addressed both with respect to mature bilinguals and with respect to bilingual language acquisition. With respect to bilingual language acquisition in children, there are several hypotheses that examine the internal representation of bilinguals' languages. Volterra and Taeschner proposed the \"Single System Hypothesis,\" which states that children start out with one single system that develops into two systems. According to this hypothesis, bilingual children go through three stages of acquisition."} {"text":"Since the development of the \"Crosslinguistic Hypothesis\", much research has contributed to the understanding of CLI in areas of structural overlap, directionality, dominance, interfaces, the role of input, and the role of processing and production."} {"text":"Jacquelyn Schachter (1992) argues that transfer is not a process at all, but that it is improperly named. She described transfer as \"an unnecessary carryover from the heyday of behaviorism.\" In her view, transfer is more of a constraint on the L2 learners' judgments about the constructions of the L2.
Schachter stated, \"It is both a facilitating and a limiting condition on the hypothesis testing process, but it is not in and of itself a process.\""} {"text":"Language transfer can be positive or negative. Transfer between similar languages often yields correct production in the new language because the systems of both languages are similar. This correct production would be considered positive transfer. An example involves a Spanish speaker (L1) who is acquiring Catalan (L2). Because the languages are so similar, the speaker could rely on their knowledge of Spanish when learning certain Catalan grammatical features and pronunciation. However, the two languages are distinct enough that the speaker's knowledge of Spanish could potentially interfere with learning Catalan properly."} {"text":"Negative transfer (interference) occurs when there are few or no similarities between the L1 and L2. In such cases, errors and avoidance are more likely to occur in the L2. The types of errors that result from this type of transfer are underproduction, overproduction, miscomprehension, and production errors, such as substitution, calques, under\/overdifferentiation and hypercorrection."} {"text":"Overproduction refers to an L2 learner producing certain structures within the L2 with a higher frequency than native speakers of that language.
Schachter and Rutherford (1979) found that Chinese and Japanese speakers who wrote English sentences overproduced certain types of cleft constructions:"} {"text":"and sentences that contained \"There are\"\/\"There is\", which suggests an influence of the topic-marking function in their L1 appearing in their L2 English sentences."} {"text":"French learners have been shown to over-rely on presentational structures when introducing new referents into discourse, in their L2 Italian and English."} {"text":"This phenomenon has been observed even in the case of a target language where the presentational structure does not involve a relative pronoun, such as Mandarin Chinese."} {"text":"Substitution is when the L1 speaker takes a structure or word from their native language and replaces it within the L2. Odlin (1989) gives the following sentence from a Swedish learner of English."} {"text":"Here the Swedish word \"bort\" has replaced its English equivalent \"away\"."} {"text":"A calque is a direct \"loan translation\" where words are translated from the L1 literally into the L2."} {"text":"Overdifferentiation occurs when distinctions in the L1 are carried over to the L2."} {"text":"Underdifferentiation occurs when speakers are unable to make distinctions in the L2."} {"text":"Hypercorrection is a process in which the L1 speaker identifies forms in the L2 as important to acquire but does not properly understand the restrictions on, or exceptions to, the formal rules of the L2, which results in errors such as the example below."} {"text":"Other researchers believe that CLI is more than production influences, claiming that this linguistic exchange can impact other factors of a learner's identity. Jarvis and Pavlenko (2008) identified affected areas such as experiences, knowledge, cognition, development, attention and language use, to name a few, as major centers of change due to CLI.
These ideas suggest that crosslinguistic influence of syntactic, morphological, or phonological changes may just be the surface of one language's influence on the other, and CLI is instead a different developmental use of one's brain."} {"text":"CLI has been heavily studied by scholars, but there is still much more research needed because of the multitude of components that make up the phenomenon. Firstly, the typology of particular language pairings needs to be researched to differentiate CLI from the general effects of bilingualism and bilingual acquisition."} {"text":"Also, research is needed in specific areas of overlap between particular language pairings and the domains that influence and discourage CLI. For example, most of the research studies involve European language combinations, and there is a significant lack of information regarding language combinations involving non-European languages, indigenous languages, and other minority languages."} {"text":"More generally, an area of research to be further developed is the effects of CLI in the multilingual acquisition of three or more languages. There is limited research on this occurrence."} {"text":"Sentence processing takes place whenever a reader or listener processes a language utterance, either in isolation or in the context of a conversation or a text. Many studies of the human language comprehension process have focused on reading of single utterances (sentences) without context.
Extensive research has shown that language comprehension is affected by context preceding a given utterance as well as many other factors."} {"text":"Sentence comprehension has to deal with ambiguity in spoken and written utterances, for example lexical, structural, and semantic ambiguities. Ambiguity is ubiquitous, but people usually resolve it so effortlessly that they do not even notice it. For example, the sentence \"Time flies like an arrow\" has (at least) the interpretations \"Time moves as quickly as an arrow\", \"A special kind of fly, called time fly, likes arrows\" and \"Measure the speed of flies like you would measure the speed of an arrow\". Usually, readers will be aware of only the first interpretation. Some readers, though, spontaneously think about the arrow of time but inhibit that interpretation because it deviates from the original phrase."} {"text":"Instances of ambiguity can be classified as local or global ambiguities. A sentence is globally ambiguous if it has two distinct interpretations. Examples are sentences like \"Someone shot the servant of the actress who was on the balcony\" (was it the servant or the actress who was on the balcony?) or \"The cop chased the criminal with a fast car\" (did the cop or the criminal have a fast car?). Comprehenders may have a preferential interpretation for either of these cases, but syntactically and semantically, neither of the possible interpretations can be ruled out."} {"text":"Local ambiguities persist only for a short amount of time as an utterance is heard or read and are resolved during the course of the utterance so that the complete utterance has only one interpretation. Examples include sentences like \"The critic wrote the book was enlightening\", which is ambiguous when \"The critic wrote the book\" has been encountered, but \"was enlightening\" remains to be processed.
Then, the sentence could end, stating that the critic is the author of the book, or it could go on to clarify that the critic wrote something about a book. The ambiguity ends at \"was enlightening\", which determines that the second alternative is correct."} {"text":"When readers process a local ambiguity, they settle on one of the possible interpretations immediately without waiting to hear or read more words that might help decide which interpretation is correct (this behaviour is called \"incremental processing\"). If readers are surprised by the turn the sentence really takes, processing slows, which is visible, for example, in reading times. Locally ambiguous sentences have, therefore, been used as test cases to investigate the influence of a number of different factors on human sentence processing. If a factor helps readers to avoid difficulty, it is clear that the factor plays a role in sentence processing."} {"text":"Experimental research has spawned a large number of hypotheses about the architecture and mechanisms of sentence comprehension. Issues like modularity versus interactive processing and serial versus parallel computation of analyses have been theoretical divides in the field."} {"text":"Serial accounts assume that humans construct only one of the possible interpretations at first and try another only if the first one turns out to be wrong. Parallel accounts assume the construction of multiple interpretations at the same time. To explain why comprehenders are usually only aware of one possible analysis of what they hear, models can assume that all analyses are ranked, and the highest-ranking one is entertained."} {"text":"There are a number of influential models of human sentence processing that draw on different combinations of architectural choices."} {"text":"The garden path model is a serial modular parsing model. It proposes that a single parse is constructed by a syntactic module.
Contextual and semantic factors influence processing at a later stage and can induce re-analysis of the syntactic parse. Re-analysis is costly and leads to an observable slowdown in reading. When the parser encounters an ambiguity, it is guided by two principles: late closure and minimal attachment. The model has been supported with research on the early left anterior negativity, an event-related potential often elicited as a response to phrase structure violations."} {"text":"Late closure causes new words or phrases to be attached to the current clause. For example, \"John said he would leave yesterday\" would be parsed as \"John said (he would leave yesterday)\", and not as \"John said (he would leave) yesterday\" (i.e., he spoke yesterday)."} {"text":"Minimal attachment is a strategy of parsimony: The parser builds the simplest syntactic structure possible (that is, the one with the fewest phrasal nodes)."} {"text":"Constraint-based theories of language comprehension emphasize how people make use of the vast amount of probabilistic information available in the linguistic signal. Through statistical learning, the frequencies and distribution of events in linguistic environments can be picked up on, which informs language comprehension. As such, language users are said to arrive at a particular interpretation over another during the comprehension of an ambiguous sentence by rapidly integrating these probabilistic constraints."} {"text":"The good enough approach to language comprehension, developed by Fernanda Ferreira and others, assumes that listeners do not always engage in full detailed"} {"text":"processing of linguistic input. Rather, the system has a tendency to develop shallow and superficial representations"} {"text":"when confronted with some difficulty. The theory takes an approach that combines elements of both the garden path model and the constraint-based model. The theory focuses on two main issues.
The first is that representations formed from complex or difficult material are often shallow and incomplete. The second is that limited information sources are often consulted in cases where the comprehension system encounters difficulty. The theory can be put to the test using various experiments in psycholinguistics that involve garden-path misinterpretation, among other phenomena."} {"text":"Eye tracking has been used to study online language processing. This method has been influential in informing knowledge of reading. Additionally, Tanenhaus et al. (1995) established the visual world paradigm, which takes advantage of eye movements to study online spoken language processing. This area of research capitalizes on the linking hypothesis that eye movements are closely linked to the current focus of attention."} {"text":"The rise of non-invasive techniques provides myriad opportunities for examining the brain bases of language comprehension. Common examples include positron emission tomography (PET), functional magnetic resonance imaging (fMRI), event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS). These techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per voxel, and ERP has millisecond accuracy), and each type of methodology presents a set of advantages and disadvantages for studying a particular problem in language comprehension."} {"text":"Word recognition, according to the Literacy Information and Communication System (LINCS), is \"the ability of a reader to recognize written words correctly and virtually effortlessly\". It is sometimes referred to as \"isolated word recognition\" because it involves a reader's ability to recognize words individually from a list without needing similar words for contextual help.
LINCS continues to say that \"rapid and effortless word recognition is the main component of fluent reading\" and explains that these skills can be improved by \"practic[ing] with flashcards, lists, and word grids\"."} {"text":"An article in \"ScienceDaily\" suggests that \"early word recognition is key to lifelong reading skills\". There are different ways to develop these skills. For example, creating flash cards for words that appear at a high frequency is considered a tool for overcoming dyslexia. It has been argued that prosody, the patterns of rhythm and sound used in poetry, can improve word recognition."} {"text":"Word recognition is a manner of reading based upon the immediate perception of what word a familiar grouping of letters represents. This process exists in opposition to phonetics and word analysis, as a different method of recognizing and verbalizing visual language (i.e. reading). Word recognition functions primarily on automaticity. On the other hand, phonetics and word analysis rely on cognitively applying learned grammatical rules for the blending of letters, sounds, graphemes, and morphemes."} {"text":"Word recognition is measured as a matter of speed, such that a word with a high level of recognition is read faster than a novel one. This manner of testing suggests that comprehension of the meaning of the words being read is not required, but rather the ability to recognize them in a way that allows proper pronunciation. Therefore, context is unimportant, and word recognition is often assessed with words presented in isolation in formats such as flash cards. Nevertheless, ease in word recognition, as in fluency, enables proficiency that fosters comprehension of the text being read."} {"text":"The intrinsic value of word recognition may be obvious due to the prevalence of literacy in modern society.
However, its role may be less conspicuous in the areas of literacy learning, second-language learning, and developmental delays in reading. As word recognition is better understood, more reliable and efficient forms of teaching may be discovered for both children and adult learners of first-language literacy. Such information may also benefit second-language learners with acquisition of novel words and letter characters. Furthermore, a better understanding of the processes involved in word recognition may enable more specific treatments for individuals with reading disabilities."} {"text":"Bouma shape, named after the Dutch vision researcher Herman Bouma, refers to the overall outline, or shape, of a word. Herman Bouma discussed the role of \"global word shape\" in his word recognition experiment conducted in 1973. Theories of bouma shape became popular in word recognition, suggesting that people recognize words from the shape the letters make in a group relative to each other. This contrasts with the idea that letters are read individually. Instead, via prior exposure, people become familiar with outlines, and thereby recognize them the next time they are presented with the same word, or bouma."} {"text":"The slower pace with which people read words written entirely in upper-case, or with alternating upper- and lower-case letters, supports the bouma theory. The theory holds that a novel bouma shape created by changing the lower-case letters to upper-case hinders a person's recall ability. James Cattell also supported this theory through his study, which gave evidence for an effect he called word superiority. This referred to the improved ability of people to deduce letters if the letters were presented within a word, rather than in a mix of random letters.
Furthermore, multiple studies have demonstrated that readers are less likely to notice misspelled words with a similar bouma shape than misspelled words with a different bouma shape."} {"text":"Though these effects have been consistently replicated, their interpretation has been contested. Some have suggested that reading speed for upper-case words is due to the amount of practice a person has with them. People who practice become faster at reading upper-case words, countering the importance of the bouma. Additionally, the word superiority effect might result from familiarity with phonetic combinations of letters, rather than the outlines of words, according to psychologists James McClelland and James Johnson."} {"text":"Parallel letter recognition is the most widely accepted model of word recognition among psychologists today. In this model, all letters within a group are perceived simultaneously for word recognition. In contrast, the serial recognition model proposes that letters are recognized individually, one by one, before being integrated for word recognition. It predicts that single letters are identified faster and more accurately than many letters together, as in a word. However, this model was rejected because it cannot explain the word superiority effect, which states that readers can identify letters more quickly and accurately in the context of a word than in isolation."} {"text":"The accuracy with which readers recognize words depends on the area of the retina that is stimulated. Reading in English selectively trains specific regions of the left hemiretina for processing this type of visual information, making this part of the visual field optimal for word recognition. As words drift from this optimal area, word recognition accuracy declines.
Because of this training, effective neural organization develops in the corresponding left cerebral hemisphere."} {"text":"Eyes make brief, unnoticeable movements called saccades approximately three to four times per second. Saccades are separated by fixations, which are moments when the eyes are not moving. During saccades, visual sensitivity is diminished, which is called saccadic suppression. This ensures that the majority of the intake of visual information occurs during fixations. Lexical processing does, however, continue during saccades. The timing and accuracy of word recognition rely on where in the word the eye is currently fixating. Recognition is fastest and most accurate when fixating in the middle of the word. This is due to a decrease in visual acuity that results as letters are situated farther from the fixated location and become harder to see."} {"text":"The word frequency effect suggests that words that appear the most in printed language are easier to recognize than words that appear less frequently. Recognition of these words is faster and more accurate than that of other words. The word frequency effect is one of the most robust and most commonly reported effects in contemporary literature on word recognition. It has played a role in the development of many theories, such as the bouma shape. Furthermore, the neighborhood frequency effect states that word recognition is slower and less accurate when the target has an orthographic neighbor that is higher in frequency than itself. Orthographic neighbors are words of the same length that differ in only one letter."} {"text":"Serif fonts, i.e., fonts with small appendages at the end of strokes, hinder lexical access. Word recognition is quicker with sans-serif fonts by an average of 8 ms. These fonts have significantly more inter-letter spacing, and studies have shown that responses to words with increased inter-letter spacing were faster, regardless of word frequency and length.
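The orthographic-neighbor definition used by the neighborhood frequency effect above (same length, differing in exactly one letter position) is simple enough to sketch in Python; the toy lexicon here is an illustrative assumption.

```python
def orthographic_neighbors(target, lexicon):
    """Words of the same length as `target` that differ from it in
    exactly one letter position (one-letter-different neighbors)."""
    return [
        word for word in lexicon
        if len(word) == len(target)
        and sum(a != b for a, b in zip(word, target)) == 1
    ]

# Toy lexicon (illustrative assumption).
lexicon = ["cot", "coat", "bat", "can", "dog"]
print(orthographic_neighbors("cat", lexicon))  # → ['cot', 'bat', 'can']
```

A word's recognition is predicted to slow when one of these neighbors is more frequent than the word itself.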
These faster responses demonstrate an inverse relationship between fixation duration and small increases in inter-letter spacing, most likely due to a reduction in lateral inhibition in the neural network. When letters are farther apart, it is more likely that individuals will focus their fixations at the beginning of words, whereas default letter spacing on word processing software encourages fixation at the center of words."} {"text":"The frequency effect has been widely incorporated into the learning process. While the word analysis approach is extremely beneficial, many words defy regular grammatical structures and are more easily incorporated into the lexical memory by automatic word recognition. To facilitate this, many educational experts highlight the importance of repetition in word exposure. This utilizes the frequency effect by increasing the reader's familiarity with the target word, and thereby improving both future speed and accuracy in reading. This repetition can be in the form of flash cards, word-tracing, reading aloud, picturing the word, and other forms of practice that improve the association of the visual text with word recall."} {"text":"Improvements in technology have greatly contributed to advances in the understanding of and research on word recognition. New word recognition capabilities have made computer-based learning programs more effective and reliable. Improved technology has enabled eye-tracking, which monitors individuals' saccadic eye movements while they read. This has furthered understanding of how certain patterns of eye movement increase word recognition and processing. Furthermore, changes can be simultaneously made to text just outside the reader's area of focus without the reader being made aware.
This has provided more information on where the eye focuses when an individual is reading and where the boundaries of attention lie."} {"text":"With this additional information, researchers have proposed new models of word recognition that can be programmed into computers. As a result, computers can now mimic how a human would perceive and react to language and novel words. This technology has advanced to the point where models of literacy learning can be digitally demonstrated. For example, a computer can now mimic a child's learning progress and induce general language rules when exposed to a list of words with only a limited number of explanations. Nevertheless, as no universal model has yet been agreed upon, the generalizability of word recognition models and their simulations may be limited."} {"text":"Despite this lack of consensus regarding parameters in simulation designs, any progress in the area of word recognition is helpful to future research regarding which learning styles may be most successful in classrooms. Correlations also exist between reading ability, spoken language development, and learning disabilities. Therefore, advances in any one of these areas may assist understanding of inter-related subjects. Ultimately, the development of word recognition may facilitate the transition from \"learning to read\" to \"reading to learn\"."} {"text":"James while John had had had had had had had had had had had a better effect on the teacher"} {"text":"\"James while John had had had had had had had had had had had a better effect on the teacher\" is an English sentence used to demonstrate lexical ambiguity and the necessity of punctuation,"} {"text":"which serves as a substitute for the intonation, stress, and pauses found in speech."} {"text":"In human information processing research, the sentence has been used to show how readers depend on punctuation to give sentences meaning, especially in the context of scanning across lines of text.
The sentence is sometimes presented as a puzzle, where the solver must add the punctuation."} {"text":"The sentence refers to two students, James and John, who are required by an English test to describe a man who had suffered from a cold in the past. John writes \"The man had a cold\", which the teacher marks incorrect, while James writes the correct \"The man had had a cold\". Since James's answer was right, it had had a better effect on the teacher."} {"text":"The sentence is easier to understand with added punctuation and emphasis:"} {"text":"In each of the five \"had had\" word pairs in the above sentence, the first of the pair is in the past perfect form. The italicized instances denote emphasis of intonation, focusing on the differences in the students' answers, then finally identifying the correct one."} {"text":"Alternatively, the sentence can also be read as John's answer being better than James's, simply by arranging the same punctuation differently throughout the sentence:"} {"text":"The sentence can be given as a grammatical puzzle or an item on a test, for which one must find the proper punctuation to give it meaning. Hans Reichenbach used a similar sentence (\"John where Jack had...\") in his 1947 book \"Elements of Symbolic Logic\" as an exercise for the reader, to illustrate the different levels of language, namely object language and metalanguage. The intention was for the reader to add the needed punctuation for the sentence to make grammatical sense."} {"text":"In research showing how people make sense of information in their environment, this sentence was used to demonstrate how seemingly arbitrary decisions can drastically change the meaning, analogous to how changes in the punctuation and quotes in the sentence show that the teacher alternately prefers James's work and John's work (e.g., compare: 'James, while John had had \"had\", had...' vs. 
'James, while John had had \"had had\", ...')."} {"text":"The sentence is also used to show the semantic vagueness of the word \"had\", as well as to demonstrate the difference between using a word and mentioning a word."} {"text":"It has also been used as an example of the complexities of language, its interpretation, and its effects on a person's perceptions."} {"text":"For the syntactic structure to be clear to a reader, this sentence requires, at a minimum, that the two phrases be separated by using a semicolon, period, en-dash or em-dash. Still, Jasper Fforde's novel \"The Well of Lost Plots\" employs a variation of the phrase to illustrate the confusion that may arise even from well-punctuated writing:"} {"text":"This effect is more important to humans than initially thought. Linguists have pointed out that the English language, at least, contains many false starts and extraneous sounds. The phonemic restoration effect is the brain's way of resolving those imperfections in our speech. Without this effect, language processing would depend on much more accurate speech signals, and human speech would require much more precision. In experiments, white noise is used because it takes the place of these imperfections in speech. One of the most important factors in language is continuity, and in turn intelligibility."} {"text":"The phonemic restoration effect was first documented in a 1970 paper by Richard M. Warren entitled \"Perceptual Restoration of Missing Speech Sounds\". The purpose of the experiment was to explain why individual phonemes masked by extraneous background sounds were still comprehensible."} {"text":"In his initial experiments, Warren presented the sentence shown and first replaced the first 's' phoneme in legislatures with extraneous noise, in the form of a cough. In a small group of 20 subjects, 19 did not notice a missing phoneme and one person misidentified the missing phoneme. 
This indicated that in the absence of a phoneme, the brain filled in the missing phoneme through top-down processing. The phenomenon was somewhat known at the time, but no one had pinpointed why it occurred or given it a label. He repeated the experiment with the sentence:"} {"text":"He replaced the 'wh' sound in wheel, and the same results were found: all people tested wrote down wheel. Warren continued to research the subject over the next several decades."} {"text":"Since Warren, much research has been done to test the various aspects of the effect. These aspects include how many phonemes can be removed, what noise replaces the phoneme, and how different contexts alter the effect."} {"text":"Neurally, the signs of interrupted or stopped speech can be suppressed in the thalamus and auditory cortex, possibly as a consequence of top-down processing by the auditory system. Key aspects of the speech signal itself are considered to be resolved somewhere in the interface between auditory and language-specific areas (an example is Wernicke's area), in order for the listener to determine what is being said. Normally, the latter is thought to be instantiated at the end stages of the language processing system, but for restorative processes, much remains unknown about whether the same stages are responsible for the ability to actually fill in the missing phoneme."} {"text":"People with mild and moderate hearing loss were tested for the effectiveness of phonemic restoration. Those with mild hearing loss performed at the same level as normal listeners. Those with moderate hearing loss had almost no perception and failed to identify the missing phonemes. This research also depends on the number of words the observer is comfortable understanding, because of the nature of top-down processing."} {"text":"For people with cochlear implants, acoustic simulations of the implant indicated the importance of spectral resolution. 
When the brain is using top-down processing, it uses as much information as it can to decide whether the filler signal in the gap belongs to the speech, and with lower resolution, there is less information with which to make a correct guess. A study with actual cochlear implant users indicated that some implant users can benefit from phonemic restoration, but again they seem to need more speech information (a longer duty cycle in this case) to achieve this."} {"text":"Age effects have been studied in children and older adults, to observe whether children can benefit from phonemic restoration and, if so, to what capacity, and whether older adults maintain the restoration capacity in the face of age-related neurophysiological changes."} {"text":"Children are able to produce results comparable to adults by about the age of 5, though they still do not perform as well as adults. At such an early age, most information is processed bottom-up, since children have little stored knowledge to draw on. Even so, they are able to use previous knowledge of words to fill in missing phonemes despite having much less developed brains than adults."} {"text":"Older adults (older than 65 years) with no or minimal hearing loss show benefit from phonemic restoration. In some conditions the restoration effect can be stronger in older adults than in younger adults, even when the overall speech perception scores are lower in older adults. This observation is likely due to strong linguistic and vocabulary skills that are maintained in advanced age."} {"text":"In children, there was no effect of gender on phonemic restoration."} {"text":"In adults, instead of completely replacing the phonemes, researchers masked them with tones that were informative (helped the listeners pick the correct phoneme), uninformative (neither helped nor hurt the listener in selecting the correct phoneme), or misinformative (hurt the listener in picking the correct phoneme). 
The results showed that women were much more affected by informative and misinformative cues than men. This evidence suggests that women are influenced by top-down semantic information more than men."} {"text":"The effect reverses in a reverberation room, which resembles real-life listening conditions more closely than the typical quiet rooms used for experimentation. This allows echoes of the spoken phonemes to act as the replacement noise for the missing phonemes. White noise that replaces a phoneme adds its own echo, causing listeners to perform worse."} {"text":"Another study by Warren was done to determine the effect of the duration of the replacement phoneme on comprehension. Because the brain processes information optimally at a certain rate, the effect started to break down and become ineffective when the gap became approximately the length of the word. At this point the effect no longer works because the observer is now cognisant of the gap."} {"text":"Much like the McGurk effect, when listeners were also able to see the words being spoken, they were much more likely to correctly identify the missing phonemes. As with every sense, the brain will use every piece of information it deems important to make a judgement about what it is perceiving. Using the visual cues of mouth movements, the brain will use both visual and auditory information in top-down processing to decide what phoneme is supposed to be heard. Vision is the primary sense for humans and, for the most part, assists speech perception the most."} {"text":"The effect works properly only when the intensity of the noise replacing the phonemes is the same as or louder than the surrounding words. This is made apparent when listeners hear a sentence with gaps replaced by white noise repeated over and over, with the white noise volume increasing with each iteration. 
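The gap-filling stimulus construction described above can be sketched in a few lines; the stand-in signal, gap position, and gain values below are illustrative assumptions, not parameters from the original studies:

```python
import numpy as np

def make_stimulus(speech, gap_start, gap_len, noise_gain, rng):
    """Replace a segment of a 'speech' signal with white noise.

    noise_gain scales the noise relative to the RMS level of the
    surrounding signal (an illustrative parameter, not one taken
    from the phonemic-restoration literature).
    """
    out = speech.copy()
    rms = np.sqrt(np.mean(speech ** 2))
    noise = rng.standard_normal(gap_len) * rms * noise_gain
    out[gap_start:gap_start + gap_len] = noise
    return out

rng = np.random.default_rng(0)
# A pure tone stands in for a recorded sentence.
speech = np.sin(2 * np.pi * 220 * np.arange(8000) / 8000)
# Present the "sentence" repeatedly, with louder noise each iteration.
stimuli = [make_stimulus(speech, 3000, 400, gain, rng)
           for gain in (0.5, 1.0, 2.0)]
```

Each successive stimulus carries the same surrounding signal with a progressively louder noise burst in the gap, mirroring the repeated presentations described above.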
The sentence becomes clearer and clearer to the listener as the white noise grows louder."} {"text":"In another variation, a sentence with the 's' segment removed and replaced by silence and the same sentence with a comparable noise segment were presented dichotically. Simply put, one ear heard the full sentence without phoneme excision while the other ear heard the sentence with an 's' sound removed. This version of the phonemic restoration effect was particularly strong because the brain was doing much less guesswork with the sentence, since the information was given to the observer. Observers reported hearing exactly the same sentence in both ears, even though one ear was missing a phoneme."} {"text":"The restoration effect has been studied mostly in English and Dutch, where it appears similar in the two languages. While no research has directly compared the restoration effect across other languages, it is assumed that the effect is universal to all languages."} {"text":"That that is is that that is not is not is that it it is"} {"text":"That that is is that that is not is not is that it it is is an English word sequence demonstrating syntactic ambiguity. It is used as an example illustrating the importance of proper punctuation."} {"text":"The sequence can be understood as any of four grammatically correct sequences, each with at least four discrete sentences, by adding punctuation:"} {"text":"The first, second, and fourth relate a simple philosophical proverb in the style of Parmenides that all that is, is, and that anything that does not exist does not. 
The phrase was noted in \"Brewer's Dictionary of Phrase and Fable\"."} {"text":"This phrase appeared in the 1968 American movie \"Charly\", written by the main character Charly to demonstrate punctuation to his teacher, in a scene demonstrating that the surgical operation to make the character smarter had succeeded."} {"text":"In relation to psychology, pair by association is the action of associating a stimulus with an arbitrary idea or object, eliciting a response, usually emotional. This is done by repeatedly pairing the stimulus with the arbitrary object."} {"text":"For example, repeatedly pairing a product with images of beautiful women in bathing suits elicits a sexual response in most men. Advertising agencies repeatedly pair products with attractive women in television commercials with the intention of eliciting an emotional or sexually aroused response in the consumer. This makes the consumer more likely to buy the product than when presented with a similar product without such an association."} {"text":"Additionally, there is ongoing research into the effects of ecstasy\/polystimulant use on paired-associate learning tasks. In a study by Gallagher et al., it was found that those who used ecstasy\/polydrugs had in general more false positive responses than non-users, clicking yes (in agreement) when asked if a word pair had been previously presented even when it had not. It was proposed that creating the association between word pairs requires executive resources, which are known to be impaired in ecstasy users, and that this impairment prevented the binding of word pairs. 
However, as stated by the author, it is not possible to attribute these task deficits fully to ecstasy use, but it bears noting that differences do occur."} {"text":"Behaviorists will often use paired association tests to determine the strength of verbal behavior, in particular, B. F. Skinner's concept of the verbal response class called intraverbals."} {"text":"The Max Planck Institute for Psycholinguistics (German: \"Max-Planck-Institut f\u00fcr Psycholinguistik\"; Dutch: \"Max Planck Instituut voor Psycholingu\u00efstiek\") is a research institute situated on the campus of Radboud University Nijmegen in Nijmegen, Gelderland, the Netherlands. Founded in 1980 by Pim Levelt, it is the only institution in the world entirely dedicated to psycholinguistics, and is also one of only three among a total of 90 within the Max Planck Society to be located outside Germany. The Nijmegen-based institute currently occupies 5th position in the Ranking Web of World Research Centers among all Max Planck institutes (7th by size, 4th by visibility). It currently employs about 235 people."} {"text":"The institute specializes in language comprehension, language production, language acquisition, language and genetics, and the relation between language and cognition. Its mission is to undertake basic research into the psychological, social and biological foundations of language. The goal is to understand how human minds and brains process language, how language interacts with other aspects of mind, and how languages of quite different types are learned. The MPI for Psycholinguistics is a globally recognized center of linguistics and, with its international archive of endangered languages, makes a significant contribution to the preservation of the common heritage of mankind. 
The archive has been sponsored by the Volkswagen Foundation since 2000 and makes about 50 projects available on the internet."} {"text":"The MPI for Psycholinguistics has six primary organizational units:"} {"text":"The Language and Cognition Department, headed by Stephen C. Levinson, investigates the relationship between language, culture and general cognition, making use of the \"natural laboratory\" of language variation. In this way, the department brings the perspective of language diversity to bear on a range of central problems in the language sciences. It maintains over a dozen field sites around the world, where languages are described (often for the first time), field experiments conducted and extended corpora of natural language usage collected. In addition, the department is characterized by a diversity of methods, ranging from linguistic analysis and ethnography to developmental perspectives, from psycholinguistic experimentation to conversation analysis, from corpus statistics to brain imaging, and from phylogenetics to linguistic data mining."} {"text":"Established in October 2010, the Language and Genetics Department is headed by Simon E. Fisher. The department takes advantage of the latest innovations in molecular methods to discover how the human genome helps to build a language-ready brain. It aims to uncover the DNA variations which ultimately affect different facets of human communicative abilities, not only in children with language-related disorders but also in the general population. Crucially, the department's work attempts to bridge the gaps between genes, brains, speech and language, by integrating molecular findings with data from other levels of analysis, including cell biology, experimental psychology and neuroimaging. 
In addition, it hopes to trace the evolutionary history and worldwide diversity of key genes, which may shed new light on language origins."} {"text":"The Language Comprehension Department, headed by Anne Cutler, undertakes empirical investigation and computational modeling of the understanding of spoken language. Until 2009, the work within the department was largely divided between two research projects: decoding continuous speech and phonological learning for speech perception. From 2009 onwards, most of the work of the department has gone into the project called Mechanisms and Representations in Comprehending Speech. This project focuses on core theoretical issues in speech comprehension, such as how episodic memories - such as hearing someone speak in an unfamiliar dialect - influence the speech perception system, or how prior knowledge about one's language (phonotactic probabilities, lexical knowledge, frequent versus infrequent word combinations) is used during perception."} {"text":"Until its reorganization in September 2012, the Language Acquisition Department investigated processes of language acquisition and use from a broad perspective. The department combined attention to both first and second languages, researching production as well as comprehension of speakers of different ages and cultures, and the developmental relationship between language and cognition. The focus was on morpho-syntax, semantics and discourse structure. Headed by Wolfgang Klein, Language Acquisition previously launched three institute projects, namely, Information Structure in Language Acquisition, Categories in Language and Cognition and Multimodal Interaction. The Language Acquisition Department reopened in 2016, headed by Caroline Rowland."} {"text":"The Neurobiology of Language Department, headed by Peter Hagoort, focuses on the study of language production, language comprehension, and language acquisition from a cognitive neuroscience perspective. 
This includes using neuroimaging, behavioral and virtual reality techniques to investigate the language system and its neural underpinnings. Research facilities at the Max Planck Institute include a high-density electroencephalography (EEG) lab, a virtual reality laboratory and several behavioral laboratories. With part of the department stationed at the Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, it also has access to a whole-head 275-channel MEG system, MRI scanners at 1.5, 3 and 7 Tesla, a TMS lab, and several additional EEG laboratories."} {"text":"The Psychology of Language Department, headed by Antje S. Meyer, identifies characteristics of the cognitive system that determine behavior in a broad range of linguistic tasks, as well as the relationships between language production, comprehension, and learning via speaking, listening and cognition. The department also investigates variability in adult language production and comprehension. It combines experimental and correlational work and includes diverse samples of participants. With such methods, it has close links to the Language and Genetics and Neurobiology of Language departments."} {"text":"This MPI research group, headed by Daniel Haun, investigates the social and cognitive foundations of human communication in infancy, focusing specifically on infants' developing social cognition and social motivation in relation to their emerging prelinguistic communication within social and cultural contexts. Their work is motivated by the idea that there is a psychological basis of human communication that develops ontogenetically prior to language and can first be expressed in gestures."} {"text":"Started in 2009, the research group investigates language diversity and change as part of an integrated cultural evolutionary system. 
Headed by Michael Dunn, the group takes a modern evolutionary perspective, using computational tools from genetics and biology, and integrating probabilistic, quantified approaches to phylogenetics with rigorous tests of different models of the interaction between elements of language, contact and geography, and cultural variation."} {"text":"The research group, headed by Robert D. Van Valin, Jr., tries to determine the role of information structure in explaining cross-linguistic differences in grammatical systems, based on the idea that the interaction of pragmatics and grammar happens on several levels and differs from language to language. Another major task of the group is to investigate and re-evaluate the status of the information structure primitives (topic, focus, contrast, etc.) as cross-linguistically valid categories. To achieve this, the members of the group combine extensive corpus analysis of the data in their respective languages with production experiments; all findings are further cross-checked through standard information structure tests (question-answer pairs, aboutness tests, association with focus-sensitive items)."} {"text":"Change from below is linguistic change that occurs below the level of consciousness. It is language change that arises from social, cognitive, or physiological pressures within the system. This is in opposition to change from above, wherein language change is a result of elements imported from other systems."} {"text":"Change from below first enters the language below the level of consciousness; that is, speakers are generally unaware of the linguistic change. These linguistic changes enter language primarily through the vernacular and spread throughout the community without speakers' conscious awareness. Since change from below is initially non-salient, the changing features are not marked characteristics and are difficult for speakers or linguists to perceive. 
As the changes progress, they ultimately become stable changes, which may then be stigmatized."} {"text":"New linguistic changes that enter the language from below are most commonly used by the interior socioeconomic classes, as described by William Labov's curvilinear principle. Change from below is seen in Labov's Philadelphia study, where a series of new vowel changes was most often used by the interior classes. Age and gender similarly affect the way changes occur, with younger or female individuals more likely to exhibit the change than older or male individuals in the community. However, gender, age, and social class act independently in transmission."} {"text":"Change from below challenges societal norms; women (especially upper working class women, and those who are socially entrenched and involved in their community) lead this linguistic change. However, forms that have overt prestige are more prized by these groups, so when changes from below rise to the level of awareness, they are frequently stigmatized and rejected by the very people using them."} {"text":"Change from below typically begins in informal speech. Often, those utilizing the changing forms are young speakers using the language as a form of resistance to authority. The changes made by individuals such as these, who are upwardly mobile and intentionally nonconformist, then diffuse into the speech of broader groups, as described by William Labov\u2019s Constructive Nonconformity Principle."} {"text":"The first phase of change from below is the acquisition of language by children. 
Typically, children learn the patterns of female caretakers."} {"text":"The second phase of change from below is the advancement of informal changes by young individuals."} {"text":"The third phase of change from below sees the individual\u2019s speech shift towards more standard forms, while the change becomes socioeconomically diffused and stigmatized."} {"text":"Experimental pragmatics is an academic area that uses experiments (concerning children's and adults' comprehension of sentences, utterances, or story-lines) to test theories about the way people understand utterances\u2014and, by extension, one another\u2014in context (an area known as pragmatics)."} {"text":"Given that an utterance generally does not fully determine the message it is intended to convey, the main question this field asks is: how does a listener fully comprehend a speaker's intention? For example, if one were to read about a singer who says \"That was a brilliant performance\" to her colleague after they both sang beautifully, the utterance would seem sincere and truthful. If the same utterance were made after both sang terribly, the utterance would be perceived as ironic. The very same utterance can have two entirely different interpretations as a function of the speaker's intended meaning."} {"text":"Experimental pragmatics adopts existing cognitive and psycholinguistic techniques in order to carry out its investigations. While developmental progressions can reveal how interlocutors of different ages interpret utterances with clear pragmatic potential, reading times can reveal how sentences are processed (as relatively easy or difficult). While EEGs can provide sharp on-line measures of how a word is integrated into a sentence, fMRI can reveal which areas of the brain are recruited when processing one reading over another."} {"text":"Philosophers have laid the groundwork for much of the work in pragmatics. 
Modern investigations can be traced back to Paul Grice and his philosophical approach to utterance understanding. Grice\u2019s initial contribution was to propose a novel analysis in which he distinguished between \"sentence meaning\" (what the words and grammar mean) and \"speaker\u2019s meaning\" (what the speaker actually intended to communicate by uttering a sentence). According to Grice, understanding an utterance requires access to, or making hypotheses about, the speaker\u2019s intention and thus involves going beyond the meanings of the words in the sentence."} {"text":"Two major European grants have supported the field. The European Science Foundation's (ESF's) Research Network Program (EURO-XPRAG) sponsored European collaborations, workshops and conferences between 2009 and 2014. The German Research Foundation (DFG) established the priority program XPRAG.de in 2014."} {"text":"In linguistics, predicate transfer is the reassignment of a property to an object which would not otherwise inherently have that property. Thus, the expression \"I am parked out back\" transfers the meaning of \"parked\" from \"car\" to the property \"I possess a car\". This avoids incorrect polysemous interpretations of \"parked\": that \"people can be parked\", or that \"I am pretending to be a car\", or that \"I am something which can be parked\". This is supported by the morphology: \"We are parked out back\" does not mean that there are multiple cars; rather, that there are multiple passengers (having the property of being in possession of a car)."} {"text":"The question of whether the use of language influences spatial cognition is closely related to theories of linguistic relativity\u2014also known as the Sapir-Whorf hypothesis\u2014which states that the structure of a language affects the cognitive processes of the speaker. Debates about this topic mainly focus on whether, and to what extent, language influences spatial cognition. 
Research also concerns differences between perspectives on spatial relations across cultures, what these imply, and the exploration of the cognitive mechanisms potentially involved."} {"text":"Research shows that frames of reference for spatial cognition differ across cultures and that language could play a crucial role in structuring these different frames."} {"text":"Three types of perspectives on space can be distinguished:"} {"text":"Languages like English or Dutch do not make exclusive use of relative descriptions, but these appear to be the most frequent compared with intrinsic or absolute descriptions. An absolute frame of reference is usually restricted to large-scale geographical descriptions in these languages. Speakers of the Australian languages Arrernte, Guugu Yimithirr, and Kuuk Thaayore only use absolute descriptions."} {"text":"The relative and intrinsic perspectives seem to be connected, as there is no known language which applies only one of these frames of reference exclusively."} {"text":"(1.) It has been argued that people universally use an egocentric representation to solve non-linguistic spatial tasks, which would align with the relative frame of reference."} {"text":"(2.) Other researchers have proposed that people apply multiple frames of reference during their daily lives and that languages reflect these cognitive structures."} {"text":"In the light of the current body of literature, the second view seems to be the more plausible one."} {"text":"The dominant frames of reference have been found to be reflected in the common types of gesticulation in the respective language. Speakers of absolute languages would typically represent an object moving north with a hand movement towards the north, whereas speakers of relative languages typically depict a movement of an object to the right with a hand movement to the right, independent of the direction they are facing during speech. 
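The contrast between relative and absolute frames of reference can be made concrete with a toy conversion; the compass encoding and function name below are illustrative assumptions, not a model from the literature:

```python
# Toy illustration: the same spatial relation expressed in a relative
# (egocentric) frame and in an absolute (compass-based) frame.
COMPASS = ["north", "east", "south", "west"]

def relative_to_absolute(facing: str, side: str) -> str:
    """Translate 'left'/'right' (relative frame) into a compass
    direction (absolute frame), given the speaker's facing direction."""
    offset = {"right": 1, "left": -1}[side]
    return COMPASS[(COMPASS.index(facing) + offset) % 4]

# A speaker facing north has east on the right; facing south, west.
assert relative_to_absolute("north", "right") == "east"
assert relative_to_absolute("south", "right") == "west"
```

The point of the sketch is that a relative-language speaker's "right" picks out a different compass direction for every facing direction, while an absolute-language speaker's "north" is invariant, which is why their gestures differ in the way described above.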
Speakers of intrinsic languages would, for example, typically represent human movement from the perspective of the mover with a sagittal hand gesture away from the speaker."} {"text":"A study by Boroditsky and Gaby compared speakers of an absolute language\u2014Pormpuraawans\u2014with English speakers. The task consisted of spatially arranging cards that showed a temporal progression. The result was that the speakers of the relative language (Americans) exclusively chose to represent time spatially as progressing from left (earlier time) to right (later time), whereas the Pormpuraawans took the direction they faced into account and most often depicted time as progressing from east (earlier time) to west (later time)."} {"text":"Confounding variables could potentially explain a significant proportion of the measured difference in performance between the linguistic frames of reference."} {"text":"These can be categorized into three types of confounding factors:"} {"text":"Gentner, \u00d6zy\u00fcrek, G\u00fcrcanli, and Goldin-Meadow found that deaf children who lacked a conventional language did not use gestures to convey spatial relations (see home sign). Building on that, they showed that these deaf children performed significantly worse on a task of spatial cognition compared to hearing children. They concluded that the acquisition of (spatial) language is an important factor in shaping spatial cognition."} {"text":"Several mechanisms accounting for or contributing to the possible effect of language on cognition have been suggested:"} {"text":"Code-mixing is the mixing of two or more languages or language varieties in speech."} {"text":"Some scholars use the terms \"code-mixing\" and \"code-switching\" interchangeably, especially in studies of syntax, morphology, and other formal aspects of language. 
Others assume more specific definitions of code-mixing, but these specific definitions may differ across subfields of linguistics, education theory, communication, etc."} {"text":"Code-mixing is similar to the use or creation of pidgins; but while a pidgin is created across groups that do not share a common language, code-mixing may occur within a multilingual setting where speakers share more than one language."} {"text":"Some linguists use the terms code-mixing and code-switching more or less interchangeably. Especially in formal studies of syntax, morphology, etc., both terms are used to refer to utterances that draw on elements of two or more grammatical systems. These studies are often interested in the alignment of elements from distinct systems, or in constraints that limit switching."} {"text":"Some work defines code-mixing as the placing or mixing of various linguistic units (affixes, words, phrases, clauses) from two different grammatical systems within the same sentence and speech context, while code-switching is the placing or mixing of units (words, phrases, sentences) from two codes within the same speech context. The structural difference between code-switching and code-mixing is the position of the altered elements\u2014for code-switching, the modification of the codes occurs intersententially, while for code-mixing, it occurs intrasententially."} {"text":"In other work the term code-switching emphasizes a multilingual speaker's movement from one grammatical system to another, while the term code-mixing suggests a hybrid form, drawing from distinct grammars. 
In other words, \"code-mixing\" emphasizes the formal aspects of language structures or linguistic competence, while \"code-switching\" emphasizes linguistic performance."} {"text":"While many linguists have worked to describe the difference between code-switching and borrowing of words or phrases, the term code-mixing may be used to encompass both types of language behavior."} {"text":"While linguists who are primarily interested in the structure or form of code-mixing may have relatively little interest in separating code-mixing from code-switching, some sociolinguists have gone to great lengths to differentiate the two phenomena. For these scholars, code-switching is associated with particular pragmatic effects, discourse functions, or associations with group identity. In this tradition, the terms \"code-mixing\" or \"language alternation\" are used to describe more stable situations in which multiple languages are used without such pragmatic effects. See also Code-mixing as fused lect, below."} {"text":"In studies of bilingual language acquisition, \"code-mixing\" refers to a developmental stage during which children mix elements of more than one language. Nearly all bilingual children go through a period in which they move from one language to another without apparent discrimination. This differs from code-switching, which is understood as the socially and grammatically appropriate use of multiple varieties."} {"text":"Beginning at the babbling stage, young children in bilingual or multilingual environments produce utterances that combine elements of both (or all) of their developing languages. Some linguists suggest that this code-mixing reflects a lack of control or ability to differentiate the languages. Others argue that it is a product of limited vocabulary; very young children may know a word in one language but not in another. 
More recent studies argue that this early code-mixing is a demonstration of a developing ability to code-switch in socially appropriate ways."} {"text":"For young bilingual children, code-mixing may be dependent on the linguistic context, cognitive task demands, and interlocutor. Code-mixing may also function to fill gaps in their lexical knowledge. Some forms of code-mixing by young children may indicate risk for language impairment."} {"text":"In psychology and in psycholinguistics, the label \"code-mixing\" is used in theories that draw on studies of language alternation or code-switching to describe the cognitive structures underlying bilingualism. During the 1950s and 1960s, psychologists and linguists treated bilingual speakers as, in Grosjean's term, \"two monolinguals in one person\". This \"fractional view\" supposed that a bilingual speaker carried two separate mental grammars that were more or less identical to the mental grammars of monolinguals and that were ideally kept separate and used separately. Studies since the 1970s, however, have shown that bilinguals regularly combine elements from \"separate\" languages. These findings have led to studies of code-mixing in psychology and psycholinguistics."} {"text":"Sridhar and Sridhar define code-mixing as \"the transition from using linguistic units (words, phrases, clauses, etc.) of one language to using those of another within a single sentence\". They note that this is distinct from code-switching in that it occurs in a single sentence (sometimes known as \"intrasentential switching\") and in that it does not fulfill the pragmatic or discourse-oriented functions described by sociolinguists. (See Code-mixing in sociolinguistics, above.) The practice of code-mixing, which draws from competence in two languages at the same time, suggests that these competences are not stored or processed separately. 
Code-mixing among bilinguals is therefore studied in order to explore the mental structures underlying language abilities."} {"text":"A \"mixed language\" or a \"fused lect\" is a relatively stable mixture of two or more languages. What some linguists have described as \"codeswitching as unmarked choice\" or \"frequent codeswitching\" has more recently been described as \"language mixing\", or in the case of the most strictly grammaticalized forms as \"fused lects\"."} {"text":"In areas where code-switching among two or more languages is very common, it may become normal for words from both languages to be used together in everyday speech. Unlike code-switching, where a switch tends to occur at semantically or sociolinguistically meaningful junctures, this code-mixing has no specific meaning in the local context. A fused lect is identical to a mixed language in terms of semantics and pragmatics, but fused lects allow less variation since they are fully grammaticalized. In other words, there are grammatical structures of the fused lect that determine which source-language elements may occur."} {"text":"A mixed language is different from a creole language. Creoles are thought to develop from pidgins as they become nativized. Mixed languages develop from situations of code-switching. (See the distinction between code-mixing and pidgin above.)"} {"text":"There are many names for specific mixed languages or fused lects. These names are often used facetiously or carry a pejorative sense. Named varieties include the following, among others."} {"text":"Not all such observations are recent; in his play Pygmalion, for example, George Bernard Shaw famously recognised the disparities of accent (even in a native context) when he wrote:"} {"text":"The \"own-accent bias\" is the inclination toward, and more positive judgement of, individuals with the same accent as oneself compared to those with a different accent. 
There are two main theories that attempt to explain this bias: affective processing and prototype representation."} {"text":"The affective processing approach proposes that the positive bias exhibited toward others who share one's accent is produced by a (potentially unconscious) emotional reaction. Put simply, people like others who have the same accent as themselves for precisely that reason: they like it. This theory has developed, and draws support, from neuroscientific research investigating affective prosody (a key component underlying accent) and vocal emotion, which has found activation (predominantly in the right hemisphere) in important brain regions associated with the processing of emotion. These regions include:"} {"text":"In addition to processing memory and emotion, the amygdalae play important roles as \"relevance detectors\" for the discernment of relevant social information. Therefore, these brain regions that deal with social relevance and vocal emotion are probable candidates for a neural network concerning accent-based group membership that would drive the affective processing of accents."} {"text":"Rebecca Treiman is an American psychologist. She is the Burke and Elizabeth High Baker Professor of Child Developmental Psychology at Washington University in St. Louis and head of the Reading and Language Lab there. Treiman's research focuses on spelling and reading, and especially on the linguistic factors that affect these processes."} {"text":"Born in Princeton, New Jersey to Sam Bard Treiman and Joan Little Treiman, Rebecca Treiman received a B.A. in linguistics from Yale University (1976) and a Ph.D. in psychology from the University of Pennsylvania (1980). She was a faculty member at Indiana University and Wayne State University before moving to Washington University in St. 
Louis."} {"text":"Treiman has written two books on children's spelling, and has published research articles on the processes involved in reading and spelling in children and adults. She has over 200 publications and an h-index of over 85. In addition, Treiman has edited or co-edited several books on spelling and reading. Treiman was editor in chief of the \"Journal of Memory and Language\" from 1997 to 2001. She was awarded the Distinguished Scientific Contribution Award from the Society for the Scientific Study of Reading in 2014."} {"text":"James Bruce Tomblin (born February 10, 1944) is a language and communication scientist and an expert on the epidemiology and genetics of developmental language disorders (DLD). He holds the position of Professor Emeritus of Communication Sciences and Disorders at the University of Iowa."} {"text":"Tomblin received the Alfred K. Kawana Award for Lifetime Achievement in Publications from the American Speech-Language-Hearing Association (ASHA) in 2009 and ASHA Honors in 2010. He received the Callier Prize in Communication Disorders in 2011 for \"remarkable advances in the epidemiology, etiology, assessment and treatment of children's language disorders.\""} {"text":"Tomblin has co-edited several books including \"Understanding Individual Differences in Language Development Across the School Years\" (with Marilyn Nippold), and \"Understanding Developmental Language Disorders: From Theory to Practice\" (with Courtenay Norbury and Dorothy V. M. Bishop)."} {"text":"Tomblin went to La Verne College from 1963 to 1966, where he earned his Bachelor of Arts degree in Psychology. He attended graduate school at the University of Redlands from 1966 to 1967, where he received his Master of Arts in Speech Pathology and was awarded the American Speech and Hearing Association Certificate of Clinical Competence in Speech-Language Pathology (CCC-SLP). 
Tomblin completed his PhD in Communication Disorders at the University of Wisconsin\u2013Madison in 1970. He held faculty positions at Syracuse University and SUNY Upstate Medical Center prior to joining the faculty of the University of Iowa in 1972."} {"text":"Tomblin was named Spriestersbach Distinguished Professor of Liberal Arts & Sciences at the University of Iowa in 1999 and was named Honorary Fellow of the Murdoch Children's Research Institute in 2013. His research has been supported by grants from the National Institutes of Health and the National Institute on Deafness and Other Communication Disorders."} {"text":"Elissa Lee Newport is a Professor of Neurology and Director of the Center for Brain Plasticity and Recovery at Georgetown University. She specializes in language acquisition and developmental psycholinguistics, focusing on the relationship between language development and language structure, and most recently on the effects of pediatric stroke on the organization and recovery of language."} {"text":"Newport graduated from Ladue Horton Watkins High School in Ladue, Missouri in 1965."} {"text":"Newport attended Wellesley College from 1965 to 1967 and in 1969 graduated from Barnard College of Columbia University. Newport received a Ph.D from the University of Pennsylvania in 1975, where her advisors were Lila Gleitman and Henry Gleitman."} {"text":"She was a member of the faculty in the Department of Psychology at the University of California, San Diego and the University of Illinois before joining the faculty at the University of Rochester, where she was chair of the department and the George Eastman Professor of Brain and Cognitive Sciences. In July 2012, she joined the faculty at Georgetown University where she became the founding director of the newly established Center for Brain Plasticity and Recovery. Dr. 
Newport is married to Ted Supalla, who is also a professor in the Department of Neurology at Georgetown University."} {"text":"In 2017, Newport and eight other plaintiffs filed a lawsuit with attorney Ann Olivarius against the University of Rochester for sexual misconduct by Professor Florian Jaeger of the Brain and Cognitive Sciences Department. The lawsuit followed an independent investigation by the university, about which Newport said \"It is not acceptable to say that people have behaved offensively and inappropriately to our students, but nobody did anything wrong. It is not an acceptable conclusion to arrive at. Shame on you.\" In 2020, the University settled the case for $9.4 million."} {"text":"Newport has been recognized by a number of organizations for the impact of her theoretical and empirical contributions to the field of language acquisition. She has been elected as a fellow in the American Philosophical Society, the Association for Psychological Science, the Society of Experimental Psychologists, the Cognitive Science Society, the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the National Academy of Sciences. Her research has been supported by grants from the National Institutes of Health (NIH), the National Science Foundation, the James S. McDonnell Foundation, and the Packard Foundation."} {"text":"In 2015, she was awarded the Benjamin Franklin Medal for Computer and Cognitive Sciences. She had previously received the Claude Pepper Award of Excellence from the NIH, and the William James Lifetime Achievement Award for Basic Research, the highest honor given by the Association for Psychological Science (APS)."} {"text":"Victoria Alexandra Fromkin (May 16, 1923 \u2013 January 19, 2000) was an American linguist who taught at UCLA. 
She studied slips of the tongue, mishearing, and other speech errors and applied these findings to phonology, the study of how the sounds of a language are organized in the mind."} {"text":"Fromkin was born in Passaic, New Jersey as \"Victoria Alexandra Landish\" on May 16, 1923. She earned a bachelor's degree in economics from the University of California, Berkeley in 1944. She married Jack Fromkin, a childhood friend from Passaic, in 1948, and they settled in Los Angeles, California. She decided to return to school to study linguistics in her late 30s. She enrolled at UCLA, received her master's in 1963 and her Ph.D. in 1965. Her thesis was entitled \"Some phonetic specifications of linguistic units: an electromyographic investigation\". That same year, Fromkin joined the faculty of the linguistics department at UCLA."} {"text":"Her line of research mainly dealt with speech errors and slips of the tongue. She collected more than 12,000 examples of slips of the tongue, which were analyzed in a number of scholarly publications, notably her 1971 \"Language\" article and an edited volume, \"Speech Errors as Linguistic Evidence\"."} {"text":"From 1971 to 1975, Fromkin was part of a team of linguistic researchers studying the speech of the \"feral child\" known as Genie. Genie had spent the first 13 years of her life in severe isolation, and Fromkin and her associates hoped that her case would illuminate the process of language acquisition after the critical period. However, the study ended after rancorous disputes over Genie's care, and the loss of funding from the National Institute of Mental Health. 
Fromkin published several papers about Genie's linguistic development, and her PhD student, Susan Curtiss, wrote a dissertation about Genie's linguistic development under Fromkin's supervision."} {"text":"In 1974, Fromkin was commissioned by the producers of the children's television series \"Land of the Lost\" to create a constructed language for a species of primitive cavemen\/primates called the Pakuni. Fromkin developed a 300-word vocabulary and syntax for the series, and translated scripts into her created Pakuni language for the series' first two seasons."} {"text":"For the action-sci-fi movie \"Blade\", Fromkin created another constructed language for the vampires in the film."} {"text":"She became the first woman in the University of California system to be Vice Chancellor of Graduate Programs. She held this position from 1980 to 1989. She was elected President of the Linguistic Society of America in 1985. Fromkin was also chairwoman of the board of governors of the Academy of Aphasia. She was elected to membership in the National Academy of Sciences in 1996."} {"text":"Fromkin died at the age of 76 on January 19, 2000 from colon cancer. The Linguistic Society of America established the \"Victoria A. Fromkin Prize for Distinguished Service\" award in her honor in 2001. This award recognizes individuals who have performed extraordinary service to the discipline and to the Society throughout their career."} {"text":"Fromkin contributed to the area of linguistics concerned with speech errors. She created \"Fromkin's Speech Error Database\", for which data collection is ongoing."} {"text":"Fromkin recorded nine different types of speech errors. The following are examples of each:"} {"text":"Fromkin theorized that slips of the tongue can occur at many levels, including the syntactic, phrasal, lexical or semantic, morphological, and phonological. She also believed that slips of the tongue could arise through many different processes. 
The different forms were:"} {"text":"Fromkin's research helps support the argument that language processing is not modular. The argument for modularity claims that language is localized, domain-specific, mandatory, fast, and encapsulated. Her research on slips of the tongue has demonstrated that when people make slips of the tongue it usually happens on the same level, indicating that each level has a distinct place in the person's brain. Phonemes switch with phonemes, stems with stems, and morphemes switch with other morphemes."} {"text":"Crain was awarded an Australian Research Council Federation Fellowship (2004\u20132009), and is a Fellow of the Academy of Social Sciences in Australia (2006\u2013current). He is currently the chair of the National Committee on Mind and Brain (Australian Academy of Science), and is a presidential nominee on the MIT Corporation Visiting Committee for the Department of Linguistics and Philosophy. Crain is a visiting professor at the Beijing Language and Culture University, China, and at the Kanazawa Institute of Technology, Japan. He was appointed Macquarie University Distinguished Professor in 2010."} {"text":"Morton Ann Gernsbacher is Vilas Research Professor and Sir Frederic Bartlett Professor of Psychology at the University of Wisconsin\u2013Madison. She is a specialist in autism and psycholinguistics and has written and edited professional and lay books and over 100 peer-reviewed articles and book chapters on these subjects. She is currently on the advisory board of the journal \"Psychological Science in the Public Interest\" and associate editor for \"Cognitive Psychology,\" and she has previously held editorial positions for \"Memory & Cognition\" and \"Language and Cognitive Processes.\" She was also president of the Association for Psychological Science in 2007."} {"text":"Gernsbacher received a B.A. from the University of North Texas in 1976, an M.S. from University of Texas at Dallas in 1980, and a Ph.D. 
from the University of Texas at Austin in Human Experimental Psychology in 1983. She was employed at the University of Oregon from 1983-1992 before joining the faculty at the University of Wisconsin\u2013Madison, where she has remained ever since."} {"text":"Gernsbacher is married and has one child."} {"text":"Marta Kutas (born September 2, 1949) is a Professor and Chair of cognitive science and an adjunct professor of neuroscience at the University of California, San Diego. She also directs the Center for Research in Language at UCSD. Kutas is known for discovering the N400, an event-related potential (ERP) component typically elicited by unexpected linguistic stimuli, with her colleague Steven Hillyard in one of the first studies in what is now the field of neurolinguistics."} {"text":"Kutas received a B.A. in 1971 from Oberlin College and a Ph.D. in 1977 from the University of Illinois, Urbana-Champaign, and she completed a postdoctoral fellowship at the University of California, San Diego in 1980. She then accepted a position as a research neuroscientist in the Department of Neurosciences at UCSD, and she has been a member of the Department of Cognitive Science at UCSD since its founding in 1988. In 2018 Kutas was elected to the American Academy of Arts and Sciences."} {"text":"Thomas G. Bever (born December 9, 1939) is a Regent's Professor of Psychology, Linguistics, Cognitive Science, and Neuroscience at the University of Arizona. He has been a leading figure in psycholinguistics, focusing on the cognitive and neurological bases of linguistic universals, among other pursuits. Bever received a B.A. in linguistics and psychology from Harvard University in 1961, and a Ph.D. in linguistics from the Massachusetts Institute of Technology in 1967; he studied with Noam Chomsky, George A. Miller, and Jean Piaget. 
He taught at Rockefeller University from 1967\u20131969, Columbia University from 1970\u20131986 (where he was involved with Project Nim), and the University of Rochester from 1985\u20131995, before accepting his current position at the University of Arizona, where he has remained ever since."} {"text":"Bever is notable for his study of garden path sentences such as \"The horse raced past the barn fell\", as well as his analysis by synthesis model of sentence processing, developed with David Townsend. In recent decades, Bever has studied the differences in language processing between righthanders with familial handedness and righthanders without left-handed relatives."} {"text":"He was a co-founder of the journal \"Cognition\"."} {"text":"Petitto's research and discoveries span several scientific disciplines. Her early work with Nim Chimpsky and her later work with humans encompass anthropology, comparative ethology, evolutionary biology, cognitive neuroscience, cognitive science, theoretical linguistics, philosophy, psychology, psycholinguistics, language acquisition, child development, evolutionary psychology, American Sign Language, deaf studies, and bilingualism. Her overall discoveries involve:"} {"text":"Advancement of New Discipline: Petitto had an early role in the creation of a new scientific discipline with her colleague and husband Kevin Niall Dunbar, which they termed Educational Neuroscience. 
Educational Neuroscience is a sister discipline of Cognitive Neuroscience, in which basic neuroscience and behavioral science discoveries about the developing brain and the growing child are joined with their translational implications, towards the ultimate goal of solving core problems in society and the education of young children."} {"text":"Taken together, Petitto's research discoveries and scientific writings have offered testable hypotheses and theory regarding the neural basis for the brain's specialization for human language, the types of language features a child must minimally be exposed to (and when) in early life (sensitive or critical periods), what happens if early critical periods are missed, and how best to facilitate optimal language learning in all children acquiring all human languages be they signed or spoken."} {"text":"After her undergraduate work with Nim Chimpsky, Petitto went on to make discoveries about the linguistic structure, acquisition, and representation in the brain of the world's natural signed languages, especially American Sign Language (ASL). Using signed languages as a new \"microscope\" to discover the central\/universal properties of human language in the brain (those that are distinct from the modality of language transmission and reception), Petitto focused on the following lines of research:"} {"text":"Petitto's more recent studies involve the use of a combination of four disciplines:"} {"text":"Petitto is the recipient of over twenty international prizes and awards including,"} {"text":"Michael Tomasello (born January 18, 1950) is an American developmental and comparative psychologist, as well as a linguist. He is professor of psychology at Duke University."} {"text":"Earning many prizes and awards from the end of the 1990s onward, he is considered one of today's most authoritative developmental and comparative psychologists. 
He is \"one of the few scientists worldwide who is acknowledged as an expert in multiple disciplines\". His \"pioneering research on the origins of social cognition has led to revolutionary insights in both developmental psychology and primate cognition.\""} {"text":"Tomasello was born in Bartow, Florida. He received his bachelor's degree in 1972 from Duke University and his doctorate in Experimental Psychology in 1980 from the University of Georgia."} {"text":"He was a professor of psychology and anthropology at Emory University in Atlanta, Georgia, US, during the 1980s and 1990s. Subsequently, he moved to Germany to become co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, and later also honorary professor at the University of Leipzig and co-director of the Wolfgang K\u00f6hler Primate Research Center. In 2016 he became Professor of Psychology and Neuroscience at Duke University, where he is now the James F. Bonk Distinguished Professor."} {"text":"He works on child language acquisition as a crucially important aspect of the enculturation process. He is a critic of Noam Chomsky's universal grammar, rejecting the idea of an innate universal grammar and instead proposing a functional theory of language development (sometimes called the social-pragmatic theory of language acquisition or usage-based approach to language acquisition) in which children learn linguistic structures through intention-reading and pattern-finding in their discourse interactions with others."} {"text":"Tomasello also studies broader cognitive skills in a comparative light at the Wolfgang K\u00f6hler Primate Research Center in Leipzig. 
With his research team, he created a set of experimental devices to test toddlers' (from 6 months to 24 months) and apes' spatial, instrumental, and social cognition; the outcome is that social (even ultrasocial) cognition is what truly sets humans apart."} {"text":"Uniqueness of human social cognition: broad outlines."} {"text":"More specifically, Tomasello argues that apes lack a series of skills:"} {"text":"Tomasello sees these skills as being preceded and encompassed by the capacity to share attention and intention (collective intentionality), an evolutionary novelty that would have emerged as a cooperative integration of ape skills that formerly worked in competition."} {"text":"The sharing of attention and of intention."} {"text":"Tomasello's defense, use and deepening of the shared attention and intention hypothesis rely on the experimental data referred to above. Tomasello also resorts to an evolutionary two-step scenario (see below), and to philosophical concepts borrowed from Paul Grice, John Searle, Margaret Gilbert, Michael Bratman, and anthropologist Dan Sperber."} {"text":"For Tomasello, this two-step evolutionary path of macro ecological pressures impacting micro-level skills in representation, inference, and self-monitoring does not imply that natural selection acted directly on internal mechanisms. \"Cognitive processes are a product of natural selection, but they are not its target. 
Indeed, natural selection cannot even \"see\" cognition; it can only \"see\" the effects of cognition in organizing and regulating overt actions.\" Ecological pressures would have put prior cooperative or mutualistic behaviors at such an advantage against competition as to create a new selective pressure favoring new cognitive skills, which would have posed new challenges, in an autocatalytic way."} {"text":"Echoing the phylogenetic path, humans' unique skills at joint and collective intentionality develop during the individual's lifetime by scaffolding, not only on simple skills like distinguishing animate\/inanimate matter, but also on the communicative conventions and institutions forming the socio-cultural environment, forming feedback loops that enrich and deepen both the cultural ground and the individual's prior skills. \"[B]asic skills evolve phylogenetically, enabling the creation of cultural products historically, which then provide developing children with the biological and cultural tools they need to develop ontogenetically\"."} {"text":"The sharing of attention and of intention is taken to be prior to language in evolutionary time and in an individual's lifetime, while conditioning language's acquisition through the parsing of joint attentional scenes into actors, objects, events and the like. More broadly, Tomasello sees the sharing of attention and of intention as the roots of the human cultural world (the roots of conventions, of group identity, of institutions): \"Human reasoning, even when it is done internally with the self, is ... shot through and through with a kind of collective normativity in which the individual regulates her actions and thinking based on the group's normative conventions and standards\"."} {"text":"Willem Johannes Maria (Pim) Levelt (born 17 May 1938 in Amsterdam) is a Dutch psycholinguist. He is an influential researcher of human language acquisition and speech production. 
He developed a comprehensive theory of the cognitive processes involved in the act of speaking, including the significance of the \"mental lexicon\". Levelt was the founding director of the Max Planck Institute for Psycholinguistics in Nijmegen. He also served as president of the Royal Netherlands Academy of Arts and Sciences between 2002 and 2005, of which he has been member since 1978."} {"text":"Levelt became a member of the German National Academy of Sciences Leopoldina in 1993. In 2000 he became a foreign associate of the National Academy of Sciences of the United States. Levelt became a corresponding member (living abroad) of the Austrian Academy of Sciences in 2002. In 2010 Levelt was awarded the \"Orden Pour le M\u00e9rite f\u00fcr Wissenschaften und K\u00fcnste\", receiving the orden in person from the President of Germany on 30 May 2011."} {"text":"Dan Isaac Slobin (born May 7, 1939) is a Professor Emeritus of psychology and linguistics at the University of California, Berkeley. Slobin has made major contributions to the study of children's language acquisition, and his work has demonstrated the importance of cross-linguistic comparison for the study of language acquisition and psycholinguistics in general."} {"text":"Slobin received a B.A. in psychology from the University of Michigan in 1960 and a Ph.D. in social psychology from Harvard University in 1964. In addition to working at the University of California, Berkeley, Slobin has served as a visiting professor at several universities around the world, including Bo\u011fazi\u00e7i University, Tel-Aviv University, Max Planck Institute for Psycholinguistics, Centre National de la Recherche Scientifique (CNRS), and Stanford University."} {"text":"Slobin has extensively studied the organization of information about spatial relations and motion events by speakers of different languages, including both children and adults. 
He has argued that becoming a competent speaker of a language requires learning certain language-specific modes of thinking, which he dubbed \"thinking for speaking\". Slobin's \"thinking for speaking\" view can be described as a contemporary, moderate version of the Sapir\u2013Whorf hypothesis, which claims that the language we learn shapes the way we perceive reality and think about it. This view is often contrasted with the \"language acquisition device\" view of Noam Chomsky and others, who think of language acquisition as a process largely independent of learning and cognitive development."} {"text":"Slobin published a study in 2007 titled \"Children use canonical sentence schemas: A crosslinguistic study of word order and inflections.\" The aim of the study was to show that children's acquisition of English cannot be generalized to the acquisition of other languages. Slobin proposed that children \"construct a canonical sentence schema as a preliminary organizing structure for language behaviour. This canonical sentence schemas provide a functional explanation for the order of words and inflectional strategies based on each child's attempt to quickly master basic communication skills in his or her languages.\""} {"text":"Slobin holds that language is learned, and that its acquisition is bound up with a child's broader cognitive development. His choice of method follows from this theoretical stance: in task-comparison activities, his subjects are exposed to a consistent variety of tests, administered in different forms over a period of ten days, and respond by performing actions or answering questions that act out the instructions given."} {"text":"Beginning in 1980, Slobin, along with Ruth Berman, designed \"The frog-story project\", a research tool based on a children's storybook that tells a story in 24 pictures with no words. 
This makes it possible to elicit narratives that are comparable in content but differ in form across ages and languages. There are now data from dozens of languages representing most of the world's major language types. The Berman & Slobin study compared English, German, Spanish, Hebrew and Turkish on a range of dimensions."} {"text":"The project was also discussed by Raphael Berthele, a professor at the University of Fribourg, Switzerland, in his contribution to \"Crosslinguistic Approaches to the Psychology of Language\", edited by Jiansheng Guo, Elena Lieven, and colleagues."} {"text":"Elizabeth Ann Bates (July 26, 1947 \u2013 December 13, 2003) was a Professor of cognitive science at the University of California, San Diego. She was an internationally renowned expert and leading researcher in child language acquisition, psycholinguistics, aphasia, and the neurological bases of language, and she authored 10 books and over 200 peer-reviewed articles and book chapters on these subjects. Bates was well known for her assertion that linguistic knowledge is distributed throughout the brain and is subserved by general cognitive and neurological processes."} {"text":"Elizabeth Bates earned a B.A. from St. Louis University in 1968, and an M.A. and PhD in human development from the University of Chicago in 1971 and 1974, respectively."} {"text":"She was employed as a tenure-track professor at the University of Colorado from 1974 to 1981 before joining the faculty of the University of California, San Diego, where she worked until late 2003. Bates was one of the founders of the Department of Cognitive Science at UCSD, the first department of its kind in the USA. She was also the director of the UCSD Center of Research in Language and the co-director of the San Diego State University\/UCSD Joint Doctoral Program in Language and Communication Disorders. 
Bates also served as a visiting professor at the University of California, Berkeley in 1976-1977 and at the National Research Council Institute of Psychology in Rome."} {"text":"On December 13, 2003, Elizabeth Bates died, after a year-long struggle with pancreatic cancer. Over the course of more than thirty years, Bates had established herself as a world leader in a number of fields \u2013 child development, language acquisition, aphasia research, cross-linguistic research, bilingualism, and psycholinguistics, together with their neural underpinnings \u2013 and had trained, supported and collaborated with a diverse and international group of researchers and students. The Elizabeth Bates Graduate Research Fund was established at UCSD in her memory to assist graduate students' research."} {"text":"In defense of communication functioning as a main force of language acquisition, she pointed to infants' prelinguistic use of commands, which requires them to develop and use social skills. She highlighted infants' reliance on pointing to meet their need to communicate before they are able to speak. Her research documented children's ability to incorporate imperatives into their gestures to make a command or request, demonstrating the drive to communicate regardless of language. Bates also coined the term , a word-like utterance made by prelinguistic children that has meaning (e.g. yumyum), but does not represent the adult-like form."} {"text":"Domain-Specificity, Modularity and Neural Plasticity in Language Processing."} {"text":"Bates and colleagues also showed that after brain injury, adult aphasic patients' deficits were not specific to linguistic structures theorized to be localized to specific brain areas, or even restricted to the linguistic domain. Instead, deficits across lesion sites overlap in how they affect speech fluency and complexity. 
Language is viewed as interrelated with cognitive processes such as memory, pattern recognition, and spreading activation. This perspective runs counter to the theory of Noam Chomsky, Eric Lenneberg, and Steven Pinker that language is processed in a domain-specific manner, by specific language modules in the mind, and can be localized to specific brain regions such as Broca's and Wernicke's areas."} {"text":"David Swinney (April 21, 1946 \u2013 April 14, 2006) was a prominent psycholinguist. His research on language comprehension contributed to methodological advances in his field."} {"text":"Swinney received his BA in Psychology at Indiana University in 1968, his MA in Language Disorders, Speech Pathology and Audiology (1969), and his PhD in Psycholinguistics and Cognitive Psychology at the University of Texas at Austin (1974)."} {"text":"Swinney's faculty positions included: Tufts University (Department of Psychology), Rutgers University (Psychology and Cognitive Sciences Departments), the City University of New York (Programs in Linguistics, Psychology, Speech and Hearing Science) and University of California, San Diego (Chair, Department of Psychology)."} {"text":"Cross-Modal Priming Task. The Cross-Modal Priming Task (CMPT), developed by David Swinney, is an online measure used to detect activation of lexical and syntactic information during sentence comprehension."} {"text":"Prior to Swinney's introduction of this methodology, studies of lexical access were largely conducted using offline measures, such as phoneme-monitoring tasks. In these measures, study participants were asked to respond to a syntactic or lexical ambiguity in a sentence only after the entire sentence had been comprehended. Since Swinney considered the system of resolving ambiguities to be an autonomous, fast, and mandatory process, he suggested that the \u201cdownstream\u201d temporal delay between stimulus and response could contaminate results. 
The CMPT, therefore, was created to probe lexical access in real time."} {"text":"During this task, study participants heard recorded sentences containing lexical or syntactic ambiguities while seated in front of a computer screen. At the moment the ambiguous word or phrase was uttered, a string of letters (either a word or a non-word) was flashed on the computer screen, and the participant was required to indicate whether it was a word or not by pressing one of two buttons on a computer keyboard."} {"text":"The uttered words or phrases were ambiguous (for example, \"mouse\", which could be understood as an animal or as a computer input device)."} {"text":"When the strings shown on the screen were actual lexical words (and not non-words), they could be related to one of the meanings of the uttered word or phrase (for example, the written word could be \"animal\" or \"computer\"), or they could be control words or phrases unrelated to the uttered word or phrase (for example, \"sun\")."} {"text":"Study participants were asked to respond as quickly as possible once the probes were processed (i.e. once they understood them). The test assumed that multiple meanings are activated at the moment an ambiguity is encountered in a sentence, which primes related concepts. Swinney anticipated quicker recognition of words whose concepts had been primed and thus activated, as opposed to words that had not been activated."} {"text":"This study utilized the CMPT to investigate the process by which people resolve lexical ambiguity. Specifically, do people access all meanings of a word at such moments, or only one meaning? Subjects listened to a pre-recorded series of sentences that contained ambiguous words. 
These words were equibiased\u2014meaning that there were two possible meanings of each ambiguous word and that one meaning was not favored over the other in common speech. The subjects were informed that they would be tested on their comprehension of these sentences."} {"text":"For example, subjects were presented with the utterance: \"Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several bugs in the corner of the room.\" Here, the word \"bugs\" was determined to be ambiguous and equibiased between the meaning \"insects\" on one hand and \"surveillance\" on the other. At the moment of the utterance \"\u2026 bugs\", either \"ANT\", \"SPY\", an unrelated word such as \"SEW\", or a non-word was flashed on the screen. Study participants were asked to decide, as quickly as possible, whether the string of letters was a word or not."} {"text":"Additionally, context conditions varied: some sentences had no biasing context, as above, while others strongly biased the listener towards one meaning or the other. For example, \"Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several spiders, roaches and other bugs in the corner of the room.\""} {"text":"Swinney claimed that if a person activates both meanings of an equibiased ambiguous word simultaneously, then the response times should be the same regardless of which meaning is primed by the stimulus. However, if only one meaning is activated, then responses should be quicker for probes related to that meaning."} {"text":"Results indicated that listeners accessed multiple meanings for ambiguous words, even when faced with strong biasing contexts that indicated a single meaning. 
That is to say, regardless of whether \"the man was not surprised when he found several bugs in the corner of the room\" or \"the man was not surprised when he found several spiders, roaches and other bugs in the corner of the room\" was uttered, both SPY (contextually inappropriate to the second sentence) and ANT (contextually appropriate) appear to have been primed equally, whereas SEW and non-words were not."} {"text":"In this study, Love, Maas and Swinney explored lexical access, using the CMPT, among three different categories of English-proficient individuals: monolingual native English speakers (NINES), non-native English speakers (NNES) and bilingual native English speakers (BNES). In particular, they were interested in how these different groups resolved non-canonical object-relative constructions that contained an ambiguous noun with a strong biasing context. For example, a prior experiment used the following sentence:"} {"text":"\"The professor insisted that the exam be completed in ink, so Jimmy used the new pen (Probe Position1), that his mother-in-law recently (Probe Position2) purchased (Probe Position3) because the multiple colors allowed for more creativity.\""} {"text":"This object-relative construction is considered non-canonical because the direct object \"pen\" occurs before its associated verb \"purchased\". Thus, it can be considered a \"fronted direct object\". The argument relies on the ambiguity of the word \"pen\", which could mean either a writing instrument or a jail cell. The Probe Positions 1, 2 and 3 marked in the sentence above indicate the points at which the study participants were presented with a word on a computer screen, in a cross-modal decision task similar to the one described above. 
Moreover, the probes represented one interpretation of the noun \"pen\" (\"pencil\") or the other (\"jail\"), or were unrelated controls (\"jacket\" or \"tale\") or a non-word of equivalent length."} {"text":"After qualifying language pre-tests and completion of a self-report questionnaire about language proficiency, background, and age of second language acquisition, subjects were classified as either BNES or NNES. The non-English languages identified were of a wide variety (e.g. Russian, Cantonese, Greek, Mandarin, Vietnamese, Spanish, Korean), and the researchers emphasize that most of the languages represented place less importance on word order than English does. The study subjects participated in a CMPT that utilized object-relative sentences such as the one above, or a filler sentence of equivalent length and complexity. Response times were measured and compared."} {"text":"Overall, all the English-proficient individuals tested activated both meanings of the ambiguous direct object as soon as it was presented, despite the strong biasing context. Then, in the NINES group, activation had dissipated by 700 ms downstream (Probe Position 2), and the primary meaning was reactivated at Probe Position 3, after the verb. The non-NINES groups, however, did not reactivate the contextually appropriate interpretation of the fronted direct object at Probe Position 3. Researchers attributed this difference to the prior exposure of many in the non-NINES groups to languages that relied less explicitly on word order for comprehension."} {"text":"In this experiment, Zurif, Swinney and Garrett built upon existing research on language processing errors in Broca's and Wernicke's aphasia patients. Prior studies indicated that Broca's aphasia patients generally demonstrate a slower-than-normal time course of lexical activation compared with controls, whereas lexical activation is relatively unimpaired in Wernicke's aphasics. 
This study compared and contrasted selected patients\u2019 capacities for resolving subject-relative constructions through a process known as gap filling. For example:"} {"text":"\"The gymnast loved the professor* from the northern city who* (t)* complained about the bad coffee.\""} {"text":"Since the displaced \"who\" is intended to modify \"the professor\" in this sentence, reactivation of the antecedent \"the professor\" at \"who\" constitutes the process of gap filling. Here, the gap between the subject noun phrase and relative pronoun is necessarily resolved through mental reordering of the sentence's structural elements."} {"text":"Findings indicated that, in support of the hypothesis, the capacity and resources available to patients with Wernicke's aphasia to carry out appropriate gap filling remain intact. Although this process appears to be preserved, the researchers point out that other related processes, such as higher-level sentence comprehension, might be impaired."} {"text":"On the other hand, the gap filling process in Broca's patients was significantly impaired. Results showed that priming was not activated at any of the probe positions\u2014signifying a poverty of resources available to these patients for real-time processing of such subject-relative constructions. The researchers argue, based on these results, that neurological damage to the left anterior cortex implicates this region in resolving gap-filling operations during sentence comprehension."} {"text":"Jeffrey Locke Elman (January 22, 1948 \u2013 June 28, 2018) was an American psycholinguist and professor of cognitive science at the University of California, San Diego (UCSD). 
He specialized in the field of neural networks."} {"text":"In 1990, he introduced the simple recurrent neural network (SRNN), also known as the 'Elman network', which is capable of processing sequentially ordered stimuli, and has since become widely used."} {"text":"Elman's work was highly significant to our understanding of how languages are acquired and also, once acquired, how sentences are comprehended. Sentences in natural languages are composed of sequences of words that are organized in phrases and hierarchical structures. The Elman network provides an important hypothesis for how neural networks\u2014and, by analogy, the human brain\u2014might learn and process such structures."} {"text":"Elman was also a generous and kind person, beloved by his colleagues at UCSD and around the world."} {"text":"Elman attended Palisades High School in Pacific Palisades, California, then Harvard University, where he graduated in 1969. He received his Ph.D. from the University of Texas at Austin in 1977."} {"text":"With Jay McClelland, Elman developed the TRACE model of speech perception in the mid-1980s. TRACE remains a highly influential model that has stimulated a large body of empirical research."} {"text":"In 1990, he introduced the simple recurrent neural network (SRNN, or 'Elman network'), a widely used recurrent neural network capable of processing sequentially ordered stimuli. Elman nets are used in a number of fields, including cognitive science, psychology, economics and physics, among many others."} {"text":"In 1996, he co-authored, with Annette Karmiloff-Smith, Elizabeth Bates, Mark H. Johnson, Domenico Parisi, and Kim Plunkett, the book \"Rethinking Innateness\", which argues against a strong nativist (innate) view of development."} {"text":"Elman was an Inaugural Fellow of the Cognitive Science Society, and served as its President from 1999 to 2000. 
He was awarded an honorary doctorate from the New Bulgarian University, and was the 2007 recipient of the David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science. He was founding Co-Director of the Kavli Institute for Brain and Mind at UC San Diego, and held the Chancellor's Associates Endowed Chair. He was Dean of Social Sciences at UCSD from 2008 until June 2014. Elman was also a founding co-director of the UCSD Hal\u0131c\u0131o\u011flu Data Science Institute, announced March 1, 2018."} {"text":"Suzy J. Styles is a psychologist with Nanyang Technological University (NTU), Singapore. Her research is in the area of psycholinguistics and cognitive approaches to language acquisition. She is the director of the Brain, Language and Intersensory Perception Lab at NTU."} {"text":"In 2017 she and Nora Turoman published a paper in \"Royal Society Open Science\" that found that research subjects could guess the sounds represented by letters from unfamiliar alphabets better than would be expected from simple chance, indicating the possibility of an innate ability to understand writing."} {"text":"Li is also President-Elect of the \"Society for Computers in Psychology\" and one of the four chief editors of \", Cambridge University Press\"."} {"text":"Linda B. Smith is a Professor of Psychology and Cognitive Science at Indiana University. Smith earned her Ph.D. from the University of Pennsylvania."} {"text":"Smith is the author (or co-author) of more than 100 publications on cognitive and linguistic development in young children."} {"text":"With Esther Thelen, she co-authored the books \"A Dynamic Systems Approach to Development\" (Smith & Thelen 1993) and \"A Dynamic Systems Approach to the Development of Cognition and Action\" (Thelen & Smith 1994), which look at development from a dynamic systems perspective."} {"text":"She is also well known for her research on the shape bias (Landau et al. 
1988), children's tendency to generalize new concrete nouns on the basis of the shape of the object to which they refer."} {"text":"In 1997, she received the Tracy Sonneborn Award, Indiana University's highest award to its faculty. In 2007, she was elected to the American Academy of Arts and Sciences. In 2013 she received the Rumelhart Prize from the Cognitive Science Society. In 2019, she received the Norman Anderson Lifetime Achievement Award from the Society of Experimental Psychologists. Smith is also a member of the Governing Board of the Cognitive Science Society."} {"text":"Janet Dean Fodor (born 1942) is distinguished professor of linguistics at the City University of New York. Her primary field is psycholinguistics, and her research interests include human sentence processing, prosody, learnability theory and L1 (first-language) acquisition."} {"text":"Born Janet Dean, she grew up in England and received her B.A. in 1964 and her M.A. in 1966, both from Oxford University. At Oxford she was a student of the social psychologist Michael Argyle, and their 'equilibrium hypothesis' for nonverbal communication became the basis for affiliative conflict theory: if participants feel the degree of intimacy suggested by a channel of nonverbal communication to be too high, they act to reduce the intimacy conveyed through other channels. She received her Ph.D. in 1970 from MIT, looking at the challenge posed by opaque contexts for semantic compositionality."} {"text":"In 1988, Fodor founded the CUNY Conference on Human Sentence Processing. She was awarded a Guggenheim Fellowship in 1992. She was President of the Linguistic Society of America in 1997. In 2014, she was elected a Corresponding Fellow of the British Academy. A volume of papers in her honor, \"Explicit and Implicit Prosody in Sentence Processing\", was published in 2015."} {"text":"Fodor supervised 27 dissertations of students from both CUNY and the University of Connecticut. 
In 2017, she received an honorary doctorate from the Paris Diderot University."} {"text":"She was married to Jerry Alan Fodor until his death in 2017."} {"text":"Fodor and Lyn Frazier proposed a new two-stage model of human sentence parsing and syntactic analysis. The first step of this model is to \u201cassign lexical and phrasal nodes to groups of words within the lexical string that is received\u201d. The second step is to add higher nonterminal nodes and combine these newly created phrases into a sentence. Fodor and Frazier suggested this method because it can cope with the complexities of language by parsing only a few words at a time. Their model is based on the assumption that initial parsing is guided by the length of the phrase, not its syntactic meaning."} {"text":"Through a series of sentence analyses, Fodor found that the \u201cWH-trace appears in mental representations of sentence structure, but NP-trace does not\u201d. WH-trace is the placement of interrogative words (who, what, where) in a sentence. Her findings did not support those of McElree, Bever, or MacDonald, but she acknowledges that certain types of sentences create linguistic issues that linguists do not yet know how to deal with. Using the same data, Fodor also found that passive verbs are more memorable than adjectives during language production."} {"text":"In this article, Fodor emphasizes the importance of integrating prosody into research on sentence processing. She argues that past research has focused on syntactic and semantic analysis of sentences, but people use prosody when reading, which affects reading comprehension and sentence analysis. She also brings up the idea that people use prosody when writing, not just reading, which further affects sentence production and sentence structure. 
She attributes this new need largely to technology and the newfound availability of information."} {"text":"Building on the work of her doctoral advisor, Noam Chomsky, Fodor wrote an article on the importance of identifying empty categories in sentence processing. Empty categories can \u201caccount for certain regularities of sentence structure\u201d, and attaching one to a previous word or phrase can help determine what it means. Identifying and understanding the meaning of empty categories requires a linguistic background, but all speakers of a language are able to use them."} {"text":"Colin Phillips is a British psycholinguist who is the director of the Maryland Language Science Center at the University of Maryland. He is an elected fellow of the Linguistic Society of America and the American Association for the Advancement of Science. He is also a co-editor of the \"Annual Review of Linguistics\"."} {"text":"Colin Phillips grew up in a rural town in eastern England. He attended Oxford University, where he studied Medieval German literature. He then came to the United States on an exchange scholarship to study at the University of Rochester for a year, where he became more interested in linguistics. He then attended graduate school at Massachusetts Institute of Technology (MIT), where he planned to study semantics."} {"text":"Phillips researches language acquisition and language processing. In 1997 he was hired at the University of Delaware as an assistant professor. In 2000 he accepted a position as an assistant professor at the University of Maryland, College Park. He was promoted to associate professor in 2002 and full professor in 2008. He became the founding director of the Maryland Language Science Center in 2013."} {"text":"He has been co-editor of the \"Annual Review of Linguistics\" with Mark Y. 
Liberman since 2021."} {"text":"The Linguistic Society of America elected him as a fellow in 2018."} {"text":"In 2020 he was elected as a fellow of the American Association for the Advancement of Science."} {"text":"During his study-abroad year at the University of Rochester he met his future wife, Andrea Zukowski. They have one child. In 2016 he and Zukowski founded College Park parkrun, a series of free running events in their area."} {"text":"Helen J. Neville (May 20, 1946 \u2013 October 12, 2018) was a Canadian psychologist and neuroscientist known internationally for her research in the field of human brain development."} {"text":"Neville received a B.A. from the University of British Columbia, an M.A. from Simon Fraser University, and a Ph.D. from Cornell University, and she also completed a postdoctoral fellowship in neuroscience at the University of California, San Diego. She was employed as Director of the Laboratory for Neuropsychology at the Salk Institute and as a professor in the Department of Cognitive Science at UCSD before joining the faculty at the University of Oregon in 1995, where she remained for the rest of her career."} {"text":"Neville was the Robert and Beverly Lewis Endowed Chair and Professor of Psychology and Neuroscience, Director of the Brain Development Lab, and Director of the Center for Cognitive Neuroscience at the University of Oregon."} {"text":"Neville died on October 12, 2018 at the age of 72."} {"text":"Neville studied cerebral specialization, neuroplasticity of the brain in childhood and adulthood, the roles of biological constraints and experience, and neurolinguistics. In order to investigate these topics, Neville used a variety of methods, including behavioral measures, event-related potentials (ERPs), and structural and functional magnetic resonance imaging (fMRI). 
Neville's research helped to distinguish the brain systems and functions that are largely fixed from those that are modifiable by experience, and with all her work she aimed to make a positive, tangible difference in society. She was involved in a number of outreach programs and charities in addition to scientific research."} {"text":"Neville published extensively, in journals including \"Nature\", \"Nature Neuroscience\", \"Journal of Neuroscience\", \"Journal of Cognitive Neuroscience\", \"Cerebral Cortex\" and \"Brain Research\"."} {"text":"Recent topics of her research included the neural mechanisms of grammar acquisition in adults, attentional control mechanisms as they relate to working memory, and various types of attention and learning mechanisms in young children."} {"text":"Neville and the Brain Development Lab were also responsible for creating \"Changing Brains\", a program of video segments aimed at non-scientists to describe what research has revealed about the effects of experience on human brain development. The series aims to inform parents, teachers and policymakers on how to help children develop to their full potential. Neurologist Oliver Sacks said the program was \"...fascinating and very original in form and presentation \u2014 and exactly the way to present (brain) science to non-scientists.\""} {"text":"She was the author of the book \"Temperament tools: working with your child's inborn traits\" (1998)."} {"text":"Neville won grants from the U.S. Department of Education and National Institutes of Health for her work in neurocognitive development. She was a member of the American Academy of Arts and Sciences and a fellow of the American Psychological Society and Society of Experimental Psychologists. In 2013, she received the William James Fellow Award from the Association for Psychological Science. 
Other awards that she received for her work in psychology are listed below:"} {"text":"Ursula Bellugi (born February 21, 1931 in Jena, Germany) is a Professor and Director of the Laboratory for Cognitive Neuroscience at the Salk Institute in La Jolla, California. She is also adjunct professor at the University of California San Diego and San Diego State University and an Associate with the Sloan Center for Theoretical Neurobiology. Broadly stated, she conducts research on the biological bases of language. More specifically, she has studied the neurological bases of American Sign Language extensively, and her work has led to the discovery that the left hemisphere of the human brain becomes specialized for language, whether spoken or signed, a striking demonstration of neuronal plasticity."} {"text":"She has also investigated the language abilities of individuals with Williams Syndrome, a puzzling genetically based disorder that leaves language, facial recognition and social skills remarkably well-preserved in contrast to severe deficits in other cognitive abilities. The search for the underlying biological basis for this disorder is providing new opportunities for understanding how brain structure and function relate to cognitive capabilities."} {"text":"Bellugi received a B.A. from Antioch College in 1952 and an Ed.D. from Harvard University in 1967. Since then, she has held positions as a tenure-track professor at the Salk Institute (from 1970) and as an adjunct professor at the University of California, San Diego (from 1977) and San Diego State University (from 1995)."} {"text":"Bellugi is the daughter of mathematician and optical engineer Maximilian Herzberger. 
Much of her research was conducted in collaboration with her husband Edward Klima, a linguist who also specialized in the study of American Sign Language."} {"text":"Martin Dimond Stewart Braine (June 3, 1926 \u2013 April 6, 1996) was a cognitive psychologist known for his research on the development of language and reasoning. He was Professor of Psychology at New York University at the time of his death."} {"text":"Braine was well known for his research on mental logic. He theorized that people naturally make deductive inferences based on their knowledge of natural language terms like \"if\", \"all\", \"any\", and \"not\". Such terms are understood through an intuitive logic that supports commonsense reasoning, but may also produce reasoning fallacies or errors. This natural mental logic was viewed as distinct from the standard logic of mathematicians and philosophers in terms of the inferences it licensed. In contrast to Philip Johnson-Laird and others who suggested that people rely on mental models as opposed to logic when reasoning, Braine took the position that people rely on both mental logic and mental models, with the former closely tied to processes of linguistic comprehension."} {"text":"Braine co-edited the volumes \"Categories and Processes in Language Acquisition\", with Yonata Levy and Izchak Schlesinger, and \"Mental Logic\", with David O'Brien."} {"text":"Braine was born in Kuala Lumpur on June 3, 1926. He was the son of Edith Braine, a teacher, and Charles Dimond Conway Braine, a civil engineer. His younger brother was the British philosopher David Dimond Conway Braine."} {"text":"Braine received his B.S. degree in mechanical engineering in 1946 at the University of Birmingham in England. He subsequently attended the University of London, where he received a B.S. in Psychology. 
In London he attended lectures by Jean Piaget, which influenced his later research on the development of logical reasoning."} {"text":"Braine continued his education at New York University, where he received his Ph.D. in Psychology in 1957 under the supervision of Elsa Robinson. Braine worked at SUNY Downstate Medical Center and later at Walter Reed Army Medical Center as a researcher before joining the faculty of the Department of Psychology at the University of California, Santa Barbara. He was awarded a Guggenheim Fellowship in 1965. Braine moved to New York University in 1971 where he remained for the duration of his career."} {"text":"Braine married Lila (Rosensveig) Ghent in 1960. Lila Braine was a Professor of Psychology at Barnard College, Columbia University. They had a son Jonathan in 1961 and a daughter Naomi in 1964. Braine died of cancer in New York City on April 6, 1996."} {"text":"Braine conducted research on child language development and engaged in the empiricism-nativism debate. Prior to Noam Chomsky's arguments for innate linguistic universals, there was a strong belief that the structures of language were learned from the input. Braine offered a compromise position that language acquisition was a process of mapping utterances onto a syntax of thought, supported by semantic primitives and a mental logic."} {"text":"Braine proposed that when learning language, young children use \"limited scope\" formulae to produce their first word combinations, with each formula consisting of a relational term with a slot to be filled (e.g. \"all gone ____\"). Braine's view that toddlers learn the combinatorial properties of words on an item-by-item basis paved the way for usage-based, lexicalist approaches to grammatical development. Other work focused on learners' acquisition of grammatical gender categories and their reliance on probabilistic cues to acquire grammatical structure. 
Braine's research emphasized how linguistic patterns are discovered and strengthened through use and repetition."} {"text":"Duane Girard Watson (born 1976) is an American neuroscientist and Professor of Psychology and Human Development at Vanderbilt University. He holds the Frank W. Mayborn Chair in Cognitive Science and leads the Vanderbilt University Communication and Language Laboratory."} {"text":"Watson is from Las Vegas. He studied psychology at Princeton University. He originally intended to be a physician, but a class on linguistics made him change course. He graduated in 1998 and moved to Cambridge, Massachusetts, where he joined the laboratory of Ted Gibson in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. In 2002 Watson earned his doctoral degree. His research considered intonational phrasing (that is, sections of spoken text with a particular intonation pattern) in language comprehension. Watson was a postdoctoral researcher at the University of Rochester, where he worked with Michael Tanenhaus."} {"text":"In 2016 Watson joined Vanderbilt University, where he leads the Communication and Language Laboratory (CaLL). CaLL investigates prosody (the patterns and rhythm of speech), individual differences in language processing, and how language is produced."} {"text":"Watson was appointed Director of the Governing Board of the Psychonomic Society in 2019. He serves as Associate Editor of the . Watson founded the SPARK society, an organisation that seeks to support scientists of colour in becoming innovators in cognitive science. He was promoted to the Frank W. Mayborn Chair in 2020."} {"text":"Elena Lieven (born 18 August 1947) is a British psychology and linguistics researcher and educator. She was a Senior Research Scientist in the Department of Developmental and Comparative Psychology in Leipzig, Germany. 
She is also a professor in the School of Health Sciences at the University of Manchester where she is Director of its Child Study Centre and leads the ESRC International Centre for Language and Communicative Development (LuCiD)."} {"text":"Elena Lieven is the sister of Anatol Lieven, Dominic Lieven, Michael Lieven, and Nathalie Lieven. Ancestors include Dorothea von Lieven and"} {"text":"Christoph von Lieven, prominent members of Baltic German nobility."} {"text":"Lieven attended More House School in London, graduating in 1963, then studying at City of Westminster College in London. She studied experimental psychology during her undergraduate years at New Hall, Cambridge University, earning honors, and then studied language development during her doctoral studies at Cambridge."} {"text":"After Cambridge, Lieven moved to the University of Manchester."} {"text":"She was Editor of the \"Journal of Child Language\" for nearly ten years (1996\u20132005)."} {"text":"Her principal areas of research involve: usage-based approaches to language development; the emergence and construction of grammar; the relationship between input characteristics and the process of language development; and variation in children's communicative environments. She has been involved in the design and collection of naturalistic child language corpora initially funded by the Economic and Social Research Council (ESRC) and, more recently, has collected a number of dense databases funded by the Max Planck Institute."} {"text":"Lieven was previously the president of the International Association for the Study of Child Language. 
She is also a member of The Chintang and Puma Documentation Project, a DOBES project funded by the Volkswagen Foundation aiming at the linguistic and ethnographic description of two endangered Sino-Tibetan languages of Nepal."} {"text":"She has also been the director of the Child Study Centre; Centre lead for the Centre for Developmental Science and Disorders in the Institute of Brain, Behaviour and Mental Health; and Director of the ESRC International Centre for Language and Communicative Development (LuCiD), which was established jointly by the University of Manchester, University of Liverpool and University of Lancaster in 2014 on a five-year grant."} {"text":"She has been designated an honorary professor at the University of Leipzig, and she has been a guest researcher at numerous institutions, including the Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; the University of Barcelona; the University of California, Berkeley, US; and La Trobe University, Melbourne, Australia."} {"text":"In July 2018 Lieven was elected Fellow of the British Academy (FBA)."} {"text":"Susan E. Carey (born 1942) is an American psychologist who is a Professor of Psychology at Harvard University. She studies language acquisition and children's development of concepts and is known for introducing the concept of fast mapping, whereby children learn the meanings of words after a single exposure. Her research focuses on analyzing philosophical concepts and conceptual changes in science over time. She has conducted experiments on infants, toddlers, adults, and non-human primates."} {"text":"Carey was born in 1942 to William and Mary Carey. She received her BA from Radcliffe College in 1964, a Fulbright scholarship to study at the University of London in 1965, and her PhD in experimental psychology from Harvard University in 1971."} {"text":"She was employed at the Massachusetts Institute of Technology from 1972 to 1996 in the Department of Brain and Cognitive Sciences. 
She was an assistant professor from 1972 to 1977, an associate professor from 1977 to 1984, and a full professor from 1984 to 1996. She was a professor at New York University in the department of psychology from 1996 to 2001. In 2001 she joined the faculty at Harvard University."} {"text":"Susan Carey and Elsa Bartlett coined the term \"fast mapping\" in 1978. This term refers to the hypothesized mental process whereby a new concept is learned based on only a single exposure. In 1985 Carey wrote \"Conceptual Change in Childhood\", a book about the cognitive differences between children and adults. It is a case study of children's acquisition of biological knowledge and analyzes the ways that knowledge is restructured during development. The book reconciles Jean Piaget's work on animism with later work on children's knowledge of biological concepts."} {"text":"On returning to Harvard, Carey began working alongside Elizabeth Spelke, and they started the Laboratory for Developmental Studies. Carey also studied alongside George Miller, Jerome Butler, and Roger Brown. She conducted experiments on infants, toddlers, adults, and non-human primates. Carey coined the term \"Quinian bootstrapping\", a theory that people build complex concepts out of simple ones."} {"text":"In 2009 Carey wrote \"The Origin of Concepts\", a book on the developmental origins of human concepts. The book won the 2010 Eleanor Maccoby Book Award of the American Psychological Association."} {"text":"Carey has served on editorial boards for the Psychological Review, Psychological Science, Journal of Acquisition, and Developmental Psychology."} {"text":"Carey is married to the professor of philosophy Ned Block (NYU)."} {"text":"Carey received the Jean Nicod Prize for philosophy of mind in 1998. In 2009 she was the first woman to receive the David E. 
Rumelhart Prize for significant contributions to the theoretical foundation of human cognition."} {"text":"Carey is a member of the American Philosophical Society, the United States National Academy of Sciences, the American Academy of Arts and Sciences, the National Academy of Educational Sciences of Ukraine, and the British Academy."} {"text":"She was elected a Fellow of the American Academy of Arts and Sciences in 2001. She is a Fellow of the New York Institute for the Humanities."} {"text":"Susan Carey has received the following fellowships and honors: Radcliffe Institute Fellowship (1976\u20131978), Sloan Fellowship (1980\u20131981), Institute for Advanced Studies Fellowship in the Behavioral Sciences (1984\u20131985), Cattell Fellowship (1995\u20131996), George A. Miller Lecturer for the Society of Cognitive Neuroscience (1998), Guggenheim Fellowship (1999\u20132000), National Academy of Education (1999), Society for Experimental Psychology (1999), American Academy of Arts and Sciences (2001), William James Fellow Award, American Psychological Society (2002), The British Academy Corresponding Fellow (2007), Ottawa Township High School Hall of Fame (2009), Distinguished Scientific Contribution Award, American Psychological Association (2009), and Cognitive Development Society Book Award (2001)."} {"text":"Frank Smith (1928\u20132020) was a Canadian psycholinguist recognized for his contributions in linguistics and cognitive psychology. He was an essential contributor to research on the nature of the reading process together with researchers such as George Armitage Miller, Kenneth S. Goodman, Paul A. Kolers, Jane W. Torrey, Jane Mackworth, Richard Venezky, Robert Calfee, and Julian Hochberg. Smith and Goodman are founders of the whole language approach to reading instruction. He was the author of numerous books."} {"text":"Frank Smith was born in England in 1928 and lived on Vancouver Island, British Columbia, Canada. 
He started out as a reporter and editor for several media publications in Europe and Australia before commencing undergraduate studies at the University of Western Australia. He received a PhD in Psycholinguistics from Harvard University in 1967."} {"text":"Smith held positions as professor at the Ontario Institute for Studies in Education for twelve years, professor of Language in Education at the University of Victoria, British Columbia, as well as professor and head of the Department of Applied English Language Studies at the University of the Witwatersrand, South Africa. Before taking the position at the Ontario Institute, Smith briefly worked at the Southwest Regional Laboratory in Los Alamitos, California."} {"text":"He died on December 29, 2020 in Victoria, B.C."} {"text":"Smith's research made important contributions to the development of reading theory. His book \"Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read\" is regarded as a fundamental text in the development of the now discredited whole language movement. Amongst others, Smith's research and writings in psycholinguistics inspired cognitive psychologists Keith Stanovich and Richard West's research into the role of context in reading."} {"text":"Smith's work, in particular \"Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read\", is a synthesis of psycholinguistic and cognitive psychology research applied to reading. Working from diverse perspectives, Frank Smith and Kenneth S. Goodman developed the theory of a unified single reading process that comprises an interaction between reader, text and language. 
On the whole, Smith's writing challenges conventional teaching and departs from popular assumptions about reading."} {"text":"Apart from his research in language, his research interests included the psychological, social and cultural consequences of human technology."} {"text":"Smith advocated the concept that \"children learn to read by reading\". In 1975 he participated in a television documentary filmed by Stephen Rose for the BBC \"Horizon\" TV series while based at the Ontario Institute for Studies in Education. The programme focused on his work with a single -year-old child called Matthew."} {"text":"He was against the 1970s idea that children should first learn the letters and letter combinations that convey the English language's forty-four sounds (Clymer's 45 phonic generalizations) and can then read whole words by decoding them from their component phonemes. This \"sounding out\" of words is a phonics technique rather than a whole language one. The whole-language theory explained reading as a \"language experience\", in which the reader interacts with the text and this interaction in turn creates the link, the \"knowledge\", between the text and its meaning. The emphasis is on the process of comprehending the text."} {"text":"Lise Menn (n\u00e9e Lise J. Waldman, born December 28, 1941, in Philadelphia) is an American linguist who specializes in psycholinguistics, including the study of language acquisition and aphasia. She is currently Professor Emerita of linguistics and was a fellow of the Institute for Cognitive Science at the University of Colorado at Boulder in Boulder, Colorado until her retirement in 2007."} {"text":"Menn earned a bachelor's degree in mathematics in 1962 from Swarthmore College and a master's degree (also in mathematics) from Brandeis University in 1964. 
After changing fields, she earned a master's and doctorate in linguistics from the University of Illinois at Urbana-Champaign in 1975\u201376."} {"text":"She taught or conducted research at several universities in the Boston area, including a post-doctoral position at MIT under Paula Menyuk and Kenneth N. Stevens, several years as a research associate with Jean Berko Gleason, and six years at the Aphasia Research Center of the Boston University School of Medicine under Harold Goodglass. She also spent a post-doctoral year with Eran Zaidel at UCLA, before being appointed associate professor of linguistics at the University of Colorado in 1986. Her approaches to linguistics, psycholinguistics, and neurolinguistics are considered to be 'bottom-up' (i.e. data-driven), empiricist, and functionalist."} {"text":"She has been a member of the governing committees of the Academy of Aphasia, the Linguistic Society of America, and the Linguistics and Language Sciences section of the American Association for the Advancement of Science. In 2006, she was honored as a Fellow of the Linguistic Society of America."} {"text":"Dr. Menn has written or edited nine books and more than 50 peer-reviewed articles. Her doctoral advisees and co-advisees include Marjorie Perlman Lorch, Rebecca Burns-Hoffmann, Kevin Markey, Andrea Feldman, Patrick Juola, Harold Wilcox, Debra Biasca, Valerie Wallace, Carolyn J. Buck-Gengler, and Holly Krech Thomas."} {"text":"Dr. Menn was married to William Bright from 1986 until his death in 2006. Her first husband was Michael D. Menn; they were divorced in 1972. She is the mother of Stephen Menn and Joseph Menn and stepmother of Susie Bright."} {"text":"Winfred Philip Lehmann (June 23, 1916 \u2013 August 1, 2007) was an American linguist who specialized in historical, Germanic, and Indo-European linguistics. 
He was for many years a professor and head of departments for linguistics at the University of Texas at Austin, and served as president of both the Linguistic Society of America and the Modern Language Association. Lehmann was also a pioneer in machine translation. He taught a large number of future scholars at Austin, and was the author of several influential works on linguistics."} {"text":"Winfred P. Lehmann was born in Surprise, Nebraska on June 23, 1916, the son of the Lutheran minister Philipp Ludwig Lehmann and Elenore Friederike Grosnick. The family was German American and spoke German at home. They moved to Wisconsin while Lehmann was a boy."} {"text":"After graduating from high school, Lehmann studied German and classical philology at Northwestern College, where he received his BA in humanities in 1936. He subsequently enrolled at the University of Wisconsin. At Wisconsin, Lehmann specialized in phonetics and Indo-European and Germanic philology. He studied a variety of topics, including the works of John Milton and Homer and German literature, and became proficient in a diverse number of languages, including Old Church Slavonic, Lithuanian, Old Irish, Sanskrit and Old Persian. His command of languages would eventually extend to Arabic, Hebrew, Japanese, Turkish, and several branches of the Indo-European languages, including Celtic, Germanic, Italic, Balto-Slavic, Hellenic, Anatolian and Indo-Iranian."} {"text":"From 1942 to 1946, Lehmann served in the Signal Corps of the United States Army. During World War II he was an instructor in Japanese for the United States Army, and eventually became officer-in-charge of the Japanese Language School. The administrative experience and knowledge of non-Indo-European languages that he acquired during the war would have a major impact on his later career."} {"text":"From 1946, Lehmann taught at Washington University in St. Louis, where he served as Instructor (1946) and Assistant Professor (1946\u20131949) in German. 
Wishing to focus more on linguistics and philology rather than only the German language, he arranged with Leonard Bloomfield to spend the summer at Yale University to catch up with advances in linguistics during the war, but these plans came to nothing after Bloomfield suffered a debilitating stroke."} {"text":"In 1949, Lehmann transferred to the University of Texas at Austin, which at the time had about 12,000 students and was known for its strength in philology and for its university library. He subsequently served as Associate Professor (1949\u20131951) and Professor (1951\u20131962) of Germanic Languages at the University of Texas at Austin. During this time Lehmann published his influential work \"Proto-Indo-European Phonology\" (1952)."} {"text":"From 1953, Lehmann served as Chairman of the Department of Germanic Languages (1953\u20131964) and Acting Chairman of the Department of Slavic Languages (1960\u20131965). In 1963 he was made Ashbel Smith Professor of Linguistics and Germanic Linguistics (1963\u20131983). The Ashbel Smith professorship accorded him twice the salary of an ordinary professor. In 1964, Lehmann became the founding Chairman of the Department of Linguistics (1964\u20131972)."} {"text":"Lehmann was well known for his teaching style, and notably encouraged his students to seek to understand his lectures rather than simply writing them down. Instead of only grading his students' papers and exams, he would give them detailed evaluations of their performance, and encouraged them to pursue and develop ideas. Lehmann strongly encouraged his students to seek to have their work published in academic journals."} {"text":"Under the leadership of Lehmann, the departments for Germanic languages and linguistics at the University of Texas at Austin both became among the top five graduate programs in North America, which they remained for 25 years. 
Almost ten percent of all PhDs awarded in linguistics in the United States during this time came from the University of Texas at Austin. He supervised more than fifty PhDs and mentored hundreds of students, many of whom would acquire prominent positions in their respective fields."} {"text":"Alongside his teaching and administrative duties, Lehmann was engaged in research and writing. His \"Historical Linguistics: An Introduction\" (1962) has been translated into Japanese, German, Spanish and Italian, and remains a standard work on historical linguistics. He edited the \"Reader in Nineteenth Century Historical Indo-European Linguistics\" (1967), which remains a standard work on Indo-European, historical, and comparative linguistics. His \"Proto-Indo-European Syntax\" (1974) was hailed as a breakthrough by linguist Robert J. Jeffers, who reviewed it in the journal \"Language\". \"Studies in Descriptive and Historical Linguistics\", a festschrift in Lehmann's honor, was published in 1977 under the editorship of Paul Hopper. His influential \"Syntactic Typology\" was published in 1981."} {"text":"In 1983, Lehmann was made Louann and Larry Temple Centennial Professor in the Humanities at the University of Texas at Austin. He received the Harry H. Ransom Award for Teaching Excellence in the Liberal Arts in 1983, which he would describe as the greatest honor of his career. In 1984, together with fellow researcher Jonathan Slocum, Lehmann developed a groundbreaking prototype computer program for language translation, which the Linguistics Research Center (LRC) put into commercial production for Siemens."} {"text":"Lehmann retired as Louann and Larry Temple Centennial Professor Emeritus in the Humanities in 1986. Although retired from teaching, he remained very active as a researcher at the Linguistics Research Center at the University of Texas at Austin, and continued to write books and articles. 
In 1986 Lehmann founded the journal \"Computers and Translation\", now \"Machine Translation\", of which he was the founding editor. His \"Gothic Etymological Dictionary\" (1986) has been described as the best work ever published on Germanic etymology. He received the Commander's Cross of the Order of Merit of the Federal Republic of Germany in 1987."} {"text":"Notable works authored by Lehmann during his final years include the third edition of \"Historical Linguistics\" (1992) and \"Theoretical Bases of Indo-European Linguistics\" (1993). \"Language Change and Typological Variation\", a second festschrift in his honor, was published by the Institute for the Study of Man in 1999 under the editorship of Edgar C. Polom\u00e9 and Carol F. Justus. Lehmann completed his final monograph, \"Pre-Indo-European\" (2002), at the age of 86."} {"text":"Lehmann was preceded in death by his wife Ruth and his son Terry, and died in Austin, Texas on August 1, 2007."} {"text":"Throughout his career, Lehmann wrote more than fifty books and special issues of journals, and over 250 articles and more than 140 reviews. These works covered a diverse set of topics, including Middle High German literature, Japanese grammar, Old Irish, Biblical Hebrew, and textbooks on the German language. His contributions to the fields of Indo-European, Germanic and historical linguistics, and machine translation, have been significant, and several of his works on these subjects have remained standard texts up to the present day. He is remembered for his crucial role in establishing the University of Texas at Austin as one of America's leading institutions in linguistics, and for the large numbers of students that he taught and mentored, many of whom have made major contributions to scholarship."} {"text":"Lehmann married Ruth Preston Miller on October 12, 1940, whom he met while studying at the University of Wisconsin. 
A specialist in Celtic linguistics and Old English, Ruth was Professor of English at the University of Texas at Austin. Winfred and Ruth had two children, Terry Jon and Sandra Jean."} {"text":"Winfred and Ruth were both environmentalists and loved animals. They donated land in the northwest of Travis County, Texas to The Nature Conservancy to create the Ruth Lehmann Memorial Tract. The family inhabited a spacious house on Lake Travis, where they cared for rescued animals."} {"text":"Aside from linguistics and the environment, Lehmann's great passion was literature, particularly early Germanic literature and the novels of his friend Raja Rao and of James Joyce. He was also a skilled pianist. Lehmann was a close friend of John Archibald Wheeler, with whom he shared an interest in literature. Despite his wide circle of friends, Lehmann was nevertheless a very private man."} {"text":"Brian J. Byrne (born 1942) is an Australian social scientist specializing in applied linguistics and psycholinguistics, an emeritus professor at the University of New England in Australia, and lead author of publications and articles on research in his field. Byrne was a lead researcher in the 10-year-long, $5 million National Institutes of Health study by an international team of scientists into the development of reading ability in 1,000 pairs of twins. The study, which began in 2000, found that genetic factors were more important influences on reading development than environmental ones. In 2012, Byrne was appointed a lead researcher in a similar Australian study of twins."} {"text":"In 2008, the researchers published the results of their research, finding that genetic factors were more influential than environmental ones in the development of reading ability in children. 
Byrne cautioned, however, that \"Intensive and well-designed classroom and preschool interventions can make a difference for struggling readers.\" Byrne was subsequently selected in 2012 as lead researcher for a follow-on study of 2,000 twins listed in the National Literacy and Numeracy Assessment."} {"text":"Trevor Harley is emeritus chair of Cognitive Psychology at the University of Dundee, Scotland, United Kingdom. His primary research is in the psychology of language. From 2003 until 2016 he was Head and Dean of the university's School of Psychology. He is the author of \"The Psychology of Language\", currently in its fourth edition, published by Psychology Press, and \"Talking the talk\", a book about the psychology of language (psycholinguistics) aimed at a more general audience."} {"text":"Trevor Harley was born in 1958 in London and grew up near Southampton. He was educated at Price's Grammar School, Fareham. His undergraduate degree was in Natural Sciences at St John's College in the University of Cambridge. He stayed at Cambridge to study for his PhD under the supervision of Brian Butterworth. His PhD was on \"Slips of the tongue and what they tell us about speech production\"."} {"text":"For his PhD and later research he collected a corpus of several thousand naturally occurring speech errors, focusing on substitutions of one word for another (e.g. saying \"pass the pepper\" instead of \"pass the salt\"). He concluded that speech production is an interactive, parallel process, leading him to an interest in connectionist modeling, and research on computational modeling, ageing, and metacognition."} {"text":"After his PhD he took a temporary lectureship at the University of Dundee. He then moved to the University of Warwick, where he stayed until 1996, before moving to a Senior Lectureship at Dundee. 
He was awarded a personal chair in 2003, became Head of Department in the same year, and became Dean in 2006."} {"text":"In addition to his academic work, he is the author of a novel, \"Dirty old rascal\", a fantasy about a cook set in a strange castle where no misdeed goes unpunished. Harley has published an article, \"Why the earth is almost flat: Imaging and the death of cognitive psychology\". He has also performed as a stand-up comic, appearing at the Edinburgh Fringe in 2013."} {"text":"Harley's current main research interest is metacognition, an interest that grew out of his research on ageing and his interest in consciousness. His research on metacognition is covered further in his forthcoming book, \"Cognition: The mindful brain - why we behave as we do\"."} {"text":"Another of his research interests is how we produce language, which he now studies in the wider context of how we represent meaning, how language is affected by brain damage, and how it is affected by normal and pathological ageing (e.g. Alzheimer's and Parkinson's diseases). He also works on how we control our own cognition, and how this ability changes with age. Underlying all his research is a belief that the mind is a parallel, interactive computer, best studied by experimentation and computational modeling."} {"text":"He is also interested in the weather, and maintains a site about severe weather events in Britain and the British weather in general, available at trevorharley.com, describing this role as that of a \"psychometeorologist\". He also carries out psychological research about the weather, including why people are so interested in it. 
He maintains a weather station at Lundie near Dundee."} {"text":"He wrote a well-known article called \"Promises, Promises\" in which he argued that cognitive neuropsychologists have increasingly deviated from the original goals and methods of the subject."} {"text":"One of Harley's most famous publications is the book \"The Psychology of Language\". In this book, he discusses psycholinguistics, the study of the relationships between linguistic behaviour and psychological processes. Harley discusses both low-level cognitive processes, including speech and visual word recognition, and the high-level cognitive processes involved in comprehension. The text covers recent connectionist models of language, describing complex ideas in a clear and approachable manner. Following a strong developmental theme, the text describes how children acquire language (sometimes more than one), and also how they learn to read."} {"text":"Drew Westen is a professor in the Departments of Psychology and Psychiatry at Emory University in Atlanta, Georgia; the founder of Westen Strategies, LLC, a strategic messaging consulting firm for nonprofits and political organizations; and a writer. 
He is also co-founder, with Joel Weinberger, of Implicit Strategies, a market research firm that measures consumers' unconscious responses to advertising and brands."} {"text":"He grew up in North Carolina and Georgia, and received a Bachelor of Arts from Harvard University, a Master of Arts in Social and Political Thought from the University of Sussex (England), and a Doctor of Philosophy in clinical psychology from the University of Michigan, where he taught introductory psychology from 1985 to 1991."} {"text":"Westen is a strategic messaging consultant for major nonprofit organizations and has been a consultant or advisor to progressive and Democratic organizations, including the House and Senate Democratic Caucuses."} {"text":"In addition, Westen is a commentator on television and radio, in print, and online, and has been a frequent contributor to the opinion pages of the New York Times, the Washington Post, the Los Angeles Times, CNN.com and the Huffington Post. His 2011 article on Obama's leadership in the Sunday New York Times was one of the most widely read pieces in the history of the Sunday edition and drew considerable attention, including from the White House. Because it captured popular opinion at the time about the President's leadership style, he and his close advisors were concerned enough about its impact that they sent friendly journalists a thirty-plus-page email of talking points to use when he was interviewed on television and radio."} {"text":"At Harvard University and at Emory, Westen's work has focused on alternative ways of assessing and classifying personality disorders and developing and refining the Shedler-Westen Assessment Procedure as a tool for researchers and clinicians to help further the understanding of personality and its disorders. 
He is unusual among academic clinical psychologists in having been both an active researcher and, for 20 years, a practicing clinician, and he has written on what can be learned from both science and practice. This is reflected in over a decade's work on how to revise the diagnostic manual in psychiatry so that it is useful to both clinicians and researchers."} {"text":"Much of Westen's theoretical work has attempted to bridge perspectives, particularly cognitive, psychodynamic, and evolutionary. He has published over 200 research papers in the scientific literature."} {"text":"In January 2006 a group of scientists led by Westen announced at the annual Society for Personality and Social Psychology conference in Palm Springs, California the results of a study in which functional magnetic resonance imaging (fMRI) showed that self-described Democrats and Republicans responded to negative remarks about their political candidate of choice in systematically biased ways."} {"text":"Subjects were then presented with information that exonerated their candidate of choice. When this occurred, areas of the brain involved in reward (notably dopamine-rich regions such as the striatum \/ nucleus accumbens) showed increased activity, essentially reinforcing both their positive feelings toward their favored candidate and defensive reasoning."} {"text":"The study was published in the \"Journal of Cognitive Neuroscience\" 18:11, pp.\u00a01947\u201358, a peer-reviewed scientific journal."} {"text":"In 2007, PublicAffairs published Westen's \"The Political Brain.\" The book has been widely used by political candidates and leaders around the world and is credited as having influenced campaign strategies in a number of races, beginning with the 2008 Presidential race. 
President Bill Clinton described it as one of the most significant books in politics he had read in a decade."} {"text":"He is divorced and has two children."} {"text":"(Maria) Fernanda Ferreira (born 22 September 1960) is a cognitive psychologist known for empirical investigations in psycholinguistics and language processing. Ferreira is Professor of Psychology and the principal investigator of the Ferreira Lab at the University of California, Davis."} {"text":"In 1995, Ferreira was awarded the American Psychological Association's Distinguished Scientific Award for Early Career Contribution to Psychology in the area of cognition and human learning. She is a Fellow of the Association for Psychological Science, the Cognitive Science Society, and the Royal Society of Edinburgh (FRSE)."} {"text":"Ferreira received her BA (Honours) in Psychology from the University of Manitoba in 1982. She went on to complete postgraduate work at the University of Massachusetts, Amherst, obtaining degrees in Linguistics (MA 1986) and Psychology (MS 1985, PhD 1988). At UMass Amherst, Ferreira worked under the supervision of Charles (Chuck) Clifton, Jr., investigating relationships between syntactic processing and phonology. Her dissertation, \"Planning and Timing in Sentence Production: The Syntax-to-Phonology Conversion,\" provided evidence that phonological structures and representations, rather than syntactic structures, impact the timing of sentence-level speech."} {"text":"Ferreira served as Program Director for Linguistics at the National Science Foundation from 1996 to 1997. From 2004 until 2006, Ferreira was the Director of the Center for the Integrated Study of Vision and Language at Michigan State University. She was Chair of Language and Cognition and Professor in Psychology at the University of Edinburgh from 2006 until 2010."} {"text":"Ferreira is an editor of \"Collabra: Psychology\", an open access psychology journal published by the University of California Press. 
She is also an associate editor of the journal \"Cognitive Psychology\". She previously served as an associate editor of the \"Journal of Experimental Psychology\" (1997\u20132000) and the \"Journal of Memory and Language\" (2001\u20132004)."} {"text":"Ferreira was born in Portugal and raised in Manitoba, Canada. She is married to John Henderson, a frequent collaborator and fellow professor at the University of California, Davis. They met while they were both studying at the University of Massachusetts, Amherst. Her younger brother, Victor Ferreira, is also a psycholinguist, and a Professor of Psychology at the University of California, San Diego."} {"text":"Judith F. Kroll is a Distinguished Professor of Language Science at the University of California, Irvine. She specializes in psycholinguistics, focusing on second language acquisition and bilingual language processing. With Randi Martin and Suparna Rajaram, Kroll co-founded the organization Women in Cognitive Science in 2001. She is a Fellow of the American Association for the Advancement of Science (AAAS), the American Psychological Association (APA), the Psychonomic Society, the Society of Experimental Psychologists, and the Association for Psychological Science (APS)."} {"text":"Kroll's research program examines the cognitive processes underlying bilingualism. Her research has been supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH). With Annette de Groot, she co-edited the \"Handbook of Bilingualism: Psycholinguistic Approaches.\" In 2013, Kroll was awarded a Guggenheim Fellowship to conduct research exploring how learning a second language and becoming bilingual impacts the processing of one's native language."} {"text":"One of Kroll's research foci has to do with language selection in bilingual speech. 
She discovered that when bilinguals speak one language, both languages are active."} {"text":"Roger William Brown (April 14, 1925 \u2013 December 11, 1997) was an American psychologist. He was known for his work in social psychology and in children's language development."} {"text":"Brown taught at Harvard University from 1952 until 1957 and from 1962 until 1994, and at the Massachusetts Institute of Technology (MIT) from 1957 until 1962. His scholarly books include \"Words and Things: An Introduction to Language\" (1958), \"Social Psychology\" (1965), \"Psycholinguistics\" (1970), \"A First Language: The Early Stages\" (1973), and \"Social Psychology: The Second Edition\" (1986). He authored numerous journal articles and book chapters."} {"text":"He was the doctoral adviser or a post-doctoral mentor of many researchers in child language development and psycholinguistics, including Jean Berko Gleason, Susan Ervin-Tripp, Camile Hanlon, Dan Slobin, Ursula Bellugi, Courtney Cazden, Richard F. Cromer, David McNeill, Eric Lenneberg, Colin Fraser, Eleanor Rosch (Heider), Melissa Bowerman, Steven Pinker, Kenji Hakuta, Jill de Villiers, and Peter de Villiers. A \"Review of General Psychology\" survey, published in 2002, ranked Brown as the 34th-most cited psychologist of the 20th century."} {"text":"Born in Detroit, Brown earned an undergraduate psychology degree in 1948 and a Ph.D. in 1952 from the University of Michigan. He started his career in 1952 as an instructor and then assistant professor of psychology at Harvard University. In 1957 he left Harvard for an associate professorship at MIT, and became a full professor of psychology there in 1960. In 1962, he returned to Harvard as a full professor, and served as chair of the Department of Social Relations from 1967 to 1970. 
From 1974 until his retirement in 1994, he held the title of John Lindsley Professor of Psychology in Memory of William James."} {"text":"Roger Brown's research and teaching focused on social psychology, the relationship between language and thought, and the linguistic development of children. The clarity, directness, and humor of his scholarly writing are often praised; Pinker describes him as \"perhaps the best writer in psychology since James himself\"."} {"text":"Brown's book \"Words and Things: An Introduction to Language\" (1958) examines the mutual influence of thought and language, and has been described as \"the first book on the psychology of language coming out of the cognitive revolution\". His writing in this area became an inspiration for much work on the relation between language and cognition, including Eleanor Rosch (Heider)'s work on color names and color memory and Steven Pinker's 1994 book \"The Language Instinct\"."} {"text":"Brown taught social psychology and published his first textbook, \"Social Psychology\", in 1965. The book was completely rewritten and published in 1986 as \"Social Psychology: The Second Edition\". Brown also wrote an introductory textbook on psychology, co-authored with his colleague Richard Herrnstein. Pinker noted that these two books \"live in publishing infamy as a lesson of what happens to textbooks that are unconventional, sophisticated, and thought-provoking: they don't sell.\""} {"text":"Other important works by Brown include his 1977 paper on \"Flashbulb Memories\", concerning people's memories of what they were doing at the time they heard about major traumatic events such as the JFK assassination. 
The breadth of his interests is seen in the papers reprinted in his 1970 book \"Psycholinguistics\", which includes work with David McNeill on the 'tip of the tongue' state, a study with Albert Gilman of the social factors involved in choosing familiar versus polite second-person pronouns (\"tu, vous\") in languages like French and Spanish, and a review of the novel \"Lolita\" by Harvard colleague Vladimir Nabokov."} {"text":"Brown was known for the grace with which he treated and referred to his colleagues, whether junior or senior. An example of this is found in his brief autobiography: \"Jerome Bruner, then as now, had the gift of providing intellectual stimulus, but also the rarer gift of giving his colleagues the strong sense that psychological problems of great antiquity were on the verge of solution that afternoon by the group there assembled.\""} {"text":"Linguistic Determinism and the Part of Speech (1957)"} {"text":"In his \u201cHow Shall a Thing Be Called?\u201d article, Brown wrote about how objects have many names, but often share a common name. He proposed the frequency-brevity principle, by which he theorized that children use words that are shorter in length because shorter words are more common names for objects in the English language\u2014for example, referring to a dog as \"dog\" and not \"animal\". He elaborated on the frequency-brevity principle and how it may be violated (for example, referring to a pineapple as \"pineapple\" and not \"fruit\"). He further argued that children progress from concrete naming to more abstract categorizations as they age."} {"text":"The Pronouns of Power and Solidarity (1960)"} {"text":"The Tip of The Tongue Phenomenon (1966)"} {"text":"To test the tip-of-the-tongue phenomenon empirically, Brown and David McNeill conducted a study in which they asked participants to look over a list of words and definitions and then listen to the definition of one of the words on the list. 
Those in the \u201ctip of the tongue\u201d state were asked to fill out a chart assessing the related words that they were able to come up with. Brown and McNeill identified two types of recall, abstract and partial, that participants exhibited when attempting to remember the target words. Abstract recall relies on the number of syllables in the target word or the location of stressed syllables in the word, while partial recall relies on the number of letters in the target word."} {"text":"Brown was a Guggenheim Fellow in 1966\u201367. He was elected to the American Academy of Arts and Sciences (1963) and the National Academy of Sciences (1972). In 1971 he received the Distinguished Scientific Achievement Award of the American Psychological Association; in 1973, the G. Stanley Hall Prize in Developmental Psychology of the American Psychological Association; and in 1984, the Fyssen International Prize in Cognitive Science. He also was awarded several honorary doctorates."} {"text":"Roger Brown was born in Detroit, one of four brothers. His family, like many others, was hit hard by the Depression. He attended Detroit public schools, and began undergraduate studies at the University of Michigan, but World War II interrupted his education. He joined the Navy during his freshman year, was accepted into the V-12 program, which included midshipman training at Columbia University, and served as an ensign in the U.S. Navy. During his time in the navy, he became interested in psychology. With the help of the GI Bill, he completed his university education after the war. Brown became a dedicated opera fan, with a particular admiration for Metropolitan Opera soprano Renata Scotto."} {"text":"Lila Gleitman (born December 10, 1929) is a professor emerita of psychology and linguistics at the University of Pennsylvania. She is an internationally renowned expert on language acquisition and developmental psycholinguistics, focusing on children's learning of their first language. 
Gleitman's research interests include language acquisition, morphology and syntactic structure, psycholinguistics, syntax, and the construction of the lexicon. Notable former students include Elissa Newport, Barbara Landau, and Susan Goldin-Meadow."} {"text":"She was married to fellow psychologist Henry Gleitman, who was also a professor emeritus of psychology at the University of Pennsylvania, until his death on September 2, 2015."} {"text":"Gleitman received a B.A. in literature from Antioch College in 1952, an M.A. in linguistics from the University of Pennsylvania in 1962, and a Ph.D. in linguistics from the University of Pennsylvania in 1967. She was employed as an assistant professor at Swarthmore College before accepting a position as the William T. Carter Professor of Education at the University of Pennsylvania from 1972 to 1973, and then serving as a professor of linguistics and as the Steven and Marcia Roth Professor of Psychology at the University of Pennsylvania from 1973 until her retirement."} {"text":"The impact of Gleitman's research in language acquisition has been recognized by numerous organizations, and she has been elected as a fellow of the American Psychological Association, the Association for Psychological Science, the Society of Experimental Psychologists, the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the National Academy of Sciences. She won the David Rumelhart Prize in 2017 and served as President of the Linguistic Society of America in 1993."} {"text":"Gleitman describes her linguistic interests on her member page for the National Academy of Sciences as follows:"} {"text":"\"One of my main interests concerns the architecture and semantic content of the mental lexicon, i.e., the psychological representation of the forms and meanings of words. 
My second major interest is in how children acquire both the lexicon and the syntactic structure of the native tongue.\""} {"text":"Charles Perfetti is the director of, and a senior scientist at, the Learning Research and Development Center at the University of Pittsburgh. His research is centered on the cognitive science of language and reading processes, including but not limited to lower- and higher-level lexical and syntactic processes and the nature of reading proficiency. He conducts cognitive behavioral studies involving ERP, fMRI and MEG imaging techniques. His goal is to develop a richer understanding of how language is processed in the brain."} {"text":"This experiment was performed on twenty-one graduate students from the University of Pittsburgh who were native Chinese speakers. Participants performed a written sentence task in which they read sentences whose expected continuation was interrupted by a relative clause. The results of the norming study revealed that approval of subject-verb-object continuations was high for both subject-extracted and object-extracted clauses. Participants read experimental sentences containing one of the two types of relative clauses. One version of the experimental sentences was read within a session and the other version was read between five and ten minutes later. An electroencephalogram recording was collected for each participant as they read the sentences in Chinese."} {"text":"For the lexical decision task, greater activation was found in the character-writing condition than in the pinyin-writing condition. The areas of the brain that were activated were the bilateral superior parietal lobules and the inferior parietal and postcentral gyri. The results suggest that identifying learned characters in the character-writing training condition promoted activation of the components used in the earlier training exercise. 
Where activation was greater in the pinyin-writing condition than in the character-writing condition, it was located in the right inferior frontal gyrus. There was also greater activation in the bilateral middle occipital gyri, precuneus, and left temporal gyrus for learned characters than for novel characters."} {"text":"In exploring the Lexical Quality Hypothesis, Charles Perfetti focuses on analyzing the brain basis of the ability to read. In \"Reading Ability: Lexical Quality to Comprehension\", Perfetti states that differences in the characteristics of word representations impact reading ability and comprehension. High-quality lexical representations partly involve the spelling of a word as well as the manipulation of its meaning, which allows meaning retrieval at a rapid pace. However, low-quality representations of a word promote word-related difficulties in the comprehension of a text. His first set of results reveals that comprehension depends on lexical skill and describes the disconnections that bear specifically on comprehension skills. As for linguistic word processing, studies reveal that skill differences are found through the analysis of confusable word meanings."} {"text":"Event-related potential (ERP) studies of rare vocabulary meanings unveil how skilled readers acquire words better and reveal stronger ERP indications of word learning. In addition, these results suggest that there are skill differences in understanding the orthographic representation of a word. ERP results also show that there are skill differences in the comprehension and processing of ordinary words. Finally, they demonstrate problems for low-skilled readers with interpreting words in light of prior text. In doing so, Perfetti provides findings that suggest word knowledge impacts the processing of word meaning and comprehension."} {"text":"Perfetti tested 800 psychology students over the course of several semesters. 
The students were given reading tasks to assess their levels of spelling, word-sounding, and comprehension skills. His claim was that reading skill and the rate at which words are encountered govern a reader's experience with words. He found that by making sure participants knew both meanings of homophone pairs, the achievement of skilled and less skilled readers could be assessed. Furthermore, training participants on the less-exposed members of homophone pairs can reverse comprehension confusion to the point that those members behave like higher-frequency words."} {"text":"The results showed faster reaction times for learned and familiar words than for unlearned and rare words, while response times for correct decisions were faster than for incorrect decisions. There was a significant main effect for word type, but not for word type x relatedness. This word-type effect revealed that learners were faster at responding to related trials in the orthography-to-meaning and phonology trials. The interaction for word type x correctness revealed a difference in decision times, found in the orthography-to-meaning and phonology-to-meaning conditions for familiar words. The results suggest that reinforcing a word's orthography might help readers recognize the word in future encounters, which influences the process of incremental learning."} {"text":"John L. Locke is an American biolinguist who has contributed to the understanding of language development and the evolution of language. His work has focused on how language emerges in the social context of interaction between infants, children and caregivers, how speech and language disorders can shed light on the normal developmental process and vice versa, how brain and cognitive science can help illuminate language capability and learning, and on how the special life history of humans offers perspectives on why humans are so much more intensely social and vocally communicative than their primate relatives. 
More recently, he has authored widely accessible volumes designed for the general public on the nature of human communication and its origins."} {"text":"Locke has studied and worked in the United States and the United Kingdom. He received a B.A. in speech communication from Ripon College in 1963, and both an M.A. and a Ph.D. in speech pathology, audiology and speech science from Ohio University, in 1965 and 1968 respectively. He went on to postdoctoral fellowships in psychology at Yale University and at Oxford University in the United Kingdom (UK) from 1972 to 1974."} {"text":"He is currently a Professor of Language Science at Lehman College, the City University of New York. He has previously been on the faculty at the University of Illinois, the University of Maryland, Harvard University, the University of Sheffield, and Cambridge University in the UK."} {"text":"Locke\u2019s research has been funded by a wide variety of sources including the National Institutes of Health, the Axe-Houghton Foundation, the James S. McDonnell Foundation, the March of Dimes, the Cape Branch Foundation, and the Commonwealth Fund. He has held significant roles in the American Speech-Language-Hearing Association, the Linguistic Society of America, and the Society for Research in Child Development."} {"text":"He has been honored as a recipient of the Science Award from Ohio University (2002) and the Faculty Recognition Award for Research and Scholarship from Lehman College (2009)."} {"text":"He was a founding editor of the journal Applied Psycholinguistics, and has served on numerous other editorial boards. 
His administrative roles have included: Director of the Interdepartmental Program in Linguistics, Lehman College, City University of New York (2003\u20132007); Head of the Department of Human Communication Science, University of Sheffield, Sheffield, England (1995\u20131998); Founding Director and Senior Research Scientist, Neurolinguistics Laboratory, Massachusetts General Hospital, Boston, Massachusetts (1984\u20131995); Director and Professor, Graduate Program in Communication Sciences and Disorders, MGH Institute of Health Professions (1983\u20131995); Director and Professor, Linguistic Institute, University of Maryland, College Park (1982); and Director, Speech and Hearing Laboratory, Institute for Child Behavior and Development, University of Illinois at Urbana-Champaign (1969\u20131980)."} {"text":"Locke is the author of two volumes that have played central roles in the understanding of child language development in a biological context: the first focused on the development of phonological capabilities, which Locke views as greatly under-emphasized in the study of the emergence of human language, and the second a far-ranging synthesis of evidence related to the acquisition of language. These works have been cited hundreds of times in the scientific literature, and have influenced works related specifically to phonological development, to language development in general, to language evolution, and to broad topics in developmental theory."} {"text":"He has recently authored two additional volumes directing attention to the significance of speech communication in the modern world (reviewed by, among others, the New York Times and the Washington Times) and to eavesdropping and gender differences in the understanding of human communication and the human condition."} {"text":"Maryellen MacDonald is Donald P. Hayes Professor of Psychology at the University of Wisconsin\u2013Madison. 
She specializes in psycholinguistics, focusing specifically on the relationship between language comprehension and production and the role of working memory. MacDonald received a Ph.D. from the University of California, Los Angeles in 1986. She is a Fellow of the Cognitive Science Society. She is married to fellow psychologist Mark Seidenberg and has two children."} {"text":"Glenn David McNeill (born 1933 in California, United States) is an American psychologist and writer specializing in scientific research into psycholinguistics, especially the relationship of language to thought and the gestures that accompany discourse."} {"text":"David McNeill is a professor at the University of Chicago in Illinois, and a writer."} {"text":"McNeill was awarded a Bachelor of Arts in 1953 and a Doctor of Philosophy in 1962, both in psychology, at the University of California, Berkeley. He went on to study at the Center for Cognitive Studies, Harvard University, in 1963."} {"text":"As well as being a member of Phi Beta Kappa and Sigma Xi and holding several academic fellowships, including a Guggenheim Fellowship in 1973-1974, McNeill was Gustaf Stern Lecturer at the University of G\u00f6teborg, Sweden in 1999, and Vice President of the International Society for Gesture Studies from 2002\u20132005."} {"text":"In 1995, McNeill won the Award for Outstanding Faculty Achievement at the University of Chicago; the same year, he was awarded the Gordon J. 
Laing Award from the University of Chicago Press for the book \"Hand and Mind\"."} {"text":"In 2004, the National-Louis University (a multi-campus institution in Chicago) Office of Institutional Management Grants Center received an American Psychological Association grant for Gale Stam (Psychology, College of Arts and Sciences) to provide \"a Festschrift conference honoring Professor David McNeill of the University of Chicago.\""} {"text":"McNeill specializes in psycholinguistics, and in particular scientific research into the relationship of language to thought and the gestures that accompany discourse."} {"text":"In his research, McNeill has studied video-recorded discourses of the same stimulus stories being retold \"together with their co-occurring spontaneous gestures\" by \"speakers of different languages, [...] by non-native speakers at different stages of learning English, by children at various ages, by adolescent deaf children not exposed to language models, and by speakers with neurological impairments (aphasic, right hemisphere damaged, and split-brain patients).\""} {"text":"This and other research has formed the subject matter of a number of books which McNeill has written throughout his career."} {"text":"Research on the psychology of language and gesture."} {"text":"The \"growth point\" is a key theoretical concept in McNeill's approach to psycholinguistics and is central to his work on gestures, specifically those spontaneous and unwitting hand movements that regularly accompany informal speech. The growth point, or GP, hypothesis posits that gestures and speech are unified and need to be considered jointly. For McNeill, gestures are in effect (or, McNeill would say, in reality) the speaker's thought in action, and integral components of speech, not merely accompaniments or additions. 
Much evidence supports this idea, but its full implications have not always been recognized."} {"text":"Speech and gesture, taken together, comprise minimal units of human linguistic cognition. Following Lev Vygotsky in defining a \"unit\" as the smallest package that retains the quality of being a whole, in this case the whole of a gesture-language unity, McNeill calls the minimal psychological unit a Growth Point because it is meant to be the initial pulse of thinking-for-(and while)-speaking, out of which a dynamic process of organization emerges. The linguistic component of speech categorizes the visual and actional imagery of the gesture; the imagery of the gesture grounds the linguistic categories in a visual spatial frame."} {"text":"McNeill furthers this conception of the material carrier by turning to Maurice Merleau-Ponty for insight into the duality of gesture and language. Gesture, the instantaneous, global, nonconventional component, is \"not an external accompaniment\" of speech, which is the sequential, analytic, combinatoric component; it is not a \"representation\" of meaning, but instead meaning \"inhabits\" it. Merleau-Ponty links gesture and existential significance:"} {"text":"The link between the word and its living meaning is not an external accompaniment to intellectual processes, the meaning inhabits the word, and language 'is not an external accompaniment to intellectual processes'. We are therefore led to recognize a gestural or existential significance to speech. \u2026 Language certainly has inner content, but this is not self-subsistent and self-conscious thought. What then does language express, if it does not express thoughts? It presents or rather it \"is\" the subject\u2019s taking up of a position in the world of his meanings. [emphasis in the original]"} {"text":"To make a gesture, from this perspective, is to bring thought into existence on a concrete plane, just as writing out a word can have a similar effect. 
The greater the felt departure of the thought from the immediate context, the more likely is its materialization in a gesture, because of this contribution to being. Conversely, when \"newsworthiness\" is minimal, materialization diminishes and in some cases disappears, even though a GP is active; in these cases gestures may cease while (empty) speech continues, or, vice versa, speech ceases and a vague gesture takes place. Thus, gestures are more or less elaborated, and GPs more or less materialized, depending on the importance of material realization to the \"existence\" of the thought."} {"text":"Mead's Loop and the mirror neuron \"twist\" would be naturally selected in scenarios where sensing one's own actions as social is advantageous, for example in imparting information to infants, where it gives the adult the sense of being an instructor as opposed to being just a doer with an onlooker, as is the case with chimpanzees. Entire cultural practices of childrearing depend upon this sense. Self-awareness as an agent is necessary for this advantage to take hold. 
For Mead's Loop to have been selected, the adult must be sensitive to her own gestures as social actions."} {"text":"McNeill's books have received coverage in a number of academic journals and in the general press."} {"text":"A 1991 article in the \"Chicago Reader\"; a 2006 article in \"Scientific American Mind\" magazine; and a 2008 article in the \"Boston Globe\" describe McNeill's work on the language of gesture in detail."} {"text":"\"The Acquisition of Language\" was reviewed in the \"International Journal of Language & Communication Disorders\" in 1971."} {"text":"\"The Conceptual Basis of Language\" was reviewed in 1980."} {"text":"\"Hand and Mind\" was reviewed in \"Language and Speech\"; the \"American Journal of Psychology\"; and \"Language\" in 1994."} {"text":"\"Gesture and Thought\" was reviewed in \"Language in Society\" and \"Metaphor and Symbol\" in 2007."} {"text":"Aniruddh (Ani) D. Patel is a cognitive psychologist known for his research on music cognition and the cognitive neuroscience of music. He is Professor of Psychology at Tufts University. From a background in evolutionary biology, his work includes empirical research, theoretical studies, brain imaging techniques, and acoustical analysis applied to areas such as cognitive musicology (how humans process music), parallel relationships between music and language, and evolutionary musicology (cross-species comparisons). 
Patel received a Guggenheim Fellowship in 2018 to support his work on the evolution of musical cognition."} {"text":"Patel received the Deems Taylor Award from the American Society of Composers, Authors and Publishers, and the Music Has Power Award from the Institute for Music and Neurologic Function for his 2008 book, \"Music, Language and the Brain.\" Oliver Sacks considered \"Music, Language, and the Brain\" \"a major synthesis that will be indispensable to neuroscientists.\" Josh McDermott, head of MIT's Laboratory for Computational Audition, found Patel's focus on the syntax of music and language, with its potential for revelations into similarities in their underlying mechanical operations, especially significant. Ray Jackendoff, co-author with Fred Lerdahl of \"A Generative Theory of Tonal Music\", suggested a cautious approach in distinguishing parallels between music and language without accounting for other cognitive domains that may share such capacities."} {"text":"Following graduate school, Patel joined The Neurosciences Institute in San Diego, CA, under the direction of Gerald Edelman. In 2005, he was appointed the Esther J. Burnham Senior Fellow, and he remained at the Institute until 2012, when he joined Tufts University as an Associate Professor in the Department of Psychology. At Tufts, he is a participating member of the Stibel Dennett Consortium, a faculty group that encourages teaching initiatives and scholarship relating to the brain and cognition. In addition to research and academic activities, Patel has been active in a number of related organizations. From 2009 to 2011, he was president of the Society for Music Perception and Cognition (SMPC), an organization dedicated to the study of musical cognition."} {"text":"Patel is a Fellow at the Canadian Institute for Advanced Research (CIFAR), a global research organization that recognizes and supports international, innovative, high-impact research. 
He was named a Fellow of the Radcliffe Institute for Advanced Studies (Social Sciences) for 2018-2019 and was a Visiting Scholar in the Department of Human Evolutionary Biology at Harvard University."} {"text":"With John Iversen and others, Patel has explored how brain mechanisms perceive and process rhythm, as well as the relationship of music and language processing. A 2021 study with J.J. Cannon focuses on how beat anticipation is neurally implemented by processes occurring in the supplementary motor area and the dorsal striatum."} {"text":"A significant area of interest for Patel concerns communication among and across species, and the evolutionary roots of human language and music. His search for the origins of rhythm (beat) and melody has led to explorations of the vocal and rhythmic behavior of monkeys, birds, and parrots. After failing to find the anticipated rhythmic correspondences in chimpanzees, Patel was surprised to learn about Snowball, a cockatoo with a fine sense of rhythm. By 2019, he had studied not only the cockatoo's timing but also the creativity involved in its various moves."} {"text":"An especially informative video episode is part of a MathScienceMusic series from New York University. Patel presents the Normalized Pairwise Variability Index (nPVI) equation long used by linguists to compare patterns in speech. After establishing stylistic contrasts in the stress patterns of spoken French and English, he applies the formula to musical compositions by native composers of both countries. While strong contrasts are found in the language examples, similar although weaker contrasts are found in the musical excerpts. The strength of this comparison is particularly important because the musical contrasts do not rely on beat, pulse, or other characteristic aspects of music that are not found in language. 
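The nPVI measure has a simple closed form: for a sequence of durations, it averages the absolute difference of each adjacent pair, normalized by the pair's mean, and scales the result by 100. A minimal Python sketch of that standard formula follows (the function name and example durations are illustrative, not taken from Patel's materials):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index for a sequence of durations
    (e.g. vowel lengths in speech or note lengths in music).
    0 means a perfectly even sequence; higher values mean more
    contrast between successive durations."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    pairs = zip(durations, durations[1:])
    total = sum(abs(a - b) / ((a + b) / 2) for a, b in pairs)
    return 100 * total / (len(durations) - 1)

print(npvi([1, 1, 1, 1]))   # perfectly even sequence: 0.0
print(npvi([2, 1, 2, 1]))   # alternating long/short durations score much higher
```

Applied to vowel durations, stress-timed languages such as English tend to score higher than syllable-timed languages such as French, which is the contrast Patel carries over to music.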
Rather, they demonstrate a parallel with the prevalent characteristics of the stressed and unstressed syllables of the spoken language."} {"text":"Susan Moore Ervin-Tripp (1927\u20132018) was an American linguist whose psycholinguistic and sociolinguistic research focused on the relation between language use and the development of linguistic forms, especially the developmental changes and structure of interpersonal talk among children."} {"text":"Born Susan Moore Ervin on June 29, 1927, in Minneapolis, Minnesota, she earned her undergraduate degree in Art History at Vassar College. She earned a PhD from the University of Michigan in 1955 for her thesis, entitled \"The Verbal Behaviour of Bilinguals: The Effect of Language of Report upon the Thematic Apperception Test Stories of Adult French Bilinguals\", under the supervision of Theodore Newcomb. She taught at the University of California at Berkeley. In her academic work she conducted research on child language acquisition and bilingualism among children and made contributions to the fields of linguistics, psychology, child development, sociology, anthropology, rhetoric, and women's studies."} {"text":"She was a doctoral advisor of Daniel Kahneman, a 2002 Nobel Prize winner."} {"text":"Ervin-Tripp was a Guggenheim Fellow in 1974."} {"text":"A festschrift dedicated to Ervin-Tripp was published in 1996."} {"text":"A tribute to the work of Susan Ervin-Tripp with a comprehensive bibliography was published by A. Kyratzis in 2020."} {"text":"Charles Egerton Osgood (20 November 1916 \u2013 15 September 1991) was an American psychologist and professor at the University of Illinois. He was known for his research on behaviourism versus cognitivism, semantics (he introduced the term \"semantic differential\"), cross-culturalism, psycholinguistic theory, and peace studies. He is credited with helping in the early development of psycholinguistics. 
Charles Osgood was recognized as a distinguished and highly honored psychologist throughout his career."} {"text":"Charles Egerton Osgood was born in Somerville, Massachusetts, on 20 November 1916. His father was a manager at the Jordan Marsh department store in Boston. Osgood described having an unhappy childhood, as his parents were divorced by the time he was six. When he was ten, his aunt, Grace Osgood, gave him a copy of Roget's \"Thesaurus\". This gift was described by Osgood as an \u201cobject of aesthetic pleasure\u201d, sparking his fascination with words and their meanings."} {"text":"Osgood attended Brookline High School, where he began writing for the school newspaper, and eventually founded a school magazine. Osgood attended Dartmouth College, where he intended to graduate and work as a writer for newspapers. During his second year, he enrolled in a class taught by Theodore Karwoski, which inspired him to switch his major and pursue a degree in psychology."} {"text":"Charles Osgood earned his B.A. in 1939 from Dartmouth, and in the same year, married Cynthia Luella Thornton. Osgood then went on to study at Yale University, where he completed his Ph.D. in 1945. During his time at Yale, he worked as an assistant for Robert Sears, and collaborated with the likes of Arnold Gesell, Walter Miles, Charles Morris, and Irvin Child. However, the person with the greatest influence on his career and future work was Clark Hull. Osgood was heavily influenced by working alongside Hull; he stated the experience was one of the determining reasons he pursued a career as a researcher rather than a clinician."} {"text":"In addition to this, Osgood completed a fellowship at the Center for Advanced Study in the Behavioral Sciences at Stanford University from 1958 to 1959, and was given an honorary doctorate by Dartmouth College in 1962. 
Osgood also acted as a visiting professor at the University of Hawaii from 1964 to 1965."} {"text":"Charles Osgood's career ended somewhat abruptly and prematurely after he developed an acute case of Korsakoff's syndrome. He was left with severe anterograde amnesia, but recovered well enough to continue working, though in a much lighter capacity, as he was restricted to working from home."} {"text":"Toward the end of his career, Osgood decided to devote his time to three main projects. With the help of other scholars, Osgood intended to complete the interpretation of data obtained from the cross-cultural project and to publish two books: one a summary of his theory of psycholinguistics (to be titled \"Toward an Abstract Performance Grammar\"), the other on international affairs. Osgood was never able to complete any of these due to the effects of his illness, which, after a few years, forced him into complete retirement, until his death on September 15, 1991."} {"text":"Osgood worked mainly on cross-cultural studies in several areas. He devoted most of his time to studies in social psychology, cognitive-behavioural psychology, and psycholinguistics. He was renowned for four major works, which paved the way for future researchers by providing research tools with which to validate their work and by promoting international research on cross-cultural topics."} {"text":"Osgood's mediation theory: the psycholinguistic foundations of human behaviour and communication."} {"text":"Osgood proposed the mediation theory, which suggested that physical stimuli in our environment elicit internal responses that lead to our interpretation of the underlying meaning of those stimuli. 
In this three-level thought process, internal stimuli (our thoughts and emotions about the physical stimulus) give rise to outward responses, the visible feedback to the physical stimulus in the environment. Osgood also suggested that by measuring the visible outward response we can determine the intensity of the emotion elicited by the physical stimulus."} {"text":"Osgood also proposed a two-stage mediation learning theory of language acquisition in 1954. The theory suggested that the use of language is an expression of mental processes related to the cultural context of an individual. It suggested that the language acquisition process involves coding and decoding of the psychological structure within the language. His research in language, cognition, and neurophysiology provided insight for future studies of multilingual language acquisition within a cross-cultural framework."} {"text":"Osgood introduced a semantic technique for researchers to measure the connotative meaning of objects and concepts from a human-ecology perspective. The semantic differential technique focused on three affective dimensions, Evaluation, Potency, and Activity (E-P-A), to evaluate socially and culturally related concepts in a valid and reliable way. The semantic differential technique is used broadly in social and behavioural science studies."} {"text":"Development of the Atlas of Affective Meanings (1960s\u20131980s)."} {"text":"To further improve the validity of the semantic differential technique, Osgood took the lead in developing the Atlas of Affective Meanings project from the 1960s to the 1980s. 
The project indexed affective meanings with 20 basic and derived measures across over 600 functionally equivalent concepts, analyzing over 30 language\/culture communities from Mexico, Brazil, Japan, Hong Kong, Thailand, India, Iran, Lebanon, Israel, Turkey, Greece, Yugoslavia, Italy, Spain, Portugal, France, Germany, the Netherlands, Finland, and elsewhere."} {"text":"With the development of the Atlas, affective meanings are used as universal functional markers along the E-P-A dimensions, and they have high validity in measuring indigenous and cross-cultural comparisons. These affective meanings are widely applied in socio-cultural studies of social dynamics, international communication, mental illness stigma, the connotation of racial concepts, and more. The Atlas contributed greatly to the development of cross-cultural research and international communication."} {"text":"Graduated reciprocation in tension reduction (GRIT) strategies."} {"text":"With the rise of the nuclear arms race between the United States and the Soviet Union during the Cold War, Osgood proposed the GRIT strategies (Graduated Reciprocation in Tension reduction) in 1962, a psychological approach to resolving the tension created by the nuclear arms race between the two superpowers. The GRIT strategies are based on the concept of reciprocity and are used to rebuild a negotiation platform for two parties who are deadlocked. The introduction of GRIT strategies not only reduced tension between the two superpowers but has also contributed to solving various social, cultural, and political conflicts worldwide."} {"text":"Charles Osgood earned many distinctions and honors within the field of psychology throughout his distinguished career. 
In 1960, the American Psychological Association presented Osgood with the Award for Distinguished Scientific Contributions; three years later, Osgood was elected president of the American Psychological Association. In addition to this, the Society for the Psychological Study of Social Issues presented Charles E. Osgood with the Kurt Lewin Memorial Award in 1971. In the following year, he was elected to the National Academy of Sciences, and he was elected president of the Peace Science Society in 1976. Osgood was also the recipient of the Guggenheim Fellowship twice, in 1955 and again in 1972, in the field of philosophy."} {"text":"Mervyn Etienne is an English karateka. He is the winner of multiple European Karate Championships and World Karate Championships medals. Since retiring from karate competitions, Etienne has become a \"cognitive performance coach\", physical therapist, and co-founder of Bio-Performance Sciences Ltd."} {"text":"Mark Seidenberg is Vilas Research Professor and Donald O. Hebb Professor of Psychology at the University of Wisconsin\u2013Madison and a Senior Scientist at Haskins Laboratories. He is a specialist in psycholinguistics, focusing specifically on the cognitive and neurological bases of language and reading. Seidenberg received his Ph.D. from Columbia University under the mentorship of Thomas Bever and completed a postdoctoral fellowship at the Center for the Study of Reading at the University of Illinois. He has held academic positions at McGill University, the University of Southern California, and, since 2001, the University of Wisconsin. Seidenberg has published over a hundred scientific articles and is the author of \"Language at the Speed of Light\" (2017). Seidenberg is married to fellow psychologist Maryellen MacDonald and has two children."} {"text":"Herbert \"Herb\" Clark (born 1940) is a psycholinguist currently serving as Professor of Psychology at Stanford University. 
His focuses include cognitive and social processes in language use; interactive processes in conversation, from low-level disfluencies through acts of speaking and understanding to the emergence of discourse; and word meaning and word use. Clark is known for his theory of \"common ground\": individuals engaged in conversation must share knowledge in order to be understood and have a meaningful conversation (Clark, 1985). Together with Deanna Wilkes-Gibbs (1986), he also developed the collaborative model, a theory for explaining how people in conversation coordinate with one another to determine definite references. Clark's books include \"Semantics and Comprehension\", \"Psychology and Language: An Introduction to Psycholinguistics\", \"Arenas of Language Use\", and \"Using Language\"."} {"text":"Clark, born in 1940, attended Stanford University until 1962 and received a B.A. with distinction. He attended Johns Hopkins University for post-graduate training, where he obtained his MA and his PhD in 1964 and 1966 respectively. The same year he finished his PhD, he completed his post-doctoral studies at the Linguistics Institute of UCLA. He has since worked at Carnegie Mellon University and Stanford University."} {"text":"Clark's early work explored theories of comprehension. He found that people interpret verb phrases, particularly eponymous verb phrases, against a hierarchy of information presumed to be common knowledge between the listener and the speaker. This hierarchy of beliefs is composed of"} {"text":"For example, when a person is instructed, \u201cDo a Napoleon for the camera,\u201d the listener would identify Napoleon, recognize acts that were done by Napoleon (such as smiling, saying \u2018fromage\u2019, or posing for paintings), and then use the context to identify the act being referred to (tucking one's hand into one's jacket). 
Napoleon did eat and sleep during his lifetime, but saying, \u201cDo a Napoleon at the kitchen table,\u201d to mean \u201ceat\u201d would create comprehension problems, because the salience of the act is limited."} {"text":"In his study of irony, Clark examined the pretense theory, which states that two speakers in a conversation do not announce the pretense they make when speaking with irony, but do nevertheless expect the listener to see through it. Thus, both speakers must share common ground for the effect of irony to work."} {"text":"Irony contains three important features: asymmetry of affect, victims of irony, and ironic tone of voice."} {"text":"Asymmetry of affect speaks to the higher likelihood of making ironic positive statements (\u201cWhat a smart idea!\u201d to a bad idea) than ironic negative statements (\u201cWhat a stupid idea!\u201d to a good one). Since those who are ignorant of irony would be more likely to cling to the general tendency of seeing the world in terms of success and excellence, these are the people that ironists pretend to be."} {"text":"Victims of irony are the people in conversation presumed not to understand the irony, such as the person the speaker is pretending to be, or a listener who would not understand the irony in the speech."} {"text":"The ironic tone of voice is the voice a speaker takes on in lieu of his own in order to fully convey the pretense. Ironic tones of voice tend to be exaggerated and caricatured, like taking on a heavily conspiratorial voice when discussing a widely known piece of gossip."} {"text":"The second way is illustrated in more frequent and general situations where the obstacle isn't well known or specific. 
So if the speaker were to ask a passing stranger near the arena about the start time of the concert, he might formulate, \u201cCan you tell me when the concert starts?\u201d The expected obstacle is the stranger's possible lack of ability or willingness to answer the question. It is a useful convention because it provides the stranger with a broad range of graceful excuses not to give the desired answer."} {"text":"The last way of framing to overcome obstacles is for situations where the person being addressed seems unwilling to provide the information. Then the speaker can ask for related information that the addressee is willing to divulge; the speaker appears polite, and the addressee is not forced to admit unwillingness. Whether the obstacle is being addressed directly or sidestepped, the speaker is still designing requests that best overcome the greatest expected obstacle."} {"text":"A similar study by the same researchers examined \u2018uh\u2019 and \u2018um\u2019 in spontaneous speaking. Like \"thee\" and \"thuh\", \"um\" and \"uh\" signal varying degrees of delay, with \"um\" marking a major pause and \"uh\" a minor one. Because they are incorporated into speech in systematic ways, deployed at certain pauses, attached as clitics onto other words, and prolonged for additional meaning, they have become a part of spontaneous speech that carries meaning. The researchers argued that \"um\" and \"uh\" are conventional English words: speakers plan for them, formulate them, and produce them just like any other vocabulary."} {"text":"James Lloyd \"Jay\" McClelland, FBA (born December 1, 1948) is the Lucie Stern Professor at Stanford University, where he was formerly the chair of the Psychology Department. 
He is best known for his work on statistical learning and Parallel Distributed Processing, applying connectionist models (or neural networks) to explain cognitive phenomena such as spoken word recognition and visual word recognition. McClelland is largely responsible for the surge of scientific interest in connectionism in the 1980s."} {"text":"McClelland was born on December 1, 1948, to Walter Moore and Frances (Shaffer) McClelland. He received a B.A. in Psychology from Columbia University in 1970, and a Ph.D. in Cognitive Psychology from the University of Pennsylvania in 1975. He married Heidi Marsha Feldman on May 6, 1978, and has two daughters."} {"text":"In 1986 McClelland published \"Parallel Distributed Processing: Explorations in the Microstructure of Cognition\" with David Rumelhart, which some still regard as a bible for cognitive scientists. His present work focuses on learning, memory processes, and psycholinguistics, still within the framework of connectionist models. He is a former chair of the Rumelhart Prize committee, having collaborated with Rumelhart for many years, and himself received the award in 2010 at the Cognitive Science Society Annual Conference in Portland, Oregon."} {"text":"McClelland and David Rumelhart are known for their debate with Steven Pinker and Alan Prince regarding the necessity of a language-specific learning module."} {"text":"In fall 2006 McClelland moved to Stanford University from Carnegie Mellon University, where he was a professor of psychology and cognitive neuroscience. 
He also holds a part-time appointment as Consulting Professor at the Neuroscience and Aphasia Research Unit (NARU) within the School of Psychological Sciences, University of Manchester."} {"text":"In July 2017, McClelland was elected a Corresponding Fellow of the British Academy (FBA), the United Kingdom's national academy for the humanities and social sciences."} {"text":"Royal Jon Skousen (born August 5, 1945) is a professor of linguistics and English at Brigham Young University (BYU), where he is editor of the Book of Mormon Critical Text Project. He is \"the leading expert on the textual history of the Book of Mormon\" and the founder of the analogical modeling approach to language modeling."} {"text":"Skousen was born in Cleveland, Ohio, to Leroy Bentley Skousen and Helen Louise Skousen, into a Latter-day Saint family, and was one of eleven children. Royal is a nephew of W. Cleon Skousen. Royal graduated from Sunset High School in Beaverton, Oregon."} {"text":"After his father unexpectedly died from lung cancer in 1964, despite having never smoked, Skousen served as a missionary in Finland from 1965 to 1967. He is fluent in Finnish."} {"text":"Skousen received his B.A. degree from BYU, with a major in English and a minor in mathematics. Skousen went on to study linguistics at the University of Illinois at Urbana-Champaign, earning his Ph.D. degree there in 1972."} {"text":"He was then an assistant professor of linguistics at the University of Texas at Austin until 1979, when he joined the faculty of BYU. He was also a visiting professor at the University of California, San Diego in 1981, a Fulbright lecturer at the University of Tampere in Finland in 1982, and a research fellow at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, in 2001. In 1999, BYU presented him the Karl G. 
Maeser Excellence in Research and Creative Arts Award."} {"text":"Since 1999, Skousen has served as the president of the Utah Association of Scholars, an affiliate of the National Association of Scholars. He has also been associate editor of the \"Journal of Quantitative Linguistics\" since 2003."} {"text":"Skousen married Sirkku Unelma H\u00e4rk\u00f6nen in 1968. They had seven children and lived in Orem, Utah. They now live in Spanish Fork, Utah."} {"text":"Lera Boroditsky (born c.1976) is a cognitive scientist and professor in the fields of language and cognition. She is currently one of the main contributors to the theory of linguistic relativity. She is a Searle Scholar, a McDonnell Scholar, recipient of a National Science Foundation CAREER award, and an American Psychological Association Distinguished Scientist. She is Professor of Cognitive Science at UCSD. She previously served on the faculty at MIT and at Stanford."} {"text":"Boroditsky was born in Belarus to a Jewish family. When she was 12 years old, her family emigrated to the United States, where she learned to speak English as her fourth language. As a teenager she began thinking about the degree to which language differences could shape an argument and exaggerate the differences between people. She received her B.A. degree in cognitive science at Northwestern University in 1996. She went to graduate school at Stanford University, where she obtained her Ph.D. in cognitive psychology in 2001 under her thesis advisor, Gordon Bower."} {"text":"She became an assistant professor in the department of brain and cognitive sciences at MIT before she was hired by Stanford in 2004. Gordon Bower says: \"It's exceedingly rare for us to hire back our own graduate students... 
[s]he brought a very high IQ and a tremendous ability for penetrating analysis.\" At Stanford, she was an assistant professor of psychology, philosophy, and linguistics."} {"text":"Boroditsky is currently professor of cognitive science at the University of California, San Diego (UCSD). She studies language and cognition, focusing on interactions between language, cognition, and perception. Her research combines insights and methods from linguistics, psychology, neuroscience, and anthropology."} {"text":"Her work has provided new insights into the controversial question of whether the languages we speak shape the way we think (linguistic relativity). She uses powerful examples of cross-linguistic differences in thought and perception that stem from syntactic or lexical differences between languages. Her papers and lectures have influenced the fields of psychology, philosophy, and linguistics by providing evidence and research against the notion that human cognition is largely universal and independent of language and culture."} {"text":"She was named a Searle Scholar and has received several awards for her research, including an NSF CAREER award, the Marr Prize from the Cognitive Science Society, and the McDonnell Scholar Award."} {"text":"In addition to scholarly work, Boroditsky also gives popular science lectures to the general public, and her work has been covered in news and media outlets. Boroditsky talks about how languages differ from one another, whether in grammar, sounds, vocabulary, or patterns. Boroditsky studies how the languages we speak shape the way we think."} {"text":"Boroditsky is known for her research relating to cognitive science, how language affects the way we think, and other linguistics-related topics. One of her main research topics focuses on how people with different linguistic backgrounds behave differently when exposed to certain events. 
On the individual level, Boroditsky is interested in how the languages we speak influence and shape the way we think."} {"text":"She has done studies comparing native English speakers with native speakers of other languages, examining differences in the way they think and act in a given scenario. For example, English and Russian differentiate between cups and glasses. In Russian, the difference between a cup and a glass is based on shape rather than on material, as it is in English."} {"text":"A study published in 2000 observed that \"the processing of the concrete domain of space could modulate the processing of the abstract domain of time, but not the other way around.\" The frequent use of a mental metaphor connects it to the abstract concept and helps the mind store non-concrete information in long-term memory. Boroditsky has also done research on metaphors and their relation to crime. Her work has suggested that some conventional and systematic metaphors influence the way people reason about the issues they describe. For instance, previous work has found that people were more likely to want to fight back against a crime \"beast\" by increasing the police force but more likely to want to diagnose and treat a crime \"virus\" through social reform."} {"text":"(From over 200 books, chapters, journal articles, and technical reports; see footnote 8 for a complete bibliography)."} {"text":"Wallace E. Lambert (December 31, 1922 \u2013 August 23, 2009) was a Canadian psychologist and a professor in the psychology department at McGill University (1954\u20131990). 
Among the founders of psycholinguistics and sociolinguistics, he is known for his contributions to social and cross-cultural psychology (intergroup attitudes, child-rearing values, and psychological consequences of living in multicultural societies), language education (the French immersion program), and bilingualism (measurement of language dominance, attitudes and motivation in second-language learning, and social, cognitive, and neuropsychological consequences of bilingualism)."} {"text":"Wallace (\"Wally\") Lambert was born in Amherst, Nova Scotia, Canada, on December 31, 1922. When he was 4 years old, his family moved to Taunton, Massachusetts, where he was raised. Lambert received his undergraduate education at Brown University (1940\u20131947), where his studies were interrupted for 3 years of U.S. military service in the European Theatre of Operations. While on release from the army, he studied psychology, philosophy, and economics at Cambridge University, and French language and literature at the Universit\u00e9 de Paris and the Universit\u00e9 d'Aix-en-Provence. Lambert received his master's degree in psychology from Colgate University in 1950, and his doctorate in 1953 from the University of North Carolina at Chapel Hill."} {"text":"Lambert met his future wife Janine in France after the second world war. They had two children, Philippe and Sylvie. Watching his children grow up to be fluently bilingual in a household in Montreal with an English-speaking father and a French-speaking mother is said to have sparked his interest in bilingualism-biculturalism."} {"text":"In 1954, Lambert took up a position in the Psychology Department at McGill University in Montreal, where he published nearly 200 journal articles, monographs, and books on the topic of bilingualism. Among Lambert's former graduate students are: Allan Paivio, Robert C. Gardner, Leon Jakobovits, Malcolm Preston, Moshe Anisfeld, Elizabeth Peal Anisfeld, G. 
Richard Tucker, Josiane Hamers, Allan Reynolds, Gary Cziko, and Jyotsna Vaid. Lambert remained at McGill University as an emeritus professor from 1990 until his death in 2009. Over the course of his career, Lambert further served as an editor for five academic journals, and as a consultant for the United States Office of Education."} {"text":"Lambert's many contributions led to multiple honours, including Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University (1964\u20131965), Fellow of the Royal Society of Canada (1973), Fellow of the National Academy of Education (1976), the Queen Elizabeth II Golden Jubilee Medal (1978), Honorary President of the Canadian Psychological Association (1982\u20131983), Distinguished Alumni Award from the University of North Carolina (1983), Canadian Psychological Association Award for Distinguished Contribution to Psychology (1984), American Psychological Association Distinguished Scientific Award for the Applications of Psychology (1990), James McKeen Cattell Fellow Award of the Association for Psychological Science (1992), Visiting Fellow, Netherlands Institute for Advanced Study, Wassenaar (1987), and five honorary doctorates."} {"text":"Jean E. Fox Tree is a professor in the Department of Psychology at the University of California at Santa Cruz."} {"text":"Fox Tree studies collateral signals that people use in spontaneous speech, such as discourse markers (e.g. \u2018you know\u2019), prosodic information (e.g. pauses between words, the melody of a sentence), fillers (e.g. \u2018uh\u2019 and \u2018um\u2019), and speech disfluencies."} {"text":"Traditionally, such phenomena were given little attention by scholars, either because they were viewed as flaws in speech to be avoided or ignored, or because many psycholinguistic studies focused on speech that was prepared in advance rather than spontaneous speech. 
Fox Tree's research has shown that, far from being unwanted errors, collateral signals are actually meaningful and relevant to both speaker and listener, and that removing them from speech can negatively affect comprehension."} {"text":"This view counters that proposed by Noam Chomsky, the well-known linguist from MIT, who regarded such utterances as errors in performance and not part of proper language. Fox Tree showed, however, that collateral signals are essential to successful communication in everyday situations and are beneficial to listeners."} {"text":"Fox Tree studied the use of \u2018oh\u2019 in several experiments. She and her colleagues found that \u2018oh\u2019 can be used by speakers to signal that the information they are providing is not connected to the information that just preceded it. That is, while an utterance that follows another is usually connected to the one that preceded it, \u2018oh\u2019 can be used to signal that the utterance is not connected to the one directly before it, but rather to something further back (for example, \u201cI went to the market and bought some fruit. I got apples, pears, grapes, and oranges. It was really crowded there today. 
Oh, and kiwis.\u201d) (1999)."} {"text":"Other topics that Fox Tree has researched include the use of expressions such as \u2018you know\u2019 and \u2018I mean\u2019, the effects of false starts and repetitions in the comprehension of spontaneous speech, the use of prosody in syntactic disambiguation, the interpretation of pauses in spontaneous speaking, and the recognition of verbal irony in spontaneous speech."} {"text":"Fox Tree's work contributes both theory and data to many disciplines, such as computer technology and artificial intelligence (how machines can recognize and reproduce collateral signals), psychology (the role that collateral signals have in speech production and recognition), sociology (how various groups use collateral signals), linguistics (the structure and the function of collateral signals), and communication\/media studies (the effect that the frequent editing of collateral signals from spontaneous radio talk might have on meaning)."} {"text":"Viorica Marian is a Moldovan-born American psycholinguist, cognitive scientist, and psychologist known for her research on bilingualism and multilingualism. She is the Ralph and Jean Sundin Endowed Professor of Communication Sciences and Disorders, and Professor of Psychology at Northwestern University. Marian is the Principal Investigator of the Bilingualism and Psycholinguistics Research Group. She received her PhD in Psychology from Cornell University, and master's degrees from Emory University and Cornell University. Marian studies language, cognition, the brain, and the consequences of knowing more than one language for linguistic, cognitive, and neural architectures."} {"text":"At the University of Alaska, Marian studied with Alaska's only cognitive psychologist at the time, Dr. Robert Madigan. At Emory, she was influenced by psychologists Philippe Rochat, Robyn Fivush, Eugene Winograd, Carolyn Mervis, John Pani, Michael Tomasello, Frans de Waal, and others. 
At Cornell, Marian was trained in eye-tracking by Michael Spivey, and in functional neuroimaging by Joy Hirsch, and was also influenced by Stephen Ceci, Urie Bronfenbrenner, Frank Keil, Joan Sereno, Daryl Bem, David Field, Carol Krumhansl, Thomas Gilovich, Shimon Edelman, James Cutting, and others."} {"text":"Viorica Marian's research areas include Psycholinguistics, Neurolinguistics, Cognitive Science, Language and Cognition, Linguistic and Cultural Diversity, Communication Sciences and Disorders, Bilingualism, and Multilingualism. She studies language processing, language and memory, language learning, language development, audio-visual integration, bilingual assessment, neurolinguistics of bilingualism, and computational models of bilingual language processing. Marian uses multiple approaches, including eye-tracking, EEG, fMRI, mouse-tracking, computational modeling, and cognitive tests to understand how bilingualism and multilingualism change human function. Funding for her research comes from the National Institutes of Health, the National Science Foundation, private foundations, and Northwestern University."} {"text":"Parallel activation of both languages in bilinguals."} {"text":"Marian's research has contributed to demonstrating a bilingual advantage in novel language learning. She and her students showed that bilinguals outperform monolinguals at learning a new language and used eye tracking and mouse tracking trajectories to demonstrate that bilinguals were better at controlling interference from the native language when using a newly learned language."} {"text":"Marian's neuroimaging work examined overlap and differences in language networks across bilinguals\u2019 two languages during language processing. 
She showed that bilingual experience changes not only linguistic and cognitive processing, but also neural organization and function."} {"text":"Marian's lab has developed various research tools that are widely used by the language science community and are freely available from Marian's Bilingualism and Psycholinguistics Research Group website. The Language Experience and Proficiency Questionnaire has been translated into over thirty languages and used in hundreds of studies worldwide; the Cross-Linguistics Easy-Access Resource for Phonological and Orthographic Neighborhood Densities database is currently the most extensive multilingual database of lexical neighborhoods available online; and the Bilingual Language Interaction Network for Comprehension of Speech provides the only existing dynamic self-organizing computational model of bilingual spoken language comprehension."} {"text":"Marian teaches courses on linguistic and cultural diversity at Northwestern University and is an advocate for increased representation of individuals from linguistically, culturally, racially, and otherwise diverse backgrounds in science and education."} {"text":"Marian is a Fellow of the Psychonomic Society, a recipient of the Clarence Simon Award for Outstanding Teaching and Mentoring, the University of Alaska Alumni of Achievement Award, and the Editor\u2019s Award for best paper from the Journal of Speech, Language, and Hearing Research."} {"text":"Marian graduated college and started her PhD studies at the age of 19."} {"text":"In 2008, she was featured in the Get-Out-the-Vote episode of the Oprah Winfrey Show."} {"text":"In 2018, one of her tweets went viral and was viewed by over thirty million people across platforms: \"I once taught an 8 am college class. So many grandparents died that semester. I then moved my class to 3 pm. No more deaths. 
And that, my friends, is how I save lives.\""} {"text":"In 1996, she worked as an interpreter and envoy during the Olympic Games in Atlanta."} {"text":"George Philip Lakoff (born May 24, 1941) is an American cognitive linguist and philosopher, best known for his thesis that people's lives are significantly influenced by the conceptual metaphors they use to explain complex phenomena."} {"text":"Between 2003 and 2008, Lakoff was involved with a progressive think tank, the now defunct Rockridge Institute. He is a member of the scientific committee of the Fundaci\u00f3n IDEAS (IDEAS Foundation), Spain's Socialist Party's think tank."} {"text":"The more general theory that elaborates his thesis is known as embodied mind. Lakoff served as a professor of linguistics at the University of California, Berkeley, from 1972 until his retirement in 2016."} {"text":"Although some of Lakoff's research involves questions traditionally pursued by linguists, such as the conditions under which a certain linguistic construction is grammatically viable, he is best known for his reappraisal of the role that metaphors play in the socio-political life of humans."} {"text":"Metaphor has been seen within the Western scientific tradition as a purely linguistic construction. The essential thrust of Lakoff's work has been the argument that metaphors are a primarily conceptual construction and are in fact central to the development of thought."} {"text":"According to Lakoff, non-metaphorical thought is possible only when we talk about purely physical reality; the greater the level of abstraction, the more layers of metaphor are required to express it. People do not notice these metaphors for various reasons, including that some metaphors become 'dead' in the sense that we no longer recognize their origin. 
Another reason is that we just don't \"see\" what is \"going on\"."} {"text":"In intellectual debate, for instance, the underlying metaphor according to Lakoff is usually that argument is war (later revised to \"argument is struggle\"):"} {"text":"According to Lakoff, the development of thought has been the process of developing better metaphors. He also points out that the application of one domain of knowledge to another offers new perceptions and understandings."} {"text":"Lakoff began his career as a student and later a teacher of the theory of transformational grammar developed by Massachusetts Institute of Technology professor Noam Chomsky. In the late 1960s, however, he joined with others to promote generative semantics as an alternative to Chomsky's generative syntax. In an interview he stated:"} {"text":"Lakoff's claim that Chomsky asserts independence between syntax and semantics has been rejected by Chomsky, who holds the following view:"} {"text":"In response to Lakoff's making the above claim about Chomsky's view, Chomsky claimed that Lakoff has \"virtually no comprehension of the work he is discussing\". Despite Lakoff's mischaracterization of Chomsky's view on the matter, their linguistic positions diverge significantly; this rift between Generative Grammar and Generative Semantics led to fierce, acrimonious debates among linguists that have come to be known as the \"linguistics wars\"."} {"text":"When Lakoff claims the mind is \"embodied\", he is arguing that almost all of human cognition, up through the most abstract reasoning, depends on and makes use of such concrete and \"low-level\" facilities as the sensorimotor system and the emotions. 
Therefore, embodiment is a rejection not only of dualism vis-\u00e0-vis mind and matter, but also of claims that human reason can be basically understood without reference to the underlying \"implementation details\"."} {"text":"Lakoff offers three complementary but distinct sorts of arguments in favor of embodiment. First, using evidence from neuroscience and neural network simulations, he argues that certain concepts, such as color and spatial relation concepts (e.g. \"red\" or \"over\"; see also \"qualia\"), can be almost entirely understood through the examination of how processes of perception or motor control work."} {"text":"Second, based on cognitive linguistics' analysis of figurative language, he argues that the reasoning we use for such abstract topics as warfare, economics, or morality is somehow rooted in the reasoning we use for such mundane topics as spatial relationships. (See conceptual metaphor.)"} {"text":"Finally, based on research in cognitive psychology and some investigations in the philosophy of language, he argues that very few of the categories used by humans are actually of the black-and-white type amenable to analysis in terms of necessary and sufficient conditions. On the contrary, most categories are supposed to be much more complicated and messy, just like our bodies."} {"text":"\"We are neural beings\", Lakoff states, \"Our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything \u2014 only what our embodied brains permit.\""} {"text":"Lakoff believes consciousness to be neurally embodied; however, he explicitly states that the mechanism is not just neural computation alone. Using the concept of disembodiment, Lakoff supports the physicalist approach to the afterlife. 
If the soul cannot have any of the properties of the body, then Lakoff claims it cannot feel, perceive, think, be conscious, or have a personality. If this is true, Lakoff asks, what would be the point of an afterlife?"} {"text":"Many scientists share the belief that there are problems with falsifiability and foundation ontologies purporting to describe \"what exists\", to a sufficient degree of rigor to establish a reasonable method of empirical validation. But Lakoff takes this further to explain why hypotheses built with complex metaphors cannot be directly falsified. Instead, they can only be rejected based on interpretations of empirical observations guided by other complex metaphors. This is what he means when he says that falsifiability itself can never be established by any reasonable method that would not rely ultimately on a shared human bias. The bias he's referring to is the set of conceptual metaphors governing how people interpret observations."} {"text":"Mathematical reviewers have generally been critical of Lakoff and N\u00fa\u00f1ez, pointing to mathematical errors. Lakoff claims that these errors have been corrected in subsequent printings. Although their book attempts a refutation of some of the most widely accepted viewpoints in the philosophy of mathematics and offers advice for how the field might proceed, they have yet to elicit much of a reaction from philosophers of mathematics themselves. The small community specializing in the psychology of mathematical learning, to which N\u00fa\u00f1ez belongs, is paying attention."} {"text":"Lakoff has publicly expressed some of his political views and his ideas about the conceptual structures that he views as central to understanding the political process. 
He almost always discusses the former in terms of the latter."} {"text":"\"Moral Politics\" (1996, revisited in 2002) gives book-length consideration to the conceptual metaphors that Lakoff sees as present in the minds of American \"liberals\" and \"conservatives\". The book is a blend of cognitive science and political analysis. Lakoff makes an attempt to keep his personal views confined to the last third of the book, where he explicitly argues for the superiority of the liberal vision."} {"text":"Between 2003 and 2008, Lakoff was involved with a progressive think tank, the Rockridge Institute, an involvement that follows in part from his recommendations in \"Moral Politics\". Among his activities with the Institute, which concentrated in part on helping liberal candidates and politicians with re-framing political metaphors, Lakoff gave numerous public lectures and wrote accounts of his message from \"Moral Politics\". In 2008, Lakoff joined Fenton Communications, the nation's largest public interest communications firm, as a Senior Consultant."} {"text":"One of his political works, \"Don't Think of an Elephant! Know Your Values and Frame the Debate\", self-labeled as \"the Essential Guide for Progressives\", was published in September 2004 and features a foreword by former Democratic presidential candidate Howard Dean."} {"text":"Paul van Geert is a Dutch linguist. He is currently a professor of developmental psychology at the University of Groningen, Netherlands. He is renowned for his work on developmental psychology and the application of dynamical systems theory in social science."} {"text":"He is one of the members of the \"Dutch School of Dynamic Systems\" who, along with de Bot, Lowie, and Verspoor, proposed applying time series data to the study of second language development."} {"text":"Between 1967 and 1971 van Geert studied psychology and educational sciences at Ghent University, Belgium. In 1975 he was awarded a PhD in developmental psychology. 
In 1976 he became a lecturer at the University of Groningen."} {"text":"In 1978 he became a Senior Lecturer at the University of Groningen. Between 1978 and 1979 he was a fellow at the Netherlands Institute of Advanced Studies in the Humanities and Social Sciences. In 1985 he was appointed as a Professor of Psychology and a Chair of Developmental Psychology at the University of Groningen. Between 1990 and 1992 he was the Dean of the Department of Psychology at the University of Groningen. Between 1992 and 1993 he was a fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University in California."} {"text":"Paul van Geert was the first to apply the logistic function to model first language development, in 1991."} {"text":"He developed Microsoft Excel VBA code to model developmental data in 2003."} {"text":"In 2002, together with Marijn van Dijk, he created new techniques and methods to measure the degree of variability by applying min-max graphs, resampling techniques, and Monte Carlo methods."} {"text":"He supervised his future colleague at the University of Groningen, Marijn van Dijk, who obtained her PhD in 2004. The title of her PhD thesis was \"Child Language Cuts Capers: Variability and Ambiguity in Early Child Development\"."} {"text":"Michael Tanenhaus is an American psycholinguist, author, and lecturer. He is the Beverly Petterson Bishop and Charles W. Bishop Professor of Brain and Cognitive Sciences and Linguistics at the University of Rochester. From 1996\u20132000 and 2003\u20132009 he served as Director of the Center for Language Sciences at the University of Rochester."} {"text":"Tanenhaus\u2019s research focuses on processes which underlie real-time spoken language and reading comprehension. 
He is also interested in the relationship between linguistic processing and various non-linguistic contexts."} {"text":"Integration of Visual and Linguistic Information in Spoken Language Comprehension"} {"text":"In this study Tanenhaus looked at visual context and its effects on language comprehension. Tanenhaus wanted to investigate whether comprehension of language is informationally encapsulated or modular, as thought by many theorists and researchers including Jerry Fodor."} {"text":"When the subject is presented with the first scene, in Figure A, they become confused. We see this in the many eye movements of subjects who are not quite sure which items to manipulate. In the second scene the subject clearly understands the sentence more easily. In this scene the pencil is replaced by another apple on a napkin. This disambiguates the phrase because the subject understands that \u201con the towel\u201d modifies the apple and does not refer to a destination."} {"text":"The results strongly support the hypothesis that language comprehension, specifically at the syntactic level, is informed by visual information. This is a clearly non-modular result. These results also seem to support Just and Carpenter\u2019s \u201cStrong Eye Mind Hypothesis\u201d that the rapid mental processes which make up the comprehension of spoken language can be observed through eye movements."} {"text":"Actions and Affordances in Syntactic Ambiguity Resolution"} {"text":"The results suggest that referents were assessed in terms of how compatible they were with the instructions. This supports the hypothesis that non-linguistic domain restrictions can influence syntactic ambiguity resolution. The participants applied situation-specific contextual properties to the way in which they followed these instructions. 
The results show that language is processed incrementally, as an utterance unfolds, and that visual information and context play a role in the processing."} {"text":"Tanenhaus has collaborated with others to edit two books. His first book \u201cLexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology, and Artificial Intelligence\u201d was published in 1988. This book contains eighteen original papers which look at the concept of lexical ambiguity resolution. His most recent work \u201cApproaches to Studying World-Situated Language Use: Bridging the Language-as-Product and Language-as-Action Traditions\u201d was published in 2004. This book was published to show the importance of looking at both social and cognitive aspects when studying language processing. The book is made up of papers and reports of relevant experimental findings."} {"text":"Many organizations and academic institutions, including the International Association for the Study of Child Language, National Research Council, and Brain Map Advisory Board, have honored MacWhinney for the quality of his research and scholarship. MacWhinney's professional service activities include active participation on the governing boards of several professional associations, academic journals, and grant agencies, and he has also served as a university program reviewer and as an ad hoc reviewer for several prestigious journals including \"Science\", \"Nature\", and \"Psychonomic Bulletin & Review\". He holds membership and fellowship in many prominent professional societies, including the American Educational Research Association, American Psychological Society, Association for Computational Linguistics, Cognitive Science Society, International Association for Child Language, Linguistic Society of America, Psychonomic Society, and Society for Research in Child Development."} {"text":"MacWhinney is married and has two sons. 
He is fluent in six languages: English, Hungarian, German, French, Spanish, and Italian. He has presented his research in many countries around the world."} {"text":"MacWhinney has developed a model of first and second language acquisition as well as language processing called the competition model. This model views language acquisition as an emergentist phenomenon that results from competition between lexical items, phonological forms, and syntactic patterns, accounting for language processing on the synchronic, ontogenetic, and phylogenetic time scales. Empirical studies based on the competition model have shown that learning of language forms is based on the accurate recording of many exposures to words and patterns in different contexts. The predictions of the competition model have been supported by research in the realms of psycholinguistics, cognitive neuroscience, and cognitive development."} {"text":"MacWhinney developed and directs the CHILDES and TalkBank corpora, two widely used databases for language acquisition research. He manages FluencyBank, a TalkBank project, together with Nan Bernstein Ratner."} {"text":"The CHILDES system provides tools for studying conversational interactions. These tools include a database of transcripts, programs for computer analysis of transcripts, methods for linguistic coding, and systems for linking transcripts to digitized audio and video. The CHILDES database includes a rich variety of computerized transcripts from language learners. Most of these transcripts record spontaneous conversational interactions. There are also transcripts from bilingual children, older school-aged children, adult second-language learners, children with various types of language disabilities, and aphasics who are trying to recover from language loss. 
The transcripts include data on the learning of 26 different languages."} {"text":"TalkBank contains CHILDES as well as additional linguistic data from older children and adults, including people with aphasia, second language learners, adult conversation, and classroom language learning data."} {"text":"Support for the construction and maintenance of the databases comes from the National Institute of Child Health and Human Development (NIH-NICHD) and the National Science Foundation Linguistics Program."} {"text":"James Earle Deese (1921\u20131999) was an American psychologist. He joined the faculty of the University of Virginia in 1970 after having taught for many years (since 1950) at Johns Hopkins University. During his tenure at Johns Hopkins, Deese became Chairman of the Psychology Department and also served a term as Chairman of the American Psychological Association. Deese later became Chairman of the Psychology Department at the University of Virginia, serving until his partial retirement and later remaining as professor emeritus. He received the Hugh Scott Hamilton award for his distinguished service."} {"text":"Deese, who was half Lumbee Indian, was born in Salt Lake City, Utah, on December 14, 1921. James Deese's father was Thomas D. Deese, a full-blooded Lumbee Indian whose parents, James M. Deese and Sarah Jane Chavis, were from Burnt Swamp, N.C. Deese's mother, Serene Jane Johnson, was from Wisconsin. Deese was a first cousin of American aerospace engineer and scientist James Henry Deese. Deese was raised in Southern California and, during his early college years, worked as a page at the early television studios. Deese retained a love for Southern California, its geography, culture, and history, his entire life. Deese married Ellin Ruth Krauss in 1948."} {"text":"Deese died at his home in Charlottesville, Virginia, in 1999, just three months before his wife also died. 
The couple had just celebrated their 50th anniversary on Christmas Eve of 1998."} {"text":"Deese attended Chapman College in Orange, CA, where he earned his B.A. degree in psychology. Deese later earned his doctorate at Indiana University. Later in his career, Deese was honored with an honorary doctorate from Chapman. While attending Indiana University, Deese became fascinated by animal behavior and how it related to human behavior, particularly in the area of communication. He studied under B.F. Skinner and W. N. Kellogg. Later, Deese moved more into the area of psycholinguistics and worked with other early pioneers in that field such as Noam Chomsky. Deese became a mentor to many doctoral students who went on to further develop the fields of learning, cognition, and language, such as Leonard M. Horowitz, William P. Banks, Allyssa McCabe, and Herbert H. Clark."} {"text":"Deese was revered by his students and highly respected by his peers. He authored or co-authored 14 books addressing various aspects of the psychology of learning (several written alone or with Stewart Hulse) and, later, psycholinguistics. A popular book from 1965 was The Structure of Associations in Language and Thought (Johns Hopkins University Press, 1965). Later, having branched out more into the area of social psychology, Deese wrote American Freedom and the Social Sciences (Columbia University Press, 1985). Deese and his wife Ellin Krauss Deese co-authored the popular student manual How to Study, which remains in print and is regularly used by college freshmen into the 21st century."} {"text":"Nick C. Ellis is a Welsh psycholinguist. He is currently a Professor of Psychology and a Research Scientist at the English Language Institute of the University of Michigan. 
His research spans applied linguistics, with a special focus on second language acquisition, corpus linguistics, psycholinguistics, emergentism, complex dynamic systems approaches to language, reading and spelling acquisition in different languages, computational modeling, and cognitive linguistics."} {"text":"Ellis received his Bachelor of Arts degree in Psychology at the University of Oxford in 1974. He obtained a PhD degree in Psychology at the University College of North Wales in 1978."} {"text":"Between 1976 and 1991 he was a part-time Tutor at the Open University and between 1978 and 1990 a lecturer in Psychology at the University College of North Wales. From 1990 until 1994 he was a Senior Lecturer in Psychology at the University College of North Wales. In 1992 he was a Visiting Professor at the Temple University of Japan. Between 1994 and 1998 he was a Reader in Psychology at the University College of North Wales and between 1998 and 2004 a Professor of Psychology at the University of Wales Bangor."} {"text":"He was the Editor of Language Learning between 1998 and 2002, and since 2006 he has been a General Editor of the journal."} {"text":"Ellis conducts research on several topics relating to second language acquisition, including the connection between explicit and implicit learning, reading, vocabulary and phraseology, applications of psychological theory in language testing and instruction, and the role of the brain. 
"} {"text":"Ellis has published in prestigious journals such as Language Learning, Applied Linguistics, The Modern Language Journal, Memory and Cognition, and Studies in Second Language Acquisition."} {"text":"Ellis has published articles with Diane Larsen-Freeman, Alister Cumming, Lourdes Ortega, and Kathleen Bardovi-Harlig."} {"text":"Most cited articles based on Google Scholar (in chronological order):"} {"text":"Deborah Frances Tannen (born June 7, 1945) is an American author and professor of linguistics at Georgetown University in Washington, D.C. Best known as the author of \"You Just Don't Understand\", she has been a McGraw Distinguished Lecturer at Princeton University and was a fellow at the Center for Advanced Study in the Behavioral Sciences following a term in residence at the Institute for Advanced Study in Princeton, NJ."} {"text":"Tannen is the author of thirteen books, including \"That's Not What I Meant!\" and \"You Just Don't Understand\", the latter of which spent four years on the \"New York Times\" Best Sellers List, including eight consecutive months at number one. She is also a frequent contributor to \"The New York Times\", \"The Washington Post\", \"The Atlantic\", and \"TIME\" magazine, among other publications."} {"text":"Tannen graduated from Hunter College High School and completed her undergraduate studies at Harpur College (now part of Binghamton University) with a B.A. in English Literature. Tannen went on to earn a master's degree in English Literature at Wayne State University. Later, she continued her academic studies at UC Berkeley, earning an M.A. and a Ph.D. in Linguistics."} {"text":"Tannen has written and edited numerous academic publications on linguistics, discourse analysis, and interpersonal communication. 
She has published many books including \"Conversational Style: Analyzing Talk Among Friends\"; \"Talking Voices: Repetition, Dialogue and Imagery in Conversational Discourse\"; \"Gender and Discourse\"; and \"The Handbook of Discourse Analysis\". Her major theoretical contribution, presented in \"Talking Voices\", is a poetics of conversation. She demonstrates that everyday conversation is made up of linguistic features that are traditionally regarded as literary, such as repetition, dialogue, and imagery."} {"text":"Tannen has also written nine general-audience books on interpersonal communication and public discourse as well as a memoir. She became well known in the United States after her book \"You Just Don't Understand: Women and Men in Conversation\" was published in 1990. It remained on the \"New York Times\" Best Seller list for nearly four years, and was subsequently translated into 30 other languages. She has written several other general-audience books and mainstream articles between 1983 and 2017."} {"text":"Two of her other books, \"You Were Always Mom's Favorite!: Sisters in Conversation Throughout Their Lives\" and \"You're Wearing THAT?: Understanding Mothers and Daughters in Conversation\" were also \"New York Times\" best-sellers. \"The Argument Culture\" received the Common Ground Book Award, and \"I Only Say This Because I Love You\" received a Books for a Better Life Award."} {"text":"Deborah Tannen's main research has focused on the expression of interpersonal relationships in conversational interaction. Tannen has explored conversational interaction and style differences at a number of different levels and as related to different situations, including differences in conversational style as connected to the gender and cultural background, as well as speech that is tailored for specific listeners based on the speaker's social role. 
In particular, Tannen has done extensive gender-linked research and writing that focused on miscommunications between men and women; however, some linguists have argued against Tannen's claims from a feminist standpoint."} {"text":"Tannen's research began when she analyzed conversations among her friends while working on her Ph.D. Since then, she has collected several naturally occurring conversations on tape and conducted interviews as forms of data for later analysis. She has also compiled and analyzed information from other researchers in order to draw out notable trends in various types of conversations, sometimes borrowing and expanding on their terminology to emphasize new points of interest."} {"text":"Interplay of connection maneuvers and power maneuvers in family conversations."} {"text":"Tannen once described family discourse as \"a prime example\u2026of the nexus of needs for both power and connection in human relationships.\" She coined the term \"connection maneuvers\" to describe interactions that take place in the closeness dimension of the traditional model of power and connection; this term is meant to contrast with the \"control maneuvers,\" which, according to psychologists Millar, Rogers, and Bavelas, take place in the power dimension of the same model."} {"text":"Tannen challenged the conventional view of power (hierarchy) and connection (solidarity) as \"unidimensional and mutually exclusive\" and offered her own kind of model for mapping the interplay of these two aspects of communication, which takes the form of a two-dimensional grid (Figure 1)."} {"text":"Tannen also highlights ventriloquizing \u2013 which she explains as a \"phenomenon by which a person speaks not only for another but also as another\" \u2013 as a strategy for integrating connection maneuvers into other types of interactions. 
As an example of this, she cites an exchange recorded by her research team in which a mother attempts to convince her son to pick up his toys by ventriloquizing the family's dogs: \"[extra high pitch] We're naughty, but we're not as naughty as Jared\"."} {"text":"Deborah Tannen describes the notion of conversational style as \"a semantic process\" and \"the way meaning is encoded in and derived from speech\". She cites the work of R. Lakoff and J. Gumperz as the inspiration behind her thinking. According to Tannen, some features of conversational style are topic (which includes type of topics and how transitions occur), genre (storytelling style), pace (which includes rate of speech, occurrence or lack of pauses, and overlap), and expressive paralinguistics (pitch\/amplitude shifts and other changes in voice quality)."} {"text":"Tannen has expressed her stance against taking indirect speech as a sign of weakness or as a lack of confidence; she also set out to debunk the idea that American women are generally more indirect than men. She reached these conclusions by looking through transcripts of conversations and interviews, as well as through correspondence with her readers. One example she uses against the second idea comes from a letter from a reader, who mentioned how his Navy superior trained his unit to respond to the indirect request \"It's hot in this room\" as a direct request to open the window. A different letter mentions the tendency of men to be more indirect than women when it comes to expressing feelings."} {"text":"Tannen also mentions exchanges where both participants are male, but the two participants are not of equal social status. 
As a specific example, she mentions a \"black box\" recording of an exchange between a plane captain and a co-pilot in which the captain's failure to understand the co-pilot's indirect conversational style (a style likely resulting from the co-pilot's lower rank) caused a crash."} {"text":"During a trip to Greece, Tannen observed that comments she had made to her hosts about foods she had not seen yet in Greece (specifically, scrambled eggs and grapes) had been interpreted as indirect requests for the foods. This was surprising to her, since she had made the comments only in the spirit of small talk. Tannen observed this same tendency of Greeks and Greek-Americans to interpret statements indirectly in a study that involved interpreting the following conversation between a husband and a wife:"} {"text":"The participants \u2013 some Greeks, some Greek-Americans, and some non-Greek Americans \u2013 had to choose between the following two paraphrases of the second line in the exchange:"} {"text":"Tannen's findings showed that 48% of Greeks chose the first (more indirect) paraphrase, while only 32% of non-Greek Americans chose the same one, with the Greek-Americans scoring closer to the Greeks than the other Americans at 43%. These percentages, combined with other elements of the study, suggest that the degree of indirectness a listener generally expects may be shaped by sociocultural norms."} {"text":"Tannen analyzed the agonistic framing of academic texts, which are characterized by their \"ritualized adversativeness\". She argued that expectations for academic papers in the US place the highest importance on presenting the weaknesses of an existing, opposing argument as a basis for bolstering the author's replacement argument. 
According to her, agonism limits the depth of arguments and learning, since authors who follow the convention pass up opportunities to acknowledge strengths in the texts they are arguing against; in addition, this places the newest, attention-grabbing works in prime positions to be torn apart."} {"text":"Gary F. Marcus (born February 8, 1970) is an American scientist, author, and entrepreneur who is a professor in the Department of Psychology at New York University and was founder and CEO of Geometric Intelligence, a machine learning company later acquired by Uber."} {"text":"His books include \"Guitar Zero\", which appeared on the \"New York Times\" Best Seller list, and \"Kluge: The Haphazard Construction of the Human Mind\", a \"New York Times\" Editors' Choice. With Jeremy Freeman, he was co-editor of \"The Future of the Brain: Essays by the World's Leading Neuroscientists\"."} {"text":"Marcus attended Hampshire College, where he designed his own major, cognitive science, working on human reasoning. He continued on to graduate school at the Massachusetts Institute of Technology, where his advisor was the experimental psychologist Steven Pinker. He received his Ph.D. in 1993."} {"text":"His books include \"The Algebraic Mind: Integrating Connectionism and Cognitive Science\", \"The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought\", \"Kluge: The Haphazard Construction of the Human Mind\", a \"New York Times\" Editors' Choice, and \"Guitar Zero\", which appeared on the \"New York Times\" Best Seller list. He edited \"The Norton Psychology Reader\" and was co-editor with Jeremy Freeman of \"The Future of the Brain: Essays by the World's Leading Neuroscientists\", which included Nobel Laureates May-Britt Moser and Edvard Moser."} {"text":"In 2014, he founded Geometric Intelligence, a machine learning company. It was acquired by Uber in 2016."} {"text":"Marcus' research and theories focus on the intersection between biology and psychology. 
How do the brain and mind relate when it comes to understanding language? Marcus takes an innatist stance in this debate and, drawing on psychological evidence, has offered answers to open questions such as, \"If there is something built in at birth, how does it get there?\" He challenged connectionist theories which posit that the mind is only made up of randomly arranged neurons. Marcus argues that neurons can be put together to build circuits in order to do things such as process rules or process structured representations."} {"text":"Marcus\u2019 early work focused on why children produce overregularizations, such as \"breaked\" and \"goed\", as a test case for the nature of mental rules."} {"text":"In his first book, \"The Algebraic Mind: Integrating Connectionism and Cognitive Science\", Marcus challenged the idea that the mind might consist of largely undifferentiated neural networks. He argued that understanding the mind would require integrating connectionism with classical ideas about symbol-manipulation."} {"text":"In his second book, published in 2004, \"The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought\", Marcus gives a more detailed account of the genetic underpinnings of human thought. He explains how a small number of genes can account for the intricate human brain, addresses common misconceptions about genes, and discusses the problems these misconceptions may pose for the future of genetic engineering."} {"text":"In 2005, Marcus was editor of \"The Norton Psychology Reader\", which includes selections by cognitive scientists on the modern science of the human mind."} {"text":"Marcus' 2012 book, \"Guitar Zero\", explores the process of taking up a musical instrument as an adult."} {"text":"Frieda Goldman-Eisler (born Frymet Leib, also known as Frieda Eisler) (1907\u20131982) was a psychologist and pioneer in the field of psycholinguistics. 
She is known for her research on speech disfluencies; a volume published in her honor calls her \"the modern pioneer of the science of pausology\"."} {"text":"Goldman-Eisler was born in Tarn\u00f3w, Galicia. She was German-Jewish and a communist. After her marriage to the writer Willy Goldman in 1934, due to the growing threat of Nazi Germany, she moved from Austria to London, where she lived for the rest of her life. In the early 1950s, she began pausological experiments, and she continued doing research in this area for the rest of her career. She cancelled a planned presentation at a workshop in Kassel in 1978 due to illness, and died in 1982."} {"text":"She earned a PhD in German studies from the University of Vienna in 1931, while also studying psychology under Karl B\u00fchler."} {"text":"During World War II, Goldman-Eisler briefly worked for Mass Observation."} {"text":"She was a member of the Medical Research Council\u2019s scientific staff at the Maudsley Hospital."} {"text":"Goldman-Eisler refers to being offered and accepting a \"home\" in the Department of Phonetics at University College London in 1955, though it is not clear what her position was at that time. In 1965 Goldman-Eisler was appointed a Reader at University College London, where she continued her career. She became the UK's first Professor of Psycholinguistics in 1970. 
She was eventually given the titles Emeritus Professor of Psycholinguistics and honorary Research Fellow at University College London."} {"text":"Jean Berko Gleason (born 1931) is a psycholinguist and professor emerita in the Department of Psychological and Brain Sciences at Boston University who has made fundamental contributions to the understanding of language acquisition in children, aphasia, gender differences in language development, and parent\u2013child interactions."} {"text":"Gleason created the Wug Test, in which a child is shown pictures with nonsense names and then prompted to complete statements about them, and used it to demonstrate that even young children possess implicit knowledge of linguistic morphology. Menn and Ratner have written that \"Perhaps no innovation other than the invention of the tape recorder has had such an indelible effect on the field of child language research\", the \"wug\" (one of the imaginary creatures Gleason drew in creating the Wug Test) being \"so basic to what [psycholinguists] know and do that increasingly it appears in the popular literature without attribution to its origins.\""} {"text":"Jean Berko was born to Hungarian immigrant parents in Cleveland, Ohio. As a child, she has said, \"I was under the impression that whatever you said meant something in some language.\" Her older brother's cerebral palsy made it difficult for most people to understand his speech, but"} {"text":"After graduating from Cleveland Heights High School in 1949, Gleason earned a B.A. in history and literature from Radcliffe College, then an M.A. in linguistics, and a combined Ph.D. in linguistics and psychology, at Harvard; from 1958 to 1959 she was a postdoctoral fellow at MIT. In graduate school she was advised by Roger Brown, a founder in the field of child language acquisition. 
In January 1959 she married Harvard mathematician Andrew Gleason; they had three daughters."} {"text":"Most of Gleason's professional career has been at Boston University, where she served as Psychology Department chair and director of the Graduate Program in Applied Linguistics; Lise Menn and Harold Goodglass were among her collaborators there."} {"text":"She has been a visiting scholar at Harvard University, Stanford University, and the Linguistics Institute of the Hungarian Academy of Sciences. Although officially retired and no longer teaching, she continues to be involved in research."} {"text":"Gleason is the author or co-author of some 125 papers on language development in children, language attrition, aphasia, and gender and cultural aspects of language acquisition and use; and is editor\/coeditor of two widely used textbooks, \"The Development of Language\" (first edition 1985, ninth edition 2016) and \"Psycholinguistics\" (1993). She is a Fellow of the American Association for the Advancement of Science and of the American Psychological Association, and was president of the International Association for the Study of Child Language from 1990 to 1993, and of the Gypsy Lore Society from 1996 to 1999."} {"text":"She has also served on the editorial boards of numerous academic and professional journals and was associate editor of \"Language\" from 1997 to 1999."} {"text":"Gleason was profiled in \"Beyond the Glass Ceiling: Forty Women Whose Ideas Shape the Modern World\" (1996)."} {"text":"A festschrift in her honor, \"Methods for Studying Language Production\", was published in 2000."} {"text":"In 2016 she received an honorary Doctor of Science degree from Washington & Jefferson College for her work as \"a pioneer in the field of psycholinguistics\","} {"text":"and in 2017 the Roger Brown Award (recognizing \"outstanding contribution to the international child language community\") from the International Association for the Study of Child Language."} {"text":"Since 2007 
she has delivered the \"Welcome, welcome\" and \"Goodbye, goodbye\" speeches at the annual Ig Nobel Awards ceremonies."} {"text":"Children's learning of English morphology: the Wug Test."} {"text":"Gleason devised the Wug Test as part of her earliest research (1958), which used nonsense words to gauge children's acquisition of morphological rules \u2013 for example, the \"default\" rule that most English plurals are formed by adding an \/s\/, \/z\/, or \/\u026az\/ sound depending on the final consonant (e.g., \"cats\", \"dogs\", \"witches\")."} {"text":"A child is shown simple pictures of a fanciful creature or activity,"} {"text":"with a nonsense name, and prompted to complete a statement about it:"} {"text":"Each \"target\" word was a made-up (but plausible-sounding) pseudoword, so that the child could not have heard it before."} {"text":"A child who knows that the plural of \"witch\" is \"witches\" may have heard and memorized that pair, but a child responding that the plural of \"wug\" (which the child presumably has never heard) is \"wugs\" (\/w\u028cgz\/, using the \/z\/ allomorph since \"wug\" ends in a voiced consonant) has apparently inferred (perhaps unconsciously) the basic rule for forming plurals."} {"text":"The Wug Test also includes questions involving verb conjugations, possessives, and other common derivational morphemes such as the agentive \"-er\" (e.g. \"A man who 'zibs' is a ________?\"),"} {"text":"and requests explanations of common compound words, e.g. 
\"Why is a birthday called a birthday?\""} {"text":"Gleason's major finding was that even very young children are able to connect suitable endings \u2013 to produce plurals, past tenses, possessives, and other forms \u2013 to nonsense words they have never heard before, implying that they have internalized systematic aspects of the linguistic system which no one has necessarily tried to teach them."} {"text":"However, she also identified an earlier stage at which children can produce such forms for real words, but not yet for nonsense words \u2013 implying that children start by memorizing singular\u2013plural pairs they hear spoken by others, then eventually extract rules and patterns from these examples which they apply to novel words."} {"text":"The Wug Test's fundamental role in the development of psycholinguistics as a discipline has been mapped by studying references to Gleason's work in \"seminal journals\" in the field, many of which carried articles referencing it in their founding issues:"} {"text":"According to Ratner and Menn, \"As an enduring concept in psycholinguistic research, the wug has become generic, like [\"kleenex\"] or [\"xerox\"], a concept so basic to what we know and do that increasingly it appears in the popular literature without attribution to its origins... 
Perhaps no innovation other than the invention of the tape recorder has had such an indelible effect on the field of child language research.\""} {"text":"It has been proposed that Wug Test-like instruments be used in the diagnosis of learning disabilities, but in practice success in this direction has been limited."} {"text":"Another of Gleason's early papers, \"Fathers and Other Strangers: Men's Speech to Young Children\" (1975), explored differences between mothers' and fathers' spoken interaction with their children, primarily using data produced by two female and two male daycare teachers at a large university, and by three mothers and three fathers, mostly during family dinners."} {"text":"Among other conclusions, this study found that:"} {"text":"In contrast, both male and female daycare teachers used language that was similar both quantitatively and qualitatively, with both focusing on a dialogue based in the present and on the immediate needs of the children."} {"text":"Differences included that the male teachers tended to address the children by name more often than did the female teachers and that the male teachers issued more imperatives than did the female teachers."} {"text":"Gleason's research eventually extended into the study of children's acquisition of routines \u2013 that is, standardized chunks of language (or language-plus-gesture) that the culture expects of everyone, such as greetings, farewells, and expressions of thanks. Gleason was one of the first to study the acquisition of politeness, examining English-speaking children's use of routines such as \"thank you\", \"please\", and \"I'm sorry\". 
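The \"default\" plural rule that the Wug Test probes can be sketched in code. The following is an illustrative simplification, not part of Gleason's work: the function name and the (incomplete) phoneme sets are hypothetical, and the input is assumed to be a noun's final phoneme in IPA.

```python
# Illustrative sketch of the "default" English plural rule the Wug Test
# probes; the phoneme sets and function name are hypothetical examples,
# not drawn from Gleason's materials.

SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}  # take /ɪz/, e.g. "witches"
VOICELESS = {"p", "t", "k", "f", "θ"}          # take /s/,  e.g. "cats"

def plural_allomorph(final_phoneme: str) -> str:
    """Return the plural suffix (in IPA) for a noun ending in final_phoneme."""
    if final_phoneme in SIBILANTS:
        return "ɪz"
    if final_phoneme in VOICELESS:
        return "s"
    return "z"  # voiced consonants and vowels, e.g. "dogs", "wugs"

print(plural_allomorph("g"))  # "wug" ends in voiced /g/ → "z", i.e. /wʌgz/
```

A child who passes the Wug Test behaves as if applying this conditional rule to a word they have never heard before.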
Researchers in this area have since studied both verbal and non-verbal routinization, and the development of politeness routines in a variety of cultures and languages."} {"text":"Gleason's 1976 paper with Weintraub, \"The Acquisition of Routines in Child Language\","} {"text":"analyzed performance on the culturally standardized Halloween Trick-or-treat routine in 115 children aged two to sixteen years."} {"text":"Changes in ability and the role of parental contribution were analyzed with respect to cognitive and social components."} {"text":"They discovered that in the acquisition of routines"} {"text":"(in contrast to the acquisition of much of the rest of language) parents' major interest is for their children to achieve accurate performance, with little stress on children's understanding of what they are expected to say."} {"text":"Gleason and Weintraub found that the parents rarely, if ever, explain to children the meaning of such routines as \"Bye-bye\" or \"Trick or treat\" \u2013 there was no concern with the child's thoughts or intentions as long as the routine was performed as expected at the appropriate times."} {"text":"Thus, parents' role in the acquisition of routines is very different from their role in most of the rest of language development."} {"text":"Gleason and Greif analyzed children's acquisition of three ubiquitous routines in \"Hi, Thanks, and Goodbye: More Routine Information\" (1980)."} {"text":"The subjects were eleven boys and eleven girls and their parents."} {"text":"At the conclusion of a parent-child play period, an assistant entered the playroom bearing a present, in order to evoke routines from the children."} {"text":"The study's purpose was to analyze how parents communicate these routines to their children; major questions proposed included whether or not some routines were more obligatory than others, and whether mothers and fathers provide different models of politeness behavior for their children. 
The results suggest that children's \"spontaneous\" construction of the three routines was low, with \"Thank you\" the rarest."} {"text":"However, parents strongly encouraged their children to generate routines and, typically, the children complied."} {"text":"In addition, parents were more likely to prompt the \"Thank you\" routine than the \"Hi\" and \"Goodbye\" routines."} {"text":"Parents practiced the routines themselves, though mothers were more likely than fathers to say \"Thank you\" and \"Goodbye\" to the assistant."} {"text":"Gleason and Ely made an in-depth study of apologies in children's dialogue in their paper, \"I'm sorry I said that: apologies in young children's discourse\" (2006),"} {"text":"which analyzed apology term usage (in parent\u2013child dialogue) of five boys and four girls, aged one to six years."} {"text":"Their research suggested that apologies appear later in children than do other politeness routines, and that as the children grew older they developed a progressively refined expertise with this routine, gradually requiring fewer direct prompts and producing more elaborate apologies instead of just saying \"I'm sorry\"."} {"text":"They also found that parents and other adults play an important role in fostering growth of apologetic abilities by setting examples, by encouraging the children to apologize, and by speaking specifically and purposefully to them about apologies."} {"text":"With Ely, MacGibbon, and Zaretsky, Gleason also explored the discourse of middle-class parents and their children at the dinner table in \"Attention to Language: Lessons Learned at the Dinner Table\" (2001),"} {"text":"finding that the everyday language of these parents involves a remarkable amount of attention to language."} {"text":"The dinner-table conversations of twenty-two middle-class families, each with a child between two and five and one-half years old, were recorded,"} {"text":"then analyzed for the existence and activity of 
language-centered terms, including words like \"ask\", \"tell\", \"say\", and \"speak\"."} {"text":"Mothers spoke more about language than did fathers, and fathers spoke more about it than did children:"} {"text":"roughly eleven percent of mothers' sentences contained one or more language-centered terms, and the corresponding proportions for fathers and children were seven percent and four percent."} {"text":"Uses that were metalinguistic (for example, accounting for and remarking on speech) exceeded uses that were pragmatic (for example, managing how and when speech appears)."} {"text":"The more that mothers used language-centered terms, the more the children did as well \u2013 but this was not true for fathers."} {"text":"The results imply that in routine family conversations, parents supply children with considerable information on the way language is used to communicate information."} {"text":"Gleason has carried out significant research involving the learning and maintenance of second languages by sequential bilinguals. 
She has studied the acquisition of a second language while retaining the first (additive bilingualism),"} {"text":"examining discourse behaviors of parents who follow the one person\u2013one language principle by using different languages with their child."} {"text":"She has also studied language attrition, the loss of a known language through lack of use,"} {"text":"and suggests that the order in which a language is learned is less important in predicting its retention than the thoroughness with which it is learned."} {"text":"An unusual study carried out with Harris and Aycicegi, \"Taboo words and reprimands elicit greater autonomic reactivity in a first language than in a second language\" (2003), investigated the involuntary psychophysiological reactions of bilingual speakers to taboo words."} {"text":"Thirty-two Turkish\u2013English bilinguals judged an array of words and phrases for \"pleasantness\" in Turkish (their first language), and in English (their second), while their skin conductance was monitored via fingertip electrodes."} {"text":"Participants manifested greater autonomic arousal in response to taboo words and childhood reprimands in their first language than to those in their second language, confirming the commonplace claim that speakers of two languages are less uncomfortable speaking taboo words and phrases in their second language than in their native language."} {"text":"In \"Maintaining Foreign Language Skills\", which discusses \"the personal, cultural, and instructional factors involved with keeping up foreign language skills\" (1988), Gleason and Pan consider both humans' remarkable capacity for language acquisition and their ability to lose it."} {"text":"In addition to brain damage, strokes, trauma and other physical causes of language loss, individuals may lose language skills due to the absence of a linguistically supportive social environment in which to maintain such skills, such as when a speaker of a given language relocates to a place where 
that language is not spoken. Culture also factors in. More often than not, individuals speaking two or more languages come into contact with one another, for reasons ranging from emigration and interrelationships to alterations in political borders. The result of such contact is typically that the community of speakers undergoes a progressive shift in usage from one language to the other."} {"text":"Gleason has also done significant research on aphasia,"} {"text":"a condition (usually due to brain injury) in which a person's ability to understand and\/or to produce language, including their ability to find the words they need and their use of basic morphology and syntax, is impaired in a variety of ways."} {"text":"In \"Some Linguistic Structures in the Speech of a Broca's Aphasic\" (1972), Gleason, Goodglass, Bernholtz, and Hyde discuss an experiment carried out with a man who, after a stroke, had been left with Broca's aphasia\/agrammatism,"} {"text":"a specific form of aphasia typically impairing the production of morphology and syntax more than it impairs comprehension."} {"text":"This experiment employed the Story Completion Test (often used to probe a subject's capacity for producing various common grammatical forms)"} {"text":"as well as free conversation and repetition to elicit speech from the subject;"} {"text":"this speech was then analyzed to evaluate how well he used inflectional morphology"} {"text":"(e.g. plural and past tense word endings) and basic syntax (the formation of, for example, simple declarative, imperative, and interrogative sentences)."} {"text":"To do this, the investigator, in a few sentences, began a simple story about a pictured situation, then asked the subject to conclude the narrative."} {"text":"The stories were so designed that a non-language-impaired person's response would typically employ particular structures, for example, the plural of a noun, the past tense of a verb, or a simple but complete yes\u2013no question (e.g. 
\"Did you take my shoes?\")."} {"text":"Gleason, Goodglass, Bernholtz, and Hyde concluded that the transition from verb to object was easier for this subject than was the transition from subject to verb, and that auxiliary verbs and verb inflections were the parts of speech most likely to be omitted by the subject. There was considerable variation among consecutive repeat trials of the same test item, although responses on successive attempts usually came closer to those a normal speaker would have produced. The study concluded that the subject's speech was not the product of a stable abnormal grammar, and could not be accounted for by assuming that he was simply omitting words to minimize his effort in producing them \u2013 questions"} {"text":"of significant theoretical controversy at the time."} {"text":"Brian Lewis Butterworth FBA (born 3 January 1944) is emeritus professor of cognitive neuropsychology in the Institute of Cognitive Neuroscience at University College London. His research has ranged over speech errors and pauses, short-term memory deficits, dyslexia, reading in both alphabetic scripts and Chinese, and mathematics and dyscalculia. His book \"The Mathematical Brain\" has been translated into four languages. He was Editor-in-Chief of \"Linguistics\" (1978\u20131983) and a founding editor of the journals \"Language and Cognitive Processes\" and \"Mathematical Cognition\". He is a Fellow of the British Academy."} {"text":"In 1984, in an article in the \"Sunday Times\", he suggested on the basis of speech errors in President Ronald Reagan's re-election speeches that Reagan had Alzheimer's disease, ten years before the disease was formally identified. He was a coauthor in 1971 of a pamphlet, \"Marked for life\", critical of university examinations."} {"text":"He designed the world's largest mathematical experiment involving over 18,000 people at Explore-At-Bristol. 
In the serious game for elementary school children with dyscalculia, \"Meister Cody\", he lends his voice to Quoun, the Guardian of the Trees."} {"text":"Published in the same year in the US as \"What Counts\". New York: Simon & Schuster."} {"text":"Powell, A., Butterworth, B. (1971). \"Marked for life: a criticism of assessment at universities\". London: Anarchist Group."} {"text":"Butterworth, B. (1980). \"Language Production Volume 1: Speech and Talk\". Academic Press."} {"text":"Butterworth, B. (1983). \"Language Production Volume 2: Development, Writing and Other Language Processes\". Academic Press."} {"text":"Butterworth, B., Comrie, B., Dahl, O. (1984). \"Explanations for Language Universals\". Mouton De Gruyter."} {"text":"Butterworth, B. (2004). \"Dyscalculia Guidance: Helping Pupils with Specific Learning Difficulties in Maths\". David Fulton."} {"text":"Judit Kormos (born 1970) is a Hungarian-born British linguist. She is a professor and the Director of Studies for the MA TESOL Distance programme at the Department of Linguistics and English Language at Lancaster University, United Kingdom. She is renowned for her work on motivation in second language learning and self-regulation in second language writing. Her current interest is in dyslexia in second language learning."} {"text":"Along with Rosa Manch\u00f3n she has been noted for her work on the cognitive dimension of the acquisition and use of second languages, with emphasis on the psycholinguistic dimension of textual production, and along with Cumming, Hyland, Manch\u00f3n, Matsuda, Ortega, Polio, Storch and Verspoor she has been considered one of the most influential researchers on second language writing."} {"text":"Kormos graduated from the E\u00f6tv\u00f6s Lor\u00e1nd University in Budapest, Hungary, in 1994. She gained her PhD at the same university in 1999, supervised by Zolt\u00e1n D\u00f6rnyei. Kormos took up a lecturer position at Lancaster University in 2008. 
She was promoted to a Readership in 2012, choosing the title \"Reader in Second Language Acquisition\". On 8 January 2015, Kormos was awarded a personal chair, and her title became \"Professor of Second Language Acquisition\"."} {"text":"She is the coordinator of the Dyslexia for Teachers of English as a Foreign Language Project, funded by the European Commission. Since 2011, she has been a member of the editorial board of the \"Journal of Second Language Writing\". She has been an Editor of Special Thematic Issues and Associate Editor of \"Language Learning\"."} {"text":"In 2012, Kormos was interviewed by the Hungarian television channel ATV on recent changes in foreign language teaching policies in Hungary. She emphasised the important role of teaching students to learn foreign languages independently and autonomously with the help of modern technological tools. On 21 May 2014, Pearson Education released a new video lecture series on dyslexia and foreign language learning on YouTube. Kormos features in the first video of the series and discusses the psychological effects of dyslexia on the processes of foreign language learning."} {"text":"In 2014, Kormos, together with a European team from five partner countries, won the ELTons award of the British Council in the Excellence in Course Innovation category."} {"text":"On 20 June 2014, she was cited on the Education page of the \"Guardian\" in an article on teaching languages to students with disabilities. She said that teaching methods and materials need to be adapted for dyslexic students, instead of taking such students out of second language classes. Dyslexic students are able to acquire another language successfully and should be given the chance to do so. The teacher should be aware of the dyslexia and teach somewhat differently: for example, teachers should include more visual materials, act things out, and explain things slightly more explicitly than they would to other students. 
Some learners are more receptive to audio channels of learning, others to visual; therefore, using a combination of the two may be especially effective."} {"text":"Top 5 articles and chapters based on Google Scholar."} {"text":"David Green is a professorial research fellow in the Department of Cognitive, Perceptual & Brain Sciences, an honorary senior research associate, an emeritus professor of psychology in the Division of Psychology & Language Sciences, and on the faculty of Brain Sciences at University College London. He has researched widely on subjects such as mental models (both their construction and manipulation), lexical organisation, the modelling of control processes in speech production, language control (particularly bilingual language control), and the imaging of language and object recognition in the neurologically damaged. He is one of the four chief editors of the academic journal ."} {"text":"Lyn Frazier (born October 15, 1952, in Madison, Wisconsin) is an experimental linguist, focusing on psycholinguistic research of adult sentence comprehension."} {"text":"Frazier received her PhD in 1978 from the University of Connecticut under the supervision of Janet Dean Fodor, on the subject of parsing strategies in syntax. She is currently a Professor in the Department of Linguistics at the University of Massachusetts, Amherst. She was named the first Distinguished Graduate Mentor at the University of Massachusetts and received an award from the University of Massachusetts system for Outstanding Accomplishments in Research and Creative Activity."} {"text":"Frazier's work has examined how listeners approach the task of processing the incoming language stream. She has proposed and refined syntactic parsing models, including a two-tier parsing system, the garden path model, and the Active Filler Hypothesis. 
Her recent work has focused on how listeners parse ellipsis."} {"text":"She is co-editor of the book series \"Studies in Theoretical Psycholinguistics\", published by Springer."} {"text":"In linguistics, comparative illusions (CIs) or Escher sentences are certain comparative sentences which initially seem to be acceptable but upon closer reflection have no well-formed meaning. The example sentence typically used to illustrate this phenomenon is \"More people have been to Russia than I have\". The effect has also been observed in other languages. Some studies have suggested that, at least in English, the effect is stronger for sentences whose predicate is repeatable. The effect has also been found to be stronger in some cases when there is a plural subject in the second clause."} {"text":"Escher sentences are ungrammatical because a matrix clause subject like \"more people\" makes a comparison between two sets of individuals, but there is no such set of individuals in the second clause. For the sentence to be grammatical, the subject of the second clause must be a bare plural. Linguists have remarked that it is \"striking\" that, despite the grammar of these sentences being unable to yield a meaningful interpretation, people so often report that they sound acceptable, and that it is \"remarkable\" that people seldom notice any error."} {"text":"Mario Montalbetti's 1984 Massachusetts Institute of Technology dissertation has been credited as being the first to note these sorts of sentences; in his prologue he gives acknowledgements to Hermann Schultze \"for uttering the most amazing *\/? sentence I've ever heard: \"More people have been to Berlin than I have\"\", although the dissertation itself does not discuss such sentences. Parallel examples with \"Russia\" instead of \"Berlin\" were briefly discussed in psycholinguistic work in the 1990s and 2000s by Thomas Bever and colleagues."} {"text":"Geoffrey K. 
Pullum wrote about this phenomenon in a 2004 post on \"Language Log\" after Jim McCloskey brought it to his attention. In a post the following day, Mark Liberman gave the name \"Escher sentences\" to such sentences in reference to M. C. Escher's 1960 lithograph \"Ascending and Descending\". He wrote:"} {"text":"Although rare, actual attestations of this construction have appeared in natural speech. \"Language Log\" has noted examples such as:"} {"text":"Another attested example is the following tweet from Dan Rather:"} {"text":"Experiments on the acceptability of comparative illusion sentences have found results which are \"highly variable both within and across studies\". While the illusion of acceptability for comparative illusions has also been informally reported for speakers of Faroese, German, Icelandic, Polish, and Swedish, systematic investigation has mostly centered on English, although Aarhus University neurolinguist Ken Ramsh\u00f8j Christensen has run several experiments on comparative illusions in Danish."} {"text":"When Danish and Swedish speakers were asked what (1) means, their responses fell into one of the following categories:"} {"text":"Paraphrase (d) is in fact the only possible interpretation of (1); this is possible due to the lexical ambiguity of \"have\" between an auxiliary verb and a lexical verb, just like the English \"have\"; however, the majority of participants (da: 78.9%; sv: 56%) gave a paraphrase which does not follow from the grammar. Another study, in which Danish participants had to pick from a set of paraphrases, say the sentence meant something else, or say it was meaningless, found that people selected \"It does not make sense\" for comparative illusions 63% of the time and indicated that it meant something 37% of the time."} {"text":"The first study examining what affects the acceptability of these sentences was presented at the 2004 CUNY Conference on Human Sentence Processing. 
Scott Fults and Colin Phillips found that Escher sentences with ellipsis (a) were more acceptable than the same sentences without ellipsis (b)."} {"text":"Responses to this study noted that it only compared elided material to nothing, and that even in grammatical comparatives, ellipsis of repeated phrases is preferred. In order to control for the awkwardness of identical predicates, Alexis Wellwood and colleagues compared comparative illusions with ellipsis to those with a different predicate."} {"text":"They found that both CI-type and control sentences were slightly more acceptable with ellipsis, which led them to reject the hypothesis that ellipsis was responsible for the acceptability of CIs. Rather, it is possible that people simply prefer shorter sentences in general. Patrick Kelley's Michigan State University dissertation found similar results."} {"text":"Alexis Wellwood and colleagues have found in experiments that the illusion of grammaticality is greater when the sentence's predicate is repeatable. For instance, (a) is experimentally found to be more acceptable than (b)."} {"text":"The comparative must be in the subject position for the illusion to work; sentences like (a), which also have verb phrase ellipsis, are viewed as unacceptable without any illusion of acceptability:"} {"text":"A pilot study by Iria de Dios-Flores also found that repeatability of the predicate had an effect on the acceptability of CIs in English. However, Christensen's study on comparative illusions in Danish did not find a significant difference in acceptability between sentences with repeatable predicates (a) and those without (b)."} {"text":"The lexical ambiguity of the English quantifier \"more\" has led to a hypothesis that the acceptability of CIs is due to people reinterpreting a comparative \"more\" as an additive \"more\". 
As \"fewer\" does not have such an ambiguity, Wellwood and colleagues tested to see if there was any difference in acceptability judgements depending on whether the sentences used \"fewer\" or \"more\". In general, their study found significantly higher acceptability for sentences with \"more\" than with \"fewer\" but the difference did not disproportionately affect the comparative illusion sentences compared to the controls."} {"text":"Christensen found no significant difference in acceptability for Danish CIs with (\"more\") compared to those with (\"fewer\")."} {"text":"De Dios-Flores examined if there was an effect depending on whether or not the \"than\"-clause subject could be a subset of the matrix subject as in (a) compared to those where it could not be due to a gender mismatch as in (b). No significant differences were found."} {"text":"In a study of Danish speakers, CIs with prepositional sentential adverbials like \"in the evening\" were found to be less acceptable than those without."} {"text":"Comparatives in Bulgarian can optionally have the degree operator (); sentences with this morpheme (a) are immediately found unacceptable but those without it (b) produce the same illusion of acceptability."} {"text":"A neuroimaging study of Danish speakers found less activation in the left inferior frontal gyrus, left premotor cortex (BA 4, 6), and left posterior temporal cortex (BA 21, 22) when processing CIs like (a) than when processing grammatical clausal comparatives like (b). Christensen has suggested this shows CIs are easy to process but as they are nonsensical, processing is \"shallow\". 
Low LIFG activation levels also suggest that people do not perceive CIs as being semantically anomalous."} {"text":"Townsend and Bever have posited that Escher sentences get perceived as acceptable because they are an apparent blend of two grammatical templates."} {"text":"Wellwood and colleagues have noted in response that the possibility of each clause being grammatical in a different sentence (a, b) does not guarantee a blend (c) would be acceptable."} {"text":"Wellwood and colleagues also interpret Townsend and Bever's theory as requiring a shared lexical element in each template. If this version is right, they predict (c) would be viewed as less acceptable due to the ungrammaticality of (b):"} {"text":"Wellwood and colleagues, based on their experimental results, have rejected Townsend and Bever's hypothesis and instead support their event comparison hypothesis, which states that comparative illusions are due to speakers reinterpreting these sentences as discussing a comparison of events."} {"text":"The term \"comparative illusion\" has sometimes been used as an umbrella term which also encompasses \"depth charge\" sentences like \"No head injury is too trivial to be ignored.\" This example, first discussed by Peter Cathcart Wason and Shuli Reich in 1979, is very often initially perceived as having the meaning \"No head injury should be ignored\u2014even if it's trivial\", even though upon careful consideration the sentence actually says \"All head injuries should be ignored\u2014even trivial ones.\""} {"text":"Phillips and colleagues have discussed other \"grammatical illusions\" with respect to attraction, case in German, binding, and negative polarity items; speakers initially find such sentences acceptable, but later realize they are ungrammatical."} {"text":"The conversational model of psychotherapy was devised by the English psychiatrist Robert Hobson, and developed by the Australian psychiatrist Russell Meares. 
Hobson listened to recordings of his own psychotherapeutic practice with more disturbed clients, and became aware of the ways in which a patient's self\u2014their unique sense of personal being\u2014can come alive and develop, or be destroyed, in the flux of the conversation in the consulting room."} {"text":"The conversational model views the aim of therapy as allowing the growth of the patient's self through encouraging a form of conversational relating called 'aloneness-togetherness'. This phrase is reminiscent of Winnicott's idea of the importance of being able to be 'alone in the presence of another'. The client comes to eventually feel recognised, accepted and understood as who they are; their sense of personal being, or self, is fostered; and they can start to drop the destructive defenses which disrupt their sense of personal being."} {"text":"The development of the self implies a capacity to embody and span the dialectic of 'aloneness-togetherness'\u2014rather than being disposed toward either schizoid isolation (aloneness) or merging identification with the other (togetherness). Although the therapy is described as psychodynamic, and is accordingly concerned to identify activity and personal meaning in the midst of apparent passivity, it relies more on careful empathic listening and the development of a common 'feeling language' than it does on psychoanalytic interpretation."} {"text":"In its manualised form ('PIT'), the conversational model is presented as having seven interconnected components. These are:"} {"text":"The conversational model, which has been manualised as Psychodynamic-Interpersonal Therapy, has been subject to outcome research, and has demonstrated effectiveness in the treatment of depression, psychosomatic disorders, self-harm, and borderline personality disorder."} {"text":"Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. 
Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives."} {"text":"The division of the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, giving rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nucleus to give rise to the auditory dorsal stream."} {"text":"Language processing can also occur in relation to signed languages or written content."} {"text":"The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The functions of the AVS include the following."} {"text":"The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory."} {"text":"The auditory dorsal stream also has non-language related functions, such as sound localization and guidance of eye movements. Recent studies also indicate a role of the ADS in the localization of family\/tribe members, as a study that recorded from the cortex of an epileptic patient reported that the pSTG, but not the aSTG, is selective for the presence of new speakers. 
An fMRI study of fetuses in their third trimester also demonstrated that area Spt is more selective to female speech than pure tones, and that a sub-section of Spt is selective to the speech of their mother in contrast to unfamiliar female voices."} {"text":"Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 discrete sign languages around the world, making use of different accents formed in separate areas of a country."} {"text":"Using lesion analyses and neuroimaging, neuroscientists have discovered that, whether it be spoken or sign language, human brains process language in a similar manner with regard to which areas of the brain are used. Lesion analyses are used to examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores regions that are engaged in the processing of language."} {"text":"It was previously hypothesized that damage to Broca's area or Wernicke’s area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces results in sign language similar to those in spoken language, with sign errors present and\/or repeated. Both types of language are affected by damage to the left hemisphere of the brain rather than the right, which usually deals with the arts."} {"text":"There are clear patterns in the use and processing of language: in sign language, Broca’s area is activated during production, while processing sign language employs Wernicke’s area, similar to spoken language."} {"text":"There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall communication of a language globally whereas the left hemisphere would be dominant in generating the language locally. 
Research on aphasias found that RHD signers had problems maintaining the spatial component of their signs, confusing similar signs made at different locations, which is necessary for communicating with another person properly. LHD signers, on the other hand, had results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, but further research is needed to reach a conclusion."} {"text":"There is a comparatively small body of research on the neurology of reading and writing. Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language. English orthography is less transparent than that of other languages using a Latin script. Another difficulty is that some studies focus on spelling words of English and omit the few logographic characters found in the script."} {"text":"In terms of spelling, English words can be divided into three categories – regular, irregular, and “novel words” or “nonwords.” Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia."} {"text":"An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process. 
Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules."} {"text":"The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words. However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types. Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords."} {"text":"Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has a morphological and a phonological component, either of which can be recorded by a writing system. Scripts recording words and morphemes are considered logographic, while those recording phonological segments, such as syllabaries and alphabets, are phonographic. Most systems combine the two and have both logographic and phonographic characters."} {"text":"In terms of complexity, writing systems can be characterized as \u201ctransparent\u201d or \u201copaque\u201d and as \u201cshallow\u201d or \u201cdeep.\u201d A \u201ctransparent\u201d system exhibits an obvious correspondence between grapheme and sound, while in an \u201copaque\u201d system this relationship is less obvious. The terms \u201cshallow\u201d and \u201cdeep\u201d refer to the extent that a system\u2019s orthography represents morphemes as opposed to phonological segments. 
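The contrast between the single-route and dual-route accounts described above can be sketched in code. The following is a minimal toy illustration, not a model from the cited studies; the stored pronunciations and grapheme-phoneme rules are invented examples:

```python
# Toy sketch of a dual-route reading model (hypothetical data throughout).
# Route 1: lexical memory lookup, used for irregular and high-frequency words.
# Route 2: sub-lexical grapheme-phoneme rules, used for regular words and nonwords.

LEXICON = {                 # invented example entries in "lexical memory"
    "yacht": "/jɒt/",
    "colonel": "/ˈkɜːnəl/",
    "the": "/ðə/",
}

RULES = {                   # invented one-to-one grapheme-phoneme rules
    "c": "k", "a": "æ", "t": "t", "d": "d", "o": "ɒ", "g": "g",
}

def read_word(word: str) -> str:
    """Return a pronunciation, trying the lexical route first."""
    if word in LEXICON:                          # lexical route
        return LEXICON[word]
    # sub-lexical route: apply rules grapheme by grapheme
    return "/" + "".join(RULES.get(ch, ch) for ch in word) + "/"

print(read_word("colonel"))  # irregular word: resolved by the lexical route
print(read_word("cat"))      # regular word or nonword: resolved by rules
```

Note that the sketch captures why the dual-route account predicts distinct failure modes: losing the lexicon impairs irregular words, while losing the rules impairs nonwords.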
Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries, put greater demands on the memory of users. It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with transparent or shallow orthography."} {"text":"A Growth Point is a technical term in cognitive linguistics and gesture research. It refers to the earliest beginnings of a spoken utterance in the mind of a speaker, combining the beginnings of a mimetic gesture with the preliminary verbal expression of the person's thought."} {"text":"An alternative theory of how young children derive the meaning of newly learned words during language acquisition stems from John Locke's \"associative proposal theory\". Compared to the \"intentional proposal theory\", associative proposal theory refers to the deduction of meaning by comparing the novel object to environmental stimuli. A study conducted by Yu & Ballard (2007) introduced cross-situational learning, a method based on Locke's theory. Cross-situational learning theory is a mechanism in which the child learns the meaning of words over multiple exposures in varying contexts, in an attempt to eliminate uncertainty about the word's true meaning on an exposure-by-exposure basis."} {"text":"Some researchers are concerned that experiments testing for fast mapping are conducted in artificial settings. They feel that fast mapping doesn't occur as often in real-life, natural situations. They believe that testing for fast mapping should focus more on the actual understanding of a word instead of just its reproduction. 
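Cross-situational learning, described above, can be sketched computationally: across many individually ambiguous scenes, the referent that co-occurs most consistently with a word wins. This is a minimal toy illustration, not Yu & Ballard's actual model; the scenes and vocabulary are invented:

```python
# Toy sketch of cross-situational word learning. Each scene pairs the words
# a child hears with the objects present; no single scene disambiguates,
# but cumulative co-occurrence counts do. All data here are invented.
from collections import Counter, defaultdict

def cross_situational(scenes):
    """scenes: list of (words_heard, objects_present) pairs."""
    counts = defaultdict(Counter)
    for words, objects in scenes:
        for w in words:
            counts[w].update(objects)   # tally every word-object co-occurrence
    # map each word to its most frequently co-occurring object
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

scenes = [
    ({"ball", "dog"}, {"BALL", "DOG"}),   # ambiguous: two candidates each
    ({"ball", "cup"}, {"BALL", "CUP"}),   # "ball" again co-occurs with BALL
    ({"dog"}, {"DOG", "CUP"}),
]
print(cross_situational(scenes)["ball"])  # uncertainty resolved across exposures
```

The key property, matching the description above, is that meaning is not fixed on any single exposure; it emerges from the statistics of many contexts.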
For some, testing to see if the child can use the new word in a different situation constitutes true knowledge of a word, rather than simply identifying the new word."} {"text":"Variables affecting an individual's fast mapping ability."} {"text":"When learning novel words, it is believed that early exposure to multiple linguistic systems facilitates the acquisition of new words later in life. This effect was referred to by Kaushanskaya and Marian (2009) as the bilingual advantage. That being said, a bilingual individual's ability to fast map can vary greatly throughout their life."} {"text":"During the language acquisition process, a bilingual child may require more time to determine a correct referent than a monolingual child. By the time a bilingual child is of school age, they perform equally on naming tasks when compared to monolingual children. By adulthood, bilingual individuals have acquired word-learning strategies believed to be of assistance on fast mapping tasks. One example is speech practice, a strategy where the participant listens to and reproduces the word in order to assist in remembering it and decrease the likelihood of forgetting."} {"text":"Bilingualism can increase an individual's cognitive abilities and contribute to their success in fast mapping words, even when they are using a nonnative language."} {"text":"Children growing up in a low-socioeconomic status environment receive less attention than those in high-socioeconomic status environments. As a result, these children may be exposed to fewer words and therefore their language development may suffer. On norm-referenced vocabulary tests, children from low-socioeconomic homes tend to score lower than same-age children from a high-socioeconomic environment. However, when their fast mapping abilities were examined, there were no significant differences observed in their ability to learn and remember novel words. 
Children from low SES families were able to use multiple sources of information in order to fast map novel words. When working with children from low SES homes, providing a context that gives the word meaning is a linguistic strategy that can benefit the child's word knowledge development."} {"text":"Three learning supports that have been proven to help with the fast mapping of words are saliency, repetition and generation of information. The amount of face-to-face interaction a child has with their parent affects his or her ability to fast map novel words. Interaction with a parent leads to greater exposure to words in different contexts, which in turn promotes language acquisition. Face-to-face interaction cannot be replaced by educational shows because, although repetition is used, children do not receive the same level of correction or trial and error from simply watching. When a child is asked to generate the word, it promotes the transition to long-term memory to a larger extent."} {"text":"Evidence of fast mapping in other animals."} {"text":"It appears that fast mapping is not limited only to humans, but can occur in dogs as well."} {"text":"The first example of fast mapping in dogs was published in 2004. In it, a dog named Rico was able to learn the labels of over 200 different items. He was also able to identify novel objects simply by exclusion learning. Exclusion learning occurs when one learns the name of a novel object because one is already familiar with the names of other objects belonging to the same group. The researchers who conducted the experiment mention the possibility that a language acquisition device specific to humans does not control fast mapping. They believe that fast mapping is possibly directed by simple memory mechanisms."} {"text":"In 2010, a second example was published. This time, a dog named Chaser demonstrated, in a controlled research environment, that she had learned over 1000 object names. 
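Exclusion learning, as described above, lends itself to a simple sketch: a novel label is mapped to the one present object whose name is not already known. This toy example is illustrative only; the object and label names are invented, not items from the Rico study:

```python
# Toy sketch of exclusion learning: when exactly one object in view lacks a
# familiar name, a novel label is inferred to refer to that object.
# All object and label names below are invented examples.
def learn_by_exclusion(novel_label, objects_present, known_names):
    """known_names: mutable mapping from object -> its familiar label."""
    unknown = [o for o in objects_present if o not in known_names]
    if len(unknown) == 1:                  # exactly one unfamiliar candidate
        known_names[unknown[0]] = novel_label
        return unknown[0]
    return None                            # ambiguous: no mapping inferred

known = {"ball": "ball", "rope": "rope"}
# A novel word is heard with one unfamiliar object present:
print(learn_by_exclusion("blick", ["ball", "rope", "new-toy"], known))
```

As the passage notes, nothing here requires a language-specific mechanism; the inference runs on simple memory for which objects already have names.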
She also demonstrated that she could attribute these objects to named categories through fast mapping and inferential reasoning. It is important to note that, at the time of publication, Chaser was still learning object names at the same pace as before. Thus, her 1000 words, or lexical items, should not be regarded as an upper limit but as a benchmark. While there are many components of language that were not demonstrated in this study, the 1000-word benchmark is remarkable because many studies on language learning correlate a 1000-word vocabulary with roughly 75% spoken language comprehension."} {"text":"Another study on Chaser was published in 2013. In this study, Chaser demonstrated flexible understanding of simple sentences. In these sentences, syntax was altered in various contexts to prove she had not just memorized full phrases or inferred the expectation through gestures from her evaluators. Discovering this skill in a dog is noteworthy on its own, but also because verb meaning can be fast mapped through syntax. This raises questions about what parts of speech dogs could infer, as previous studies focused on nouns. These findings raise further questions about the fast mapping abilities of dogs when viewed in light of a study published in Science in 2016 showing that dogs process lexical and intonational cues separately. That is, they respond to both tone and word meaning."} {"text":"However, excitement about the fast-mapping skills of dogs should be tempered. Research in humans has found that fast-mapping abilities and vocabulary size are not correlated in unenriched environments. Research has determined that language exposure alone is not enough to develop vocabulary through fast-mapping. Instead, the learner needs to be an active participant in communication to convert fast-mapping abilities into vocabulary."} {"text":"It is not commonplace to communicate with dogs, or any non-primate animal, in a productive fashion, as they are non-verbal. 
As such, Chaser's vocabulary and sentence comprehension are attributed to Dr. Pilley's rigorous methodology."} {"text":"An experiment was performed to assess fast mapping in adults with typical language abilities, adults with disorders of spoken\/written language (hDSWL), and adults with both hDSWL and ADHD."} {"text":"The conclusion drawn from the experiment revealed that adults with ADHD were the least accurate at \"mapping semantic features and slower to respond to lexical labels.\""} {"text":"The article reasoned that the task of fast mapping requires high attentional demand, and so \"a lapse in attention could lead to diminished encoding of the new information.\""} {"text":"Research in artificial intelligence and machine learning seeks to reproduce this ability computationally, an effort termed one-shot learning. This is pursued to reduce the learning curve, as other models, such as reinforcement learning, need thousands of exposures to a situation to learn it."} {"text":"Autoclitics are verbal responses that modify the effect on the listener of the primary operants that comprise B.F. Skinner's classification of Verbal Behavior."} {"text":"An autoclitic is a verbal behavior that modifies the functions of other verbal behaviors. For example, \"I think it is raining\" possesses the autoclitic \"I think,\" which moderates the strength of the statement \"it is raining.\" Research that involves autoclitics includes Lodhi & Greer (1989)."} {"text":"Skinner describes grammatical manipulations, such as the order or grouping of responses, as autoclitic. The ordering of patterns may be a function of relevant strength, temporal ordering, or other factors. Skinner speaks to the use of predication and the use of tags, contrasting the Latin forms, which use tags, with English, which uses grouping and ordering. 
Skinner proposes the relational autoclitic as a descriptor for these kinds of relationships."} {"text":"Composition represents a special class of autoclitic responding, because the responding is itself a response to previously existing verbal responses. The autoclitic is controlled not only by its effects on the listener but also by its effects upon the speaker as a listener to their own responses. Skinner notes that \"emotional and imaginal\" behavior has little to do with grammar and syntax. Obscene words and poetry are likely to be effective, even when emitted non-grammatically."} {"text":"Self-editing as a compositional process follows the autoclitic process of manipulating responses. After the responses are changed with autoclitics, they are examined for their effects and then \"rejected or released.\" Conditions may prevent self-editing, such as a very high response strength."} {"text":"The physical topography of the rejection of verbal behavior in the process of editing varies from the partial emission of a written word to the apparent non-emission of a vocal response. It may include ensuring that responses simply do not reach a listener, as in not delivering a manuscript or letter. Manipulative autoclitics can revoke words by striking them out, as in a court of law. Similar effects may arise from expressions like \"Forget it.\""} {"text":"A speaker may fail to react as a listener to their own speech under conditions where the emission of verbal responses is very quick. The speed may be a function of strength or of differential reinforcement. Physical interruption may arise, as in the case of those who are hearing impaired, or under conditions of mechanical impairment such as ambient noise. Skinner argues the Ouija board may operate to mask feedback and so produce unedited verbal behavior."} {"text":"The main use of language is to transfer thoughts from one mind to another. 
The bits of linguistic information that enter one person's mind from another cause the hearer to entertain a new thought, with profound effects on their world knowledge, inferencing, and subsequent behavior. Language neither creates nor distorts conceptual life. Thought comes first, while language is an expression of it. Language has certain limitations, and humans cannot express all that they think."} {"text":"Language of thought theories rely on the belief that mental representation has linguistic structure. Thoughts are \"sentences in the head\", meaning they take place within a mental language. Two theories work in support of the language of thought theory. The causal syntactic theory of mental processes hypothesizes that mental processes are causal processes defined over the syntax of mental representations. The representational theory of mind hypothesizes that propositional attitudes are relations between subjects and mental representations. In tandem, these theories explain how the brain can produce rational thought and behavior. All three of these theories were inspired by the development of modern logical inference. They were also inspired by Alan Turing's work on causal processes that require formal procedures within physical machines."} {"text":"LOTH hinges on the belief that the mind works like a computer, always in computational processes. The theory holds that mental representations possess both a combinatorial syntax and a compositional semantics—that is, mental representations are sentences in a mental language. These beliefs were modeled on Alan Turing's work on the implementation, in physical machines, of causal processes that require formal procedures."} {"text":"The cognitive scientist Steven Pinker developed this idea of a mental language in his book \"The Language Instinct\" (1994). Pinker refers to this mental language as \"mentalese\". 
In the glossary of his book, Pinker defines mentalese as a hypothetical language used specifically for thought. This hypothetical language houses mental representations of concepts such as the meaning of words and sentences."} {"text":"Different cultures use numbers in different ways. The Munduruku culture, for example, has number words only up to five. In addition, they refer to the number 5 as \"a hand\" and the number 10 as \"two hands\". Numbers above 10 are usually referred to as \"many\"."} {"text":"Language may influence color processing. Having more names for different colors, or different shades of colors, makes it easier both for children and for adults to recognize them. Research has found that all languages have names for black and white and that the colors defined by each language follow a certain pattern (i.e. a language with three colors also defines red, one with four defines green or yellow, one with six defines blue, then brown, then other colors)."} {"text":"The Sapir–Whorf hypothesis is the premise of the 2016 science fiction film \"Arrival\". The protagonist explains that \"the Sapir–Whorf hypothesis is the theory that the language you speak determines how you think\"."} {"text":"A psycholinguist is a social scientist who studies psycholinguistics, which connects psychology and linguistics. Psycholinguistics is interdisciplinary in nature and is studied by people in a variety of fields, such as psychology, cognitive science, linguistics, neuroscience, and many more. The main aim of psycholinguistics is to outline and describe the process of producing and comprehending communication."} {"text":"More specifically, a psycholinguist studies language, speech production, and comprehension by using behavioral and neurological methods traditionally developed in the field of psychology, but other methods such as corpus analysis are also widely used. 
Psycholinguists typically receive undergraduate degrees in linguistics or psychology and then seek a higher degree. Psycholinguistics is not usually a degree of its own; graduate degrees range from scientific studies to criminal justice. The majority of students who become psycholinguists receive a master's degree or a Ph.D.; however, there are also some opportunities available for those who choose not to attend graduate school."} {"text":"Psycholinguists currently represent a widely diverse field. Many psycholinguists are also considered to be neurolinguists, cognitive linguists, neurocognitive linguists, or are associated with those who are. There are subtle differences between the titles, though they all address different facets of similar issues. Psycholinguists are sometimes categorized into separate groups by the models and theories in which they believe. The two main groups, interactive and autonomous, are distinguished by their views of language processing. Psycholinguists who support the interactive side believe that our levels of processing for language work side-by-side and share information as words are received. The other argument is the autonomous side, which holds that the levels of processing for language occur independently of one another."} {"text":"When conducting research, psycholinguists use a variety of techniques that can involve qualitative and\/or quantitative data. Typical methods of research include: observation (language recording), experimentation (issuing language tests), and self-reports (participants report what they are experiencing). 
The research tends to result either in theoretical evidence or in a practical application."} {"text":"There are many associations that include professionals in the psycholinguist field worldwide, such as the following:"} {"text":"In psycholinguistics, the collaborative model (or conversational model) is a theory for explaining how speaking and understanding work in conversation, specifically how people in conversation coordinate to determine definite references."} {"text":"The model was initially proposed in 1986 by psycholinguists Herb Clark and Deanna Wilkes-Gibbs. It asserts that conversation partners must act collaboratively to reach a mutual understanding \u2013 i.e. the speaker must tailor their utterances to better suit the listener, and the listener must indicate to the speaker that they have understood."} {"text":"In this ongoing process, both conversation partners must work together in order to establish what a given noun phrase is referring to. The referential process can be initiated by the speaker using one of at least six types of noun phrases: the elementary noun phrase, the episodic noun phrase, the installment noun phrase, the provisional noun phrase, the dummy noun phrase, and\/or the proxy noun phrase."} {"text":"Once this presentation is made, the listener must accept it either through presupposing acceptance (i.e. letting the speaker continue uninterrupted) or asserting acceptance (i.e. through a continuer such as \"yes\", \"okay\", or a head nod). The speaker must then acknowledge this signal of acceptance. In this process, presentation and acceptance go back and forth, and some utterances can simultaneously be both presentations and acceptances. 
This model also posits that conversationalists strive for minimum collaborative effort by making references based more on permanent properties than temporary properties and by refining perspective on referents through simplification and narrowing."} {"text":"The collaborative model finds its roots in Grice's cooperative principle and four Gricean maxims, theories which prominently established the idea that conversation is a collaborative process between speaker and listener."} {"text":"However, until the Clark & Wilkes-Gibbs study, the prevailing theory was the literary model (or autonomous model or traditional model). This model likened the process of a speaker establishing reference to an author writing a book for distant readers. In the literary model, the speaker is the one who retains complete control and responsibility over the course of referent determination. The listener, in this theory, simply hears and understands the definite description as if they were reading it and, if successful, figures out the identity of the referent on their own."} {"text":"This autonomous view of reference establishment was not challenged until a paper by D.R. Olson was published in 1970. It was then suggested that there very well could be a collaborative element in the process of establishing reference. Olson, while still holding to the literary model, suggested that speakers select the words they do based on context and what they believe the listener will understand."} {"text":"Clark and Wilkes-Gibbs criticized the literary model in their 1986 paper; they asserted that the model failed to account for the dynamic nature of verbal conversations."} {"text":"In the same paper, they proposed the collaborative model as an alternative. They believed this model was better able to explain the aforementioned features of conversation. 
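The presentation/acceptance cycle the model describes can be sketched as a simple exchange loop. This is a hypothetical illustration only: the message strings and the `understood` predicate are invented for clarity, not taken from Clark and Wilkes-Gibbs.

```python
# Toy sketch of the collaborative model's presentation/acceptance cycle:
# the speaker presents candidate noun phrases until the listener signals
# acceptance, and the speaker then acknowledges that signal.

def reference_exchange(presentations, understood):
    """Run a presentation/acceptance loop until the listener accepts.

    presentations: successive noun phrases the speaker tries
    understood: predicate standing in for the listener's comprehension
    """
    transcript = []
    for phrase in presentations:
        transcript.append(("speaker presents", phrase))
        if understood(phrase):
            transcript.append(("listener accepts", "okay"))
            transcript.append(("speaker acknowledges", "right"))
            break
        transcript.append(("listener rejects", "which one?"))
    return transcript

log = reference_exchange(
    ["the weird angel-like one", "the one with outstretched arms"],
    understood=lambda p: "outstretched" in p,
)
for turn in log:
    print(turn)
```

The loop makes the model's central claim visible: reference is fixed not by a single utterance but by an exchange that ends only when both parties have signaled mutual acceptance.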
They conducted an experiment to support this theory and to further determine how the acceptance process worked."} {"text":"The experiment consisted of two participants seated at tables separated by an opaque screen. On the tables in front of each participant were a series of Tangram figures arranged in different orders. One participant, called the director, was tasked with getting the other participant, called the matcher, to accurately match his configuration of figures through conversation alone. This process was to be repeated five additional times by the same individuals, playing the same roles."} {"text":"The collaborative model they proposed allowed them to make several predictions about what would happen. They predicted that it would require many more words to establish reference the first time, as the participants would need to use non-standard noun phrases, which would make it difficult to determine which figures were being talked about. However, they hypothesized that later references to the same figures would take fewer words and a shorter amount of time, because by this point definite reference would have been mutually established, and also because the subjects would be able to rely on established standard noun phrases."} {"text":"The results of the study confirmed many of their beliefs, and outlined some of the processes of collaborative reference, including establishing the types of noun phrases used in presentation, and their frequency."} {"text":"The following actions were observed in participants working towards mutual acceptance of a reference:"} {"text":"Grounding is the final stage in the collaborative process. The concept was proposed by Herbert H. Clark and Susan E. Brennan in 1991. It comprises the collection of \"mutual knowledge, mutual beliefs, and mutual assumptions\" that is essential for communication between two people. 
Successful grounding in communication requires parties \"to coordinate both the content and process\"."} {"text":"The parties engaging in grounding exchange information about what they do or do not understand over the course of a communication, and they continue to clarify concepts until they have agreed on a grounding criterion. There are generally two phases in grounding:"} {"text":"Subsequent studies affirmed many of Clark and Wilkes-Gibbs' theories. These included a study by Clark and Michael Schober in 1989 that dealt with overhearers, contrasting how well they understand with how well direct addressees do. In the literary model, overhearers would be expected to understand as well as addressees, while in the collaborative model, overhearers would be expected to do worse, since they are not part of the collaborative process and the speaker is not concerned with making sure anyone but the addressee understands."} {"text":"The study conducted by the pair mimicked the Clark\/Wilkes-Gibbs study, but included a silent overhearer as part of the process. The speaker and addressee were allowed to converse, while the overhearer attempted to arrange his figures according to what the speaker was saying. In one version of this study, overhearers had access to a tape recording of the speaker's directions, while in another they all simply sat in the same room."} {"text":"The study found that overhearers had significantly more difficulty than addressees in both experiments, thereby, according to the researchers, lending credence to the collaborative model."} {"text":"The literary model described above still stands as a directly opposing viewpoint to the collaborative model. Subsequent studies also sought to point out weaknesses in the theory. One study, by Brown and Dell, took issue with the aspect of the theory that suggests that speakers have particular listeners in mind when determining reference. Instead, they suggested, speakers have generic listeners in mind. 
This egocentric theory proposed that people's estimates of another's knowledge are biased towards their own and that early syntactic choices may be made without regard to the addressees' needs, while beliefs about the addressee's knowledge do not affect utterance choices until later on, usually in the form of repairs."} {"text":"Another study, in 2002 by Barr and Keysar, also criticized the particular-listener view and partner-specific reference. In the experiment, addressees and speakers established definite references for a series of objects on a wall. Then, another speaker entered, using the same references. The theory was that, if the partner-specific view of establishing reference was correct, the addressee would be slower to identify objects (as measured by eye movement) out of confusion, because the reference used had been established with another speaker. They found this not to be the case; in fact, reaction times were similar."} {"text":"In neuroscience and psychology, the term language center refers collectively to the areas of the brain which serve a particular function for speech processing and production. Language is a core system, which gives humans the capacity to solve difficult problems and provides them with a unique type of social interaction. Language allows individuals to attribute symbols (e.g. words or signs) to specific concepts and display them through sentences and phrases that follow proper grammatical rules. Moreover, speech is the mechanism by which language is orally expressed."} {"text":"Information is exchanged in a larger system of language-related regions. These regions are connected by white matter fiber tracts that make possible the transmission of information between regions. These white matter fiber bundles were recognized as important for language production once it was suggested that they make connections between multiple language centers. 
The three classical language areas that are involved in language production and processing are Broca\u2019s and Wernicke's areas, and the angular gyrus."} {"text":"Broca's area was first suggested to play a role in speech function by the French neurologist and anthropologist Paul Broca in 1861. The basis for this discovery was the analysis of speech problems resulting from injuries to this region of the brain, located in the inferior frontal gyrus. Paul Broca had a patient called Leborgne who could only pronounce the word \u201ctan\u201d when speaking. After working with another patient with a similar impairment, Broca concluded that damage in the inferior frontal gyrus affected articulate language."} {"text":"Broca\u2019s area is well known for being the syntactic processing \u201ccenter\u201d. It has been regarded as such since Paul Broca associated speech production with an area in the posterior inferior frontal gyrus, which he called \u201cBroca\u2019s area\u201d. Although this area is in charge of speech production, its particular role in the language system is unknown. However, it is involved in phonological, semantic, and syntactic processing and working memory. The anterior region of Broca\u2019s area is involved in semantic processing, while the posterior region is involved in phonological processing (Bohsali, 2015). Moreover, the whole of Broca\u2019s area has been shown to activate more strongly during reading tasks than during other types of tasks."} {"text":"In a simple explanation of speech production, this area retrieves the phonological representation of a word, divides it chronologically into syllable segments, and sends these to motor areas, where they are converted into a phonetic code. 
The study of how this area produces speech has been carried out with paradigms using both single words and complex words."} {"text":"Broca\u2019s area is correlated with phonological segmentation, unification, and syntactic processing, which are all connected to linguistic information. Although this area synchronizes the transformation of information within the cortical systems involved in spoken word production, it does not contribute to the production of single words; the inferior frontal lobe is the part in charge of word production."} {"text":"Furthermore, Broca\u2019s area is structurally related to the thalamus, and both are engaged in language processing. The connectivity between the two runs through two thalamic nuclei, the pulvinar and the ventral nucleus, which are involved in language processing and in linguistic functions similar to those of BA 44 and 45 in Broca\u2019s area. The pulvinar is connected to many regions of the frontal cortex, and the ventral nucleus is involved in speech production. The frontal speech regions of the brain have been shown to participate in speech sound perception."} {"text":"Broca's area is today still considered an important language center, playing a central role in processing syntax, grammar, and sentence structure."} {"text":"Wernicke\u2019s area was named for the German doctor Carl Wernicke, who discovered it in 1874 in the course of his research into aphasias (loss of the ability to speak). This area of the brain is involved in language comprehension; Wernicke\u2019s area is thus responsible for understanding oral language. Besides Wernicke\u2019s area, the left posterior superior temporal gyrus (pSTG), middle temporal gyrus (MTG), inferior temporal gyrus (ITG), supramarginal gyrus (SMG), and angular gyrus (AG) participate in language comprehension. Therefore, language comprehension is not located in a specific area. 
Rather, it involves large regions of the inferior parietal lobe and the left temporal lobe."} {"text":"While the final product of speech production is a sequence of muscle movements, the activation of knowledge about the sequence of phonemes (consonant and vowel speech sounds) that makes up a word is known as phonological retrieval. Wernicke\u2019s area contributes to phonological retrieval. All speech production tasks (e.g. word retrieval, repetition, and reading aloud) require phonological retrieval. In speech repetition, phonological retrieval is served by the auditory phoneme perception system, while in reading aloud it is served by the visual letter perception system. Communicative speech production entails a phase preceding phonological retrieval, and speech comprehension involves mapping sequences of phonemes onto word meanings."} {"text":"The angular gyrus is an important element in processing concrete and abstract concepts. It also plays a role in verbal working memory during retrieval of verbal information and in visual memory when turning written language into spoken language. The left AG is activated in semantic processing requiring concept retrieval and conceptual integration. Moreover, the left AG is activated during multiplication and addition problems that require retrieval of arithmetic facts from verbal memory. Therefore, it is involved in the verbal coding of numbers."} {"text":"The insula is implicated in speech and language, taking part in functional and structural connections with motor, linguistic, sensory, and limbic brain areas. The knowledge about the function of the insula in speech production comes from different studies with patients who suffered from apraxia of speech. These studies have revealed the involvement of different parts of the insula. 
These parts are: the left anterior insula, which is related to speech production; and the bilateral anterior insula, involved in misleading speech comprehension."} {"text":"Many sources state that the study of the brain, and therefore of language disorders, originated in the 19th century, and that linguistic analysis of those disorders developed throughout the 20th century. Studying language impairments after brain injuries helps in understanding how the brain works and how it changes after an injury. When this happens, the brain suffers an impairment that is referred to as \u201caphasia\u201d. Lesions to Broca's area resulted primarily in disruptions to speech production; damage to Wernicke's area, which is located in the lower part of the temporal lobe, led mainly to disruptions in speech reception."} {"text":"There are numerous distinctive ways in which language can be affected. Phonemic paraphasia, an attribute of conduction aphasia and Wernicke's aphasia, is not a speech comprehension impairment. Instead, it is a speech production impairment in which the desired phonemes are selected erroneously or in an incorrect sequence. Therefore, although Wernicke\u2019s aphasia, a combined impairment of the phonological retrieval and semantic systems, affects speech comprehension, it also involves speech production damage. Phonemic paraphasia and anomia (impaired word retrieval) are the results of phonological retrieval impairment."} {"text":"Another lesion that involves impairment in language production and processing is \u201capraxia of speech\u201d, a difficulty in synchronizing the articulators essential for speech production. This lesion is located in the superior pre-central gyrus of the insula and is more likely to occur in patients with Broca\u2019s aphasia. Lesions of the dominant ventral anterior (VA) nucleus, which is engaged in language processing, result in word-finding difficulties and semantic paraphasias. 
Moreover, individuals with thalamic lesions experience difficulties linking semantic concepts with correct phonological representations in word production."} {"text":"Dyslexia is a language processing disorder. It involves learning difficulties in areas such as reading, writing, word recognition, phonological recoding, numeracy, and spelling. Even with access to appropriate intervention during childhood, these difficulties continue throughout the lifespan. Moreover, children are diagnosed with dyslexia when more than one factor affecting learning, such as reading, becomes visible. The assumption of specificity, the idea that children with dyslexia have difficulties in specific areas of cognitive functioning, helps in diagnosing dyslexia."} {"text":"Some characteristics that distinguish dyslexics are weak phonological processing abilities, causing misreading of unfamiliar words and affecting comprehension; inadequate working memory, affecting speaking, reading, and writing; errors in oral reading; difficulties with oral skills such as expressing oneself; and writing problems such as errors of expression and spelling. Dyslexics not only experience learning difficulties but also secondary characteristics, such as difficulties with organization, planning, social interaction, motor skills, visual perception, and short-term memory. These characteristics affect personal and academic life."} {"text":"Dysarthria is a motor speech disorder caused by damage in the central and\/or peripheral nervous system and is related to neurological conditions such as Parkinson\u2019s disease, cerebrovascular accident (CVA), and traumatic brain injury (TBI). Dysarthria is caused by a mechanical difficulty in the vocal cords or by neurological disease, producing abnormal articulation of phonemes, such as a \u201cp\u201d instead of a \u201cb\u201d. 
A type of dyspraxia based on distortions of words is called apraxic dysarthria. This type is related to facial apraxia and to motor aphasia if Broca\u2019s area is involved."} {"text":"Improvements in computer technology in the late 20th century have allowed a better understanding of the correlation between brain and language, and of the disorders that can affect it. This improvement has permitted better visualization of brain structure in high-resolution three-dimensional images. It has also made it possible to observe brain activity through blood flow (Dronkers, Ivanova, & Baldo, 2017)."} {"text":"In the past, research was primarily based on observations of loss of ability resulting from damage to the cerebral cortex. Indeed, medical imaging has represented a radical step forward for research on speech processing. Since then, a whole series of relatively large areas of the brain has been found to be involved in speech processing. In more recent research, subcortical regions (those lying below the cerebral cortex, such as the putamen and the caudate nucleus), as well as the pre-motor areas (BA 6), have received increased attention. 
It is now generally assumed that the following structures of the cerebral cortex near the primary and secondary auditory cortices play a fundamental role in speech processing:"} {"text":"\u00b7 \"Superior temporal gyrus\" (STG): morphosyntactic processing (anterior section), integration of syntactic and semantic information (posterior section)"} {"text":"\u00b7 \"Inferior frontal gyrus\" (IFG, Brodmann area (BA) 45\/47): syntactic processing, working memory"} {"text":"\u00b7 \"Inferior frontal gyrus\" (IFG, BA 44): syntactic processing, working memory"} {"text":"\u00b7 \"Middle temporal gyrus\" (MTG): lexical semantic processing"} {"text":"\u00b7 \"Angular gyrus\" (AG): semantic processes (posterior temporal cortex)"} {"text":"The left hemisphere is usually dominant in right-handed people, although bilateral activations are not uncommon in the area of syntactic processing. It is now accepted that the right hemisphere plays an important role in the processing of suprasegmental acoustic features like prosody, which is \u201cthe rhythmic and melodic variations in speech\u201d. There are two types of prosodic information: emotional prosody (right hemisphere), which is the emotional tone that the speaker gives to speech, and linguistic prosody (left hemisphere), the syntactic and thematic structure of the speech."} {"text":"Most areas of speech processing develop in the second year of life in the dominant half (hemisphere) of the brain, which often (though not necessarily) corresponds to the side opposite the dominant hand. 98% of right-handed people are left-hemisphere dominant, and the majority of left-handed people are as well."} {"text":"Computerized tomography (CT) scanning is another technique, dating from the 1970s, which produces images of low spatial resolution but provides the location of an injury \"in vivo\". 
Moreover, Voxel-based Lesion-Symptom Mapping (VLSM) and Voxel-Based Morphometry (VBM) techniques have contributed to the understanding that specific brain regions play different roles in supporting speech processing. VLSM has been used to observe complex language functions sustained by different regions. Furthermore, VBM is a helpful technique for analyzing language impairments related to neurodegenerative disease."} {"text":"In summary, these early research efforts demonstrated that semantic and structural speech production takes place in different areas of the brain."} {"text":"Fluency (also called volubility and eloquence) is the property of a person or of a system that delivers information quickly and with expertise."} {"text":"Language fluency is one of a variety of terms used to characterize or measure a person's language ability, often used in conjunction with accuracy and complexity. Although there are no widely agreed-upon definitions or measures of language fluency, someone is typically said to be fluent if their use of the language appears \"fluid\", or natural, coherent, and easy, as opposed to slow, halting use. In other words, fluency is often described as the ability to produce language on demand and be understood."} {"text":"Language fluency is sometimes contrasted with accuracy (or correctness of language use, especially grammatical correctness) and complexity (or a more encompassing knowledge of vocabulary and discourse strategies). Fluency, accuracy, and complexity are distinct but interrelated components of language acquisition and proficiency."} {"text":"There are four commonly discussed types of fluency: reading fluency, oral fluency, oral-reading fluency, and written or compositional fluency. These types of fluency are interrelated, but do not necessarily develop in tandem or linearly. 
One may develop fluency in certain types and be less fluent or nonfluent in others."} {"text":"In the sense of proficiency, \"fluency\" encompasses a number of related but separable skills:"} {"text":"Although it is often assumed that young children learn languages more easily than adolescents and adults, the reverse is in fact true: older learners are faster. The only exception to this rule is in pronunciation. Young children invariably learn to speak their second language with native-like pronunciation, whereas learners who start learning a language at an older age only rarely reach a native-like level."} {"text":"Since childhood is a critical period, widespread opinion holds that it is easier for young children to learn a second language than it is for adults. Children can even acquire native fluency when exposed to the language on a consistent basis with rich interaction in a social setting. In addition to capacity, factors such as 1) motivation, 2) aptitude, 3) personality characteristics, 4) age of acquisition, 5) first-language typology, 6) socio-economic status, and 7) quality and context of L2 input play a role in the rate of L2 acquisition and in building fluency. Second language acquisition (SLA) has the ability to influence children\u2019s cognitive growth and linguistic development."} {"text":"The skill of producing words in the target language develops until adolescence. The natural ability to acquire a new language without deliberate effort may begin to diminish around puberty, i.e. at 12\u201314 years of age. The learning environment, comprehensible instructional materials, the teacher, and the learner are indispensable elements in SLA and in developing fluency in children. Extensive reading in L2 can offer twofold benefits in foreign language learning, i.e. 
\"reading to comprehend English and reading to learn English\"."} {"text":"Paradis's (2006) study on childhood language acquisition and building fluency examines how first- and second-language acquisition patterns are generally similar, including in vocabulary and morphosyntax. The phonology of the first language is usually apparent in SLA, and initial L1 influence can be lifelong, even for child L2 learners."} {"text":"Children can acquire a second language simultaneously (learning L1 and L2 at the same time) or sequentially (learning L1 first and then L2). In the end, they develop fluency in both, with the dominant language being the one largely spoken by the community they live in."} {"text":"According to one source, there are five stages of SLA and developing fluency:"} {"text":"The process of learning a second language, or \"L2\", differs between older and younger learners because of working memory. Working memory, which is connected to fluency because it deals with automatic responses, is vital to language acquisition: information is stored and manipulated temporarily, with words filtered, processed, and rehearsed while attention turns to the next piece of interaction. The false starts, pauses, and repetitions found in fluency assessments can also be traced to working memory as part of communication."} {"text":"Those with education at or below a high school level are least likely to take language classes. It has also been found that women and young immigrants are more likely to take language classes. Further, highly educated immigrants who are searching for skilled jobs \u2013 which require interpersonal and intercultural skills that are difficult to learn \u2013 are the most affected by lower fluency in the L2."} {"text":"Fluency is a speech-language pathology term that means the smoothness or flow with which sounds, syllables, words and phrases are joined together when speaking quickly. 
\"Fluency disorders\" is used as a collective term for cluttering and stuttering. Both disorders have breaks in the fluidity of speech, and both have the fluency breakdown of repetition of parts of speech."} {"text":"Studies in the assessment of creativity list fluency as one of the four primary elements in creative thinking, the others being flexibility, originality and elaboration. Fluency in creative thinking is seen as the ability to think of many diverse ideas quickly."} {"text":"The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism (1923) is a book by C. K. Ogden and I. A. Richards. It is accompanied by two supplementary essays by Bronis\u0142aw Malinowski and F. G. Crookshank. The conception of the book arose during a two-hour conversation between Ogden and Richards held on a staircase in a house next to the Cavendish Laboratories at 11 pm on Armistice Day, 1918."} {"text":"The original text was published in 1923 and has been used as a textbook in many fields including linguistics, philosophy, language, cognitive science and most recently semantics and semiotics in general. The book has been in print continuously since 1982. The most recent edition is the critical edition prepared by W. Terrence Gordon as volume 3 of the 5-volume set \"C. K. Ogden & Linguistics\" (London: Routledge\/Thoemmes Press, 1995). The full publication history, including serialised publication in \"The Cambridge Magazine\" prior to the first edition of the book, is in W. Terrence Gordon's, \"C. K. Ogden: a bio-bibliographical study\"."} {"text":"Richards sets forth a contextual theory of Signs: that Words and Things are connected \u201cthrough their occurrence together with things, their linkage with them in a \u2018context\u2019 that Symbols come to play that important part in our life [even] the source of all our power over the external world\u201d (47). 
In this context system, Richards develops a tripartite semiotics of symbol, thought, and referent, with three relations between them (thought\u2013symbol = correct, thought\u2013referent = adequate, symbol\u2013referent = true) (11). Symbols are \u201cthose signs which men use to communicate one with another and as instruments of thought, occupy a peculiar place\u201d (23). \u201cAll discursive symbolization involves [\u2026] weaving together of contexts into higher contexts\u201d (220). So for a word to be understood \u201crequires that it form a context with further experiences\u201d (210)."} {"text":"The book would later influence A. J. Ayer's \"Language, Truth, and Logic\", an introduction to logical positivism, and both the Richards\u2013Ogden book and the Ayer book would, in turn, influence Alec King and Martin Ketley in the writing of their book \"The Control of Language\", which appeared in 1939, and which influenced C. S. Lewis in the writing of his defence of natural law and objective values, \"The Abolition of Man\" (1943)."} {"text":"TRACE is a connectionist model of speech perception, proposed by James McClelland and Jeffrey Elman in 1986. It is based on a structure called \"the Trace,\" a dynamic processing structure made up of a network of units, which serves as the system's working memory as well as the perceptual processing mechanism. TRACE was made into a working computer program for running perceptual simulations. These simulations are predictions about how a human mind\/brain processes speech sounds and words as they are heard in real time."} {"text":"\"TRACE was the first model that instantiated the activation of multiple word candidates that match any part of the speech input.\" A simulation of speech perception involves presenting the TRACE computer program with mock speech input, running the program, and generating a result. 
A simulation is considered successful when its result is meaningfully similar to how people process speech."} {"text":"It is generally accepted in psycholinguistics that (1) when the beginning of a word is heard, a set of words that share the same initial sound become activated in memory, (2) the words that are activated compete with each other while more and more of the word is heard, (3) at some point, due to both the auditory input and the lexical competition, one word is recognized."} {"text":"For example, a listener hears the beginning of \"bald\", and the words bald, ball, bad, bill become active in memory. Then, soon after, only bald and ball remain in competition (bad, bill have been eliminated because the vowel sound doesn't match the input). Soon after, bald is recognized. TRACE simulates this process by representing the temporal dimension of speech, allowing words in the lexicon to vary in activation strength, and by having words compete during processing. Figure 1 shows a line graph of word activation in a simple TRACE simulation."} {"text":"Speakers usually don't leave pauses in between words when speaking, yet listeners seem to have no difficulty hearing speech as a sequence of words. This is known as the segmentation problem, and is one of the oldest problems in the psychology of language. TRACE proposed the following solution, backed up by simulations. When words become activated and recognized, this reveals the location of word boundaries. Stronger word activation leads to greater confidence about word boundaries, which informs the hearer of where to expect the next word to begin."} {"text":"The TRACE model is a connectionist network with an input layer and three processing layers: pseudo-spectra (feature), phoneme and word. Figure 2 shows a schematic diagram of TRACE. 
There are three types of connectivity: (1) feedforward excitatory connections from input to features, features to phonemes, and phonemes to words; (2) lateral (i.e., within layer) inhibitory connections at the feature, phoneme and word layers; and (3) top-down feedback excitatory connections from words to phonemes."} {"text":"The input to TRACE works as follows. The user provides a phoneme sequence that is converted into a multi-dimensional feature vector. This is an approximation of acoustic spectra extended in time."} {"text":"The input vector is revealed a little at a time to simulate the temporal nature of speech. As each new chunk of input is presented, this sends activity along the network connections, changing the activation values in the processing layers. Features activate phoneme units, and phonemes activate word units. Parameters govern the strength of the excitatory and inhibitory connections, as well as many other processing details."} {"text":"There is no specific mechanism that determines when a word or a phoneme has been recognized. If simulations are being compared to reaction time data from a perceptual experiment (e.g. lexical decision), then typically an activation threshold is used. This allows for the model behavior to be interpreted as recognition, and a recognition time to be recorded as the number of processing cycles that have elapsed. For deeper understanding of TRACE processing dynamics, readers are referred to the original publication and to a TRACE software tool that runs simulations with a graphical user interface."} {"text":"Models of language processing can be used to conceptualize the nature of impairment in persons with speech and language disorder. For example, it has been suggested that language deficits in expressive aphasia may be caused by excessive competition between lexical units, thus preventing any word from becoming sufficiently activated. 
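The interactive-activation dynamics described above (bottom-up excitation, lateral inhibition, decay, and threshold-based recognition) can be sketched in a few lines of Python. This is a toy illustration only: the parameter values and the whole-word matching rule are invented for the example, whereas the real TRACE model operates over feature and phoneme layers with many more parameters.

```python
# Toy sketch of TRACE-style lexical competition. Parameter values and the
# whole-word matching rule are invented for illustration; the real model
# operates over feature and phoneme layers as well.
def simulate(lexicon, input_word, excite=0.15, inhibit=0.05, decay=0.02,
             threshold=1.0, max_cycles=50):
    """Reveal the input one phoneme per cycle; return (winner, cycle)."""
    act = {w: 0.0 for w in lexicon}
    for cycle in range(1, max_cycles + 1):
        revealed = input_word[:min(cycle, len(input_word))]
        snapshot = dict(act)
        for w in lexicon:
            a = snapshot[w]
            if w.startswith(revealed) or revealed.startswith(w):
                a += excite                     # bottom-up support from input
            rivals = sum(v for x, v in snapshot.items() if x != w)
            act[w] = max(0.0, a - inhibit * rivals - decay)
        best = max(act, key=act.get)
        if act[best] >= threshold:              # recognition threshold
            return best, cycle
    return None, max_cycles

winner, cycles = simulate(["bald", "ball", "bad", "bill"], "bald")
# "bald" should out-compete the others once the full input is revealed.
```

With the example lexicon, the candidates sharing the initial sounds stay active until the input disambiguates them, qualitatively mirroring the competition account of recognition; conversely, raising the inhibition parameter can keep every word below threshold, loosely analogous to the excessive-competition hypothesis about aphasia mentioned above.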
Arguments for this hypothesis hold that mental dysfunction can be explained by slight perturbation of the network model's processing. This emerging line of research incorporates a wide range of theories and models, and TRACE represents just one piece of a growing puzzle."} {"text":"Psycholinguistic models of speech perception, e.g. TRACE, must be distinguished from computer speech recognition tools. The former are psychological theories about how the human mind\/brain processes information. The latter are engineered solutions for converting an acoustic signal into text. Historically, the two fields have had little contact, but this is beginning to change."} {"text":"TRACE\u2019s influence in the psychology literature can be assessed by the number of articles that cite it. There are 345 citations of McClelland and Elman (1986) in the PsycINFO database. Figure 3 shows the distribution of those citations over the years since publication. The figure suggests that interest in TRACE grew significantly in 2001, and has remained strong, with about 30 citations per year."} {"text":"Baby talk is a type of speech associated with an older person speaking to a child. It is also called caretaker speech, infant-directed speech (IDS), child-directed speech (CDS), child-directed language (CDL), caregiver register, parentese, or motherese."} {"text":"CDS is characterized by a \"sing song\" pattern of intonation that differentiates it from the more monotone style used with other adults; e.g., CDS has higher and wider pitch, slower speech rate and shorter utterances. It can display vowel hyperarticulation (an increase in distance in the formant space of the peripheral vowels, e.g., [i], [u], and [a]) and words tend to be shortened and simplified. There is evidence that the exaggerated pitch modifications are similar to the affectionate speech style employed when people speak to their pets (pet-directed speech). 
However, the hyperarticulation of vowels appears to be related to the propensity for the infant to learn language, as it is not exaggerated in speech to infants with hearing loss or to pets."} {"text":"CDS is a clear and simplified strategy for communicating to younger children, used by adults and by older children. The vocabulary is limited, speech is slowed with a greater number of pauses, and the sentences are short and grammatically simplified, often repeated. Although CDS features marked auditory characteristics, other factors aid in the development of language. Three types of modifications occur to adult-directed speech in the production of CDS."} {"text":"The younger the child, the more exaggerated the adult's CDS is. The attention of infants is held more readily by CDS over normal speech, as with adults. The more expressive CDS is, the more likely infants are to respond to this method of communication by adults."} {"text":"A key visual aspect of CDS is the movement of the lips. One characteristic is the wider opening of the mouth present in those using CDS versus adult-directed speech, particularly in vowels. Research suggests that with the larger opening of the lips during CDS, infants are better able to grasp the message being conveyed due to the heightened visual cues."} {"text":"Through this interaction, infants are able to determine who positive and encouraging caregivers will be in their development. When infants use CDS as a determinant of acceptable caregivers, their cognitive development seems to thrive because they are being encouraged by adults who are invested in the development of the given infants. 
Because the process is interactive, caregivers are able to make significant progress in the infant's development through the use of CDS."} {"text":"Studies have shown that from birth, infants prefer to listen to CDS, which is more effective than regular speech in getting and holding an infant's attention."} {"text":"Some researchers believe that CDS is an important part of the emotional bonding process between the parents and their child, and helps the infants learn the language. Researchers at Carnegie Mellon University and the University of Wisconsin found that using basic \u201cbaby talk\u201d may support babies in picking up words faster. Infants pay more attention when parents use CDS, which has a slower and more repetitive tone than that used in regular conversation."} {"text":"CDS has been observed in languages other than English."} {"text":"Purposes and benefits of CDS include supporting the ability of infants to bond with their caregivers. In addition, infants begin the process of speech and language acquisition and development through CDS."} {"text":"Children learn fastest when they receive the most acknowledgement and encouragement of what they say, are given time and attention to speak and share, and are questioned. Infants are able to apply this to larger words and sentences as they learn to process language."} {"text":"CDS aids infants in bonding to caregivers. Although infants have a range of social cues available to them regarding who will provide adequate care, CDS serves as an additional indicator as to which caregivers will provide developmental support. When adults engage in CDS with infants, they are providing positive emotion and attention, signaling to infants that they are valued."} {"text":"CDS can also serve as a priming tool for infants to notice the faces of their caregivers. Infants are more sensitive to the pitch and emphasized qualities of this method. 
Therefore, when caregivers use CDS, they expand the possibility for their infants to observe and process facial expressions. This effect could in part be due to infants associating CDS with positive facial expressions such as smiling, which makes them more likely to respond to CDS if they expect to receive a positive response from their caregiver."} {"text":"CDS may promote processing of word forms, allowing infants to remember words when asked to recall them in the future. As words are repeated through CDS, infants begin to create mental representations of each word. As a result, infants who experience CDS are able to recall words more effectively than infants who do not."} {"text":"Infants can pick up on the vocal cues of CDS and will often pattern their babbling after it."} {"text":"The use of baby talk is not limited to interactions between adults and infants, as it may be used among adults, or by people to animals. In these instances, the outward style of the language may be that of baby talk, but is not considered actual \"parentese\", as it serves a different linguistic function (see pragmatics)."} {"text":"Baby talk and imitations of it may be used by one non-infant to another as a form of verbal abuse, in which the talk is intended to infantilize the victim. This can occur during bullying, when the aggressor uses baby talk to assert that the victim is weak, cowardly, overemotional, or otherwise inferior."} {"text":"Baby talk may be used as a form of flirtation between sexual or romantic partners. In this instance, the baby talk may be an expression of tender intimacy, and may form part of affectionate sexual roleplaying in which one partner speaks and behaves childishly, while the other acts motherly or fatherly, responding in \"parentese\". One or both partners might perform the child role. 
Terms of endearment, such as \"poppet\" (or, indicatively, \"baby\"), may be used for the same purpose in communication between the partners."} {"text":"A significant difference is that CDL contains many more sentences about specific bits of information, such as \"This cup is red,\" because they are intended to teach children about language and the environment. Pet-speech contains perhaps half the sentences of this form, since, rather than being instructive, its primary purpose is social; whether the dog learns anything does not seem to be a concern."} {"text":"As well as the raised vocal pitch, pet-speech strongly emphasizes intonations and emotional phrasing. There are diminutives such as \"walkie\" for walk and \"bathie\" for bath."} {"text":"Researchers Bryant and Barrett (2007) have suggested (as have others before them, e.g., Fernald, 1992) that CDL exists universally across all cultures and is a species-specific adaptation. Other researchers contend that it is not universal among the world's cultures, and argue that its role in helping children learn grammar has been overestimated, pointing out that in some societies (such as certain Samoan tribes), adults do not speak to their children at all until the children reach a certain age. Furthermore, even where baby-talk is used, it has many complicated grammatical constructions, and mispronounced or non-standard words."} {"text":"Other evidence suggests that baby talk is not a universal phenomenon: for example, Schieffelin & Ochs (1983) describe the Kaluli tribe of Papua New Guinea, who do not typically employ CDS. Language acquisition in Kaluli children was not found to be significantly impaired."} {"text":"The extent to which caregivers rely on and use CDS differs based on cultural differences. Mothers in regions that display predominantly introverted cultures are less likely to display a great deal of CDS, although it is still used. 
Further, the personality of each child experiencing CDS from a caregiver deeply impacts the extent to which a caregiver will use this method of communication."} {"text":"As noted above, baby talk often involves shortening and simplifying words, with the possible addition of slurred words and nonverbal utterances, and can invoke a vocabulary of its own. Some utterances are invented by parents within a particular family unit, or are passed down from parent to parent over generations, while others are quite widely known and used within most families, such as \"wawa\" for water, \"num-num\" for a meal, \"ba-ba\" for bottle, or \"beddy-bye\" for bedtime, and are considered \"standard\" or \"traditional\" words, possibly differing in meaning from place to place."} {"text":"Baby talk, regardless of language, usually consists of a muddle of words, including names for family members, names for animals, eating and meals, bodily functions and genitals, sleeping, pain, possibly including important objects such as diaper, blanket, pacifier, bottle, etc., and may be sprinkled with nonverbal utterances, such as \"goo goo ga ga\". The vocabulary of made-up words, such as those listed below, may be quite long, with terms for a large number of things and little or no use of proper language, or quite short, dominated by real words, all of them nouns. Most words invented by parents have a logical meaning, although the nonverbal sounds are usually completely meaningless and just fit the speech together."} {"text":"Sometimes baby talk words escape from the nursery and get into adult vocabulary, for example \"nanny\" for \"children's nurse\" or \"nursery governess\"."} {"text":"Moreover, many words can be derived into baby talk following certain rules of transformation. In English, adding a terminal \/i\/ sound at the end, usually written and spelled as \u2039ie\u203a, \u2039y\u203a, or \u2039ey\u203a, is a common way to form a diminutive, which is often used as part of baby talk. 
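As a toy illustration, the terminal /i/ rule just described can be coded as a small function. The spelling adjustments below (dropping a silent final -e, doubling a short final consonant) are simplified assumptions for the example, not a complete account of English diminutive orthography.

```python
# Toy illustration of the terminal /i/ diminutive rule. The spelling
# adjustments (dropping silent -e, doubling a short final consonant) are
# simplified assumptions, not a complete account of English orthography.
VOWELS = "aeiou"

def diminutive(word):
    """Return a baby-talk diminutive, e.g. 'dog' -> 'doggy'."""
    if word.endswith("e"):                              # horse -> horsey
        return word[:-1] + "ey"
    if (2 <= len(word) <= 4
            and word[-1] not in VOWELS and word[-2] in VOWELS):
        return word + word[-1] + "y"                    # dog -> doggy
    return word + "y"                                   # blank -> blanky
```

Real baby-talk forms vary ("walkie", "bathie", "doggy"), so any such rule set is only approximate.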
Many languages have their own unique form of diminutive suffix (see list of diminutives by language for international examples)."} {"text":"Still other transformations, but not in all languages, include elongated vowels, such as \"kitty\" and \"kiiiitty\" (with an emphasized \/i\/), which mean the same thing. While this is understood by English-speaking toddlers, it does not apply to Dutch toddlers, who learn that elongated vowels reference different words."} {"text":"Linguistic competence is the system of linguistic knowledge possessed by native speakers of a language. It is distinguished from linguistic performance, which is the way a language system is used in communication. Noam Chomsky introduced this concept in his elaboration of generative grammar, where it has been widely adopted and competence is the only level of language that is studied."} {"text":"According to Chomsky, competence is the ideal language system that enables speakers to produce and understand an infinite number of sentences in their language, and to distinguish grammatical sentences from ungrammatical sentences. This is unaffected by \"grammatically irrelevant conditions\" such as speech errors. In Chomsky's view, competence can be studied independently of language use, which falls under \"performance\", for example through introspection and grammaticality judgments by native speakers."} {"text":"Many other linguists \u2013 functionalists, cognitive linguists, psycholinguists, sociolinguists and others \u2013 have rejected this distinction, critiquing it as a concept that considers empirical work irrelevant, leaving out many important aspects of language use. 
Also, it has been argued that the distinction is often used to exclude real data that is, in the words of William Labov, \"inconvenient to handle\" within generativist theory."} {"text":"Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its (the speech community's) language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance. (Chomsky, 1965, p. 3)"} {"text":"Chomsky differentiates competence, which is an idealized capacity, from performance, the production of actual utterances. According to him, competence is the ideal speaker-hearer's knowledge of his or her language and it is the 'mental reality' which is responsible for all those aspects of language use which can be characterized as 'linguistic'. Chomsky argues that only under an idealized situation whereby the speaker-hearer is unaffected by grammatically irrelevant conditions such as memory limitations and distractions will performance be a direct reflection of competence. A sample of natural speech consisting of numerous false starts and other deviations will not provide such data. Therefore, he claims that a fundamental distinction has to be made between competence and performance."} {"text":"Chomsky dismissed criticisms of delimiting the study of performance in favor of the study of underlying competence, as unwarranted and completely misdirected. 
He claims that the descriptivist limitation-in-principle to classifying and organizing data, the practice of \"extracting patterns\" from a corpus of observed speech, and the describing of \"speech habits\" are core factors precluding the development of a theory of actual performance."} {"text":"Linguistic competence is treated as a more comprehensive term by lexicalists, such as Jackendoff and Pustejovsky, within the generative school of thought. They assume a modular lexicon, a set of lexical entries containing semantic, syntactic and phonological information deemed necessary to parse a sentence. In the generative lexicalist view this information is intimately tied up with linguistic competence. Nevertheless, their models are still in line with the mainstream generative research in adhering to strong innateness, modularity and autonomy of syntax."} {"text":"Ray S. Jackendoff's model deviates from the traditional generative grammar in that it does not treat syntax, as Chomsky does, as the main generative component from which meaning and phonology are developed. According to him, a generative grammar consists of five major components: the lexicon, the base component, the transformational component, the phonological component and the semantic component."} {"text":"Against the syntax-centered view of generative grammar (syntactocentrism), he specifically treats phonology, syntax and semantics as three parallel generative processes, coordinated through interface processes. He further subdivides each of those three processes into various \"tiers\", themselves coordinated by interfaces. Yet, he clarifies that those interfaces are not sensitive to every aspect of the processes they coordinate. 
For instance, phonology is affected by some aspects of syntax, but not vice versa."} {"text":"In contrast to the static view of word meaning (where each word is characterized by a predetermined number of word senses) which imposes a tremendous bottleneck on the performance capability of any natural language processing system, Pustejovsky proposes that the lexicon becomes an active and central component in the linguistic description. The essence of his theory is that the lexicon functions generatively, first by providing a rich and expressive vocabulary for characterizing lexical information; then, by developing a framework for manipulating fine-grained distinctions in word descriptions; and finally, by formalizing a set of mechanisms for specialized composition of aspects of such descriptions of words so that, as they occur in context, extended and novel senses are generated."} {"text":"Katz and Fodor suggest that a grammar should be thought of as a system of rules relating the externalized form of the sentences of a language to their meanings that are to be expressed in a universal semantic representation, just as sounds are expressed in a universal phonetic representation. They hope that by making semantics an explicit part of generative grammar, more incisive studies of meaning would be possible. Since they assume that semantic representations are not formally similar to syntactic structure, they suggest a complete linguistic description must therefore include a new set of rules, a semantic component, to relate meanings to syntactic and\/or phonological structure. Their theory can be reflected by their slogan \"linguistic description minus grammar equals semantics\"."} {"text":"A broad front of linguists have critiqued the notion of linguistic competence, often severely. 
Functionalists, who argue for a usage-based approach to linguistics, argue that linguistic competence is derived from and informed by language use (performance), taking the directly opposite view to the generative model. As a result, in functionalist theories emphasis is placed on experimental methods to understand the linguistic competence of individuals."} {"text":"Sociolinguists have argued that the competence\/performance distinction basically serves to privilege data from certain linguistic genres and socio-linguistic registers as used by the prestige group, while discounting evidence from low-prestige genres and registers as being simply mis-performance."} {"text":"Noted linguist John Lyons, who works on semantics, has said:"} {"text":"Dell Hymes, quoting Lyons as above, says that \"probably now there is widespread agreement\" with the"} {"text":"Many linguists including M.A.K. Halliday and Labov have argued that the competence\/performance distinction makes it difficult to explain language change and grammaticalization, which can be viewed as changes in performance rather than competence."} {"text":"Another critique of the concept of linguistic competence is that it does not fit the data from actual usage where the felicity of an utterance often depends largely on the communicative context."} {"text":"Neurolinguist Harold Goodglass has argued that performance and competence are intertwined in the mind, since, \"like storage and retrieval, they are inextricably linked in brain damage.\""} {"text":"Cognitive Linguistics is a loose collection of systems that gives more weight to semantics, and considers all usage phenomena including metaphor and language change. Here, a number of pioneers such as George Lakoff, Ronald Langacker, and Michael Tomasello have strongly opposed the competence-performance distinction. 
In their textbook, Vyvyan Evans and Melanie Green write:"} {"text":"\"In rejecting the distinction between competence and performance cognitive linguists argue that knowledge of language is derived from patterns of language use, and further, that knowledge of language is knowledge of how language is used.\" p.\u00a0110"} {"text":"Numerous experiments on infants in the last two decades have shown that they are able to segment words (frequently co-occurring sound sequences) from other sounds in a stream of meaningless syllables. This, together with computational results that recurrent neural networks can learn syntax-like patterns, resulted in a wide questioning of nativist assumptions underlying psycholinguistic work up to the nineties."} {"text":"According to experimental linguist N.S. Sutherland, the task of psycholinguistics is not to confirm Chomsky's account of linguistic competence by undertaking experiments. Rather, it is to find out, by doing experiments, what mechanisms underlie linguistic competence. Psycholinguists generally reject the distinction between performance and competence."} {"text":"Psycholinguists have also criticized the competence-performance distinction with respect to the ability to model dialogue:"} {"text":"The narrow definition of competence espoused by generativists resulted in the field of pragmatics where concerns other than language have become dominant. This has resulted in a more inclusive notion called communicative competence, to include social aspects \u2013 as proposed by Dell Hymes. This situation has had some unfortunate side effects:"} {"text":"Hymes's major criticism of Chomsky's notion of linguistic competence is the inadequate distinction between competence and performance. Furthermore, he commented that it is unreal and that no significant progress in linguistics is possible without studying forms along with the ways in which they are used. 
As such, linguistic competence should fall under the domain of communicative competence since it comprises four competence areas, namely, linguistic, sociolinguistic, discourse and strategic."} {"text":"Linguistic competence is commonly used and discussed in many language acquisition studies. Some of the more common ones are in the language acquisition of children, aphasics and multilinguals."} {"text":"The Chomskyan view of language acquisition argues that humans have an innate ability \u2013 universal grammar \u2013 to acquire language. However, a list of universal aspects underlying all languages has been hard to identify."} {"text":"Another view, held by scientists specializing in language acquisition, such as Tomasello, argues that young children's early language is concrete and item-based, which implies that their speech is based on the lexical items known to them from the environment and the language of their caretakers. In addition, children do not produce creative utterances about past experiences and future expectations because they have not had enough exposure to their target language to do so. This indicates that exposure to language plays more of a role in a child's linguistic competence than just their innate abilities."} {"text":"Aphasia refers to a family of clinically diverse disorders that affect the ability to communicate by oral or written language, or both, following brain damage. In aphasia, the inherent neurological damage is frequently assumed to be a loss of implicit linguistic competence that has damaged or wiped out neural centers or pathways that are necessary for maintenance of the language rules and representations needed to communicate. The measurement of implicit language competence, although apparently necessary and satisfying for theoretic linguistics, is complexly interwoven with performance factors. 
Transience, stimulability, and variability in aphasia language use provide evidence for an access deficit model that supports performance loss."} {"text":"The definition of a multilingual has not always been clear-cut. In defining a multilingual, the pronunciation, morphology and syntax used by the speaker in the language are key criteria used in the assessment. Sometimes the mastery of the vocabulary is also taken into consideration, but it is not the most important criterion, as one can acquire the lexicon in the language without knowing the proper use of it."} {"text":"When discussing the linguistic competence of a multilingual, both communicative competence and grammatical competence are often taken into consideration as it is imperative for a speaker to have the knowledge to use language correctly and accurately. To test for grammatical competence in a speaker, grammaticality judgments of utterances are often used. Communicative competence, on the other hand, is assessed through the use of appropriate utterances in different settings."} {"text":"Language is often implicated in humor. For example, the structural ambiguity of sentences is a key source for jokes. Take Groucho Marx's line from \"Animal Crackers\": \"One morning I shot an elephant in my pajamas; how he got into my pajamas I'll never know.\" The joke is funny because the main sentence could theoretically mean either that (1) the speaker, while wearing pajamas, shot an elephant or (2) the speaker shot an elephant that was inside his pajamas."} {"text":"The Hopi time controversy is the academic debate about how the Hopi language grammaticalizes the concept of time, and about whether the differences between the ways the English and Hopi languages describe time are an example of linguistic relativity or not. 
In popular discourse the debate is often framed as a question about whether the Hopi \"had a concept of time\", despite it now being well established that they do."} {"text":"The Hopi language is a Native American language of the Uto-Aztecan language family, which is spoken by some 5,000 Hopi people in the Hopi Reservation in Northeastern Arizona, US."} {"text":"In the Hopi language there is no word exactly corresponding to the English noun \"time\". Hopi employs different words to refer to \"a duration of time\" (\"p\u00e0asa\" \"for that long\"), to a point in time (\"p\u00e0asat\" \"at that time\"), and time as measured by a clock (\"pah\u00e0ntawa\"), as an occasion to do something (\"hisat\" or \"qeni\"), a turn or the appropriate time for doing something (\"qeniptsi\" (noun)), and to have time for something (\"aw n\u00e1naptsiwta\" (verb))."} {"text":"Time reference can be marked on verbs using the suffix \"-ni\"."} {"text":"The -ni suffix is also used in the word \"naatoniqa\", which means \"that which will happen yet\" in reference to the future. This word is formed from the adverb \"naato\" \"yet\", the \"-ni\" suffix and the clitic -qa that forms a relative clause with the meaning \"that which...\"."} {"text":"The -\"ni\" suffix is also obligatory on the main verb in conditional clauses:"} {"text":"The suffix is also used in conditional clauses referring to a past context, then often combined with the particle \"as\" that carries past tense or counterfactual meaning, or describes unachieved intent:"} {"text":"The suffix \"-ngwu\" describes actions taking place habitually or as a general rule."} {"text":"Whorf published several articles on Hopi grammar, focusing particularly on the ways in which the grammatical categories of Hopi encoded information about events and processes, and how this correlated with aspects of Hopi culture and behavior. 
After his death his full sketch of Hopi grammar was published by his friend the linguist Harry Hoijer, and some essays on Native American linguistics, many of which had been previously published in academic journals, were published in 1956 in the anthology \"Language, Thought, and Reality\" by his friend psychologist John Bissell Carroll."} {"text":"Whorf's most frequently cited statement regarding Hopi time is the strongly worded introduction of his 1936 paper \"An American Indian model of the Universe\", which was first published posthumously in Carroll's edited volume. Here he writes that"} {"text":"Whorf argues that in Hopi units of time are not represented by nouns, but by adverbs or verbs. Whorf argues that all Hopi nouns include the notion of a boundary or outline, and that consequently the Hopi language does not refer to abstract concepts with nouns. This, Whorf argues, is encoded in Hopi grammar, which does not allow durations of time to be counted in the same way objects are. So instead of saying, for example, \"three days\", Hopi would say the equivalent of \"on the third day\", using ordinal numbers. Whorf argues that the Hopi do not consider the process of time passing to produce another new day, but merely as bringing back the daylight aspect of the world."} {"text":"Whorf gives slightly different analyses of the grammatical encoding of time in Hopi in his different writings. His first published writing on Hopi grammar was the paper \"The punctual and segmentative aspects of verbs in Hopi\", published in 1936 in \"Language\", the journal of the Linguistic Society of America. Here Whorf analyzed Hopi as having a tense system with a distinction between three tenses: one used for past or present events (which Whorf calls the \"Factual\" tense or \"present-past\"); one for future events; and one for events that are generally or universally true (here called \"usitative\"). This analysis was repeated in a 1937 letter to J. B. 
Carroll, who later published it as part of his selected writings under the title \"Discussion of Hopi Linguistics\"."} {"text":"In the 1940 article \"Science and Linguistics\", Whorf gave the same three-way classification based on the speaker's assertion of the validity of his statement: \"The timeless Hopi verb does not distinguish between the present, past and future of the event itself but must always indicate what type of validity the speaker intends the statement to have: a. report of an event .. b. expectation of an event ..; generalization or law about events.\""} {"text":"In his interpretation of Hopi time Whorf was influenced by Albert Einstein's theory of relativity, which was developed in the first decades of the century and impacted the general Zeitgeist. Whorf, an engineer by profession, in fact made occasional reference to physical relativity, and he adopted the term \"linguistic relativity,\" reflecting the general concept of the different but equally valid interpretations of some aspects of physical reality by different observers due to differences in their (for Einstein) physical circumstances or (for Whorf) their psychological-linguistic circumstances."} {"text":"The most salient points involve the concepts of \"simultaneity\" and \"spacetime\". In his 1905 Special Relativity paper, Einstein maintained that two given events can legitimately be called simultaneous if and only if they take place at the same point in time and in the same point in space. No two events which take place at a spatial distance from one another can legitimately be declared to be simultaneous in any absolute sense, for the judgement of simultaneity or non-simultaneity will depend on the physical circumstances (to be exact: the relative motion) of the observers.
This difference is no artifact; each of the observers is correct (and is wrong only to the extent he or she insists that another observer is incorrect)."} {"text":"Hermann Minkowski, in his seminal 1908 address to the Congress of German Physicists, translated Einstein's 1905 mathematical equations into geometric terms. Minkowski famously declared:"} {"text":"\"Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.\""} {"text":"Spatial distance and temporal distance between any two events was now replaced by a single absolute distance in spacetime."} {"text":"Heynick points to several passages in Whorf's writings on the Hopis which parallel Einsteinian concepts such as:"} {"text":"\"time varies with each observer and does not permit of simultaneity\" (1940)"} {"text":"\"The Hopi metaphysics does not raise the question whether the things at a distant village exist at the same moment as those in one's own village, for it ... says that any 'events' in the distant village can be compared to any events in one's own village only by an interval of magnitude that has both time and space forms in it.\" (c.1936)"} {"text":"The concept of a \"simultaneous now\" throughout the cosmos was formulated by Aristotle, Newton, and most succinctly in John Locke's \"Essay Concerning Human Understanding\" (1690):"} {"text":"\"For this moment is common to all things that are now in being ... they all exist in the same moment of time.\""} {"text":"Whorf saw this notion as derived from the Standard Average European languages in which these thinkers thought: \"Newtonian space, time, and matter are no intuitions. They are recepts from culture and language. 
That is where Newton got them.\""} {"text":"Heynick, who claimed no personal knowledge of the Hopi language, posits alternative weaker and stronger interpretations of the influence of Einsteinian relativity on Whorf's analysis of the Hopi language and the Hopi concept of time. In the weaker version, the (then) new questioning of the nature of time and space brought about by the Einsteinian revolution in physics enabled Whorf to approach the Hopis and their language unburdened by traditional Western concepts and presumptions. The stronger version is that Whorf under the influence of Einstein tended inadvertently to \"read into\" his linguistic and cultural data relativistic concepts where they perhaps were not."} {"text":"In 1964 John Greenway published a humorous portrait of American culture, \"The Inevitable Americans\", in which he wrote: \"You have a watch, because Americans are obsessed with time. If you were a Hopi Indian, you would have none, the Hopi have no concept of time\". And even the 1971 ethnography of the Hopi by Euler and Dobyns claimed that \"The English concept of time is nearly incomprehensible to the Hopi\". The myth quickly became a staple element of New Age conceptualizations of the Hopi."} {"text":"In 1959 philosopher Max Black published a critique of Whorf's arguments in which he argued that the principle of linguistic relativity was obviously wrong because translation between languages is always possible, even when there are no exact correspondences between the single words or concepts in the two languages."} {"text":"Most of \"Hopi Time\" is dedicated to the detailed description of the Hopi usage of words and constructions related to time. 
Malotki describes in detail the usage of a large body of linguistic material: temporal adverbs, time units, time counting practices such as the Hopi calendar, the way that days are counted and time is measured."} {"text":"Linguists and psychologists who work in the universalist tradition, such as Steven Pinker and John McWhorter, have seen Malotki's study as being the final proof that Whorf was an inept linguist and had no significant knowledge or understanding of the Hopi language. This interpretation has been criticized by relativist scholars as unfounded and based on a lack of knowledge of Whorf's work."} {"text":"In spite of Malotki's refutation, the myth that \"the Hopi have no concept of time\" lived on in the popular literature. For example, in her 1989 novel \"Sexing the Cherry\", Jeanette Winterson wrote of the Hopi: \"...their language has no grammar in the way we recognize it. And most bizarre of all, they have no tenses for past, present and future. They do not sense time in that way. For them time is one.\" And the myth continues to be an integral part of New Age thinking that draws on stereotypical depictions of \"timeless Hopi culture\"."} {"text":"Some linguists working on universals of semantics, such as Anna Wierzbicka and Cliff Goddard, argue that there is a Natural Semantic Metalanguage that has a basic vocabulary of semantic \"primes\" including concepts such as . They have argued that Malotki's data show that the Hopi share these primes with English and all other languages, even though it is also clear that the precise way in which these concepts fit into the larger pattern of culture and language practices is different in each language, as illustrated by the differences between Hopi and English."} {"text":"The historian of science G. E. R. Lloyd held that Malotki's investigation \"made it abundantly clear that the Hopi had, and have, no difficulty whatsoever in drawing distinctions between past, present, and future.\"
Some investigators of Puebloan astronomical knowledge have taken a compromise position, noting that while Malotki's study of Hopi temporal concepts and timekeeping practices \"has clearly refuted Whorf's assertion that Hopi is a 'timeless' language, and in doing so has destroyed Whorf's strongest example for linguistic relativity, he presents no naively positivist assertion of the total independence of language and thought.\""} {"text":"In a book review of Hopi Time, Leanne Hinton echoes Lucy's observation that Malotki wrongly characterizes Whorf as claiming that the Hopi have no concept of time or cannot express time. She further claims that Malotki's glosses of Hopi often use English terms for time that do not exactly translate time terms (e.g., translating \"three-repetitions\" in Hopi as \"three times\"), thereby \"mak[ing] the error of attributing temporality to any Hopi sentence that translates into English with a temporal term\". Further, she argues, Malotki never delineates \"Hopi views of time from the views expressed by English translations\", and never answers what is meant by the word \"time\" or what the criteria are for determining whether or not a concept is \"temporal\", thus begging the question."} {"text":"In 1991 Penny Lee published a comparison of Malotki's and Whorf's analyses of the adverbial word class that Whorf had called \"tensors\". She argues that Whorf's analysis captured aspects of Hopi grammar that were not captured by simply describing tensors as falling within the class of temporal adverbs."} {"text":"In 2006 anthropologist David Dinwoodie published a severe critique of Malotki's work, questioning his methods and his presentation of data as well as his analysis. Dinwoodie argues that Malotki fails to adequately support his claim of having demonstrated that the Hopi have a concept of time \"as we know it\".
He provides ethnographic examples of how some Hopi speakers explain the way they experience the difference between a traditional Hopi way of experiencing time as tied closely to cycles of ritual and natural events, and the Anglo-American concept of clock-time or school-time."} {"text":"Looked at from the perspective of the history of science, Hopi conceptions of time and space, which underlie their well-developed observational solar calendar, raise the question of how to translate Hopi conceptions into terms intelligible to Western ears."} {"text":"A reduced relative clause is a relative clause that is \"not\" marked by an explicit relative pronoun or complementizer such as \"who\", \"which\" or \"that\". An example is the clause \"I saw\" in the English sentence \"This is the man \"I saw\".\" Unreduced forms of this relative clause would be \"This is the man \"that I saw\".\" or \"...\"whom I saw\".\""} {"text":"Another form of reduced relative clause is the \"reduced object passive relative clause\", a type of nonfinite clause headed by a past participle, such as the clause \"found here\" in: \"The animals \"found here\" can be dangerous.\""} {"text":"Reduced relative clauses are prone to ambiguity or garden path effects, and have been a common topic of psycholinguistic study, especially in the field of sentence processing."} {"text":"Regular relative clauses are a class of dependent clause (or \"subordinate clause\") that usually modifies a noun. They are typically introduced by one of the relative pronouns \"who\", \"whom\", \"whose\", \"what\", or \"which\" and, in English, by the word \"that\", which may be analyzed either as a relative pronoun or as a relativizer (complementizer); see That as a relativizer."} {"text":"Reduced relative clauses have no such relative pronoun or complementizer introducing them.
The example below contrasts an English non-reduced relative clause and reduced relative clause."} {"text":"Because of the omission of function words, the use of reduced relative clauses, particularly when nested, can give rise to sentences which, while theoretically correct grammatically, are not readily parsed by listeners. A well-known example put forward by linguists is \"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo\", which contains the reduced relative clause \"Buffalo buffalo buffalo\" (meaning \"which buffalo from Buffalo (do) buffalo\")."} {"text":"While reduced relative clauses are not the only structures that create garden path sentences in English (other forms of garden path sentences include those caused by lexical ambiguity, or words that can have more than one meaning), they are the \"classic\" example of garden path sentences, and have been the subject of the most research."} {"text":"Not all grammatical frameworks include reduced relative clauses. The term reduced relative clause comes from transformational generative grammar, which assumes deep structures and surface structures in language. Frameworks that assume no underlying form label non-finite reduced relative clauses as participial phrases."} {"text":"In languages with head-final relative clauses, such as Chinese, Japanese, and Turkish, non-reduced relative clauses may also cause temporary ambiguity because the complementizer does not precede the relative clause (and thus a person reading or hearing the relative clause has no \"warning\" that they are in a relative clause)."} {"text":"In psycholinguistics, a lemma (plural \"lemmas\" or \"lemmata\") is an abstract conceptual form of a word that has been mentally selected for utterance in the early stages of speech production. 
A lemma represents a specific meaning but does not have any specific sounds that are attached to it."} {"text":"This two-stage model is the most widely supported theory of speech production in psycholinguistics, although it has been challenged. For example, there is some evidence to indicate that the grammatical gender of a noun is retrieved from the word's phonological form (the lexeme) rather than from the lemma. This can be explained by models that do not assume a distinct level between the semantic and the phonological stages (and so lack a lemma representation)."} {"text":"During the process of language activation, lemma retrieval is the first step in lexical access. In this step, meaning and the syntactic elements of a lexical item are realized as the lemma. Lemma retrieval, as explained through a spreading-activation theory, is part of a network of separate elements consisting of the abstract concept, the lemma and the lexeme. Lemma retrieval is aided by the activation level of the concept that has yet to be verbalized. When activation takes place on the lemma level, the most highly activated lemma element is selected."} {"text":"Lexical selection experiments have provided evidence that lemma retrieval is affected by the frequency of the word. This indicates that word frequency not only has an effect on the phonological elements of a word but also on the semantic and syntactic elements that make up the lemma."} {"text":"Experiments that have studied the tip-of-the-tongue (TOT) phenomenon have provided evidence that less strong connections of phonological elements (lexemes) and lexical and syntactic representations (lemmas) lead to an inability to retrieve a lexical item.
TOT utterances provide evidence that the lemmas and lexemes are separate processes in language activation."} {"text":"The concept of lemma is similar to the Sanskrit \"spho\u1e6da\" (6th century), an invariant mental word, to which the sound is intimately \u2013 but not indivisibly \u2013 connected."} {"text":"Maledictology (from Latin \"maledicere\", \"to say [something] (\"dicere\") bad (\"male\")\" and Greek \"logia\", \"study of\") is a branch of psychology that does research into cursing and swearing. It is influenced by American psychologist Timothy Jay (Massachusetts College of Liberal Arts) and the philologist and researcher in swearwords Reinhold Aman (California). They assume that swearing is part of human life and can even act as a passive self-defense, since it prevents arguments from becoming physical."} {"text":"Dieter Gilberto Hillert is a German-American biolinguist and cognitive scientist. His research focuses on the human language faculty as a cognitive and neurological system. He is known for work on the neurobiology of language, real-time sentence processing, and language evolution. He advocates comparative evolutionary studies of cognition, argues against tabula rasa models, and favors computational theories of mind."} {"text":"He received several awards from the Alexander von Humboldt Foundation and the Japan Society for the Promotion of Science."} {"text":"The early left anterior negativity (commonly referred to as ELAN) is an event-related potential in electroencephalography (EEG), or component of brain activity that occurs in response to a certain kind of stimulus. It is characterized by a negative-going wave that peaks around 200 milliseconds or less after the onset of a stimulus, and most often occurs in response to linguistic stimuli that violate word-category or phrase structure rules (as in *\"the in room\" instead of \"in the room\").
As such, it is frequently a topic of study in neurolinguistics experiments, specifically in areas such as sentence processing. While it is frequently used in language research, there is no evidence yet that it is necessarily a language-specific phenomenon."} {"text":"More recent work has criticized the design of many of the foundational studies that characterized the ELAN, arguing that apparent ELAN effects might be the result of spillover from words prior to the onset of the critical word. This raises important questions about whether the ELAN is a true ERP component or an artifact of certain experimental designs."} {"text":"The ELAN was first reported by Angela D. Friederici as a response to German sentences with phrase structure violations, such as *\"the pizza was in the eaten\" (as opposed to \"the pizza was eaten\"); it can be elicited by English phrase structure violations such as *\"Max's of proof\" (as opposed to \"Max's proof\") or *\"your write\" (as opposed to \"you write\"). The ELAN is not elicited by sentences with other kinds of grammatical errors, such as subject-verb disagreement (*\"he go to the store\" rather than \"he goes to the store\") or grammatically dispreferred and \"awkward\" sentences (such as \"the doctor charged the patient was lying\" rather than \"the doctor charged that the patient was lying\"); it only appears when it is impossible to build local phrase structure."} {"text":"It appears rapidly, peaking between 100 and 300 milliseconds after the onset of the grammatically incorrect stimulus (other reports have placed its time course, or \"latency\", between 100 and 200 ms, \"under 200 ms\", \"around 125 ms\", or \"about 160 ms\").
The speed of the ELAN may also be affected by characteristics of the violating stimuli; the ELAN appears later in response to visual stimuli that are fuzzy or difficult to see, and may occur earlier in morphologically complex spoken words where much information about the meaning of the word precedes the word's recognition point."} {"text":"Its name derives from the fact that it is picked up most robustly by EEG sensors on the left front regions of the scalp; it may sometimes, however, have a bilateral (both sides of the scalp) distribution."} {"text":"Some authors consider the ELAN to be a separate response from the left anterior negativity (LAN), while others label it as just an early version of the LAN."} {"text":"The ELAN has been reported in languages such as English, German, Dutch, Chinese, and Japanese. It is possible, though, that it is not a response specific to language (in other words, that the ELAN might also occur in response to non-linguistic stimuli)."} {"text":"Bilingual interactive activation plus (BIA+) is a model for understanding the process of bilingual language comprehension and consists of two interactive subsystems: the word identification subsystem and the task\/decision subsystem.
It is the successor of the Bilingual Interactive Activation (BIA) model, which was updated in 2002 to include phonologic and semantic lexical representations, revise the role of language nodes, and specify the purely bottom-up nature of bilingual language processing."} {"text":"The BIA+ is one of many models that were defined based on data from psycholinguistic or behavioral studies which investigate how bilinguals' two languages are processed during listening, reading, and speaking; however, BIA+ is now being supported by neuroimaging data linking this model to more neurally inspired ones which have a greater focus on the brain areas and mechanisms involved in these tasks."} {"text":"The two basic tools in these studies are the event-related potential (ERP), which has high temporal resolution but low spatial resolution, and functional magnetic resonance imaging (fMRI), which typically has high spatial resolution and low temporal resolution. When used together, however, these two methods can generate a more complete picture of the time course and interactivity of bilingual language processing according to the BIA+ model. These methods, however, do need to be considered carefully, as overlapping activation areas in the brain do not imply that there is no functional separation between the two languages at the neuronal or higher-order level."} {"text":"Distinction of two subsystems: word identification vs. task\/decision."} {"text":"According to the BIA+ model shown in the figure, during word identification, the visual input activates the sublexical orthographic representations which simultaneously activate both the orthographic whole-word lexical and the sublexical phonological representations. Both whole-word orthographic and phonological representations then activate the semantic representations and language nodes which indicate membership to a particular language.
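The word-identification cascade just described can be sketched in code. This is an illustrative toy, not the published BIA+ implementation: the lexicon entries, the bigram matching, and the English gloss "public notice" are invented for the example; the real model is an interactive-activation network with graded, parallel activation.

```python
# Toy sketch of the BIA+ word-identification cascade (illustrative only).
# Lexicon entries and matching rule are invented for this example.

def identify(visual_input):
    # 1. Visual input activates sublexical orthographic representations
    #    (here crudely approximated as letter bigrams).
    sublexical_orth = [visual_input[i:i + 2] for i in range(len(visual_input) - 1)]

    # 2. These activate whole-word orthographic candidates from BOTH
    #    languages in parallel (language-nonselective access).
    lexicon = {
        "advertencia": {"language": "Spanish", "meaning": "warning"},
        "advertisement": {"language": "English", "meaning": "public notice"},
    }
    candidates = {
        word: entry for word, entry in lexicon.items()
        if any(bigram in word for bigram in sublexical_orth)
    }

    # 3. Whole-word forms activate semantic representations and a
    #    language node ("tag") marking language membership; all of this
    #    is handed on to the task/decision subsystem.
    return [
        {"word": w, "meaning": e["meaning"], "language_node": e["language"]}
        for w, e in candidates.items()
    ]

# Both the Spanish and the English neighbor become active for one input:
print(identify("advert"))
```

The point of the sketch is the ordering: language nodes are computed from the word forms (postlexically), not used to filter which words get activated in the first place.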
All of this information is then used in the task\/decision subsystem to carry out the remainder of the task at hand. The two subsystems are further described by the assumptions associated with them below."} {"text":"This assumption states that language nodes\/tags exist to provide a representation for the language of membership based on the information from upstream orthographic and phonological word ID processes. According to the BIA+ model, these tags have no effect on the activation level representation of words. The focus of activation of these nodes is postlexical: the existence of these nodes enables bilingual individuals not to get too much interference from the nontarget language while they process one of their languages."} {"text":"Parallel access assumes that language is nonselective and that both potential word choices are activated in the bilingual brain when exposed to the same stimulus. For example, test subjects reading in their second language have been found to unconsciously translate to their primary language. N400 stimulus response activation measurements show that semantic priming effects were seen in both languages and an individual cannot consciously focus their attention on only one language, even when told to ignore the second."} {"text":"This language nonselective lexical access has been shown during semantic activation across languages, but also at the orthographic and phonological levels."} {"text":"The temporal delay assumption is based on the principle of resting-level activation, which reflects the frequency of word use by the bilingual such that high frequency words correlate to high resting level activation potentials, and words used with little frequency correlate to low resting level activation potentials. A high resting potential is one that is less negative or closer to zero, the point of activation, and therefore needs less stimulation in order to become activated.
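The relation between resting level and activation speed can be illustrated with a toy simulation. The update rule, the threshold at zero, and all numeric values below are invented for the illustration; the BIA+ literature does not specify them.

```python
# Toy illustration of the temporal delay assumption: units with a higher
# (less negative) resting level cross the activation threshold in fewer
# update cycles. All parameters are made up for this sketch.

def steps_to_threshold(resting_level, input_strength=0.2,
                       threshold=0.0, decay=0.05):
    """Count update cycles until a unit's activation crosses threshold.

    resting_level: negative starting activation; closer to zero means a
    higher resting level, as for frequent words in the dominant language.
    """
    activation = resting_level
    steps = 0
    while activation < threshold:
        # External input pushes activation up; decay pulls it back
        # toward the resting level.
        activation += input_strength - decay * (activation - resting_level)
        steps += 1
    return steps

l1_word = steps_to_threshold(resting_level=-0.4)   # frequent L1 reading
l2_word = steps_to_threshold(resting_level=-1.0)   # infrequent L2 reading

# The L1 unit reaches threshold first, mirroring the delayed L2
# activation seen in N400 ERP patterns.
assert l1_word < l2_word
```

The same mechanism covers the proficiency effect: lower L2 proficiency corresponds to lower resting levels for L2 words, and hence longer activation delays.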
Because the less commonly used words of L2 have a lower resting level activation, L1 is likely to be activated before L2, as seen in N400 ERP patterns."} {"text":"This resting level activation of words also reflects the proficiency level of bilinguals and their frequency of usage of the two languages. When a bilingual\u2019s language proficiency is lower in L2 than L1, the activation of L2 lexical representations will be further delayed, as more extensive or higher-level brain activation is necessary for language control. Both low and high proficiency bilinguals have parallel activation of the word representations; however, the less proficient language, L2, becomes active more slowly, contributing to the temporal delay assumption."} {"text":"The locations of many of the word identification processing tasks have been determined with fMRI studies. Word retrieval is localized in Broca's area of the prefrontal cortex, whereas storage of information is localized in the inferior temporal lobes."} {"text":"Globally, the same brain areas have been shown to be activated across the L1 and L2 in highly proficient bilinguals. Some subtle differences between L1 and L2 activations emerge, though, when testing less proficient bilinguals."} {"text":"The task\/decision subsystem of the BIA+ model determines which actions must be executed for the task at hand based on the relevant information that becomes available after word identification processing. This subsystem involves many of the executive processes, including monitoring and control, associated with the prefrontal cortex."} {"text":"\"Bottom-up control of task\/decision from word identification\"."} {"text":"Action plans that meet the task at hand are executed by the task\/decision system on the basis of activation information from the word identification subsystem.
Studies that tested bilinguals with homographs showed that conflicts between target and non-target language readings of the homographs still led to a difference in activation between the homograph and a control word, implying that bilinguals are not able to regulate activation in the word identification system. Therefore, the action plans of the task\/decision system have no direct influence on activations in the word identification subsystem."} {"text":"The neural correlates of the task\/decision subsystem consist of multiple components that map onto different areas of the prefrontal cortex responsible for executing control functions. For example, the general executive functions of language switching have been found to activate the anterior cingulate cortex and dorsolateral prefrontal cortex areas."} {"text":"Translation, on the other hand, requires controlled actions in language representations and has been associated with the left basal ganglia. The left caudate nucleus has been associated with control of the in-use language, and the left mid-prefrontal cortex is responsible for monitoring interference and suppressing competing responses between languages."} {"text":"According to the BIA+ model, when a bilingual with English as their primary language and Spanish as their secondary language translates the word \"advertencia\" from Spanish to English, several steps occur. The bilingual would use the orthographic and phonological cues to differentiate this word from the similar English word \"advertisement\". At this point, however, the bilingual automatically derives the semantic meaning of the word, not only for the correct Spanish meaning of \"advertencia\", which is \"warning\", but also for the Spanish meaning of advertisement, which is \"publicidad\"."} {"text":"This information would then be stored in the bilingual\u2019s working memory and used in the task\/decision system to determine which of the two translations best fits the task at hand.
Since the original instructions were to translate from Spanish to English, the bilingual would choose the correct translation of \"advertencia\" to be \"warning\" and not \"advertisement\"."} {"text":"While the BIA+ model shares several similarities with its predecessor, the BIA model, a few distinct differences exist between the two. First and most notable is the purely bottom-up nature of the BIA+ model, which assumes that information from the task\/decision subsystem cannot influence the word identification subsystem, while the BIA model assumes that the two systems can fully interact."} {"text":"Second is that the language membership nodes of the BIA+ model do not affect the activation levels of the word identification system, whereas they play an inhibitory role in the BIA model."} {"text":"Finally, participant expectations could potentially affect the task\/decision system in the BIA+ model; however, the BIA model assumes there is no strong effect on the activation state of words based on expectations."} {"text":"The BIA+ model has been supported by many of the quantitative neuroimaging studies, but more research needs to be completed in order to strengthen the model as a frontrunner among the accepted models for bilingual language processing. In the task\/decision system, the task components are well-defined (e.g. translation, language switching), but the decision components involved in the execution of these tasks in the subsystem are underspecified. The relationships of the components in this subsystem need further exploration in order to be fully understood."} {"text":"Scientists are also considering the use of magnetoencephalography (MEG) in future studies.
This technology would link the spatial activation processes with the temporal patterns of brain response more accurately than simultaneously considering the response data from ERP and fMRI, which are more limited."} {"text":"Not only have studies suggested that the executive functioning of bilingualism extends beyond the language system, but bilinguals have also been shown to be faster processors who display fewer conflict effects than monolinguals in attentional tasks. This research implies that there may be some spillover effects of learning a second language on other areas of cognitive function that could be explored."} {"text":"One future direction for theories of bilingual word recognition is the investigation of developmental aspects of bilingual lexical access. Most studies have investigated highly proficient bilinguals, but not many have looked at low-proficiency bilinguals or even L2 learners. This new direction should yield many educational applications."} {"text":"Expressive aphasia, also known as Broca's aphasia, is a type of aphasia characterized by partial loss of the ability to produce language (spoken, manual, or written), although comprehension generally remains intact. A person with expressive aphasia will exhibit effortful speech. Speech generally includes important content words but leaves out function words that have more grammatical significance than physical meaning, such as prepositions and articles. This is known as \"telegraphic speech\". The person's intended message may still be understood, but their sentence will not be grammatically correct. In very severe forms of expressive aphasia, a person may only speak using single word utterances. Typically, comprehension is mildly to moderately impaired in expressive aphasia due to difficulty understanding complex grammar."} {"text":"It is caused by acquired damage to the anterior regions of the brain, such as Broca's area.
It is one subset of a larger family of disorders known collectively as aphasia. Expressive aphasia contrasts with receptive aphasia, in which patients are able to speak in grammatical sentences that lack semantic significance and generally also have trouble with comprehension. Expressive aphasia differs from dysarthria, which is typified by a patient's inability to properly move the muscles of the tongue and mouth to produce speech. Expressive aphasia also differs from apraxia of speech, which is a motor disorder characterized by an inability to create and sequence motor plans for speech."} {"text":"Broca's (expressive) aphasia is a type of non-fluent aphasia in which an individual's speech is halting and effortful. Misarticulations or distortions of consonants and vowels, namely phonetic dissolution, are common. Individuals with expressive aphasia may only produce single words, or words in groups of two or three. Long pauses between words are common and multi-syllabic words may be produced one syllable at a time with pauses between each syllable. The prosody of a person with Broca's aphasia is compromised by shortened length of utterances and the presence of self-repairs and disfluencies. Intonation and stress patterns are also deficient."} {"text":"Self-monitoring is typically well preserved in patients with Broca's aphasia. They are usually aware of their communication deficits, and are more prone to depression and outbursts from frustration than are patients with other forms of aphasia.[7]"} {"text":"In general, word comprehension is preserved, allowing patients to have functional receptive language skills. Individuals with Broca's aphasia understand most of the everyday conversation around them, but higher-level deficits in receptive language can occur. Because comprehension is substantially impaired for more complex sentences, it is better to use simple language when speaking with an individual with expressive aphasia. 
This is exemplified by the difficulty of understanding phrases or sentences with unusual structure. A typical patient with Broca's aphasia will misinterpret \"the man is bitten by the dog\" by switching the subject and object to \"the dog is bitten by the man.\""} {"text":"Typically, people with expressive aphasia can understand speech and read better than they can produce speech and write. The person's writing will resemble their speech and will be effortful, lacking cohesion, and containing mostly content words. Letters will likely be formed clumsily and distorted, and some may even be omitted. Although listening and reading are generally intact, subtle deficits in both reading and listening comprehension are almost always present during assessment of aphasia."} {"text":"Because Broca's area is anterior to the primary motor cortex, which is responsible for movement of the face, hands, and arms, a lesion affecting Broca's area may also result in hemiparesis (weakness of both limbs on the same side of the body) or hemiplegia (paralysis of both limbs on the same side of the body). The brain is wired contralaterally, which means the limbs on the right side of the body are controlled by the left hemisphere and vice versa. Therefore, when Broca's area or surrounding areas in the left hemisphere are damaged, hemiplegia or hemiparesis often occurs on the right side of the body in individuals with Broca's aphasia."} {"text":"Severity of expressive aphasia varies among patients. Some people may have only mild deficits, and detecting problems with their language may be difficult. In the most extreme cases, patients may be able to produce only a single word. 
Even in such cases, over-learned and rote-learned speech patterns may be retained \u2013 for instance, some patients can count from one to ten but cannot produce the same numbers in novel conversation."} {"text":"In addition to difficulty expressing themselves, individuals with expressive aphasia commonly have trouble with comprehension in certain linguistic areas. This agrammatism overlaps with receptive aphasia, but can be seen in patients who have expressive aphasia without being diagnosed as having receptive aphasia. The most well-noted of these areas are object-relative clauses, object Wh- questions, and topicalized structures (placing the topic at the beginning of the sentence). These three constructions all involve phrasal movement, which can cause words to lose their thematic roles when they change order in the sentence. This is often not an issue for people without agrammatic aphasias, but many people with aphasia rely heavily on word order to understand the roles that words play within the sentence."} {"text":"The most common cause of expressive aphasia is stroke. A stroke is caused by hypoperfusion (insufficient blood flow) to an area of the brain, which is commonly caused by thrombosis or embolism. Some form of aphasia occurs in 34 to 38% of stroke patients. Expressive aphasia occurs in approximately 12% of new cases of aphasia caused by stroke."} {"text":"In most cases, expressive aphasia is caused by a stroke in Broca's area or the surrounding vicinity. Broca's area is in the lower part of the premotor cortex in the language-dominant hemisphere and is responsible for planning motor speech movements. However, cases of expressive aphasia have been seen in patients with strokes in other areas of the brain. 
Patients with classic symptoms of expressive aphasia in general have more acute brain lesions, whereas patients with larger, widespread lesions exhibit a variety of symptoms that may be classified as global aphasia or left unclassified."} {"text":"Expressive aphasia can also be caused by brain trauma, tumor, cerebral hemorrhage, and extradural abscess."} {"text":"Understanding the lateralization of brain function is important for understanding which areas of the brain cause expressive aphasia when damaged. In the past, it was believed that the area for language production differs between left- and right-handed individuals. If this were true, damage to the homologous region of Broca's area in the right hemisphere should cause aphasia in a left-handed individual. More recent studies have shown that even left-handed individuals typically have language functions only in the left hemisphere. However, left-handed individuals are more likely to have a dominance of language in the right hemisphere."} {"text":"Less common causes of expressive aphasia include autoimmune phenomena, both primary and secondary to cancer (as a paraneoplastic syndrome); these have been listed as the primary hypothesis for several cases of aphasia, especially when presenting with other psychiatric disturbances and focal neurological deficits. Many case reports exist describing paraneoplastic aphasia, and the reports that are specific tend to describe expressive aphasia. Although most cases attempt to exclude micrometastasis, it is likely that some cases of paraneoplastic aphasia are actually extremely small metastases to the vocal motor regions."} {"text":"Neurodegenerative disorders may present with aphasia. Alzheimer's disease may present with either fluent aphasia or expressive aphasia. 
There are case reports of Creutzfeldt-Jakob disease presenting with expressive aphasia."} {"text":"Expressive aphasia is classified as non-fluent aphasia, as opposed to fluent aphasia. Diagnosis is done on a case-by-case basis, as lesions often affect the surrounding cortex and deficits are highly variable among patients with aphasia."} {"text":"A physician is typically the first person to recognize aphasia in a patient who is being treated for damage to the brain. Routine processes for determining the presence and location of a lesion in the brain include magnetic resonance imaging (MRI) and computed tomography (CT) scans. The physician will complete a brief assessment of the patient's ability to understand and produce language. For further diagnostic testing, the physician will refer the patient to a speech-language pathologist, who will complete a comprehensive evaluation."} {"text":"Several tests and procedures are commonly used to diagnose Broca's aphasia. The Western Aphasia Battery (WAB) classifies individuals based on their scores on four subtests: spontaneous speech, auditory comprehension, repetition, and naming. The Boston Diagnostic Aphasia Examination (BDAE) can inform users what specific type of aphasia they may have, infer the location of the lesion, and assess current language abilities. The Porch Index of Communicative Ability (PICA) can predict potential recovery outcomes of patients with aphasia. Quality-of-life measurement is also an important assessment tool. Tests such as the Assessment for Living with Aphasia (ALA) and the Satisfaction with Life Scale (SWLS) allow therapists to target skills that are important and meaningful for the individual."} {"text":"In addition to formal assessments, patient and family interviews are valid and important sources of information. 
The patient's previous hobbies, interests, personality, and occupation are all factors that will not only shape therapy but may also motivate the patient throughout the recovery process. Patient interviews and observations allow professionals to learn the priorities of the patient and family and determine what the patient hopes to regain in therapy. Observations of the patient may also be beneficial for determining where to begin treatment. The current behaviors and interactions of the patient will provide the therapist with more insight about the client and their individual needs. Other information about the patient can be retrieved from medical records, patient referrals from physicians, and the nursing staff."} {"text":"In non-speaking patients who use manual languages, diagnosis is often based on interviews with the patient's acquaintances, noting the differences in sign production pre- and post-damage to the brain. Many of these patients will also begin to rely on non-linguistic gestures to communicate, rather than signing, since their language production is hindered."} {"text":"Currently, there is no standard treatment for expressive aphasia. Most aphasia treatment is individualized based on a patient's condition and needs as assessed by a speech-language pathologist. Patients go through a period of spontaneous recovery following brain injury in which they regain a great deal of language function."} {"text":"In the months following injury or stroke, most patients receive traditional treatment for a few hours per day. Among other exercises, patients practice the repetition of words and phrases. Traditional treatment also teaches mechanisms to compensate for lost language function, such as drawing and using phrases that are easier to pronounce."} {"text":"Emphasis is placed on establishing a basis for communication with family and caregivers in everyday life. 
Treatment is individualized based on the patient's own priorities, along with the family's input."} {"text":"A patient may have the option of individual or group treatment. Although less common, group treatment has been shown to have advantageous outcomes. Some types of group treatments include family counseling, maintenance groups, support groups, and treatment groups."} {"text":"Melodic intonation therapy was inspired by the observation that individuals with non-fluent aphasia sometimes can sing words or phrases that they normally cannot speak. \"Melodic Intonation Therapy was begun as an attempt to use the intact melodic\/prosodic processing skills of the right hemisphere in those with aphasia to help cue retrieval of words and expressive language.\" It is believed that this is because singing capabilities are stored in the right hemisphere of the brain, which is likely to remain unaffected after a stroke in the left hemisphere. However, recent evidence demonstrates that the capability of individuals with aphasia to sing entire pieces of text may actually result from rhythmic features and familiarity with the lyrics."} {"text":"The goal of Melodic Intonation Therapy is to utilize singing to access the language-capable regions in the right hemisphere and use these regions to compensate for lost function in the left hemisphere. The natural musical component of speech is used to engage the patient's ability to produce phrases. A clinical study revealed that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia and apraxia of speech. 
However, evidence from randomized controlled trials is still needed to confirm that Melodic Intonation Therapy is suitable for improving propositional utterances and speech intelligibility in individuals with (chronic) non-fluent aphasia and apraxia of speech."} {"text":"A pilot study reported positive results when comparing the efficacy of a modified form of MIT to no treatment in people with nonfluent aphasia and damage to the left hemisphere. A subsequent randomized controlled trial reported benefits of utilizing modified MIT treatment early in the recovery phase for people with nonfluent aphasia."} {"text":"Melodic Intonation Therapy is used by music therapists, board-certified professionals who use music as a therapeutic tool to effect certain non-musical outcomes in their patients. Speech-language pathologists can also use this therapy for individuals who have had a left hemisphere stroke and non-fluent aphasias such as Broca's, or even apraxia of speech."} {"text":"Two important principles of constraint-induced aphasia therapy are that treatment is very intense, with sessions lasting for up to 6 hours over the course of 10 days, and that language is used in a communication context in which it is closely linked to (nonverbal) actions. These principles are motivated by neuroscience insights about learning at the level of nerve cells (synaptic plasticity) and the coupling between cortical systems for language and action in the human brain. Constraint-induced therapy contrasts sharply with traditional therapy in its strong belief that mechanisms to compensate for lost language function, such as gesturing or writing, should not be used unless absolutely necessary, even in everyday life."} {"text":"It is believed that CIAT works by the mechanism of increased neuroplasticity. 
Constraining an individual to use only speech is believed to make the brain more likely to reestablish old neural pathways and recruit new neural pathways to compensate for lost function."} {"text":"The strongest results of CIAT have been seen in patients with chronic aphasia (lasting over 6 months). Studies of CIAT have confirmed that further improvement is possible even after a patient has reached a \"plateau\" period of recovery. It has also been shown that the benefits of CIAT are retained long term. However, improvements only seem to be made while a patient is undergoing intense therapy. Recent work has investigated combining constraint-induced aphasia therapy with drug treatment, which led to an amplification of therapy benefits."} {"text":"In addition to active speech therapy, pharmaceuticals have also been considered as a useful treatment for expressive aphasia. This area of study is relatively new and much research continues to be conducted."} {"text":"The following drugs have been suggested for use in treating aphasia, and their efficacy has been examined in controlled studies."} {"text":"The greatest effect has been shown by piracetam and amphetamine, which may increase cerebral plasticity and result in an increased capacity for improvement in language function. Piracetam has been seen to be most effective when treatment is begun immediately following stroke; when used in chronic cases, it has been much less effective."} {"text":"Bromocriptine has been shown by some studies to produce greater gains in verbal fluency and word retrieval when combined with therapy than therapy alone. Furthermore, its use seems to be restricted to non-fluent aphasia."} {"text":"Donepezil has shown potential for treating chronic aphasia."} {"text":"No study has established irrefutable evidence that any drug is an effective treatment for aphasia. Furthermore, no study has shown any drug to be specific for language recovery. 
Comparisons between the recovery of language function and other motor functions using any drug have shown that improvement is due to a global increase in the plasticity of neural networks."} {"text":"In transcranial magnetic stimulation (TMS), magnetic fields are used to create electrical currents in specified cortical regions. The procedure is a painless and noninvasive method of stimulating the cortex. TMS works by suppressing the inhibition process in certain areas of the brain. By suppressing the inhibition of neurons by external factors, the targeted area of the brain may be reactivated and thereby recruited to compensate for lost function. Research has shown that patients receiving regular transcranial magnetic stimulation can demonstrate greater object-naming ability than patients not receiving TMS. Furthermore, research suggests this improvement is sustained after the completion of TMS therapy. However, some patients fail to show any significant improvement from TMS, which indicates the need for further research on this treatment."} {"text":"It has been shown that, among all types of therapies, one of the most important factors and best predictors of a successful outcome is the intensity of the therapy. Comparisons of the length and intensity of various methods of therapy have shown that intensity is a better predictor of recovery than the method of therapy used."} {"text":"In most individuals with expressive aphasia, the majority of recovery is seen within the first year following a stroke or injury. The majority of this improvement is seen in the first four weeks of therapy following a stroke and slows thereafter. However, this timeline will vary depending upon the type of stroke experienced by the patient. Patients who experienced an ischemic stroke may recover in the days and weeks following the stroke, and then experience a plateau and gradual slowing of recovery. 
In contrast, patients who experienced a hemorrhagic stroke experience a slower recovery in the first 4\u20138 weeks, followed by a faster recovery which eventually stabilizes."} {"text":"Numerous factors impact the recovery process and outcomes. The site and extent of the lesion greatly impact recovery. Other factors that may affect prognosis are age, education, gender, and motivation. Occupation, handedness, personality, and emotional state may also be associated with recovery outcomes."} {"text":"Studies have also found that the prognosis of expressive aphasia correlates strongly with the initial severity of impairment. However, it has been seen that continued recovery is possible years after a stroke with effective treatment. The timing and intensity of treatment is another factor that impacts outcomes. Research suggests that even in later stages of recovery, intervention is effective at improving function as well as preventing loss of function."} {"text":"Unlike patients with receptive aphasia, patients with expressive aphasia are aware of their errors in language production. This may further motivate a person with expressive aphasia to progress in treatment, which would affect treatment outcomes. On the other hand, awareness of impairment may lead to higher levels of frustration, depression, anxiety, or social withdrawal, which have been shown to negatively affect a person's chance of recovery."} {"text":"Expressive aphasia was first identified by the French neurologist Paul Broca. By examining the brains of deceased individuals who had acquired expressive aphasia in life, he concluded that language ability is localized in the ventroposterior region of the frontal lobe. 
One of the most important aspects of Paul Broca's discovery was the observation that the loss of proper speech in expressive aphasia is due to the brain's loss of ability to produce language, as opposed to the mouth's loss of ability to produce words."} {"text":"The discoveries of Paul Broca were made during the same period of time as those of the German neurologist Carl Wernicke, who was also studying the brains of people with aphasia post-mortem and identified the region now known as Wernicke's area. The discoveries of both men contributed to the concept of localization, which states that specific brain functions are all localized to a specific area of the brain. While both men made significant contributions to the field of aphasia, it was Carl Wernicke who realized the difference between patients with aphasia who could not produce language and those who could not comprehend language (the essential difference between expressive and receptive aphasia)."} {"text":"Aphasiology is the study of language impairment usually resulting from brain damage, due to cerebrovascular accident\u2014hemorrhage, stroke\u2014or associated with a variety of neurodegenerative diseases, including different types of dementia. It is also the name of a scientific journal covering the area. These specific language deficits, termed aphasias, may be defined as impairments of language production or comprehension that cannot be attributed to trivial causes such as deafness or oral paralysis. A number of aphasias have been described, but two are best known: expressive aphasia (Broca's aphasia) and receptive aphasia (Wernicke's or sensory aphasia)."} {"text":"Acute aphasias are often the result of tissue damage following a stroke."} {"text":"Lesions exclusively to Broca's area (the foot of the inferior frontal gyrus) do not produce Broca's aphasia, but instead mild dysprosody and agraphia, sometimes accompanied by word-finding pauses and mild dysarthria. 
Not much is known about what other areas must be damaged in order to produce Broca's aphasia, but some maintain that damage to the inferior pre-Rolandic motor strip (the motor cortex region responsible for glossopharyngeal muscle control) is also necessary."} {"text":"A fascinating corollary of this has come from research on aphasias in deaf users of sign language, who show deficits in signing and comprehension analogous to expressive and receptive aphasias in hearing populations. These studies demonstrate that the grammatical functions of Broca's area and the semantic functions of Wernicke's area are indeed deep, abstract properties of the language system independent of its modality of expression."} {"text":"Another less commonly known aphasia is global aphasia, which generally manifests itself after a stroke affecting an extensive portion of the brain, including infarction of both divisions of the middle cerebral artery and generally both Broca's area and Wernicke's area. Survivors with global aphasia may have great difficulty understanding and forming words and sentences, and generally experience a great deal of difficulty when trying to communicate. With considerable speech therapy rehabilitation, global aphasia may progress into expressive aphasia or receptive aphasia."} {"text":"A person with anomic aphasia has word-finding difficulties. In anomic aphasia, also known as anomia, the person speaks hesitantly because of difficulty naming words and\/or producing correct syntax. The person struggles to find the right words for speaking and writing. Subjects tend to use circumlocutions, in which they speak around the word they cannot find, to make up for their loss. People with anomic aphasia often know how to use an object but cannot name it. Any damage in or near the language areas can result in anomic aphasia. 
Other forms of aphasia often transition into a syndrome of primarily anomic aphasia in the process of recovery."} {"text":"Conduction aphasia is a rare form of aphasia in which fibres in the arcuate fasciculus and superior longitudinal fasciculus are damaged. These fibres are the link between Wernicke's and Broca's areas. Damage to this connection between comprehension and expression produces the following symptoms: fluent speech, good comprehension, poor oral reading, poor repetition, and very common transpositions of sounds within words."} {"text":"Primary progressive aphasia is a rare disorder in which people slowly lose their ability to talk, read, write, and comprehend what they hear in conversation over a period of time. It was first described as a distinct syndrome by Mesulam in 1982. There are three variants: progressive nonfluent aphasia (PNFA), semantic dementia (SD), and logopenic progressive aphasia (LPA)."} {"text":"MMN refers to the mismatch response in electroencephalography (EEG); MMF or MMNM refer to the mismatch response in magnetoencephalography (MEG)."} {"text":"The auditory MMN was discovered in 1978 by Risto N\u00e4\u00e4t\u00e4nen, A. W. K. Gaillard, and S. M\u00e4ntysalo at the Institute for Perception, TNO in The Netherlands."} {"text":"The first report of a visual MMN was in 1990 by Rainer Cammer. For a history of the development of the visual MMN, see Pazo-Alvarez et al. (2003)."} {"text":"The auditory MMN can occur in response to deviance in pitch, intensity, or duration. The auditory MMN is a fronto-central negative potential with sources in the primary and non-primary auditory cortex and a typical latency of 150\u2013250 ms after the onset of the deviant stimulus. Sources could also include the inferior frontal gyrus and the insular cortex. The amplitude and latency of the MMN are related to how different the deviant stimulus is from the standard. Large deviances elicit MMN at earlier latencies. 
For very large deviances, the MMN can even overlap the N100."} {"text":"The visual MMN can occur in response to deviance in such aspects as color, size, or duration. The visual MMN is an occipital negative potential with sources in the primary visual cortex and a typical latency of 150\u2013250 ms after the onset of the deviant stimulus."} {"text":"As kindred phenomena have been elicited with speech stimuli, under passive conditions that require very little active attention to the sound, a version of the MMN has been frequently used in studies of neurolinguistic perception, to test whether or not participants neurologically distinguish between certain kinds of sounds. The MMN response has been used to study how fetuses and newborns discriminate speech sounds. In addition to these kinds of studies focusing on phonological processing, some research has implicated the MMN in syntactic processing. Some of these studies have attempted to directly test the automaticity of the MMN, providing converging evidence for the understanding of the MMN as a task-independent and automatic response."} {"text":"MMN is evoked by an infrequently presented stimulus (\"deviant\"), differing from the frequently occurring stimuli (\"standards\") in one or several physical parameters like duration, intensity, or frequency. In addition, it is generated by a change in spectrally complex stimuli like phonemes, in synthesised instrumental tones, or in the spectral component of tone timbre. Temporal order reversals also elicit an MMN when successive sound elements differ in frequency, intensity, or duration. The MMN is not elicited by stimuli with deviant parameters when they are presented without the intervening standards. 
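The oddball arrangement just described (rare deviants embedded among intervening standards) can be sketched in a few lines of code. This is a minimal illustrative sketch, not drawn from any published protocol; the 15% deviant probability and the rule that every deviant is followed by at least one standard are assumptions chosen only to mirror the requirements stated above.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.15, seed=0):
    """Generate an oddball sequence of 'S' (standard) and 'D' (deviant) trials.

    Deviants are kept infrequent, and a standard is always interposed
    after each deviant, since the MMN is not elicited by deviants
    presented without intervening standards.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == 'D':
            seq.append('S')  # force an intervening standard after each deviant
        else:
            seq.append('D' if rng.random() < p_deviant else 'S')
    return seq

seq = oddball_sequence(1000)
print(''.join(seq[:30]))
print('deviant proportion:', seq.count('D') / len(seq))
```

Because a standard is forced after every deviant, the realized deviant proportion settles slightly below the nominal probability, which is harmless for illustration.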
Thus, the MMN has been suggested to reflect change detection when a memory trace representing the constant standard stimulus and the neural code of the stimulus with deviant parameter(s) are discrepant."} {"text":"The MMN data can be understood as providing evidence that stimulus features are separately analysed and stored in the vicinity of auditory cortex (see the theory section below). The close resemblance of the behaviour of the MMN to that of the previously behaviourally observed \"echoic\" memory system strongly suggests that the MMN provides a non-invasive, objective, task-independently measurable physiological correlate of stimulus-feature representations in auditory sensory memory."} {"text":"The experimental evidence suggests that the MMN, as an index of auditory sensory memory, provides sensory data for attentional processes and, in essence, governs certain aspects of attentive information processing. This is evident in the finding that the latency of the MMN determines the timing of behavioural responses to changes in the auditory environment. Furthermore, even individual differences in discrimination ability can be probed with the MMN. The MMN is a component of the chain of brain events causing attention switches to changes in the environment. Attentional instructions also affect the MMN."} {"text":"The MMN has been documented in a number of studies to disclose neuropathological changes."} {"text":"Presently, the accumulated body of evidence suggests that while the MMN offers unique opportunities for basic research into the information processing of the healthy brain, it might be useful in tapping neurodegenerative changes as well."} {"text":"The MMN, which is elicited irrespective of attention, provides an objective means for evaluating possible auditory discrimination and sensory-memory anomalies in such clinical groups as dyslexics and patients with aphasia, who have a multitude of symptoms including attentional problems. 
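The change-detection account above is commonly quantified as a difference wave: the averaged response to deviants minus the averaged response to standards, in which the MMN appears as a negativity peaking roughly 150–250 ms after deviant onset. The sketch below uses synthetic data; the epoch counts, the 1 kHz sampling, and the Gaussian deflection at 200 ms are illustrative assumptions, not values from any study.

```python
import numpy as np

def difference_wave(deviant_epochs, standard_epochs):
    """Averaged ERP to deviants minus averaged ERP to standards."""
    return np.mean(deviant_epochs, axis=0) - np.mean(standard_epochs, axis=0)

def mmn_peak(diff_wave, times_ms, window=(150, 250)):
    """Latency and amplitude of the most negative point in the MMN window."""
    idx = np.flatnonzero((times_ms >= window[0]) & (times_ms <= window[1]))
    peak = idx[np.argmin(diff_wave[idx])]
    return times_ms[peak], diff_wave[peak]

# Synthetic demonstration: 1 s epochs sampled at 1 kHz; deviant epochs carry
# an extra negative Gaussian deflection centred at 200 ms.
times = np.arange(1000)                                   # ms
standards = np.zeros((50, 1000))                          # flat standard ERPs
deviant_shape = -2.0 * np.exp(-((times - 200) ** 2) / (2 * 25 ** 2))
deviants = np.tile(deviant_shape, (50, 1))

dw = difference_wave(deviants, standards)
latency, amplitude = mmn_peak(dw, times)
print(latency, amplitude)   # peak at 200 ms with amplitude -2.0
```

Restricting the peak search to the 150–250 ms window mirrors the typical MMN latency given above and avoids picking up unrelated deflections elsewhere in the epoch.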
Recent results suggest that a major problem underlying the reading deficit in dyslexia might be an inability of the dyslexics' auditory cortex to adequately model complex sound patterns with fast temporal variation. According to the results of an ongoing study, the MMN might also be used in the evaluation of auditory perception deficits in aphasia."} {"text":"Alzheimer's patients demonstrate decreased amplitude of the MMN, especially with long inter-stimulus intervals; this is thought to reflect a reduced span of auditory sensory memory. Patients with Parkinson's disease demonstrate a similar deficit pattern, whereas alcoholism would appear to enhance the MMN response. This latter, seemingly contradictory, finding could be explained by hyperexcitability of CNS neurones resulting from neuroadaptive changes taking place during a heavy drinking bout."} {"text":"While the results obtained thus far seem encouraging, several steps need to be taken before the MMN can be used as a clinical tool in patient treatment. Research in the late 1990s aimed to tackle some of the key signal-analysis problems encountered in developing clinical uses of the MMN, and challenges still remain. Nevertheless, as it stands, clinical research employing the MMN has already produced significant knowledge of the CNS functional changes related to cognitive decline in the aforementioned clinical disorders."} {"text":"A 2010 study found that MMN durations were reduced in a group of schizophrenia patients who later went on to have psychotic episodes, suggesting that MMN durations may predict future psychosis."} {"text":"The mainstream \"memory trace\" interpretation of the MMN is that it is elicited in response to violations of simple rules governing the properties of information. It is thought to arise from violation of an automatically formed, short-term neural model or memory trace of physical or abstract environmental regularities. 
However, apart from the MMN itself, there is no neurophysiological evidence for the formation of a memory representation of those regularities."} {"text":"Integral to this memory trace view is that there are:"} {"text":"i) a population of sensory afferent neuronal elements that respond to sound; and"} {"text":"ii) a separate population of memory neuronal elements that build a neural model of standard stimulation and respond more vigorously when the incoming stimulation violates that neural model, eliciting an MMN."} {"text":"An alternative \"fresh afferent\" interpretation is that there are no memory neuronal elements; rather, the sensory afferent neuronal elements that are tuned to properties of the standard stimulation respond less vigorously upon repeated stimulation. Thus, when a deviant activates a distinct new population of neuronal elements that is tuned to the properties of the deviant rather than the standard, these fresh afferents respond more vigorously, eliciting an MMN."} {"text":"A third view is that the sensory afferents are the memory neurons."} {"text":"The bi-directional hypothesis of language and action proposes that the sensorimotor and language comprehension areas of the brain exert reciprocal influence over one another. This hypothesis argues that areas of the brain involved in movement and sensation, as well as movement itself, influence cognitive processes such as language comprehension. The reverse effect is also proposed: language comprehension influences movement and sensation. Proponents of the bi-directional hypothesis of language and action conduct and interpret linguistic, cognitive, and movement studies within the framework of embodied cognition and embodied language processing. 
Embodied language developed from embodied cognition, and proposes that sensorimotor systems are not only involved in the comprehension of language but are necessary for understanding the semantic meaning of words."} {"text":"The theory that sensory and motor processes are coupled to cognitive processes stems from action-oriented models of cognition. These theories, such as the embodied and situated cognitive theories, propose that cognitive processes are rooted in areas of the brain involved in movement planning and execution, as well as areas responsible for processing sensory input, termed sensorimotor areas or areas of action and perception. According to action-oriented models, higher cognitive processes evolved from sensorimotor brain regions, thereby necessitating sensorimotor areas for cognition and language comprehension. With this organization, it was then hypothesized that action and cognitive processes exert influence on one another in a bi-directional manner: action and perception influence language comprehension, and language comprehension influences sensorimotor processes."} {"text":"Effects of Language Comprehension on Systems of Action."} {"text":"Language comprehension tasks can exert influence over systems of action, at both the neural and behavioral level. This means that language stimuli influence both electrical activity in sensorimotor areas of the brain and actual movement."} {"text":"The ability of language to influence neural activity of motor systems also manifests itself behaviorally by altering movement. Semantic priming has been implicated in these behavioral changes, and has been used as evidence for the involvement of the motor system in language comprehension. The Action-Sentence Compatibility Effect (ACE) is indicative of these semantic priming effects. 
Understanding language that implies action may invoke motor facilitation, or prime the motor system, when the action or posture being performed to indicate language comprehension is compatible with the action or posture implied by the language. Compatible ACE tasks have been shown to lead to shorter reaction times. This effect has been demonstrated on various types of movements, including hand posture during button pressing, reaching, and manual rotation."} {"text":"Effects of Systems of Action on Language Comprehension."} {"text":"The bi-directional hypothesis of action and language proposes that altering the activity of motor systems, either through altered neural activity or actual movement, influences language comprehension. Neural activity in specific areas of the brain can be altered using transcranial magnetic stimulation (TMS), or by studying patients with neuropathologies leading to specific sensory and\/or motor deficits. Movement is also used to alter the activity of neural motor systems, increasing overall excitability of motor and pre-motor areas."} {"text":"Lesions of sensory and motor areas have also been studied to elucidate the effects of sensorimotor systems on language comprehension. One such example is the patient JR, who has a lesion in areas of the auditory association cortex implicated in processing auditory information. This patient shows significant impairments in conceptual and perceptual processing of sound-related language and objects. For example, processing the meaning of words describing sound-related objects (e.g., \"bell\") was significantly impaired in JR as compared to non-sound-related objects (e.g., \"armchair\"). 
These data suggest that damage to sensory regions involved in processing auditory information specifically impairs processing of sound-related conceptual information, highlighting the necessity of sensory systems for language comprehension."} {"text":"Movement has been shown to influence language comprehension. This has been demonstrated by priming motor areas with movement, increasing the excitability of motor and pre-motor areas associated with the body part being moved. It has been demonstrated that motor engagement of a specific body part decreases neural activity in language processing areas when processing words related to that body part. This decreased neural activity is a feature of semantic priming, and suggests that activation of specific motor areas through movement can facilitate language comprehension in a semantically-dependent manner. An interference effect has also been demonstrated. During incompatible ACE conditions, neural signatures of language comprehension have been shown to be inhibited. Combined, these pieces of evidence have been used to support a semantic role of the motor system."} {"text":"Movement can also inhibit language comprehension tasks, particularly tasks of verbal working memory. When subjects were asked to memorize and verbally recall four-word sequences of either arm or leg action words, performing complex, rhythmic movements after presentation of the word sequences interfered with memory performance. This performance deficit was body-part specific, where movement of the legs impaired recall of leg words, and movement of the arms impaired recall of arm words. These data indicate that sensorimotor systems exhibit cortically specific \"inhibitory causal effects\" on memory of action words, as impairment was specific to motor engagement and bodily association of the words."} {"text":"Relating cognitive functions to brain structures is done in the field of cognitive neuroscience. 
This field attempts to map cognitive processes, such as language comprehension, onto neural activation of specific brain structures. The bi-directional hypothesis of language and action requires that action and language processes have overlapping brain structures, or shared neural substrates, thereby necessitating motor areas for language comprehension. The neural substrates of embodied cognition are often studied using cognitive tasks such as object recognition, action recognition, working memory, and language comprehension. These networks have been elucidated with behavioral, computational, and imaging studies, but the discovery of their exact organization is ongoing."} {"text":"A Jabberwocky sentence is a type of sentence of interest in neurolinguistics. Jabberwocky sentences take their name from the language of Lewis Carroll's well-known poem \"Jabberwocky\". In the poem, Carroll uses correct English grammar and syntax, but many of the words are made up and merely suggest meaning. A Jabberwocky sentence is therefore a sentence which uses correct grammar and syntax but contains nonsense words, rendering it semantically meaningless."} {"text":"A second study by Silva-Pereyra et al. showed that preschoolers at the age of 36 months demonstrate processing patterns similar to those of adults when processing normal sentences with phrase structure violations, showing ERP activity analogous to the N150 and P600 in adults, but shifted later in time. When presented with phrase-structure violations in Jabberwocky sentences, however, preschoolers demonstrate activity analogous to an N400, typically associated with the extraction of meaning from words in adults, along with a diminished P600. 
This implies that semantics plays a role in syntactic processing in children and provides neurobiological evidence for interactive theories over modular theories of semantic and syntactic processing."} {"text":"The motor theory of speech perception is the hypothesis that people perceive spoken words by identifying the vocal tract gestures with which they are pronounced rather than by identifying the sound patterns that speech generates. It originally claimed that speech perception is done through a specialized module that is innate and human-specific. Though the idea of a module has been qualified in more recent versions of the theory, the idea remains that the role of the speech motor system is not only to produce speech articulations but also to detect them."} {"text":"The hypothesis has gained more interest outside the field of speech perception than inside. Interest has increased particularly since the discovery of mirror neurons that link the production and perception of motor movements, including those made by the vocal tract."} {"text":"The theory was initially proposed in the Haskins Laboratories in the 1950s by Alvin Liberman and Franklin S. Cooper, and developed further by Donald Shankweiler, Michael Studdert-Kennedy, Ignatius Mattingly, Carol Fowler and Douglas Whalen."} {"text":"The hypothesis has its origins in research using pattern playback to create reading machines for the blind that would substitute sounds for orthographic letters. This led to a close examination of how spoken sounds correspond to their acoustic spectrogram, read as a sequence of auditory sounds. This examination found that successive consonants and vowels overlap in time with one another (a phenomenon known as coarticulation). 
This suggested that speech is not heard like an acoustic \"alphabet\" or \"cipher,\" but as a \"code\" of overlapping speech gestures."} {"text":"Initially, the theory was associationist: infants mimic the speech they hear, and this leads to behavioristic associations between articulation and its sensory consequences. Later, this overt mimicry would be short-circuited and become speech perception. This aspect of the theory was dropped, however, with the discovery that prelinguistic infants could already detect most of the phonetic contrasts used to separate different speech sounds."} {"text":"The behavioristic approach was replaced by a cognitivist one in which there was a speech module. The module detected speech in terms of hidden distal objects rather than at the proximal or immediate level of their input. The evidence for this was research findings that speech processing is special, such as duplex perception."} {"text":"Initially, speech perception was assumed to link to speech objects that were both"} {"text":"This was later revised to include the phonetic gestures rather than motor commands, and then the gestures intended by the speaker at a prevocal, linguistic level, rather than actual movements."} {"text":"The \"speech is special\" claim has been dropped, as it was found that speech perception could occur for nonspeech sounds (for example, slamming doors for duplex perception)."} {"text":"The discovery of mirror neurons has led to renewed interest in the motor theory of speech perception, and the theory still has its advocates, although there are also critics."} {"text":"If speech is identified in terms of how it is physically made, then nonauditory information should be incorporated into speech percepts even if it is still subjectively heard as \"sounds\". 
This is, in fact, the case."} {"text":"If people can hear the gestures in speech, then the imitation of speech should be very fast, as when words heard in headphones are repeated, as in speech shadowing. People can repeat heard syllables more quickly than they would be able to produce them normally."} {"text":"Evidence exists that perception and production are generally coupled in the motor system. This is supported by the existence of mirror neurons that are activated both by seeing (or hearing) an action and when that action is carried out. Another source of evidence is the support for common coding between the representations used for perception and action."} {"text":"The motor theory of speech perception is not widely held in the field of speech perception, though it is more popular in other fields, such as theoretical linguistics. As three of its advocates have noted, \"it has few proponents within the field of speech perception, and many authors cite it primarily to offer critical commentary\" (p.\u00a0361). Several critiques of it exist."} {"text":"Speech perception is affected by nonproduction sources of information, such as context. Individual words are hard to understand in isolation but easy when heard in sentence context. It therefore seems that speech perception uses multiple sources that are integrated together in an optimal way."} {"text":"The motor theory of speech perception would predict that speech motor abilities in infants predict their speech perception abilities, but in actuality it is the other way around. It would also predict that defects in speech production would impair speech perception, but they do not. However, this only affects the first and already superseded behaviorist version of the theory, where infants were supposed to learn \"all\" production-perception patterns by imitation early in childhood. 
This is no longer the mainstream view of motor-speech theorists."} {"text":"Several sources of evidence for a specialized speech module have failed to hold up under scrutiny."} {"text":"As a result, this part of the theory has been dropped by some researchers."} {"text":"The evidence provided for the motor theory of speech perception is limited to tasks such as syllable discrimination that use speech units, not full spoken words or spoken sentences. As a result, \"speech perception is sometimes interpreted as referring to the perception of speech at the sublexical level. However, the ultimate goal of these studies is presumably to understand the neural processes supporting the ability to process speech sounds under ecologically valid conditions, that is, situations in which successful speech sound processing ultimately leads to contact with the mental lexicon and auditory comprehension.\" This, however, creates the problem of \"a tenuous connection to their implicit target of investigation, speech recognition\"."} {"text":"It has been suggested that birds also hear each other's bird song in terms of vocal gestures."} {"text":"The coining of the term \"neurolinguistics\" is attributed to Edith Crowell Trager, Henri Hecaen and Alexandr Luria, in the late 1940s and 1950s; Luria's book \"Problems in Neurolinguistics\" is likely the first book with Neurolinguistics in the title. 
Harry Whitaker popularized neurolinguistics in the United States in the 1970s, founding the journal \"Brain and Language\" in 1974."} {"text":"Although aphasiology is the historical core of neurolinguistics, in recent years the field has broadened considerably, thanks in part to the emergence of new brain imaging technologies (such as PET and fMRI) and time-sensitive electrophysiological techniques (EEG and MEG), which can highlight patterns of brain activation as people engage in various language tasks; electrophysiological techniques, in particular, emerged as a viable method for the study of language in 1980 with the discovery of the N400, a brain response shown to be sensitive to semantic issues in language comprehension. The N400 was the first language-relevant event-related potential to be identified, and since its discovery EEG and MEG have become increasingly widely used for conducting language research."} {"text":"Neurolinguistics is closely related to the field of psycholinguistics, which seeks to elucidate the cognitive mechanisms of language by employing the traditional techniques of experimental psychology; today, psycholinguistic and neurolinguistic theories often inform one another, and there is much collaboration between the two fields."} {"text":"Neurolinguistics research is carried out in all the major areas of linguistics; the main linguistic subfields, and how neurolinguistics addresses them, are given in the table below."} {"text":"Neurolinguistics research investigates several topics, including where language information is processed, how language processing unfolds over time, how brain structures are related to language acquisition and learning, and how neurophysiology can contribute to speech and language pathology."} {"text":"Much work in neurolinguistics has, like Broca's and Wernicke's early studies, investigated the locations of specific language \"modules\" within the brain. 
Research questions include what course language information follows through the brain as it is processed, whether or not particular areas specialize in processing particular sorts of information, how different brain regions interact with one another in language processing, and how the locations of brain activation differ when a subject is producing or perceiving a language other than his or her first language."} {"text":"Another area of neurolinguistics literature involves the use of electrophysiological techniques to analyze the rapid processing of language in time. The temporal ordering of specific patterns of brain activity may reflect discrete computational processes that the brain undergoes during language processing; for example, one neurolinguistic theory of sentence parsing proposes that three brain responses (the ELAN, N400, and P600) are products of three different steps in syntactic and semantic processing."} {"text":"Another topic is the relationship between brain structures and language acquisition. Research in first language acquisition has already established that infants from all linguistic environments go through similar and predictable stages (such as babbling), and some neurolinguistics research attempts to find correlations between stages of language development and stages of brain development, while other research investigates the physical changes (known as neuroplasticity) that the brain undergoes during second language acquisition, when adults learn a new language."} {"text":"Neuroplasticity has been observed with both second language acquisition and language learning experience; such language exposure has been associated with increases in gray and white matter in children, young adults, and the elderly."} {"text":"Li, Ping; Legault, Jennifer; Litcofsky, Kaitlyn A. (May 2014). \"Neuroplasticity as a function of second language learning: Anatomical changes in the human brain.\" Cortex: A Journal Devoted to the Study of the Nervous System & Behavior. doi:10.1016\/j.cortex.2014.05.001. PMID 24996640."} {"text":"Neurolinguistic techniques are also used to study disorders and breakdowns in language, such as aphasia and dyslexia, and how they relate to physical characteristics of the brain."} {"text":"Since one of the focuses of this field is the testing of linguistic and psycholinguistic models, the technology used for experiments is highly relevant to the study of neurolinguistics. Modern brain imaging techniques have contributed greatly to a growing understanding of the anatomical organization of linguistic functions. Brain imaging methods used in neurolinguistics may be classified into hemodynamic methods, electrophysiological methods, and methods that stimulate the cortex directly."} {"text":"In addition to PET and fMRI, which show which areas of the brain are activated by certain tasks, researchers also use diffusion tensor imaging (DTI), which shows the neural pathways that connect different brain areas, thus providing insight into how different areas interact. Functional near-infrared spectroscopy (fNIRS) is another hemodynamic method used in language tasks."} {"text":"Neurolinguists employ a variety of experimental techniques in order to use brain imaging to draw conclusions about how language is represented and processed in the brain. 
These techniques include the \"subtraction\" paradigm, \"mismatch design\", \"violation-based\" studies, various forms of \"priming\", and \"direct stimulation\" of the brain."} {"text":"Many language studies, particularly in fMRI, use the subtraction paradigm, in which brain activation in a task thought to involve some aspect of language processing is compared against activation in a baseline task thought to involve similar non-linguistic processes but not to involve the linguistic process. For example, activations while participants read words may be compared to baseline activations while participants read strings of random letters (in an attempt to isolate activation related to lexical processing\u2014the processing of real words), or activations while participants read syntactically complex sentences may be compared to baseline activations while participants read simpler sentences."} {"text":"In psycholinguistics and neurolinguistics, \"priming\" refers to the phenomenon whereby a subject can recognize a word more quickly if he or she has recently been presented with a word that is similar in meaning or morphological makeup (i.e., composed of similar parts). If a subject is presented with a \"prime\" word such as \"doctor\" and then has a faster-than-usual response time to a \"target\" word such as \"nurse\", the experimenter may assume that the word \"nurse\" had already been accessed in the brain when the word \"doctor\" was accessed. Priming is used to investigate a wide variety of questions about how words are stored and retrieved in the brain and how structurally complex sentences are processed."} {"text":"In many neurolinguistics experiments, subjects do not simply sit and listen to or watch stimuli, but are also instructed to perform some sort of task in response to the stimuli. 
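The subtraction logic described above (task activation minus baseline activation, voxel by voxel) can be sketched in a few lines of Python. All numbers and the threshold below are invented for illustration; real fMRI analyses involve thousands of voxels and proper statistical tests, not raw differences.

```python
# Minimal sketch of the fMRI "subtraction" paradigm.
# Activation values are hypothetical, in arbitrary units.

def subtraction_contrast(task_activation, baseline_activation):
    """Voxelwise difference: task (e.g. reading words) minus
    baseline (e.g. reading random letter strings)."""
    return [t - b for t, b in zip(task_activation, baseline_activation)]

words   = [2.1, 3.4, 1.0, 5.2]   # mean activation while reading real words
letters = [2.0, 1.1, 0.9, 1.3]   # mean activation while reading letter strings

contrast = subtraction_contrast(words, letters)
# Voxels with a large positive contrast are candidates for lexical processing.
lexical_candidates = [i for i, c in enumerate(contrast) if c > 1.0]
print(lexical_candidates)  # → [1, 3]
```

The baseline is chosen so that everything except the process of interest cancels out in the difference; that choice, not the arithmetic, carries the scientific weight of the design.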
Subjects perform these tasks while recordings (electrophysiological or hemodynamic) are being taken, usually in order to ensure that they are paying attention to the stimuli. At least one study has suggested that the task the subject does has an effect on the brain responses and the results of the experiment."} {"text":"The lexical decision task involves subjects seeing or hearing an isolated word and answering whether or not it is a real word. It is frequently used in priming studies, since subjects are known to make a lexical decision more quickly if a word has been primed by a related word (as in \"doctor\" priming \"nurse\")."} {"text":"Many studies, especially violation-based studies, have subjects make a decision about the \"acceptability\" (usually grammatical acceptability or semantic acceptability) of stimuli. Such a task is often used to \"ensure that subjects [are] reading the sentences attentively and that they [distinguish] acceptable from unacceptable sentences in the way the [experimenter] expect[s] them to do.\""} {"text":"Experimental evidence has shown that the instructions given to subjects in an acceptability judgment task can influence the subjects' brain responses to stimuli. One experiment showed that when subjects were instructed to judge the \"acceptability\" of sentences they did not show an N400 brain response (a response commonly associated with semantic processing), but that they did show that response when instructed to ignore grammatical acceptability and only judge whether or not the sentences \"made sense\"."} {"text":"Some studies use a \"probe verification\" task rather than an overt acceptability judgment; in this paradigm, each experimental sentence is followed by a \"probe word\", and subjects must answer whether or not the probe word had appeared in the sentence. 
This task, like the acceptability judgment task, ensures that subjects are reading or listening attentively, but may avoid some of the additional processing demands of acceptability judgments, and may be used no matter what type of violation is being presented in the study."} {"text":"Subjects may be instructed not to judge whether or not the sentence is grammatically acceptable or logical, but whether the proposition expressed by the sentence is true or false. This task is commonly used in psycholinguistic studies of child language."} {"text":"Another related form of experiment is the double-task experiment, in which a subject must perform an extra task (such as sequential finger-tapping or articulating nonsense syllables) while responding to linguistic stimuli; this kind of experiment has been used to investigate the use of working memory in language processing."} {"text":"Some relevant journals include the \"Journal of Neurolinguistics\" and \"Brain and Language\". Both are subscription access journals, though some abstracts may be generally available."} {"text":"Clinical linguistics is a sub-discipline of applied linguistics involved in the description, analysis, and treatment of language disabilities, especially the application of linguistic theory to the field of Speech-Language Pathology. The study of the linguistic aspect of communication disorders is of relevance to a broader understanding of language and linguistic theory."} {"text":"The International Clinical Phonetics and Linguistics Association is the unofficial organization of the field and was formed in 1991. The Journal of Clinical Linguistics and Phonetics is the major research journal of the field and was founded by Martin J. Ball."} {"text":"Practitioners of clinical linguistics typically work in Speech-Language Pathology departments or linguistics departments. 
They conduct research with the aims of improving the assessment, treatment, and analysis of disordered speech and language, and offering insights into formal linguistic theories. While the majority of clinical linguistics journals still focus only on English linguistics, there is an emerging movement toward comparative clinical linguistics across multiple languages."} {"text":"The study of communication disorders has a history that can be traced all the way back to the ancient Greeks. Modern clinical linguistics, however, largely has its roots in the twentieth century, with the term \u2018clinical linguistics\u2019 gaining wider currency in the 1970s and serving as the title of a 1981 book by the prominent linguist David Crystal. Crystal, widely credited as the \u2018father of clinical linguistics\u2019, mapped out the new discipline in great detail, and his book \"Clinical Linguistics\" went on to become one of the most influential books in the field."} {"text":"These are the main disciplines of clinical linguistics:"} {"text":"Phonetics is a branch of linguistics that studies the sounds of human speech. Clinical phonetics involves applications of phonetics to describe speech differences and disorders, including information about speech sounds and the perceptual skills used in clinical settings."} {"text":"Phonology is one of the branches of linguistics that is concerned with the systematic organization of sounds in spoken languages and signs in sign languages. Unlike clinical phonetics, clinical phonology focuses on applying phonology to the interpretation of speech sounds in a particular language and to how that language deals with phonemes."} {"text":"In linguistics, prosody is concerned with elements of speech that are not individual phonetic segments (vowels and consonants) but are properties of syllables and larger units of speech. 
Prosody is essential in communicative functions such as expressing emotions or affective states."} {"text":"Morphology is the study of words, how they are formed, and their relationship to other words in the same language. It analyses the structure of words and parts of words, such as stems, root words, prefixes, and suffixes."} {"text":"Syntax is the set of rules, principles and processes that govern the structure of sentences in a given language, usually including word order. Every language has a different set of syntactic rules, but all languages have some form of syntax."} {"text":"Semantics is the study of the interpretation of signs or symbols used by agents or communities within particular circumstances and contexts."} {"text":"Pragmatics is a subfield of linguistics and semiotics that studies the ways in which context contributes to meaning. Clinical pragmatics involves the description and classification of pragmatic impairments, their elucidation in terms of various pragmatic, linguistic, cognitive, and neurological theories, and their assessment and treatment."} {"text":"In corpus linguistics, discourse refers to the study of language expressed in corpora (samples) of \u201creal world\u201d text, the codified language of a field of enquiry, or a statement that determines the connections among language and structure and agency."} {"text":"Linguistic concepts and theories are applied to assess, diagnose and treat language disorders. These theories and concepts commonly involve psycholinguistics and sociolinguistics. Clinical linguists adopt the understanding of language and the linguistic disciplines, as mentioned above, to explain language disorders and find approaches to treat them. Crystal pointed out that applications of linguistics to clinical ends are highly relational. In his book \u2018Clinical Linguistics\u2019, Crystal analyses many commonly known disorders in terms of linguistic knowledge. 
Some examples from his book are as follows:"} {"text":"Some broad linguistics methods that are commonly used in the treatment of patients mentioned by Cummings (2017) include:"} {"text":"The past works of linguists such as Crystal were applicable to a wide range of communication disorders at every linguistic level. However, with the influx of new insights from disciplines such as genetics, cognitive neuroscience and neurobiology (among others), it is no longer sufficient to just focus on the linguistic characteristics of a particular speech impairment."} {"text":"In today's context, one of the challenges in clinical linguistics includes identifying methods to bridge the knowledge of different fields to build a more holistic understanding. The translation of general research that has been done into effective tools for clinical practice is another aspect that requires future work."} {"text":"The N400 is a component of time-locked EEG signals known as event-related potentials (ERP). It is a negative-going deflection that peaks around 400 milliseconds post-stimulus onset, although it can extend from 250-500 ms, and is typically maximal over centro-parietal electrode sites. The N400 is part of the normal brain response to words and other meaningful (or potentially meaningful) stimuli, including visual and auditory words, sign language signs, pictures, faces, environmental sounds, and smells."} {"text":"An example of an experimental task used to study the N400 is a priming paradigm. Subjects are shown a list of words in which a prime word is either associatively related to a target word (e.g. bee and honey), semantically related (e.g. sugar and honey) or a direct repetition (e.g. honey and honey). The N400 amplitude seen to the target word (honey) will be reduced upon repetition due to semantic priming. 
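A back-of-the-envelope sketch of how this priming-related amplitude reduction might be quantified: compute the mean amplitude in the 250-500 ms window for each condition and take the difference. The waveform values below are invented, and real ERP analyses average over many trials and electrodes rather than single idealized waveforms.

```python
# Hedged sketch: quantifying the N400 reduction for a primed target.
# Negative values correspond to the negative-going N400 deflection.

def mean_amplitude(waveform, times, start_ms=250, end_ms=500):
    """Average amplitude (microvolts) over a latency window."""
    window = [v for v, t in zip(waveform, times) if start_ms <= t <= end_ms]
    return sum(window) / len(window)

times = list(range(0, 700, 100))          # one sample every 100 ms
unrelated = [0, -1, -2, -6, -7, -5, -1]   # large N400: e.g. chair -> honey
related   = [0, -1, -2, -3, -3, -2, -1]   # reduced N400: e.g. bee -> honey

priming_effect = mean_amplitude(unrelated, times) - mean_amplitude(related, times)
# The unrelated condition is more negative, so the difference is negative;
# its magnitude indexes how much the prime eased processing of the target.
print(round(priming_effect, 2))  # → -3.33
```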
The amount of reduction in amplitude can be used to measure the degree of relatedness between the words."} {"text":"Another widely used task for studying the N400 is sentence reading. In this kind of study, sentences are presented to subjects centrally, one word at a time, until the sentence is completed. Alternatively, subjects could listen to a sentence as natural auditory speech. Again, subjects may be asked to respond to comprehension questions periodically throughout the experiment, although this is not necessary. Experimenters can choose to manipulate various linguistic characteristics of the sentences, including contextual constraint or the cloze probability of the sentence-final word (see below for a definition of cloze probability) to observe how these changes affect the waveform's amplitude."} {"text":"As previously mentioned, the N400 response is elicited by all meaningful, or potentially meaningful, stimuli. As such, a wide range of paradigms can be used to study it. Experiments involving the presentation of spoken words, acronyms, pictures embedded at the end of sentences, music, words related to current context or orientation, and videos of real-world events have all been used to study the N400, to name a few."} {"text":"Extensive research has been carried out to better understand what kinds of experimental manipulations do and do not affect the N400. General findings are discussed below."} {"text":"The frequency of a word's usage is known to affect the amplitude of the N400. With all else being constant, highly frequent words elicit reduced N400s relative to infrequent words. As previously mentioned, N400 amplitude is also reduced by repetition, such that a word's second presentation in context elicits a more positive response. 
These findings suggest that when a word is highly frequent or has recently appeared in context, it eases the semantic processing thought to be indexed by the N400, reducing its amplitude."} {"text":"N400 amplitude is also sensitive to a word's orthographic neighborhood size, or how many other words differ from it by only one letter (e.g. \"boot\" and \"boat\"). Words with large neighborhoods (that have many other physically similar items) elicit larger N400 amplitudes than do words with small neighborhoods. This finding also holds true for pseudowords, or pronounceable letter strings that are not real words (e.g. flom), which are not themselves meaningful but look like words. This has been taken as evidence that the N400 reflects general activation in the comprehension network, such that items that look like many words (regardless of whether they are themselves words) partially activate the representations of similar-looking words, producing a more negative N400."} {"text":"The N400 is sensitive to priming: in other words, its amplitude is reduced when a target word is preceded by a word that is semantically, morphologically, or orthographically related to it."} {"text":"Factors that do not affect N400 amplitude."} {"text":"Additionally, grammatical violations do not elicit a large N400 response. Rather, these types of violations show a large positivity from about 500-1000 ms after stimulus onset, known as the P600."} {"text":"A striking feature of the N400 is the general invariance of its peak latency. 
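Peak amplitude and peak latency can be read off an averaged waveform by locating the most negative point in the N400 search window, a sketch of which follows. The sample values are invented for illustration; real analyses first average across many trials and electrodes.

```python
# Illustrative sketch: measuring N400 peak amplitude and peak latency
# from an averaged ERP waveform (values in microvolts, invented).

def n400_peak(waveform, times, start_ms=250, end_ms=500):
    """Return (latency_ms, amplitude) of the most negative point
    in the N400 search window."""
    window = [(t, v) for t, v in zip(times, waveform) if start_ms <= t <= end_ms]
    return min(window, key=lambda pair: pair[1])

times = list(range(0, 700, 50))  # one sample every 50 ms
erp = [0.0, -0.2, -0.5, -1.0, -1.8, -2.6, -3.4, -4.1,
       -4.6, -3.9, -2.8, -1.6, -0.7, -0.2]

latency, amplitude = n400_peak(erp, times)
print(latency, amplitude)  # → 400 -4.6, a deflection peaking around 400 ms
```

Experimental manipulations typically shift the amplitude returned here while leaving the latency largely unchanged, which is the invariance noted above.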
Although many different experimental manipulations affect the amplitude of the N400, few factors (aging and disease states and language proficiency being rare examples) alter the time it takes for the N400 component to reach a peak amplitude."} {"text":"Although localization of the neural generators of an ERP signal is difficult due to the spreading of current from the source to the sensors, multiple techniques can be used to provide converging evidence about possible neural sources. Using methods such as recordings directly off the surface of the brain or from electrodes implanted in the brain, evidence from brain damaged patients, and magnetoencephalographic (MEG) recordings (which measure magnetic activity at the scalp associated with the electrical signal measured by ERPs), the left temporal lobe has been highlighted as an important source for the N400, with additional contributions from the right temporal lobe. More generally, however, activity in a wide network of brain areas is elicited in the N400 time window, suggesting a highly distributed neural source."} {"text":"There is still much debate as to exactly what kind of neural and comprehension processes the N400 indexes. Some researchers believe that the underlying processes reflected in the N400 occur after a stimulus has been recognized. For example, Brown and Hagoort (1993) believe that the N400 occurs late in the processing stream, and reflects the integration of a word's meaning into the preceding context (see Kutas & Federmeier, in press, for a discussion). However, this account has not explained why items that themselves have no meaning (e.g. pseudowords without defined associations) also elicit the N400. Other researchers believe that the N400 occurs much earlier, before words are recognized, and represents orthographic or phonological analysis."} {"text":"More recent accounts posit that the N400 represents a broader range of processes indexing access to semantic memory. 
According to this account, it represents the binding of information obtained from stimulus input with representations from short- and long-term memory (such as recent context and a word's meaning in long-term memory) that work together to create meaning from the information available in the current context (Federmeier & Laszlo, 2009; see Kutas & Federmeier, in press)."} {"text":"Another account is that the N400 reflects prediction error or surprisal. Word-based surprisal was a strong predictor of N400 amplitude in an ERP corpus. In addition, connectionist models make use of prediction error for learning and linguistic adaptation, and these models can explain several N400\/P600 results in terms of prediction error propagation for learning."} {"text":"As research in the field of electrophysiology continues to progress, these theories will likely be refined to include a complete account of just what the N400 represents."} {"text":"The notion of a dedicated language module in the human brain originated with Noam Chomsky's theory of Universal Grammar (UG). The debate on the issue of modularity in language is underpinned, in part, by different understandings of this concept. There is, however, some consensus in the literature that a module is considered committed to processing specialized representations (domain-specificity) (Bryson and Stein, 2001) in an informationally encapsulated way. A distinction should be drawn between anatomical modularity, which proposes there is one 'area' in the brain that deals with this processing, and functional modularity, which dispenses with anatomical localization whilst maintaining information encapsulation in distributed parts of the brain."} {"text":"The available evidence points towards no one anatomical area solely devoted to processing language. The Wada test, where sodium amobarbital is used to anaesthetise one hemisphere, shows that the left hemisphere appears to be crucial in language processing. 
Yet, neuroimaging does not implicate any single area but rather identifies many different areas as being involved in different aspects of language processing, and not just in the left hemisphere. Further, individual areas appear to subserve a number of different functions. Thus, the extent to which language processing occurs within an anatomical module is considered to be minimal. Nevertheless, as many have suggested, modular processing can still exist even when implemented across the brain; that is, language processing could occur within a functional module."} {"text":"No double dissociation \u2013 acquired or developmental."} {"text":"A common way to demonstrate modularity is to find a double dissociation, that is, two groups: first, people whose language is severely damaged but whose other cognitive abilities are normal and, second, people whose cognitive abilities are grossly impaired but whose language remains intact. Whilst extensive lesions in the left hemisphere perisylvian area can render persons unable to produce or perceive language (global aphasia), there is no known acquired case where language is completely intact in the face of severe non-linguistic deterioration. Thus, functional module status cannot be granted to language processing based on this evidence."} {"text":"Thus, the evidence from double dissociations does not support modularity, although the lack of a dissociation is not evidence against a module; this inference cannot be logically made."} {"text":"Indeed, if language were a module it would be informationally encapsulated. Yet, there is evidence to suggest that this is not the case. For instance, in the McGurk effect, watching lips say one phoneme whilst another is played creates the percept of a blended phoneme. Further, Tanenhaus, Spivey-Knowlton, Eberhard and Sedivy (1995) demonstrated visual information mediating syntactic processing. 
In addition, the putative language module should process only that information relevant to language (i.e., be domain-specific). Yet evidence suggests that areas purported to subserve language also mediate motor control and non-linguistic sound comprehension. Although it is possible that separate processes could be occurring but below the resolution of current imaging techniques, when all this evidence is taken together the case for information encapsulation is weakened."} {"text":"The alternative, as it is framed, is that language occurs within a more general cognitive system. The counterargument is that there appears to be something \u2018special\u2019 about human language. This is usually supported by evidence such as the fact that all attempts to teach animals human languages have met with little success (Hauser et al. 2003) and that language can be selectively damaged (a single dissociation), suggesting that proprietary computation may be required. Instead of postulating 'pure' modularity, theorists have opted for a weaker version: domain-specificity implemented in functionally specialised neural circuits and computation (in Jackendoff and Pinker\u2019s words, we must investigate language \u201cnot as a monolith but as a combination of components, some special to language, others rooted in more general capacities\u201d)."} {"text":"Multilingualism is the use of more than one language, either by an individual speaker or by a group of speakers. It is believed that multilingual speakers outnumber monolingual speakers in the world's population. More than half of all Europeans claim to speak at least one language other than their mother tongue, but many read and write in one language. Always useful to traders, multilingualism is advantageous for people wanting to participate in globalization and cultural openness. Owing to the ease of access to information facilitated by the Internet, individuals' exposure to multiple languages is becoming increasingly possible. 
People who speak several languages are also called polyglots."} {"text":"Multilingual speakers have acquired and maintained at least one language during childhood, the so-called first language (L1). The first language (sometimes also referred to as the mother tongue) is usually acquired without formal education, by mechanisms about which scholars disagree. Children acquiring two languages natively from these early years are called simultaneous bilinguals. It is common for young simultaneous bilinguals to be more proficient in one language than the other."} {"text":"People who speak more than one language have been reported to be more adept at language learning compared to monolinguals."} {"text":"Multilingualism in computing can be considered part of a continuum between internationalization and localization. Due to the status of English in computing, software development nearly always uses it (but not in the case of non-English-based programming languages). Some commercial software is initially available in an English version, and multilingual versions, if any, may be produced as alternative options based on the English original."} {"text":"The definition of multilingualism is a subject of debate in the same way as that of language fluency. At one end of a sort of linguistic continuum, one may define multilingualism as complete competence in and mastery of more than one language. The speaker would presumably have complete knowledge and control over the languages and thus sound like a native speaker. At the opposite end of the spectrum would be people who know enough phrases to get around as a tourist using the alternate language. Since 1992, Vivian Cook has argued that most multilingual speakers fall somewhere between minimal and maximal definitions. Cook calls these people \"multi-competent\"."} {"text":"In addition, there is no consistent definition of what constitutes a distinct language. 
For instance, scholars often disagree about whether Scots is a language in its own right or merely a dialect of English. Furthermore, what is considered a language can change, often for purely political reasons. One example is the creation of Serbo-Croatian as a standard language on the basis of the Eastern Herzegovinian dialect to function as an umbrella for numerous South Slavic dialects; after the breakup of Yugoslavia it was split into Serbian, Croatian, Bosnian and Montenegrin. Another example is that Ukrainian was dismissed as a Russian dialect by the Russian tsars to discourage national feelings."} {"text":"Many small independent nations' schoolchildren are today compelled to learn multiple languages because of international interactions. For example, in Finland, all children are required to learn at least three languages: the two national languages (Finnish and Swedish) and one foreign language (usually English). Many Finnish schoolchildren also study further languages, such as German or Russian."} {"text":"In some large nations with multiple languages, such as India, schoolchildren may routinely learn multiple languages based on where they reside in the country."} {"text":"In many countries, bilingualism occurs through international relations, which, with English being the global lingua franca, sometimes results in majority bilingualism even when the countries have just one domestic official language. This is occurring especially in Germanic regions such as Scandinavia, the Benelux and among Germanophones, but it is also expanding into some non-Germanic countries."} {"text":"Many myths and much prejudice have grown around the notions of bi- and multilingualism in some Western countries where monolingualism is the norm. Researchers from the UK and Poland have listed the most common misconceptions:"} {"text":"These are all harmful convictions that have long been debunked, yet persist among many parents. 
In reality, bilingual children have lower scores than their monolingual peers when they are assessed in only one of the languages they are acquiring, but have substantially greater total lingual resources."} {"text":"One view is that of the linguist Noam Chomsky in what he calls the human language acquisition device\u2014a mechanism which enables a learner to recreate correctly the rules and certain other characteristics of language used by surrounding speakers. This device, according to Chomsky, wears out over time, and is not normally available by puberty, which he uses to explain the poor results some adolescents and adults have when learning aspects of a second language (L2)."} {"text":"If language learning is a cognitive process, rather than one driven by a language acquisition device, as the school led by Stephen Krashen suggests, there would only be relative, not categorical, differences between the two types of language learning."} {"text":"Rod Ellis quotes research finding that the earlier children learn a second language, the better off they are in terms of pronunciation. European schools generally offer second-language classes for their students early on, due to the interconnectedness with neighboring countries with different languages. Most European students now study at least two foreign languages, a process strongly encouraged by the European Union."} {"text":"Based on the research in Ann Fathman's \"The Relationship between age and second language productive ability,\" there is a difference in the rate of learning of English morphology, syntax and phonology based upon differences in age, but the order of acquisition in second language learning does not change with age."} {"text":"People who learn multiple languages may also experience positive transfer \u2013 the process by which it becomes easier to learn additional languages if the grammar or vocabulary of the new language is similar to those of languages already spoken. 
On the other hand, students may also experience negative transfer \u2013 interference from languages learned at an earlier stage of development while learning a new language later in life."} {"text":"In sequential bilingualism, learners receive literacy instruction in their native language until they acquire a \"threshold\" literacy proficiency. Some researchers use age 3 as the age when a child has basic communicative competence in their first language (Kessler, 1984). Children may go through a process of sequential acquisition if they migrate at a young age to a country where a different language is spoken, or if the child exclusively speaks his or her heritage language at home until he\/she is immersed in a school setting where instruction is offered in a different language."} {"text":"In simultaneous bilingualism, the native language and the community language are simultaneously taught. The advantage is literacy in two languages as the outcome. However, the teacher must be well-versed in both languages and also in techniques for teaching a second language."} {"text":"The phases children go through during sequential acquisition are less linear than for simultaneous acquisition and can vary greatly among children. Sequential acquisition is a more complex and lengthier process, although there is no indication that non-language-delayed children end up less proficient than simultaneous bilinguals, so long as they receive adequate input in both languages."} {"text":"A coordinate model posits that equal time should be spent in separate instruction of the native language and the community language. The native language class, however, focuses on basic literacy while the community language class focuses on listening and speaking skills. 
Being bilingual does not necessarily mean that one can speak, for example, English and French."} {"text":"Research has found that the development of competence in the native language serves as a foundation of proficiency that can be transposed to the second language \u2014 the common underlying proficiency hypothesis. Cummins' work sought to overcome the perception propagated in the 1960s that learning two languages made for two competing aims. The belief was that the two languages were mutually exclusive and that learning a second required unlearning elements and dynamics of the first to accommodate the second. The evidence for this perspective relied on the fact that some errors in acquiring the second language were related to the rules of the first language. How this hypothesis holds across different types of languages, such as Romance versus non-Western languages, has yet to be researched."} {"text":"Another new development that has influenced the linguistic argument for bilingual literacy is the length of time necessary to acquire the second language. While previously children were believed to have the ability to learn a language within a year, today researchers believe that within and across academic settings, the period is nearer to five years."} {"text":"Studies during the early 1990s, however, confirmed that students who complete bilingual instruction perform better academically. These students exhibit more cognitive elasticity, including a better ability to analyze abstract visual patterns. Students who receive bidirectional bilingual instruction, where equal proficiency in both languages is required, perform at an even higher level. Examples of such programs include international and multi-national education schools."} {"text":"A multilingual person is someone who can communicate in more than one language actively (through speaking, writing, or signing). 
Multilingual people can speak any language they write in, but cannot necessarily write in any language they speak. More specifically, bilingual and trilingual people are those in comparable situations involving two or three languages, respectively. A multilingual person is generally referred to as a polyglot, a term that may also refer to people who learn multiple languages as a hobby."} {"text":"Multilingual speakers have acquired and maintained at least one language during childhood, the so-called first language (L1). The first language (sometimes also referred to as the mother tongue) is acquired without formal education, by mechanisms that are heavily disputed. Children acquiring two languages in this way are called simultaneous bilinguals. Even in the case of simultaneous bilinguals, one language usually dominates over the other."} {"text":"The reverse phenomenon, where people who know more than one language end up losing command of some or all of their additional languages, is called language attrition. It has been documented that, under certain conditions, individuals may lose their L1 proficiency completely after switching to the exclusive use of another language, and effectively \"become native\" in a language that was once secondary after the L1 undergoes total attrition."} {"text":"This is most commonly seen among immigrant communities and has been the subject of substantial academic study. The most important factor in spontaneous, total L1 loss appears to be age; in the absence of neurological dysfunction or injury, typically only young children are at risk of forgetting their native language and switching to a new one. 
Once they pass an age that seems to correlate closely with the critical period, around the age of 12, total loss of a native language is not typical, although it is still possible for speakers to experience diminished expressive capacity if the language is never practiced."} {"text":"There are differences between those who learn a language in a class environment and those who learn through total immersion, usually living in a country where the target language is widely spoken. In immersion, the lack of opportunity to communicate in the first language reduces the habit of actively translating and comparing between languages that a classroom setting encourages. In an immersive environment, the new language is almost independently learned, like the mother tongue for a child, with a direct concept-to-language mapping that can become more natural than word structures learned as a subject. Added to this, the uninterrupted, immediate and exclusive practice of the new language reinforces and deepens the attained knowledge."} {"text":"Bilinguals might have important labor market advantages over monolingual individuals, as bilingual people can carry out duties that monolinguals cannot, such as interacting with customers who only speak a minority language. A study in Switzerland has found that multilingualism is positively correlated with an individual's salary, the productivity of firms, and gross domestic product (GDP); the authors state that Switzerland's GDP is augmented by 10% by multilingualism. A study in the United States by Agirdag found that bilingualism has substantial economic benefits, as bilingual persons were found to earn around $3,000 more per year than monolinguals. 
Giuseppe Caspar Mezzofanti, for example, was an Italian priest reputed to have spoken anywhere from 30 to 72 languages. The causes of advanced language aptitude are still under research; one theory suggests that a spike in a baby's testosterone levels while in the uterus can increase brain asymmetry, which may relate to music and language ability, among other effects."} {"text":"It is important to note that terms past trilingual are rarely used. People who speak four or more languages are generally just referred to as multilingual."} {"text":"Widespread multilingualism is one form of language contact. Multilingualism was common in the past: in early times, when most people were members of small language communities, it was necessary to know two or more languages for trade or any other dealings outside one's town or village, and this holds good today in places of high linguistic diversity such as Sub-Saharan Africa and India. Linguist Ekkehard Wolff estimates that 50% of the population of Africa is multilingual."} {"text":"In multilingual societies, not all speakers need to be multilingual. Some states can have multilingual policies and recognize several official languages, such as Canada (English and French). In some states, particular languages may be associated with particular regions in the state (e.g., Canada) or with particular ethnicities (e.g., Malaysia and Singapore). When all speakers are multilingual, linguists classify the community according to the functional distribution of the languages involved:"} {"text":"N.B. the terms given above all refer to situations describing only two languages. In cases of an unspecified number of languages, the terms polyglossia, omnilingualism, and multipart-lingualism are more appropriate."} {"text":"Whenever two people meet, negotiations take place. If they want to express solidarity and sympathy, they tend to seek common features in their behavior. 
If speakers wish to express distance towards or even dislike of the person they are speaking to, the reverse is true, and differences are sought. This mechanism also extends to language, as described in Communication Accommodation Theory."} {"text":"Some multilinguals use code-switching, which involves swapping between languages. In many cases, code-switching is motivated by the wish to express loyalty to more than one cultural group, as holds for many immigrant communities in the New World. Code-switching may also function as a strategy where proficiency is lacking. Such strategies are common if the vocabulary of one of the languages is not very elaborated for certain fields, or if the speakers have not developed proficiency in certain lexical domains, as in the case of immigrant languages."} {"text":"This code-switching appears in many forms. If a speaker has a positive attitude towards both languages and towards code-switching, many switches can be found, even within the same sentence. If, however, the speaker is reluctant to use code-switching, as in the case of a lack of proficiency, he might knowingly or unknowingly try to camouflage his attempt by converting elements of one language into elements of the other language through calquing. This results in speakers using words like \"courrier noir\" (literally, mail that is black) in French, instead of the proper word for blackmail, \"chantage\"."} {"text":"With emerging markets and expanding international cooperation, business users expect to be able to use software and applications in their own language. Multilingualisation (or \"m17n\", where \"17\" stands for the 17 omitted letters) of computer systems can be considered part of a continuum between internationalization and localization:"} {"text":"Translating the user interface is usually part of the software localization process, which also includes adaptations such as units and date conversion. 
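The "m17n" abbreviation mentioned above is a numeronym, formed the same way as "i18n" (internationalization) and "l10n" (localization): the first and last letters are kept and the letters between them are replaced by their count. A minimal sketch:

```python
def numeronym(word):
    """Abbreviate a word by keeping its first and last letters and
    replacing the letters between them with their count."""
    if len(word) <= 3:
        return word  # too short to abbreviate meaningfully
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("multilingualisation"))   # m17n
print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
```

The same count works for either spelling, since "multilingualisation" and "multilingualization" both have 17 letters between the "m" and the "n".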
Many software applications are available in several languages, ranging from a handful (the most spoken languages) to dozens for the most popular applications (such as office suites, web browsers, etc.). Due to the status of English in computing, software development nearly always uses it (but see also Non-English-based programming languages), so almost all commercial software is initially available in an English version, and multilingual versions, if any, may be produced as alternative options based on the English original."} {"text":"According to Hewitt (2008), entrepreneurs in London from Poland, China or Turkey use English mainly for communication with customers, suppliers, and banks, but their native languages for work tasks and social purposes."} {"text":"Even in English-speaking countries, immigrants are still able to use their mother tongue in the workplace thanks to other immigrants from the same place. Kovacs (2004) describes this phenomenon in Australia with Finnish immigrants in the construction industry who spoke Finnish during working hours."} {"text":"But even though foreign languages may be used in the workplace, English is still a must-know working skill. Mainstream society justifies the divided job market, arguing that getting a low-paying job is the best newcomers can achieve considering their limited language skills."} {"text":"With companies going international, they are now focusing more and more on the English level of their employees. Especially in South Korea since the 1990s, companies have been using various English-language tests to evaluate job applicants, and the bar those tests set for good English is constantly rising. 
In India, it is even possible to receive training to acquire an English accent, as the number of outsourced call centers in India has soared in the past decades."} {"text":"Meanwhile, Japan ranks 53rd out of 100 countries in the 2019 EF English Proficiency Index, amid calls for this to improve in time for the 2020 Tokyo Olympics."} {"text":"Within multiracial countries such as Malaysia and Singapore, it is not unusual for one to speak two or more languages, albeit with varying degrees of fluency. Some are proficient in several Chinese dialects, given the linguistic diversity of the ethnic Chinese community in both countries."} {"text":"English is an important skill not only in multinational companies but also in the engineering industry, in the chemical, electrical and aeronautical fields. A study directed by Hill and van Zyl (2002) shows that in South Africa young black engineers used English most often for communication and documentation. However, Afrikaans and other local languages were also used to explain particular concepts to workers in order to ensure understanding and cooperation."} {"text":"In Europe, as the domestic market is generally quite restricted, international trade is the norm. Languages that are used in multiple countries include:"} {"text":"English is a commonly taught second language at schools, so it is also the most common choice for two speakers whose native languages are different. However, some languages are so close to each other that, when their speakers meet, it is generally more common to use their mother tongues rather than English. 
These language groups include:"} {"text":"In multilingual countries such as Belgium (Dutch, French, and German), Switzerland (German, French, Italian and Romansh), Luxembourg (Luxembourgish, French and German) or Spain (Spanish, Catalan, Basque and Galician), it is common to see employees mastering two or even three of those languages."} {"text":"Many minor Russian ethnic groups, such as Tatars, Bashkirs and others, are also multilingual. Moreover, since the Tatar language became a compulsory subject of study in Tatarstan, the level of knowledge of Tatar among the republic's Russian-speaking population has increased."} {"text":"Continued global diversity has led to an increasingly multilingual workforce. Europe has become an excellent model to observe this newly diversified labor culture. The expansion of the European Union with its open labor market has provided opportunities both for well-trained professionals and unskilled workers to move to new countries to seek employment. Political changes and turmoil have also led to migration and the creation of new and more complex multilingual workplaces. In most wealthy and secure countries, immigrants are found mostly in low-paid jobs but also, increasingly, in high-status positions."} {"text":"It is extremely common for music to be written in whatever the contemporary lingua franca is. 
If a song is not written in a common tongue, then it is usually written in whatever is the predominant language in the musician's country of origin, or in another widely recognized language, such as English, German, Spanish, or French."} {"text":"The bilingual song cycles \"there...\" and \"Sing, Poetry\" on the 2011 contemporary classical album \"Troika\" consist of musical settings of Russian poems with their English self-translation by Joseph Brodsky and Vladimir Nabokov, respectively."} {"text":"Songs with lyrics in multiple languages are known as macaronic verse."} {"text":"American novelists who use foreign languages (outside of their own cultural heritage) for literary effect include Cormac McCarthy, who uses untranslated Spanish and Spanglish in his fiction."} {"text":"Multilingual poetry is prevalent in US Latino literature, where code-switching and translanguaging between English, Spanish, and Spanglish are common within a single poem or throughout a book of poems. Latino poetry is also written in Portuguese and can include phrases in Nahuatl, Mayan, Huichol, Arawakan, and other indigenous languages related to the Latino experience. Contemporary multilingual poets include Giannina Braschi, Ana Castillo, Sandra Cisneros, and Guillermo G\u00f3mez-Pe\u00f1a."} {"text":"The P600 is an event-related potential (ERP), or peak in electrical brain activity measured by electroencephalography (EEG). It is a language-relevant ERP and is thought to be elicited by hearing or reading grammatical errors and other syntactic anomalies. Therefore, it is a common topic of study in neurolinguistic experiments investigating sentence processing in the human brain."} {"text":"The P600 was first reported by Lee Osterhout and Phillip Holcomb in 1992. 
It is also sometimes called the syntactic positive shift (SPS), since it has a positive polarity and is usually elicited by syntactic phenomena."} {"text":"The P600 was originally considered a \"syntactic\" ERP, as it is elicited by several types of syntactic phenomena, including ungrammatical stimuli, garden-path sentences that require reanalysis, complex sentences with a large number of thematic roles, and the processing of filler-gap dependencies (such as wh-words that appear at the beginning of a sentence in English but are actually interpreted somewhere else)."} {"text":"A P600 may be elicited by several kinds of grammatical errors in sentences, such as agreement errors, as in \"the child *throw the toy\". In addition to this sort of subject-verb disagreement, P600s have also been elicited by disagreements in tense, gender, number, and case, as well as phrase structure violations. A 2009 study has suggested that these errors elicit stronger P600s than the other syntactic stimuli that have been implicated."} {"text":"P600s are also elicited by errors in musical harmony, such as when a chord is played out of key with the rest of a musical phrase. This implies that P600s are not \"language-specific,\" but \"can be elicited in nonlinguistic (but rule-governed) sequences.\""} {"text":"Researchers have also demonstrated a so-called \"semantic P600\" in sentences that are grammatically correct but semantically anomalous, and in which syntactic reanalysis is more appealing than semantic reanalysis. For example, a P600 may be elicited in the following sentence: \"The hearty meal was devouring the kids.\" This suggests that the reader would rather interpret the sentence as containing a morphosyntactic error (saying \"devouring\" instead of \"devoured by\") than a semantic one (meals can't devour kids, but can be devoured by them). 
The interpretation of \"semantic P600s\" has attracted considerable attention and controversy in the literature."} {"text":"Articulatory phonology is a linguistic theory originally proposed in 1986 by Catherine Browman of Haskins Laboratories and Louis M. Goldstein of Yale University and Haskins. The theory identifies theoretical discrepancies between phonetics and phonology and aims to unify the two by treating them as low- and high-dimensional descriptions of a single system."} {"text":"Unification can be achieved by incorporating into a single model the idea that the physical system (identified with phonetics) constrains the underlying abstract system (identified with phonology), making the units of control at the abstract planning level the same as those at the physical level."} {"text":"The plan of an utterance is formatted as a gestural score, which provides the input to a physically based model of speech production \u2013 the task dynamic model of Elliot Saltzman. The gestural score graphs locations within the vocal tract where constriction can occur, indicating the planned or target degree of constriction. A computational model of speech production developed at Haskins Laboratories combines articulatory phonology, task dynamics, and the Haskins articulatory synthesis system developed by Philip Rubin and colleagues."} {"text":"The intentional stance is a term coined by philosopher Daniel Dennett for the level of abstraction in which we view the behavior of an entity in terms of mental properties. It is part of a theory of mental content proposed by Dennett, which provides the underpinnings of his later works on free will, consciousness, folk psychology, and evolution."} {"text":"Dennett (1971, p.\u00a087) states that he took the concept of \"intentionality\" from the work of the German philosopher Franz Brentano. 
When clarifying the distinction between mental phenomena (viz., mental activity) and physical phenomena, Brentano (p.\u00a097) argued that, in contrast with physical phenomena, the \"distinguishing characteristic of all mental phenomena\" was \"the reference to something as an object\" \u2013 a characteristic he called \"intentional inexistence\". Dennett constantly speaks of the \"aboutness\" of \"intentionality\"; for example: \"the aboutness of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is\" (Dennett, 1995, p.\u00a0240)."} {"text":"John Searle (1999, pp.\u00a085) stresses that \"competence\" in predicting\/explaining human behaviour involves being able to both recognize others as \"intentional\" beings, and interpret others' minds as having \"intentional states\" (e.g., beliefs and desires):"} {"text":"According to Dennett (1987, pp.\u00a048\u201349), folk psychology provides a systematic, \"reason-giving explanation\" for a particular action, and an account of the historical origins of that action, based on deeply embedded assumptions about the agent; namely that:"} {"text":"This approach is also consistent with the earlier work of Fritz Heider and Marianne Simmel, whose joint study revealed that, when subjects were presented with an animated display of 2-dimensional shapes, they were inclined to ascribe intentions to the shapes."} {"text":"Further, Dennett (1987, p.\u00a052) argues that, based on our fixed personal views of what all humans ought to believe, desire and do, we predict (or explain) the beliefs, desires and actions of others \"by calculating in a normative system\"; and, driven by the reasonable assumption that all humans are rational beings \u2013 who do \"have\" specific beliefs and desires and do \"act\" on the basis of those beliefs and desires in order to get what they want \u2013 these predictions\/explanations are based on four simple rules:"} {"text":"The core idea is that, when 
understanding, explaining, and\/or predicting the behavior of an object, we can choose to view it at varying levels of abstraction. The more concrete the level, the more accurate \"in principle\" our predictions are; the more abstract, the greater the computational power we gain by zooming out and skipping over the irrelevant details."} {"text":"Dennett defines three levels of abstraction, attained by adopting one of three entirely different \"stances\", or intellectual strategies: the physical stance; the design stance; and the intentional stance:"} {"text":"Even when there is no immediate error, a higher-level stance can simply fail to be useful. If we were to try to understand the thermostat at the level of the intentional stance, ascribing to it beliefs about how hot it is and a desire to keep the temperature just right, we would gain no traction over the problem as compared to staying at the design stance, but we would generate theoretical commitments that expose us to absurdities, such as the possibility of the thermostat not being in the mood to work today because the weather is so nice. Whether to take a particular stance, then, is determined by how successful that stance is when applied."} {"text":"Dennett argues that it is best to understand human behavior at the level of the intentional stance, without making any specific commitments to any deeper reality of the artifacts of folk psychology. In addition to the controversy inherent in this, there is also some dispute about the extent to which Dennett is committing to realism about mental properties. Initially, Dennett's interpretation was seen as leaning more towards instrumentalism, but over the years, as this idea has been used to support more extensive theories of consciousness, it has been taken as being more like Realism. 
His own words hint at something in the middle, as he suggests that the self is as real as a center of gravity, \"an abstract object, a theorist's fiction\", but operationally valid."} {"text":"As a way of thinking about things, Dennett's intentional stance is entirely consistent with everyday commonsense understanding; and, thus, it meets Eleanor Rosch's (1978, p.\u00a028) criterion of the \"maximum information with the least cognitive effort\". Rosch argues that, implicit within any system of categorization, are the assumptions that:"} {"text":"Also, the intentional stance meets the criteria Dennett specified (1995, pp.\u00a050\u201351) for algorithms:"} {"text":"The general notion of a three level system was widespread in the late 1970s\/early 1980s; for example, when discussing the mental representation of information from a cognitive psychology perspective, Glass and his colleagues (1979, p.\u00a024) distinguished three important aspects of representation:"} {"text":"Other significant cognitive scientists who also advocated a three level system were Allen Newell, Zenon Pylyshyn, and David Marr. The parallels between the four representations (each of which implicitly assumed that computers \"and\" human minds displayed each of the three distinct levels) are detailed in the following table:"} {"text":"The rationale behind the intentional stance is based on evolutionary theory, particularly the notion that the ability to make quick predictions of a system's behaviour based on what we think it might be thinking was an evolutionary adaptive advantage. The fact that our predictive powers are not perfect is a further result of the advantages sometimes accrued by acting contrary to expectations."} {"text":"Robbins and Jack point to a 2003 study in which participants viewed animated geometric shapes in different \"vignettes,\" some of which could be interpreted as constituting social interaction, while others suggested mechanical behavior. 
Viewing social interactions elicited activity in brain regions associated with identifying faces and biological objects (posterior temporal cortex), as well as emotion processing (right amygdala and ventromedial prefrontal cortex). Meanwhile, the mechanical interactions activated regions related to identifying objects like tools that can be manipulated (posterior temporal lobe). The authors suggest \"that these findings reveal putative 'core systems' for social and mechanical understanding that are divisible into constituent parts or elements with distinct processing and storage capabilities.\""} {"text":"Robbins and Jack argue for an additional stance beyond the three that Dennett outlined. They call it the \"phenomenal stance\": Attributing consciousness, emotions, and inner experience to a mind. The explanatory gap of the hard problem of consciousness illustrates this tendency of people to see phenomenal experience as different from physical processes. The authors suggest that psychopathy may represent a deficit in the phenomenal but not intentional stance, while people with autism appear to have intact moral sensibilities, just not mind-reading abilities. These examples suggest a double dissociation between the intentional and phenomenal stances."} {"text":"In a follow-up paper, Robbins and Jack describe four experiments about how the intentional and phenomenal stances relate to feelings of moral concern. The first two experiments showed that talking about lobsters as strongly emotional led to a much greater sentiment that lobsters deserved welfare protections than did talking about lobsters as highly intelligent. The third and fourth studies found that perceiving an agent as vulnerable led to greater attributions of phenomenal experience. 
Also, people who scored higher on the empathetic-concern subscale of the Interpersonal Reactivity Index had generally higher absolute attributions of mental experience."} {"text":"Language deprivation experiments have been claimed to have been attempted at least four times through history, isolating infants from the normal use of spoken or signed language in an attempt to discover the fundamental character of human nature or the origin of language."} {"text":"The American literary scholar Roger Shattuck called this kind of research study \"The Forbidden Experiment\" because of the exceptional deprivation of ordinary human contact it requires. Although not designed to study language, similar experiments on non-human primates (labelled the \"Pit of despair\") utilising complete social deprivation resulted in serious psychological disturbances."} {"text":"Throughout history, several rulers have claimed to have carried out this kind of experiment:"} {"text":"An early record of a study of this kind can be found in Herodotus's \"Histories\". According to Herodotus (ca. 485 \u2013 425 BC), the Egyptian pharaoh Psamtik I (664 \u2013 610 BC, i.e. 200 years before Herodotus) carried out such a study, and concluded the Phrygian race must antedate the Egyptians since the child had first spoken something similar to the Phrygian word \"bekos\", meaning \"bread\". Recent researchers suggested this was likely a willful interpretation of the child's babbling."} {"text":"A long time after Frederick II's alleged experiment, James IV of Scotland was said to have sent two children to be raised by a mute woman isolated on the island of Inchkeith, to determine if language was learned or innate. The children were reported to have spoken good Hebrew, but historians were sceptical of these claims soon after they were made."} {"text":"Mughal emperor Akbar was later said to have children raised by mute wetnurses. 
Akbar held that speech arose from hearing; thus children raised without hearing human speech would become mute."} {"text":"Critical authors have doubted the veracity of the accounts: probably neither Psamtik I nor James IV ever conducted any such studies, and probably neither did Frederick II. Akbar's study is the only one likely to be authentic, but it offered an ambiguous outcome."} {"text":"Structural priming is a form of positive priming, in that it induces a tendency to repeat or more easily process a current sentence that is similar in structure to a previously presented prime. It is a phenomenon studied in the field of psycholinguistics. J. Kathryn Bock introduced it in 1986. Several paradigms exist to elicit structural priming. Structural priming persists cross-linguistically. One specific form of structural priming is syntactic priming."} {"text":"Bock introduced a picture description task to investigate this phenomenon. In the study phase, at their own pace, participants read a list of sentences and observe a set of pictures. All these pictures describe events including an agent, patient, and theme. Half of the agents pictured are humans and the other half inanimate objects. This phase of the experiment was performed in an attempt to establish a \"recognition memory\" cover story. In the test phase, participants are asked to read a sentence expressing one of four conditions:"} {"text":"After reading a sentence, the participant repeats it. Following this repetition, the participant describes the picture."} {"text":"Consider a trial wherein the participant is reading a dative double-object construction, \"George gave the boy the ball\". The subject is then significantly more likely to describe a picture as \"X gave Y the Z\" instead of \"X gave the Z to Y\". 
This persistence in sentential form is structural priming."} {"text":"At least four theories exist to explain structural priming: syntactic repetition, thematic congruency, derivation of subjects, and error-based learning."} {"text":"In the Bock study, the sentences presented match their primes in syntactic structure. This is trivially true for any type-type prime. However, other structural priming patterns exist that complicate this explanation."} {"text":"A structure known as the unaccusative, which is unmarked morphologically in English, is capable of priming passive transitive sentences. The two constructions differ in syntax, but in both cases the subject bears a theme, or at least non-agentive, thematic role."} {"text":"Because the two constructions have this property in common, it has been suggested that such a thematic relational mapping is what allows structural priming."} {"text":"A second possible explanation for unaccusative-passive priming is the two constructions' shared characteristic of having a derived subject. For instance, the passive subject is said by some scholars of syntax to be derived via movement, or \"smuggling,\" from the same position where it is generated in the active, to wit, the complement of the transitive verb. Though the derivation of the unaccusative does not seem to be an identical process, it is nevertheless assumed to be derived."} {"text":"Another explanation is that syntactic priming is a form of implicit learning supported by a prediction error-based learning mechanism."} {"text":"The bouba\/kiki effect is a non-arbitrary mapping between speech sounds and the visual shape of objects. It was first documented by Wolfgang K\u00f6hler in 1929 using nonsense words. The effect has been observed in American university students, Tamil speakers in India, young children, and infants, and has also been shown to occur with familiar names. 
It is absent in individuals who are congenitally blind and reduced in individuals with autism. The effect has recently been investigated using fMRI."} {"text":"The bouba\/kiki effect was first observed by German American psychologist Wolfgang K\u00f6hler in 1929. In psychological experiments first conducted on the island of Tenerife (where the primary language is Spanish), K\u00f6hler showed forms similar to those shown at the right and asked participants which shape was called \"takete\" and which was called \"baluba\" (\"maluma\" in the 1947 version). Although not explicitly stated, K\u00f6hler implies that there was a strong preference to pair the jagged shape with \"takete\" and the rounded shape with \"baluba\"."} {"text":"In 2001, Vilayanur S. Ramachandran and Edward Hubbard repeated K\u00f6hler's experiment using the words \"kiki\" and \"bouba\" and asked American college undergraduates and Tamil speakers in India \"Which of these shapes is bouba and which is kiki?\" In both groups, 95% to 98% selected the curvy shape as \"bouba\" and the jagged one as \"kiki\", suggesting that the human brain somehow attaches abstract meanings to the shapes and sounds in a consistent way."} {"text":"The effect has also been shown to emerge when the words to be paired are existing first names, suggesting that some familiarity with the linguistic stimuli does not eliminate the effect. A study showed that individuals will pair names such as \"Molly\" with round silhouettes, and names such as \"Kate\" with sharp silhouettes. Moreover, individuals will associate different personality traits with either group of names (\"e.g.\", easygoingness with \"round names\"; determination with \"sharp names\"). 
This may hint at a role of abstract concepts in the effect."} {"text":"Contexts where the effect is smaller or absent."} {"text":"Other research suggests that this effect does not occur in all communities, and it appears that the effect disappears if the sounds do not make licit words in the language. The bouba\/kiki effect seems to be dependent on a long sensitive period, with high visual capacities in childhood being necessary for its typical development. In contrast to typically sighted individuals, congenitally blind individuals have been reported not to show a systematic bouba\/kiki effect for touched shapes. Autistic individuals do not show as strong a preference. Individuals without autism agree with the standard result 88% of the time, while individuals with autism agree only 56% of the time."} {"text":"In 2019, researchers published the first study using fMRI to explore the bouba\/kiki effect. They found that prefrontal activation is stronger to mismatching (bouba with spiky shape) than to matching (bouba with round shape) stimuli. Interestingly, they also found that sound\u2013shape matching influences activations in the auditory and visual cortices, suggesting an effect of matching at an early stage in sensory processing."} {"text":"The experiment was reproduced in episode 3 of season 4 of the television show \"Brain Games\" with the names \"takete\" and \"maluma.\" One participant expressed her association of takete and maluma with their respective shapes by comparing them to the rigid movement of a toy soldier and the swaying of the hula, respectively. A similar experiment included the association of the name \"lomba\" with a fictitious brand of milk chocolate and \"kitiki\" with a fictitious brand of dark chocolate."} {"text":"Language production is the production of spoken or written language. In psycholinguistics, it describes all of the stages between having a concept to express and translating that concept into linguistic form. 
These stages have been described in two types of processing models: the lexical access models and the serial models. Through these models, psycholinguists can look into how speech is produced in different ways, such as when the speaker is bilingual. Psycholinguists learn more about these models and different kinds of speech by using language production research methods that include collecting speech errors and elicited production tasks."} {"text":"The basic loop occurring in the creation of language consists of the following stages:"} {"text":"According to the lexical access model (see section below), two different stages of cognition are employed; thus, this concept is known as the two-stage theory of lexical access. The first stage, lexical selection, provides information about lexical items required to construct the functional level representation. These items are retrieved according to their specific semantic and syntactic properties, but phonological forms are not yet made available at this stage. The second stage, retrieval of wordforms, provides information required for building the positional level representation."} {"text":"A serial model of language production divides the process into several stages. For example, there may be one stage for determining pronunciation and a stage for determining lexical content. The serial model does not allow overlap of these stages, so they may only be completed one at a time."} {"text":"This model states that the sentence is made by a sequence of processes generating differing levels of representations. For instance, the functional level representation is built from a preverbal representation, which is essentially what the speaker seeks to express. This level is responsible for encoding the meanings of lexical items and the way that grammar forms relationships between them. 
Next, the positional level representation is built, which functions to encode the phonological forms of words and the order they are found in sentence structures. Lexical access, according to this model, is a process that encompasses two serially ordered and independent stages."} {"text":"Fluency can be defined in part by prosody, which is shown graphically by a smooth intonation contour, and by a number of other elements: control of speech rate, relative timing of stressed and unstressed syllables, changes in amplitude, and changes in fundamental frequency. In other words, fluency can be described as whether someone speaks smoothly and easily. This term is used in speech-language pathology when describing disorders such as stuttering or other disfluencies."} {"text":"Whether a speaker is fluent in one language or several, the process for producing language remains the same. However, bilinguals speaking two languages within a conversation may have access to both languages at the same time. Three of the most commonly discussed models for multilingual language access are the Bilingual Interactive Activation Plus model, the Revised Hierarchical Model, and the Language Mode model:"} {"text":"Speakers fluent in multiple languages may inhibit access to one of their languages, but this suppression can only be done once the speaker has reached a certain level of proficiency in that language. A speaker can decide to inhibit a language based on non-linguistic cues in the conversation, such as a speaker of both English and French inhibiting their French when conversing with people who only speak English. When especially proficient multilingual speakers communicate, they can participate in code-switching. 
Code-switching has been shown to indicate bilingual proficiency in a speaker, though it had previously been seen as a sign of poor language ability."} {"text":"There are three main types of research into language production: speech error collection, picture-naming, and elicited production. Speech error collection analyzes the errors made in naturally produced speech. On the other hand, elicited production focuses on elicited speech and is conducted in a lab. Also conducted in a lab, picture-naming focuses on reaction-time data from picture-naming latencies. Although originally disparate, these three methodologies generally examine the same underlying processes of speech production."} {"text":"Speech errors have been found to be common in naturally produced speech. Analysis of speech errors has found that they are not random but systematic, falling into several categories. These speech errors can demonstrate parts of the language processing system, and what happens when that system doesn't work as it should. Language production is fast: speakers produce a little more than 2 words per second, so although errors occur only about once per 1,000 words, that still amounts to roughly one error every 7 minutes of continuous speech. Some examples of the speech errors that psycholinguists collect are:"} {"text":"Picture-naming tasks ask participants to look at pictures and name them in a certain way. By looking at the time course for the responses in these tasks, psycholinguists can learn more about the planning involved in specific phrases. These types of tasks can be helpful for investigating cross-linguistic language production and planning processes."} {"text":"Elicited production tasks ask participants to respond to questions or prompts in a particular way. One of the more common types of elicited production tasks is the sentence completion task. 
These tasks give the participants the beginning of a target sentence, which the participants are then asked to complete. Analyzing these completions can allow psycholinguists to investigate errors that might be difficult to elicit otherwise."} {"text":"The psychology of reasoning is the study of how people reason, often broadly defined as the process of drawing conclusions to inform how people solve problems and make decisions. It overlaps with psychology, philosophy, linguistics, cognitive science, artificial intelligence, logic, and probability theory."} {"text":"Psychological experiments on how humans and other animals reason have been carried out for over 100 years. An enduring question is whether or not people have the capacity to be rational. Current research in this area addresses various questions about reasoning, rationality, judgments, intelligence, relationships between emotion and reasoning, and development."} {"text":"One of the most obvious areas in which people employ reasoning is with sentences in everyday language. Most experimentation on deduction has been carried out on hypothetical thought, in particular, examining how people reason about conditionals, e.g., \"If A then B\". Participants in experiments readily make the modus ponens inference: given the indicative conditional \"If A then B\" and the premise \"A\", they conclude \"B\". However, given the indicative conditional and the minor premise for the modus tollens inference, \"not-B\", about half of the participants in experiments conclude \"not-A\" and the remainder conclude that nothing follows."} {"text":"Other investigations of propositional inference examine how people think about disjunctive alternatives, e.g., \"A or else B\", and how they reason about negation, e.g., \"It is not the case that A and B\". Many experiments have been carried out to examine how people make relational inferences, including comparisons, e.g., \"A is better than B\". 
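The logical status of the conditional inference patterns discussed above can be verified mechanically by enumerating truth assignments; a minimal illustrative sketch (not part of the experimental literature summarized here, only a check of the underlying logic):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff the conclusion holds in every
    truth assignment of A and B that satisfies all the premises."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

conditional = lambda a, b: (not a) or b  # material reading of "If A then B"

# Modus ponens: If A then B; A; therefore B  -- valid
print(valid([conditional, lambda a, b: a], lambda a, b: b))          # True

# Modus tollens: If A then B; not-B; therefore not-A  -- also valid
print(valid([conditional, lambda a, b: not b], lambda a, b: not a))  # True

# Affirming the consequent: If A then B; B; therefore A  -- invalid
print(valid([conditional, lambda a, b: b], lambda a, b: a))          # False
```

On this (material-conditional) reading, modus tollens is just as valid as modus ponens, which is what makes the experimental finding that only about half of participants draw it noteworthy.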
Such investigations also concern spatial inferences, e.g. \"A is in front of B\" and temporal inferences, e.g. \"A occurs before B\". Other common tasks include categorical syllogisms, used to examine how people reason about quantifiers such as \"All\" or \"Some\", e.g., \"Some of the A are not B\"."} {"text":"There are several alternative theories of the cognitive processes that human reasoning is based on. One view is that people rely on a mental logic consisting of formal (abstract or syntactic) inference rules similar to those developed by logicians in the propositional calculus. Another view is that people rely on domain-specific or content-sensitive rules of inference. A third view is that people rely on mental models, that is, mental representations that correspond to imagined possibilities. A fourth view is that people compute probabilities."} {"text":"One controversial theoretical issue is the identification of an appropriate competence model, or a standard against which to compare human reasoning. Initially classical logic was chosen as a competence model. Subsequently, some researchers opted for non-monotonic logic and Bayesian probability. Research on mental models and reasoning has led to the suggestion that people are rational in principle but err in practice. Connectionist approaches towards reasoning have also been proposed."} {"text":"It is an active question in psychology how, why, and when the ability to reason develops in infants. Jean Piaget's theory of cognitive development describes a sequence of stages in the development of reasoning from infancy to adulthood. According to the neo-Piagetian theories of cognitive development, changes in reasoning with development come from increasing working memory capacity, increasing speed of processing, and enhanced executive functions and control. 
Increasing self-awareness is also an important factor."} {"text":"In their book \"The Enigma of Reason\", the cognitive scientists Hugo Mercier and Dan Sperber put forward an \"argumentative\" theory of reasoning, claiming that humans evolved to reason primarily to justify our beliefs and actions and to convince others in a social environment. Key evidence for their theory includes the errors in reasoning that solitary individuals are prone to when their arguments are not criticized, such as logical fallacies, and how groups become much better at performing cognitive reasoning tasks when they communicate with one another and can evaluate each other's arguments. Sperber and Mercier offer one attempt to resolve the apparent paradox that the confirmation bias is so strong despite the function of reasoning naively appearing to be to come to veridical conclusions about the world."} {"text":"Inductive reasoning makes broad generalizations from specific cases or observations. In this process of reasoning, general assertions are made based on past specific pieces of evidence. This kind of reasoning allows the conclusion to be false even if the observations it is based on are true. For example, if one observes a college athlete, one makes predictions and assumptions about other college athletes based on that one observation. Scientists use inductive reasoning to create theories and hypotheses."} {"text":"The syllogism is a form of deductive reasoning in which a logical conclusion is drawn from two premises. With this reasoning, one premise could be \u201cEvery A is B\u201d and another could be \u201cThis C is A\u201d. Those two premises could then lead to the conclusion that \u201cThis C is B\u201d. These types of syllogisms are used to test deductive reasoning. A Syllogistic Reasoning Task was created from a study performed by Kinga Morsanyi and Simon Handley that examined the intuitive contributions to reasoning. 
They used this test to assess why \u201csyllogistic reasoning performance is based on an interplay between a conscious and effortful evaluation of logicality and an intuitive appreciation of the believability of the conclusions\u201d."} {"text":"Another form of reasoning is called abductive reasoning. This type is based on creating and testing hypotheses using the best information available. Abductive reasoning produces the kind of daily decision-making that works best with the information present, which often is incomplete. This could involve making educated guesses from observed unexplained phenomena. This type of reasoning can be seen in the world when doctors make decisions about diagnoses from a set of results or when jurors use the relevant evidence to make decisions about a case."} {"text":"Judgment and reasoning involve thinking through the options, making a judgment or conclusion, and finally making a decision. Making judgments involves heuristics, or efficient strategies that usually lead you to the right answers. The most common heuristics used are attribute substitution, the availability heuristic, the representativeness heuristic and the anchoring heuristic \u2013 these all aid in quick reasoning and work in most situations. Heuristics allow for errors, a price paid to gain efficiency."} {"text":"Other errors in judgment, which in turn affect reasoning, include errors in assessing covariation \u2013 a relationship between two variables such that the presence and magnitude of one can predict the presence and magnitude of the other. One source of error in judging covariation is confirmation bias, or the tendency to be more responsive to evidence that confirms your beliefs. Assessing covariation can also be pulled off track by neglecting base-rate information \u2013 how frequently something occurs in general. 
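Why neglecting base rates matters can be made concrete with a standard Bayes' rule calculation (the numbers below are hypothetical, chosen only for illustration): when the base rate is low, even a fairly accurate test yields mostly false positives.

```python
# Hypothetical numbers: a condition with a 1% base rate,
# a test with 90% sensitivity and a 9% false-positive rate.
base_rate = 0.01
p_pos_given_present = 0.90  # P(positive | condition present)
p_pos_given_absent = 0.09   # P(positive | condition absent)

# Total probability of a positive result
p_positive = (p_pos_given_present * base_rate
              + p_pos_given_absent * (1 - base_rate))

# Bayes' rule: P(condition present | positive result)
posterior = p_pos_given_present * base_rate / p_positive

print(round(posterior, 3))  # 0.092
```

Despite the test's apparent accuracy, only about 9% of positive results reflect the condition; ignoring the 1% base rate and answering "90%" is the classic base-rate error.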
However, people often ignore base rates and tend to use other information presented."} {"text":"There are more sophisticated judgment strategies that result in fewer errors. People often reason based on availability, but sometimes they look for other, more accurate, information to make judgments. This suggests there are two modes of thinking, known as the dual-process model. The first, System I, is fast, automatic, and uses heuristics \u2013 more of intuition. The second, System II, is slower, effortful, and more likely to be correct \u2013 more reasoning."} {"text":"The inferences people draw are related to factors such as linguistic pragmatics and emotion."} {"text":"Decision making is often influenced by the emotion of regret and by the presence of risk. When people are presented with options, they tend to select the one that they think they will regret the least. In decisions that involve a large amount of risk, people tend to ask themselves how much dread they would experience were a worst-case scenario to occur, e.g. a nuclear accident, and then use that dread as an indicator of the level of risk."} {"text":"Antonio Damasio suggests that somatic markers, certain memories that can cause a strong bodily reaction, act as a way to guide decision making as well. For example, when you remember a scary movie, you may once again become tense and your palms may begin to sweat. Damasio argues that when making a decision we rely on our \u201cgut feelings\u201d to assess the various options, leading us toward choices that feel positive and away from those that feel negative. He also argues that the orbitofrontal cortex \u2013 located at the base of the frontal lobe, just above the eyes \u2013 is crucial to the use of somatic markers, because it is the part of the brain that allows you to interpret emotion."} {"text":"Note also that when emotion shapes decisions, the influence is usually based on predictions of the future. 
When people ask themselves how they would react, they are making inferences about the future. Researchers suggest affective forecasting, the ability to predict your own emotions, is poor because people tend to overestimate how much they will regret their errors."} {"text":"Studying reasoning neuroscientifically involves determining the neural correlates of reasoning, often investigated using event-related potentials and functional magnetic resonance imaging."} {"text":"Psycholinguistics or psychology of language is the study of the interrelation between linguistic factors and psychological aspects. The discipline is mainly concerned with the mechanisms by which language is processed and represented in the mind and brain; that is, the psychological and neurobiological factors that enable humans to acquire, use, comprehend, and produce language."} {"text":"Psycholinguistics is concerned with the cognitive faculties and processes that are necessary to produce the grammatical constructions of language. It is also concerned with the perception of these constructions by a listener."} {"text":"Initial forays into psycholinguistics were in the philosophical and educational fields, due mainly to the location of both fields in departments other than the applied sciences (e.g., the lack of cohesive data on how the human brain functioned). Modern research makes use of biology, neuroscience, cognitive science, linguistics, and information science to study how the mind-brain processes language, drawing less on the social sciences, human development, communication theory, and infant development, among other fields."} {"text":"There are several subdisciplines with non-invasive techniques for studying the neurological workings of the brain. 
For example: neurolinguistics has become a field in its own right; and developmental psycholinguistics, as a branch of psycholinguistics, concerns itself with a child's ability to learn language."} {"text":"Psycholinguistics is an interdisciplinary field that consists of researchers from a variety of different backgrounds, including psychology, cognitive science, linguistics, speech and language pathology, and discourse analysis. Psycholinguists study how people acquire and use language, according to the following main areas:"} {"text":"A researcher interested in language comprehension may study word recognition during reading, to examine the processes involved in the extraction of orthographic, morphological, phonological, and semantic information from patterns in printed text. A researcher interested in language production might study how words are prepared to be spoken starting from the conceptual or semantic level (this concerns connotation, and possibly can be examined through the conceptual framework concerned with the semantic differential). Developmental psycholinguists study infants' and children's ability to learn and process language."} {"text":"Psycholinguists further divide their studies according to the different components that make up human language."} {"text":"In seeking to understand the properties of language acquisition, psycholinguistics has roots in debates regarding innate versus acquired behaviors (both in biology and psychology). For some time, the concept of an innate trait was not recognized in studying the psychology of the individual. However, as innateness was redefined over time, behaviors considered innate could once again be analyzed as behaviors that interacted with the psychological aspect of an individual. 
After the diminished popularity of the behaviorist model, ethology reemerged as a leading train of thought within psychology, allowing the subject of language, an innate human behavior, to be examined once more within the scope of psychology."} {"text":"The theoretical framework for psycholinguistics began to be developed before the end of the 19th century as the \"Psychology of Language\". The science of psycholinguistics, so called, began in 1936 when Jacob Kantor, a prominent psychologist at the time, used the term \"psycholinguistic\" as a description within his book \"An Objective Psychology of Grammar\"."} {"text":"However, the term \"psycholinguistics\" only came into widespread usage in 1946 when Kantor's student Nicholas Pronko published an article entitled \"Psycholinguistics: A Review\". Pronko's desire was to unify myriad related theoretical approaches under a single name. Psycholinguistics was used for the first time to talk about an interdisciplinary science \"that could be coherent\", as well as being the title of \"Psycholinguistics: A Survey of Theory and Research Problems\", a 1954 book by Charles E. Osgood and Thomas A. Sebeok."} {"text":"Though there is still much debate, there are two primary theories on childhood language acquisition:"} {"text":"The fields of linguistics and psycholinguistics have since been defined by pro-and-con reactions to Chomsky. The view in favor of Chomsky still holds that the human ability to use language (specifically the ability to use recursion) is qualitatively different from any sort of animal ability. This ability may have resulted from a favorable mutation or from an adaptation of skills that originally evolved for other purposes."} {"text":"The structures and uses of language are related to the formation of ontological insights. 
Some see this system as \"structured cooperation between language-users\" who use conceptual and semantic deference in order to exchange meaning and knowledge, as well as give meaning to language, thereby examining and describing \"semantic processes bound by a 'stopping' constraint which are not cases of ordinary deferring.\" Deferring is normally done for a reason, and a rational person is always disposed to defer if there is good reason."} {"text":"The theory of the \"semantic differential\" supposes universal distinctions, such as:"} {"text":"One question in the realm of language comprehension is how people understand sentences as they read (i.e., sentence processing). Experimental research has spawned several theories about the architecture and mechanisms of sentence comprehension. These theories are typically concerned with the types of information contained in the sentence that the reader can use to build meaning, and with the point in reading at which that information becomes available to the reader. Issues such as \"modular\" versus \"interactive\" processing have been theoretical divides in the field."} {"text":"In contrast to the modular view, an interactive theory of sentence processing, such as a constraint-based lexical approach, assumes that all available information contained within a sentence can be processed at any time. Under an interactive view, the semantics of a sentence (such as plausibility) can come into play early on to help determine the structure of a sentence. Hence, in the sentence above, the reader would be able to make use of plausibility information in order to assume that \"the evidence\" is being examined instead of doing the examining. 
There are data to support both modular and interactive views; which view is correct is debatable."} {"text":"When reading, saccades can cause the mind to skip over words because it does not see them as important to the sentence, and the mind completely omits them from the sentence or supplies the wrong word in their stead. This can be seen in \"Paris in the the Spring\". This is a common psychological test, where the mind will often skip the second \"the\", especially when there is a line break in between the two."} {"text":"Language production refers to how people produce language, either in written or spoken form, in a way that conveys meanings comprehensible to others. One of the most effective ways to explain the way people represent meanings using rule-governed languages is by observing and analyzing instances of speech errors. These include speech disfluencies such as false starts, repetition, reformulation, and constant pauses in between words or sentences, as well as slips of the tongue such as blendings, substitutions, exchanges (e.g. Spoonerisms), and various pronunciation errors."} {"text":"These speech errors have significant implications for understanding how language is produced, in that they reflect that:"} {"text":"It is useful to differentiate between three separate phases of language production:"} {"text":"Psycholinguistic research has largely concerned itself with the study of formulation because the conceptualization phase remains largely elusive and mysterious."} {"text":"Many of the experiments conducted in psycholinguistics, especially early on, are behavioral in nature. In these types of studies, subjects are presented with linguistic stimuli and asked to respond. For example, they may be asked to make a judgment about a word (lexical decision), reproduce the stimulus, or say a visually presented word aloud. 
Reaction times to respond to the stimuli (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance in behavioral tasks. Such experiments often take advantage of priming effects, whereby a \"priming\" word or phrase appearing in the experiment can speed up the lexical decision for a related \"target\" word later."} {"text":"As an example of how behavioral methods can be used in psycholinguistics research, Fischler (1977) investigated word encoding, using a lexical-decision task. He asked participants to make decisions about whether two strings of letters were English words. Sometimes the strings would be actual English words requiring a \"yes\" response, and other times they would be non-words requiring a \"no\" response. A subset of the licit words were related semantically (e.g., cat\u2013dog) while others were unrelated (e.g., bread\u2013stem). Fischler found that related word pairs were responded to faster, compared to unrelated word pairs, which suggests that semantic relatedness can facilitate word encoding."} {"text":"Recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978), the importance of understanding eye-movements during reading was established. Later, Tanenhaus et al. (1995) used a visual-world paradigm to study the cognitive processes related to spoken language. Assuming that eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is listening to spoken language."} {"text":"The analysis of systematic errors in speech, as well as the writing and typing of language, can provide evidence of the process that has generated it. Errors of speech, in particular, grant insight into how the mind produces language while a speaker is mid-utterance. 
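A Fischler-style comparison of lexical-decision reaction times, as described above, can be sketched with toy data. The reaction times below are invented for illustration and are not Fischler's (1977) actual results:

```python
from statistics import mean

# Invented reaction times (ms) for correct "yes" responses in a
# lexical-decision task; illustrative numbers only.
related_rt = [510, 525, 498, 540, 515]    # semantically related pairs, e.g. cat-dog
unrelated_rt = [560, 575, 552, 590, 568]  # unrelated pairs, e.g. bread-stem

# The priming effect is the reaction-time advantage for related pairs.
priming_effect = mean(unrelated_rt) - mean(related_rt)
print(f"semantic priming effect: {priming_effect:.1f} ms")
```

A positive difference, as here, is the pattern Fischler reported: related pairs are verified faster, suggesting semantic relatedness facilitates word encoding.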
Speech errors tend to occur in the lexical, morpheme, and phoneme encoding steps of language production, as seen by the ways errors can manifest themselves."} {"text":"The types of speech errors, with some examples, include:"} {"text":"Speech errors will usually occur in the stages that involve lexical, morpheme, or phoneme encoding, and usually not in the first step of semantic encoding. This can be attributed to the fact that the speaker is still conjuring the idea of what to say, and unless he changes his mind, he cannot be mistaken about what he wanted to say."} {"text":"Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to discover how language functions in the brain. For example, severing the corpus callosum (the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain surgery necessary, language researchers had an opportunity to pursue their research."} {"text":"Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET); functional magnetic resonance imaging (fMRI); event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography (MEG); and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per pixel, and ERP has millisecond accuracy). Each methodology has advantages and disadvantages for the study of psycholinguistics."} {"text":"Computational modelling, such as the DRC model of reading and word recognition proposed by Max Coltheart and colleagues, is another methodology, which refers to the practice of setting up cognitive models in the form of executable computer programs. 
Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex that discursive analysis is unreliable. Other examples of computational modelling are McClelland and Elman's TRACE model of speech perception and Franklin Chang's Dual-Path model of sentence production."} {"text":"Psycholinguistics is concerned with the nature of the processes that the brain undergoes in order to comprehend and produce language. For example, the cohort model seeks to describe how words are retrieved from the mental lexicon when an individual hears or sees linguistic input. Using new non-invasive imaging techniques, recent research seeks to shed light on the areas of the brain involved in language processing."} {"text":"Another unanswered question in psycholinguistics is whether the human ability to use syntax originates from innate mental structures or social interaction, and whether or not some animals can be taught the syntax of human language."} {"text":"Two other major subfields of psycholinguistics investigate first language acquisition, the process by which infants acquire language, and second language acquisition. It is much more difficult for adults to acquire second languages than it is for infants to learn their first language (infants are able to learn more than one native language easily). Thus, sensitive periods may exist during which language can be learned readily. A great deal of research in psycholinguistics focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one knows, the easier it is to learn more."} {"text":"The field of aphasiology deals with language deficits that arise because of brain damage. 
Studies in aphasiology can offer both advances in therapy for individuals suffering from aphasia and further insight into how the brain processes language."} {"text":"A short list of books that deal with psycholinguistics, written in language accessible to the non-expert, includes:"} {"text":"International Association for the Study of Child Language"} {"text":"The International Association for the Study of Child Language (IASCL) is an academic society for first language acquisition researchers."} {"text":"IASCL was founded in 1970 by a group of prominent language acquisition researchers to promote international and interdisciplinary cooperation in the study of child language. Its major activity is the sponsorship of the triennial International Congress for the Study of Child Language, for which it publishes proceedings. It also publishes the \"Child Language Bulletin\" approximately twice a year."} {"text":"A mora (plural \"morae\" or \"moras\"; often symbolized \u03bc) is a unit in phonology that describes syllable weight, which in some languages determines stress or timing. The term comes from the Latin word for \"linger, delay\", which was also used to translate the Greek word \"chronos\" (time) in its metrical sense."} {"text":"Monomoraic syllables have one mora, bimoraic syllables have two, and trimoraic syllables have three, although this last type is relatively rare."} {"text":"In general, morae are formed as follows:"} {"text":"In general, monomoraic syllables are called \"light syllables\", bimoraic syllables are called \"heavy syllables\", and trimoraic syllables (in languages that have them) are called \"superheavy syllables\". 
Some languages, such as Old English and present-day English, can have syllables with up to four morae."} {"text":"A prosodic stress system in which moraically heavy syllables are assigned stress is said to have the property of quantity sensitivity."} {"text":"For the purpose of determining accent in Ancient Greek, short vowels have one mora, and long vowels and diphthongs have two morae. Thus long \"\u0113\" (eta) can be understood as a sequence of two short vowels: \"ee\"."} {"text":"Ancient Greek pitch accent is placed on only one mora in a word. An acute accent represents high pitch on the only mora of a short vowel or the last mora of a long vowel (\"\u00e9\", \"e\u00e9\"). A circumflex represents high pitch on the first mora of a long vowel (\"\u00e9e\")."} {"text":"In Old English, short diphthongs and monophthongs were monomoraic, long diphthongs and monophthongs were bimoraic, consonants ending in a syllable were each a mora, and geminate consonants added a mora to the preceding syllable. In Modern English, the rules are similar, except that all diphthongs are bimoraic. In English, and probably also in Old English, syllables cannot have more than four morae, with loss of sounds occurring if a syllable would otherwise have more than four. From the Old English period through to today, all content words must be at least two morae long."} {"text":"Gilbertese, an Austronesian language spoken mainly in Kiribati, is a trimoraic language. The typical foot in Gilbertese contains three morae. These trimoraic constituents are units of stress in Gilbertese. These \"ternary metrical constituents of the sort found in Gilbertese are quite rare cross-linguistically, and as far as we know, Gilbertese is the only language in the world reported to have a ternary constraint on prosodic word size.\""} {"text":"In Hawaiian, both syllables and morae are important. 
Stress falls on the penultimate mora, though in words long enough to have two stresses, only the final stress is predictable. However, although a diphthong, such as \"oi,\" consists of two morae, stress may fall only on the first, a restriction not found with other vowel sequences such as \"io.\" That is, there is a distinction between \"oi,\" a bimoraic syllable, and \"io,\" which is two syllables."} {"text":"Most dialects of Japanese, including the standard, use morae, known in Japanese as \"haku\" or \"m\u014dra\", rather than syllables, as the basis of the sound system. Writing Japanese in kana (hiragana and katakana) is said by those scholars who use the term \"mora\" to demonstrate a moraic system of writing. For example, in the two-syllable word \"m\u014dra\", the \"\u014d\" is a long vowel and counts as two morae. The word is written in three kana symbols, corresponding here to \"mo-o-ra\", each containing one mora. Therefore, scholars argue that the 5\/7\/5 pattern of the \"haiku\" in modern Japanese is of morae rather than syllables."} {"text":"The Japanese syllable-final \"n\" is also said to be moraic, as is the first part of a geminate consonant. For example, the Japanese name for \"Japan\" has two different pronunciations, one with three morae (\"Nihon\") and one with four (\"Nippon\"). In the hiragana spelling, the three morae of \"Ni-ho-n\" are represented by three characters, and the four morae of \"Ni-p-po-n\" need four characters to be written out."} {"text":"Similarly, the names \"T\u014dky\u014d\" (\"To-u-kyo-u\"), \"\u014csaka\" (\"O-o-sa-ka\"), and \"Nagasaki\" (\"Na-ga-sa-ki\") all have four morae, even though, on this analysis, they can be said to have two, three and four syllables, respectively. 
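The Japanese mora counts above follow a small set of rules: a vowel is one mora (so a long vowel is two), syllable-final "n" is one, and the first half of a geminate consonant is one. A minimal sketch of a counter over Hepburn-style romanization with doubled vowels for long vowels (the function name and the simplified rules are illustrative, not a complete treatment of Japanese phonology):

```python
VOWELS = set("aeiou")

def count_morae(word):
    """Count morae in a Hepburn-style romanized Japanese word.

    Simplified rules: a bare vowel is one mora (a doubled vowel, i.e.
    a long vowel, therefore counts twice); a consonant cluster plus a
    vowel is one mora; syllable-final "n" is one mora; and the first
    half of a geminate consonant is one mora.
    """
    word = word.lower()
    i, morae = 0, 0
    while i < len(word):
        c = word[i]
        if c in VOWELS:
            morae += 1  # bare vowel (second half of a long vowel counts again)
            i += 1
        elif c == "n" and (i + 1 == len(word) or word[i + 1] not in VOWELS | {"y"}):
            morae += 1  # moraic nasal: syllable-final "n"
            i += 1
        elif i + 1 < len(word) and word[i + 1] == c:
            morae += 1  # first half of a geminate consonant (the "p" in "nippon")
            i += 1
        else:
            # a consonant (or cluster such as "ky", "sh") plus its vowel: one mora
            while i < len(word) and word[i] not in VOWELS:
                i += 1
            morae += 1
            i += 1
    return morae

print(count_morae("nihon"), count_morae("nippon"))    # -> 3 4
print(count_morae("toukyou"), count_morae("oosaka"))  # -> 4 4
```

The two pronunciations of the name for Japan come out as three and four morae, matching the kana character counts discussed above.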
The number of morae in a word is not always equal to the number of graphemes when written in kana; for example, even though it has four morae, the Japanese name for \"T\u014dky\u014d\" is written with five graphemes, because one of these graphemes represents a \"y\u014don\", a feature of the Japanese writing system that indicates that the preceding consonant is palatalized."} {"text":"In Luganda, a short vowel constitutes one mora while a long vowel constitutes two morae. A simple consonant has no morae, and a doubled or prenasalised consonant has one. No syllable may contain more than three morae. The tone system in Luganda is based on morae. See Luganda tones."} {"text":"In Sanskrit, the mora is expressed as the \"m\u0101tr\u0101\". For example, the short vowel \"a\" (pronounced like a schwa) is assigned a value of one \"m\u0101tr\u0101\", the long vowel \"\u0101\" is assigned a value of two \"m\u0101tr\u0101\"s, and the compound vowel (diphthong) \"ai\" (which has either two simple short vowels, \"a\"+\"i\", or one long and one short vowel, \"\u0101\"+\"i\") is assigned a value of two \"m\u0101tr\u0101\"s. In addition, there is \"plutham\" (trimoraic) and \"d\u012brgha plutham\" (\"long \"plutham\"\" = quadrimoraic)."} {"text":"Sanskrit prosody and metrics have a deep history of taking into account moraic weight, as it were, rather than straight syllables, divided into \"laghu\" (\"light\") and \"d\u012brgha\"\/\"guru\" (\"heavy\") feet based on how many morae can be isolated in each word. Thus, for example, the word \"kart\u1e5b\", meaning \"agent\" or \"doer\", does not contain simply two syllabic units, but contains rather, in order, a \"d\u012brgha\"\/\"guru\" foot and a \"laghu\" foot. 
The reason is that the conjoined consonants \"rt\" render the normally light \"ka\" syllable heavy."} {"text":"Semantic satiation is a psychological phenomenon in which repetition causes a word or phrase to temporarily lose meaning for the listener, who then perceives the speech as repeated meaningless sounds. Extended inspection or analysis (staring at the word or phrase for a lengthy period of time) in place of repetition also produces the same effect."} {"text":"Leon Jakobovits James coined the phrase \"semantic satiation\" in his 1962 doctoral dissertation at McGill University. It was demonstrated as a stable phenomenon that is possibly similar to a cognitive form of reactive inhibition. Prior to that, the expression \"verbal satiation\" had been used along with terms that express the idea of mental fatigue. The dissertation listed many of the names others had used for the phenomenon:"} {"text":"James presented several experiments that demonstrated the operation of the semantic satiation effect in various cognitive tasks such as rating words and figures that are presented repeatedly in a short time, verbally repeating words then grouping them into concepts, adding numbers after repeating them out loud, and bilingual translations of words repeated in one of the two languages. In each case, the subjects would repeat a word or number for several seconds, then perform the cognitive task using that word. It was demonstrated that repeating a word prior to its use in a task made the task somewhat more difficult."} {"text":"An explanation for the phenomenon is that, in the cortex, verbal repetition repeatedly arouses a specific neural pattern that corresponds to the meaning of the word. Rapid repetition makes both the peripheral sensorimotor activity and central neural activation fire repeatedly. This is known to cause reactive inhibition, hence a reduction in the intensity of the activity with each repetition. 
Jakobovits James (1962) calls this conclusion the beginning of \"experimental neurosemantics\"."} {"text":"Studies that further explored semantic satiation include the work of Pilotti, Antrobus, and Duff (1997), which claimed that it is possible that the true locus of this phenomenon is presemantic instead of semantic adaptation. There is also the experiment conducted by Kouinos et al. (2000), which revealed that semantic satiation is not necessarily a byproduct of \"impoverishment of perceptual inputs.\""} {"text":"Jakobovits cited several possible applications of semantic satiation, including its integration in the treatment of phobias through systematic desensitization. He argued that \"in principle, semantic satiation as an applied tool ought to work wherever some specifiable cognitive activity mediates some behavior that one wishes to alter.\" An application has also been developed to reduce speech anxiety in stutterers by creating semantic satiation through repetition, thus reducing the intensity of negative emotions triggered during speech."} {"text":"Studies have also linked semantic satiation to education. For instance, the work of Tian and Huber (2010) explored the impact of this phenomenon on word learning and effective reading. The authors claimed that this process can serve as a unique approach to test for discounting through loss of association since it allows the separation of the \"lexical level from semantic level effects in a meaning-based task that involves repetitions of words.\" Semantic satiation has also been used as a tool to gain more understanding of language acquisition, for example in studies that investigated the nature of multilingualism."} {"text":"Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech at a delay after the onset of hearing the phrase. The time between hearing the speech and responding is how long the brain takes to process and produce speech. 
The task instructs participants to shadow speech, which generates intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task would also imitate the parlance of the shadowed speech."} {"text":"The reaction time between perceiving speech and then producing speech has been recorded at 250 ms for a standardised test. However, for people with left-hemisphere-dominant brains, the reaction time has been recorded at 150 ms. Functional imaging finds that the shadowing of speech occurs through the dorsal stream. This area links auditory and motor representations of speech through a pathway that starts in the superior temporal cortex, extends to the inferior parietal cortex, and ends with the posterior and inferior frontal cortices, specifically in Broca's area."} {"text":"The speech shadowing technique was created as a research technique by the Leningrad Group led by Ludmilla Chistovich and Valerij Kozhevnikov in the late 1950s. In the 1950s, the Motor theory of speech perception was also being developed by Alvin Liberman and Franklin S. Cooper. Speech shadowing has been used for research on stuttering and divided attention, with a focus on the distraction of conversational audio while driving. It also has applications for language learning, as an interpretation method, and in singing."} {"text":"Ludmilla Chistovich and Valerij Kozhevnikov focused on research of the mental processes that stimulate the functions of perception and production of speech in communication. In linguistics, speech perception had been treated as a chronological process that analysed steadily paced and similar-sounding words, but Chistovich and Kozhevnikov found speech perception to be the staggered integration of syllables, known as non-linear dynamics. 
This refers to the diversity of tones and syllables in speech, which is perceived without conscious detection of delay and forgotten owing to limited working memory capacity. This observation directed research towards the speech shadowing technique in psycholinguistics."} {"text":"Shadowing was used to measure the reaction time taken to repeat consonant-vowel syllables. Alveolar consonants were measured when the tongue first touched an artificial palate, and labial consonants were measured by the contact of metal pieces when the upper and lower lips pressed together. The participant would begin to mimic the consonant as the speaker finished the utterance of the consonant. This consistent rapid response shifted research focus towards close speech shadowing."} {"text":"Close speech shadowing requires immediate repetition, at the fastest pace a person is able to achieve. It does not allow people to hear the entire phrase beforehand or to understand the words vocalised until the end of a sentence. It was found that close speech shadowing would occur at the shortest delay of 250 ms. It has also been found to occur with a minimum delay of 150 ms in left-hemisphere-dominant brains. The left hemisphere is associated with enhanced performance in linguistic skill and information processing. It engages with analytic patterns of thought and handles the speech shadowing task with ease."} {"text":"The short delay of response occurs as the motor regions of the brain have recorded cues that are related to consonants. 
The brain would then estimate the adjacent vowel syllable before it is heard. When the vowel is registered through the auditory system, it would confirm the action to produce speech based on the estimate. If the vowel estimate is incorrect, a short delay in response occurs as the motor region configures an alternate vowel."} {"text":"Research has developed a biological model as to how the meaning of speech can be perceived instantaneously even though the sentence has never been heard before. An understanding of syntactic, lexical and phonemic characteristics is first required for this to occur. Speech perception also requires the physical components of the auditory system to recognise similarities in sounds. Within the basilar membrane, energy is transferred, and specific frequencies can be detected and activate auditory hairs. The auditory hairs can be stimulated to sharpened activity when a tonal emission is held for 100 ms. This length of time indicates that speech shadowing ability can be enhanced by a moderately paced phrase."} {"text":"Shadowing is more complex than only the use of the auditory system. A shadow response can reduce the delay by analysing the temporal difference between the pronunciation of phonemes within a syllable. During a shadowing task, the process of perceiving speech and the subsequent response of producing speech do not occur separately; they partially overlap. The auditory system shifts between a translation stage of perceiving phonemes and a choice phase of anticipating the following phonemes to create an immediate response. 
This period of overlap occurs within 20\u201390 ms, depending on the combination of vowels with consonants."} {"text":"Speech perception also has links to phonological processing skills. These include recognition of all phonemes in a language and how they can combine to form common syllables. A low understanding of phonological norms can negatively affect performance in a speech shadowing task. This is measured through the inclusion of proper and nonsense words in the task. Participants with high phonological processing skills produced shorter reaction times, while participants with low phonological processing skills experienced uncertainty and slower responses."} {"text":"The speech shadowing technique is part of research methods that examine the mechanics of stuttering and identify practical improvement strategies. A primary characteristic of stuttering is a repeated movement, characterised by the repetition of a syllable. In this activity, stutterers are made to shadow a repeated movement that is internally or externally sourced. It reduces the likelihood of stuttering as the linguistic mental block is overturned and conditioned to provide an opening for fluid speech. Mirror neurones of the frontal lobe are active during this exercise and act to link speech perception and production. This process, combined with cortical priming, is engaged to produce the visible response."} {"text":"Another primary characteristic of stuttering is a fixed posture, involving the prolongation of sounds. Speech shadowing research involving fixed postures produces no benefit in improving speech flow. 
The elongation of words in this stuttering characteristic does not align with the auditory system, which functions efficiently with moderately paced speech."} {"text":"Speech shadowing has also been used in research into pseudo-stuttering, a voluntary speech impediment. Pseudo-stuttering involves identifying primary stuttering characteristics and shadowing them realistically. It is used as an activity when studying fluency disorders, for students to experience how psychological and social outcomes are impacted by stuttering with strangers. Participants of this activity reported feelings of anxiety, frustration and embarrassment, which aligned with the reported emotional states of natural stutterers. The participants also reported lowered expectations towards sufferers in public situations."} {"text":"The speech shadowing technique is used in dichotic listening tests, introduced by E. Colin Cherry in 1953. During dichotic listening tests, subjects are presented with two different messages, one in the right ear and one in the left ear. The participants are instructed to focus on one of the two messages and to shadow the attended message out loud. The perceptual ability of the participant is measured as subjects attend to the instructed message while the alternate message behaves as a distraction. Various stimuli are then presented to the other ear, and subjects are afterwards queried on what can be recalled from these messages despite the instruction to ignore them. Speech shadowing here serves as an experimental technique to study and test divided attention."} {"text":"Research into the effect of audio stimuli resulting from mobile phone use while driving has used the speech shadowing technique in its methodology. Speech shadowing tasks that have combined a conversational stimulus with a visual stimulus while driving are reported by participants as a distraction that directs focus away from the road and visual periphery. 
The study concludes that the combination of audio and visual stimuli has little effect on a driver\u2019s ability to manoeuvre a vehicle but does impair spatial and temporal judgement, without the driver detecting it. This includes a driver\u2019s judgement of their speed and of their distance from a parallel vehicle, and a delayed reaction to a sudden brake from a driver ahead."} {"text":"The speech shadowing technique has also been used to research whether it is the act of producing speech or concentration on the semantics of speech that distracts drivers. The task of simple speech shadowing had no effect on driving ability, but the combination of simple speech shadowing with a content-associated follow-up activity showed impairment in reaction time. The high attentional demand required for this alternate task shifts concentration from the primary task of driving. This impairment is problematic, as fast reaction times are required when driving to respond to traffic signals and signage as well as to unpredictable events, in order to maintain safety."} {"text":"When learning a foreign language, shadowing can be used as a technique to practice speech and to acquire knowledge. It follows an interactionist perspective of language development. The method of speech shadowing in a learning setting involves providing shadowing tasks of incremental semantic and pronunciation difficulty and rating the accuracy of the shadowed response. It was previously difficult to create a standardised scoring system, as learners would slur and skip words when uncertain in order to keep up with the pace of the phrases to be shadowed. 
Automatic scoring using alignment-based and clustering-based techniques was designed and is now implemented to improve the experience of learning a foreign language through speech shadowing."} {"text":"Remote language learning can occur without the presence of a real-time speaker through text-to-speech applications, using the principle of speech shadowing. As part of the process of perceiving sound, the auditory system distinguishes formant frequencies. The first formant perceived in the cochlea is the most prominent cue, as there is an attentional shift towards this signal. The formant characteristics of synthetically produced speech currently differ from those of speech produced by the human vocal tract. This received information affects the pronunciation of speech produced in a shadowing activity. Applications for learning languages focus on developing greater accuracy in pronunciation and pitch, since these features are also replicated when shadowing speech."} {"text":"Speech shadowing can also be used in the alternate form of vocal shadowing. It likewise requires the processes of perception and production, but with an inverted energy distribution of a low input and a large output. Vocal shadowing perceives pure tones and focuses on the manipulation of the vocal tract to produce a shadowed response. Singers, in comparison to non-singers, are able to produce a shadowed response phrase with greater accuracy in achieving the target frequencies and more rapid movement between them. Research associates this ability with greater control and awareness of vocal-fold breadth. The glottal stop is a technique manipulated by singers during shadowing to enhance frequency change."} {"text":"The letter frequency effect is the effect according to which the frequency with which a letter is encountered influences the recognition time of that letter. 
High-frequency letters show a significant advantage over low-frequency letters in letter naming, same-different matching, and visual search: they are recognized faster. In their re-analysis of studies concerning the letter frequency effect, Appelman and Mayzner (1981) found that in 3 out of 6 studies using reaction times (RTs) as a dependent variable, letter frequency correlated significantly with RTs."} {"text":"The majority of studies on the letter frequency effect failed to find a significant effect. These studies, however, used the same-different matching task, in which participants see two letters and must respond whether the letters are the same or different. The absence of a letter frequency effect in these studies may therefore be due to participants matching the letters by their visual form rather than by their identity."} {"text":"The hypothesis of linguistic relativity, also known as the Sapir\u2013Whorf hypothesis, the Whorf hypothesis, or Whorfianism, is a principle suggesting that the structure of a language affects its speakers' worldview or cognition, and thus that people's perceptions are relative to their spoken language."} {"text":"Linguistic relativity has been understood in many different, often contradictory ways throughout its history. The idea is often stated in two forms: the \"strong hypothesis\", now referred to as linguistic determinism, was held by some of the early linguists before World War II, while the \"weak hypothesis\" is mostly held by some modern linguists."} {"text":"The term \"Sapir\u2013Whorf hypothesis\" is considered a misnomer by linguists for several reasons: Sapir and Whorf never co-authored any works, and never stated their ideas in terms of a hypothesis. 
The distinction between a weak and a strong version of this hypothesis is also a later invention; Sapir and Whorf never set up such a dichotomy, although often their writings and their views of this relativity principle are phrased in stronger or weaker terms."} {"text":"The principle of linguistic relativity and the relation between language and thought have also received attention in varying academic fields, from philosophy to psychology and anthropology, and have also inspired and colored works of fiction and the invention of constructed languages."} {"text":"From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for non-deterministic versions of the hypothesis in experimental contexts. Some effects of linguistic relativity have been shown in several semantic domains, although they are generally weak. Currently, a balanced view of linguistic relativity is espoused by most linguists, holding that language influences certain kinds of cognitive processes in non-trivial ways, but that other processes are better seen as arising from connectionist factors. Research is focused on exploring the ways and extent to which language influences thought."} {"text":"In the late 18th and early 19th centuries, the idea of the existence of different national characters, or \"Volksgeister\", of different ethnic groups was the moving force behind the German Romantic school and the nascent ideologies of ethnic nationalism."} {"text":"Swedish philosopher Emanuel Swedenborg inspired several of the German Romantics. 
As early as 1749, he alludes to something along the lines of linguistic relativity in commenting on a passage in the table of nations in the book of Genesis:"} {"text":"In 1771 he spelled this out more explicitly:"} {"text":"Johann Georg Hamann is often suggested to be the first among the actual German Romantics to speak of the concept of \"the genius of a language.\" In his \"Essay Concerning an Academic Question\", Hamann suggests that a people's language affects their worldview:"} {"text":"In 1820, Wilhelm von Humboldt connected the study of language to the national romanticist program by proposing the view that language is the fabric of thought. Thoughts are produced as a kind of internal dialog using the same grammar as the thinker's native language. This view was part of a larger picture in which the world view of an ethnic nation, their \"Weltanschauung\", was seen as being faithfully reflected in the grammar of their language. Von Humboldt argued that languages with an inflectional morphological type, such as German, English and the other Indo-European languages, were the most perfect languages, and that accordingly this explained the dominance of their speakers over the speakers of less perfect languages. Wilhelm von Humboldt declared in 1820:"} {"text":"In Humboldt's humanistic understanding of linguistics, each language creates the individual's worldview in its particular way through its lexical and grammatical categories, conceptual organization, and syntactic models."} {"text":"Herder worked alongside Hamann to examine whether language had a human\/rational or a divine origin. Herder added the emotional component of the hypothesis, and Humboldt then took this information and applied it to various languages to expand on the hypothesis."} {"text":"Boas' student Edward Sapir reached back to the Humboldtian idea that languages contained the key to understanding the world views of peoples. 
He espoused the viewpoint that, because of the differences in the grammatical systems of languages, no two languages were similar enough to allow for perfect cross-translation. Sapir also thought that because languages represented reality differently, it followed that the speakers of different languages would perceive reality differently."} {"text":"On the other hand, Sapir explicitly rejected strong linguistic determinism by stating, \"It would be na\u00efve to imagine that any analysis of experience is dependent on pattern expressed in language.\""} {"text":"Sapir was explicit that the connections between language and culture were neither thoroughgoing nor particularly deep, if they existed at all:"} {"text":"Sapir offered similar observations about speakers of so-called \"world\" or \"modern\" languages, noting, \"possession of a common language is still and will continue to be a smoother of the way to a mutual understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, physical, and economic determinants of the culture are no longer the same throughout the area.\""} {"text":"While Sapir never made a point of studying directly how languages affected thought, some notion of (probably \"weak\") linguistic relativity underlay his basic understanding of language, and would be taken up by Whorf."} {"text":"More than any other linguist, Benjamin Lee Whorf has become associated with what he called the \"linguistic relativity principle\". Studying Native American languages, he attempted to account for the ways in which grammatical systems and language-use differences affected perception. Whorf's opinions regarding the nature of the relation between language and thought remain under contention. 
Critics such as Lenneberg, Black, and Pinker attribute to Whorf a strong linguistic determinism, while Lucy, Silverstein and Levinson point to Whorf's explicit rejections of determinism and to his contention that translation and commensuration are possible."} {"text":"Detractors such as Lenneberg, Chomsky and Pinker criticized him for insufficient clarity in his description of how language influences thought, and for not proving his conjectures. Most of his arguments were in the form of anecdotes and speculations that served as attempts to show how 'exotic' grammatical traits were connected to what were apparently equally exotic worlds of thought. In Whorf's words:"} {"text":"Among Whorf's best-known examples of linguistic relativity are instances where an indigenous language has several terms for a concept that is only described with one word in European languages (Whorf used the acronym SAE \"Standard Average European\" to allude to the rather similar grammatical structures of the well-studied European languages in contrast to the greater diversity of less-studied languages)."} {"text":"One of Whorf's examples was the supposedly large number of words for 'snow' in the Inuit language, an example later contested as a misrepresentation."} {"text":"Another is the Hopi language's words for water, one indicating drinking water in a container and another indicating a natural body of water. These examples of polysemy served the double purpose of showing that indigenous languages sometimes made more fine-grained semantic distinctions than European languages and that direct translation between two languages, even of seemingly basic concepts such as snow or water, is not always possible."} {"text":"Whorf\u2019s argument about Hopi speakers\u2019 conceptualization of time is an example of the structure-centered approach to research into linguistic relativity, which Lucy identified as one of three main strands of research in the field. 
The \"structure-centered\" approach starts with a language's structural peculiarity and examines its possible ramifications for thought and behavior. The defining example is Whorf's observation of discrepancies between the grammar of time expressions in Hopi and English. More recent research in this vein is Lucy's research describing how usage of the categories of grammatical number and of numeral classifiers in the Mayan language Yucatec result in Mayan speakers classifying objects according to material rather than to shape as preferred by English speakers."} {"text":"Whorf died in 1941 at age 44, leaving multiple unpublished papers. His line of thought was continued by linguists and anthropologists such as Hoijer and Lee who both continued investigations into the effect of language on habitual thought, and Trager, who prepared a number of Whorf's papers for posthumous publishing. The most important event for the dissemination of Whorf's ideas to a larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a single volume titled \"Language, Thought and Reality\"."} {"text":"In 1953, Eric Lenneberg criticized Whorf's examples from an objectivist view of language holding that languages are principally meant to represent events in the real world and that even though languages express these ideas in various ways, the meanings of such expressions and therefore the thoughts of the speaker are equivalent. He argued that Whorf's English descriptions of a Hopi speaker's view of time were in fact translations of the Hopi concept into English, therefore disproving linguistic relativity. However Whorf was concerned with how the habitual \"use\" of language influences habitual behavior, rather than translatability. 
Whorf's point was that while English speakers may be able to \"understand\" how a Hopi speaker thinks, they do not \"think\" in that way."} {"text":"Lenneberg's main criticism of Whorf's works was that he never showed the connection between a linguistic phenomenon and a mental phenomenon. With Brown, Lenneberg proposed that proving such a connection required directly matching linguistic phenomena with behavior. They assessed linguistic relativity experimentally and published their findings in 1954."} {"text":"Since neither Sapir nor Whorf had ever stated a formal hypothesis, Brown and Lenneberg formulated their own. Their two tenets were (i) \"the world is differently experienced and conceived in different linguistic communities\" and (ii) \"language causes a particular cognitive structure\". Brown later developed them into the so-called \"weak\" and \"strong\" formulation:"} {"text":"Brown's formulations became widely known and were retrospectively attributed to Whorf and Sapir although the second formulation, verging on linguistic determinism, was never advanced by either of them."} {"text":"Since Brown and Lenneberg believed that the objective reality denoted by language was the same for speakers of all languages, they decided to test how different languages codified the same message differently and whether differences in codification could be proven to affect behavior."} {"text":"Universalist scholars ushered in a period of dissent from ideas about linguistic relativity. Lenneberg was one of the first cognitive scientists to begin development of the Universalist theory of language that was formulated by Chomsky as Universal Grammar, effectively arguing that all languages share the same underlying structure. The Chomskyan school also holds the belief that linguistic structures are largely innate and that what are perceived as differences between specific languages are surface phenomena that do not affect the brain's universal cognitive processes. 
This theory became the dominant paradigm in American linguistics from the 1960s through the 1980s, while linguistic relativity became the object of ridicule."} {"text":"Today, many followers of the universalist school of thought still oppose linguistic relativity. For example, Pinker argues in \"The Language Instinct\" that thought is independent of language, that language is itself meaningless in any fundamental way to human thought, and that human beings do not even think in \"natural\" language, i.e. any language that we actually communicate in; rather, we think in a meta-language, preceding any natural language, called \"mentalese.\" Pinker attacks what he calls \"Whorf's radical position,\" declaring, \"the more you examine Whorf's arguments, the less sense they make.\""} {"text":"Pinker and other universalists have been accused by relativists of misrepresenting Whorf's views and arguing against straw men."} {"text":"Joshua Fishman's \"Whorfianism of the third kind\"."} {"text":"Joshua Fishman argued that Whorf's true position was largely overlooked. In 1978, he suggested that Whorf was a \"neo-Herderian champion\" and in 1982, he proposed \"Whorfianism of the third kind\" in an attempt to refocus linguists' attention on what he claimed was Whorf's real interest, namely the intrinsic value of \"little peoples\" and \"little languages\". Whorf had criticized Ogden's Basic English thus:"} {"text":"Where Brown's weak version of the linguistic relativity hypothesis proposes that language \"influences\" thought and the strong version that language \"determines\" thought, Fishman's \"Whorfianism of the third kind\" proposes that language \"is a key to culture\"."} {"text":"In his book \"Women, Fire and Dangerous Things: What Categories Reveal About the Mind\", Lakoff reappraised linguistic relativity and especially Whorf's views about how linguistic categorization reflects and\/or influences mental categories. He concluded that the debate had been confused. 
He described four parameters on which researchers differed in their opinions about what constitutes linguistic relativity:"} {"text":"Lakoff concluded that many of Whorf's critics had criticized him using novel definitions of linguistic relativity, rendering their criticisms moot."} {"text":"The publication of the 1996 anthology \"Rethinking Linguistic Relativity\", edited by Gumperz and Levinson, began a new period of linguistic relativity studies that focused on cognitive and social aspects. The book included studies on the linguistic relativity and universalist traditions. Levinson documented significant linguistic relativity effects in the linguistic conceptualization of spatial categories between languages. For example, men speaking the Guugu Yimithirr language in Queensland gave accurate navigation instructions using a compass-like system of north, south, east and west, along with a hand gesture pointing to the starting direction."} {"text":"Lucy defines this approach as \u201cdomain-centered,\u201d because researchers select a semantic domain and compare it across linguistic and cultural groups. Space is another semantic domain that has proven fruitful for linguistic relativity studies. Spatial categories vary greatly across languages. Speakers rely on the linguistic conceptualization of space in performing many ordinary tasks. Levinson and others reported three basic spatial categorizations. While many languages use combinations of them, some languages exhibit only one type and related behaviors. For example, Guugu Yimithirr uses only absolute directions when describing spatial relations\u2014the position of everything is described using the cardinal directions. Speakers define a location as \"north of the house\", while an English speaker may use relative positions, saying \"in front of the house\" or \"to the left of the house\"."} {"text":"Separate studies by Bowerman and Slobin treated the role of language in cognitive processes. 
Bowerman showed that certain cognitive processes did not use language to any significant extent and therefore could not be subject to linguistic relativity. Slobin described another kind of cognitive process that he named \"thinking for speaking\" \u2013 the kind of process in which perceptional data and other kinds of prelinguistic cognition are translated into linguistic terms for communication. These, Slobin argues, are the kinds of cognitive process that are at the root of linguistic relativity."} {"text":"Researchers such as Boroditsky, Majid, Lucy and Levinson believe that language influences thought in more limited ways than the broadest early claims. Researchers examine the interface between thought (or cognition), language and culture and describe the relevant influences. They use experimental data to back up their conclusions. Kay ultimately concluded that \"[the] Whorf hypothesis is supported in the right visual field but not the left\". His findings show that accounting for brain lateralization offers another perspective."} {"text":"Recent studies have also taken the \"behavior centered\" approach, which starts by comparing behavior across linguistic groups and then searches for causes for that behavior in the linguistic system. In an early example of this approach, Whorf attributed the occurrence of fires at a chemical plant to the workers' use of the word 'empty' to describe the barrels containing only explosive vapors."} {"text":"More recently, Bloom noticed that speakers of Chinese had unexpected difficulties answering counter-factual questions posed to them in a questionnaire. He concluded that this was related to the way in which counter-factuality is marked grammatically in Chinese. Other researchers attributed this result to Bloom's flawed translations. Str\u00f8mnes examined why Finnish factories had a higher occurrence of work related accidents than similar Swedish ones. 
He concluded that cognitive differences between the grammatical usage of Swedish prepositions and Finnish cases could have caused Swedish factories to pay more attention to the work process while Finnish factory organizers paid more attention to the individual worker."} {"text":"Everett's work on the Pirah\u00e3 language of the Brazilian Amazon found several peculiarities that he interpreted as corresponding to linguistically rare features, such as a lack of numbers and color terms in the way those are otherwise defined and the absence of certain types of clauses. Everett's conclusions were met with skepticism from universalists, who claimed that the linguistic deficit is explained by the lack of need for such concepts."} {"text":"Recent research with non-linguistic experiments in languages with different grammatical properties (e.g., languages with and without numeral classifiers or with different grammatical gender systems) showed that differences in human categorization are due to such grammatical differences. Experimental research suggests that this linguistic influence on thought diminishes over time, as when speakers of one language are exposed to another."} {"text":"Kashima & Kashima showed that people living in countries whose spoken languages often drop pronouns (such as Japanese) tend to have more collectivistic values than speakers of non\u2013pronoun-drop languages such as English. They argued that the explicit reference to \u201cyou\u201d and \u201cI\u201d reminds speakers of the distinction between the self and the other."} {"text":"Psycholinguistic studies explored motion perception, emotion perception, object representation and memory. 
The gold standard of psycholinguistic studies on linguistic relativity is now finding non-linguistic cognitive differences in speakers of different languages (thus rendering inapplicable Pinker's criticism that linguistic relativity is \"circular\")."} {"text":"Recent work with bilingual speakers attempts to distinguish the effects of language from those of culture on bilingual cognition, including perceptions of time, space, motion, colors and emotion. Researchers described differences between bilinguals and monolinguals in perception of color, representations of time and other elements of cognition."} {"text":"Linguistic relativity inspired others to consider whether thought could be influenced by manipulating language."} {"text":"The issue bears on philosophical, psychological, linguistic and anthropological questions."} {"text":"A major question is whether human psychological faculties are mostly innate or whether they are mostly a result of learning, and hence subject to cultural and social processes such as language. The innate view holds that humans share the same set of basic faculties, that variability due to cultural differences is less important, and that the human mind is largely a biological construction, so that all humans, sharing the same neurological configuration, can be expected to have similar cognitive patterns."} {"text":"Multiple alternatives have advocates. The contrary constructivist position holds that human faculties and concepts are largely influenced by socially constructed and learned categories, without many biological restrictions. Another variant is idealist, which holds that human mental capacities are generally unrestricted by biological-material strictures. Another is essentialist, which holds that essential differences may influence the ways individuals or groups experience and conceptualize the world. 
Yet another is relativist (Cultural relativism), which sees different cultural groups as employing different conceptual schemes that are not necessarily compatible or commensurable, nor more or less in accord with external reality."} {"text":"Another debate considers whether thought is a form of internal speech or is independent of and prior to language."} {"text":"In the philosophy of language, the question addresses the relations between language, knowledge and the external world, and the concept of truth. Philosophers such as Putnam, Fodor, Davidson, and Dennett see language as directly representing entities from the objective world, with categorization reflecting that world. Other philosophers (e.g. Quine, Searle, Foucault) argue that categorization and conceptualization are subjective and arbitrary."} {"text":"Another question is whether language is a tool for representing and referring to objects in the world, or whether it is a system used to construct mental representations that can be communicated."} {"text":"Sapir\/Whorf contemporary Alfred Korzybski was independently developing his theory of general semantics, which was aimed at using language's influence on thinking to maximize human cognitive abilities. Korzybski's thinking was influenced by works of logical philosophy such as Russell and Whitehead's \"Principia Mathematica\" and Wittgenstein's \"Tractatus Logico-Philosophicus\". Although Korzybski was not aware of Sapir and Whorf's writings, the movement was followed by Whorf-admirer Stuart Chase, who fused Whorf's interest in cultural-linguistic variation with Korzybski's programme in his popular work \"The Tyranny of Words\". S. I. Hayakawa was a follower and popularizer of Korzybski's work, writing \"Language in Thought and Action\". 
The general semantics movement influenced the development of neuro-linguistic programming (NLP), another therapeutic technique that seeks to use awareness of language use to influence cognitive patterns."} {"text":"Korzybski independently described a \"strong\" version of the hypothesis of linguistic relativity."} {"text":"In their fiction, authors such as Ayn Rand and George Orwell explored how linguistic relativity might be exploited for political purposes. In Rand's \"Anthem\", a fictional communist society removed the possibility of individualism by removing the word \"I\" from the language. In Orwell's \"1984\", the authoritarian state created the language Newspeak to make it impossible for people to think critically about the government, or even to contemplate that they might be impoverished or oppressed, by reducing the number of available words and thereby the speaker's range of thought."} {"text":"APL programming language originator Kenneth E. Iverson believed that the Sapir\u2013Whorf hypothesis applied to computer languages (without actually mentioning it by name). His Turing Award lecture, \"Notation as a Tool of Thought\", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms."} {"text":"The essays of Paul Graham explore similar themes, such as a conceptual hierarchy of computer languages, with more expressive and succinct languages at the top. Thus, the so-called \"blub\" paradox (after a hypothetical programming language of average complexity called \"Blub\") says that anyone preferentially using some particular programming language will \"know\" that it is more powerful than some, but not that it is less powerful than others. The reason is that \"writing\" in some language means \"thinking\" in that language. 
Hence the paradox, because typically programmers are \"satisfied with whatever language they happen to use, because it dictates the way they think about programs\"."} {"text":"In a 2003 presentation at an open source convention, Yukihiro Matsumoto, creator of the programming language Ruby, said that one of his inspirations for developing the language was the science fiction novel \"Babel-17\", based on the Sapir\u2013Whorf Hypothesis."} {"text":"Ted Chiang's short story \"Story of Your Life\" developed the concept of the Sapir\u2013Whorf hypothesis as applied to an alien species which visits Earth. The aliens' biology contributes to their spoken and written languages, which are distinct. In the 2016 American film \"Arrival\", based on Chiang's short story, the Sapir\u2013Whorf hypothesis is the premise. The protagonist explains that \"the Sapir\u2013Whorf hypothesis is the theory that the language you speak determines how you think\"."} {"text":"In his science fiction novel \"The Languages of Pao\" the author Jack Vance describes how specialized languages are a major part of a strategy to create specific classes in a society, to enable the population to withstand occupation and develop itself."} {"text":"In the Samuel R. Delany science fiction novel, \"Babel-17,\" the author describes a highly advanced, information-dense language that can be used as a weapon. 
Learning it turns one into an unwilling traitor as it alters perception and thought."} {"text":"The totalitarian regime depicted in George Orwell's \"Nineteen Eighty-Four\" in effect acts on the basis of the Sapir\u2013Whorf hypothesis, seeking to replace English with \"Newspeak\", a language constructed specifically with the intention that thoughts subversive of the regime cannot be expressed in it, so that people educated to speak and think in it would not have such thoughts."} {"text":"Intentionality is the power of minds to be about something: to represent or to stand for things, properties and states of affairs. Intentionality is primarily ascribed to mental states, like perceptions, beliefs or desires, which is why it has been regarded as the characteristic \"mark of the mental\" by many philosophers. A central issue for theories of intentionality has been the problem of \"intentional inexistence\": to determine the ontological status of the entities which are the objects of intentional states."} {"text":"The earliest theory of intentionality is associated with Anselm of Canterbury's ontological argument for the existence of God, and with his tenets distinguishing between objects that exist in the understanding and objects that exist in reality. The idea fell out of discussion with the end of the medieval scholastic period, but in recent times was resurrected by empirical psychologist Franz Brentano and later adopted by contemporary phenomenological philosopher Edmund Husserl. Today, intentionality is a live concern among philosophers of mind and language. 
A common dispute is between naturalism about intentionality, the view that intentional properties are reducible to natural properties as studied by the natural sciences, and the phenomenal intentionality theory, the view that intentionality is grounded in consciousness."} {"text":"The concept of intentionality was reintroduced into contemporary philosophy in the 19th century by Franz Brentano (a German philosopher and psychologist who is generally regarded as the founder of act psychology, also called intentionalism) in his work \"Psychology from an Empirical Standpoint\" (1874). Brentano described intentionality as a characteristic of all acts of consciousness that are thus \"psychical\" or \"mental\" phenomena, by which they may be set apart from \"physical\" or \"natural\" phenomena."} {"text":"Brentano coined the expression \"intentional inexistence\" to indicate the peculiar ontological status of the contents of mental phenomena. According to some interpreters the \"in-\" of \"in-existence\" is to be read as locative, i.e. as indicating that \"an intended object ... exists in or has \"in-existence\", existing not externally but in the psychological state\" (Jacquette 2004, p.\u00a0102), while others are more cautious, stating: \"It is not clear whether in 1874 this ... was intended to carry any ontological commitment\" (Chrudzimski and Smith 2004, p.\u00a0205)."} {"text":"A major problem within discourse on intentionality is that participants often fail to make explicit whether or not they use the term to imply concepts such as agency or desire, i.e. whether it involves teleology. Dennett (see below) explicitly invokes teleological concepts in the \"intentional stance\". However, most philosophers use \"intentionality\" to mean something with no teleological import. Thus, a thought of a chair can be about a chair without any implication of an intention or even a belief relating to the chair. 
For philosophers of language, what is meant by intentionality is largely an issue of how symbols can have meaning. This lack of clarity may underpin some of the differences of view indicated below."} {"text":"Further illustrating the diversity of views the notion of intentionality has evoked, Husserl followed Brentano and gave the concept more widespread attention, both in continental and analytic philosophy. In contrast to Brentano's view, French philosopher Jean-Paul Sartre (\"Being and Nothingness\") identified intentionality with consciousness, stating that the two were indistinguishable. German philosopher Martin Heidegger (\"Being and Time\") defined intentionality as \"care\" (\"Sorge\"), a sentient condition in which an individual's existence, facticity, and being in the world identify their ontological significance, in contrast to that which is merely ontic (\"thinghood\")."} {"text":"Other 20th-century philosophers such as Gilbert Ryle and A.J. Ayer were critical of Husserl's concept of intentionality and his many layers of consciousness. Ryle insisted that perceiving is not a process, and Ayer that describing one's knowledge is not to describe mental processes. The effect of these positions is that consciousness is so fully intentional that the mental act has been emptied of all content, and that the idea of pure consciousness amounts to nothing. (Sartre also referred to \"consciousness\" as \"nothing\")."} {"text":"Platonist Roderick Chisholm has revived the Brentano thesis through linguistic analysis, distinguishing two parts to Brentano's concept, the ontological aspect and the psychological aspect. 
Chisholm's writings have attempted to summarize the suitable and unsuitable criteria of the concept since the Scholastics, arriving at a criterion of intentionality identified by the two aspects of Brentano's thesis and defined by the logical properties that distinguish language describing psychological phenomena from language describing non-psychological phenomena. Chisholm's criteria for the intentional use of sentences are: existence independence, truth-value indifference, and referential opacity."} {"text":"In current artificial intelligence and philosophy of mind, intentionality is sometimes linked with questions of semantic inference, with both skeptical and supportive adherents. John Searle argued for this position with the Chinese room thought experiment, according to which no syntactic operations that occurred in a computer would provide it with semantic content. Others are more skeptical of the human ability to make such an assertion, arguing that the kind of intentionality that emerges from self-organizing networks of automata will always be undecidable because it will never be possible to make our subjective introspective experience of intentionality and decision making coincide with our objective observation of the behavior of a self-organizing machine."} {"text":"A central issue for theories of intentionality has been the problem of intentional inexistence: to determine the ontological status of the entities which are the objects of intentional states. This is particularly relevant for cases involving objects that have no existence outside the mind, as in the case of mere fantasies or hallucinations."} {"text":"For example, assume that Mary is thinking about Superman. On the one hand, it seems that this thought is intentional: Mary is \"thinking about something\". On the other hand, Superman \"doesn't exist\". This suggests that Mary is either \"not thinking about something\" or that Mary is \"thinking about something that doesn't exist\". 
Various theories have been proposed in order to reconcile these conflicting intuitions. These theories can roughly be divided into \"eliminativism\", \"relationalism\", and \"adverbialism\". Eliminativists deny that this kind of problematic mental state is possible. Relationalists try to solve the problem by interpreting intentional states as relations, while adverbialists interpret them as properties."} {"text":"Eliminativists deny that the example above is possible. It might seem to us and to Mary that she is thinking about something, but she is not really thinking at all. Such a position could be motivated by a form of semantic externalism, the view that the meaning of a term, or in this example the content of a thought, is determined by factors external to the subject. If meaning depends on successful reference then failing to refer would result in a lack of meaning. The difficulty for such a position is to explain why it seems to Mary that she is thinking about something and how seeming to think is different from actual thinking."} {"text":"Relationalists hold that having an intentional state involves standing in a relation to the intentional object. This is the most natural position for non-problematic cases. So if Mary perceives a tree, we might say that a perceptual relation holds between Mary, the subject of this relation, and the tree, the object of this relation. Relations are usually assumed to be existence-entailing: the instance of a relation entails the existence of its relata. This principle rules out that we can bear relations to non-existing entities. 
One way to solve the problem is to deny this principle and argue for a kind of \"intentionality exceptionalism\": that intentionality is different from all other relations in the sense that this principle doesn't apply to it."} {"text":"Dennett's taxonomy of current theories about intentionality."} {"text":"Daniel Dennett offers a taxonomy of the current theories about intentionality in Chapter 10 of his book \"The Intentional Stance\". Most, if not all, current theories on intentionality accept Brentano's thesis of the irreducibility of intentional idiom. From this thesis the following positions emerge:"} {"text":"Roderick Chisholm (1956), G.E.M. Anscombe (1957), Peter Geach (1957), and Charles Taylor (1964) all adhere to the former position, namely that intentional idiom is problematic and cannot be integrated with the natural sciences. Members of this category also maintain realism in regard to intentional objects, which may imply some kind of dualism (though this is debatable)."} {"text":"The latter position, which maintains the unity of intentionality with the natural sciences, is further divided into three standpoints:"} {"text":"Proponents of \"eliminative materialism\" understand intentional idiom, such as \"belief\", \"desire\", and the like, to be replaceable either with behavioristic language (e.g. Quine) or with the language of neuroscience (e.g. Churchland)."} {"text":"Holders of \"realism\" argue that there is a deeper fact of the matter to both translation and belief attribution. In other words, manuals for translating one language into another cannot be set up in different yet behaviorally identical ways and ontologically there are intentional objects. Famously, Fodor has attempted to ground such realist claims about intentionality in a language of thought. 
Dennett comments on this issue: Fodor \"attempt[s] to make these irreducible realities acceptable to the physical sciences by grounding them (somehow) in the 'syntax' of a system of physically realized mental representations\" (Dennett 1987, 345)."} {"text":"They are further divided into two theses:"} {"text":"Advocates of the former, the Normative Principle, argue that attributions of intentional idioms to physical systems should be the propositional attitudes that the physical system ought to have in those circumstances (Dennett 1987, 342). However, exponents of this view are still further divided into those who make an \"Assumption of Rationality\" and those who adhere to the \"Principle of Charity\". Dennett (1969, 1971, 1975), Cherniak (1981, 1986), and the more recent work of Putnam (1983) recommend the Assumption of Rationality, which unsurprisingly assumes that the physical system in question is rational. Donald Davidson (1967, 1973, 1974, 1985) and Lewis (1974) defend the Principle of Charity."} {"text":"The latter is advocated by Grandy (1973) and Stich (1980, 1981, 1983, 1984), who maintain that attributions of intentional idioms to any physical system (e.g. humans, artifacts, non-human animals, etc.) should be the propositional attitude (e.g. \"belief\", \"desire\", etc.) that one would suppose one would have in the same circumstances (Dennett 1987, 343)."} {"text":"Basic intentionality types according to Le Morvan."} {"text":"Intentionalism is the thesis that all mental states are intentional, i.e. that they are about something: about their intentional object. This thesis has also been referred to as \"representationalism\". Intentionalism is entailed by Brentano's claim that intentionality is \"the mark of the mental\": if all and only mental states are intentional, then it is surely the case that all mental states are intentional."} {"text":"Discussions of intentionalism often focus on the intentionality of conscious states. 
One can distinguish in such states their phenomenal features, or what it is like for a subject to have such a state, from their intentional features, or what they are about. These two features seem to be closely related to each other, which is why intentionalists have proposed various theories in order to capture the exact form of this relatedness."} {"text":"Critics of intentionalism, so-called anti-intentionalists, have proposed various apparent counterexamples to intentionalism: states that are considered mental but lack intentionality."} {"text":"Some anti-intentionalist theories, such as that of Ned Block, are based on the argument that phenomenal conscious experience or qualia is also a vital component of consciousness, and that it is not intentional. (The latter claim is itself disputed by Michael Tye.)"} {"text":"Another form of anti-intentionalism associated with John Searle regards phenomenality itself as the \"mark of the mental\" and sidelines intentionality."} {"text":"A further form argues that some unusual states of consciousness are non-intentional, although an individual might live a lifetime without experiencing them. Robert K.C. Forman argues that some of the unusual states of consciousness typical of mystical experience are \"pure consciousness events\" in which awareness exists, but has no object, is not awareness \"of\" anything."} {"text":"Several authors have attempted to construct philosophical models describing how intentionality relates to the human capacity to be self-conscious. Cedric Evans contributed greatly to the discussion with his \"The Subject of Self-Consciousness\" in 1970. He centered his model on the idea that executive attention need not be propositional in form."} {"text":"The auditory moving-window is a psycholinguistic paradigm developed at Michigan State University by Fernanda Ferreira and colleagues. 
Ferreira and colleagues built the paradigm in order to address the scarcity of literature on (fluent) spoken-language comprehension relative to the robust literature on visual-word processing. The auditory moving-window can be used to assess indirectly the processing load of a sentence: this processing load is assessed by an analogue of reaction time within the paradigm (discussed below). Reaction times within the paradigm are sensitive to at least word frequency and garden path effects."} {"text":"The paradigm has been used to study syntactic processing in aphasic patients. One such study suggests that many aphasic patients retain their abilities to process syntactic structures on-line. Further, evidence suggests that expressive aphasics have a degraded ability to process complex syntax on-line, whereas receptive aphasics are impaired only after on-line comprehension concludes."} {"text":"The auditory moving-window paradigm, because of its similarity to the eye tracking paradigm, has a broad range of applications. It is at least sensitive enough to detect frequency effects on comprehension: low-frequency words had a greater IRT and DT than high-frequency words, suggesting a relative difficulty of lexical access. Further, it is sensitive to garden path effects."} {"text":"Because one of the aims of the auditory moving-window is to investigate fluent speech, the paradigm is several steps more complex than simple auditory word-by-word presentation:"} {"text":"The presentation of a prepared sample depends on what software is being used. What follows is an abstraction of the general strategy."} {"text":"The auditory moving-window is roughly analogous to an eye tracking task presented in the auditory modality. The eye tracking variable of interest that is thought to be closest to the DT is that of fixation duration. They are held to be directly related: a greater DT corresponds to a greater fixation duration. 
Several eye-tracking studies use fixation duration as an indirect measure of processing load: a greater fixation duration corresponds to a greater processing load. The same applies to DTs."} {"text":"Kenneth Goodman (December 23, 1927 \u2013 March 12, 2020) was Professor Emeritus, Language, Reading and Culture, at the University of Arizona. He is best known for developing the theory underlying the literacy philosophy of whole language."} {"text":"Goodman began teaching at Wayne State University in 1962. His research focused on reading in public schools. While at Wayne State University, Goodman developed miscue analysis, a process of assessing students' reading comprehension based on samples of oral reading. One of his research assistants in miscue analysis was Rudine Sims Bishop. Goodman taught at Wayne State University for 15 years before moving to the University of Arizona."} {"text":"After publishing an influential book on the subject of whole language, Goodman began to create a psycholinguistic and sociolinguistic model of reading inspired by the work of Noam Chomsky. Goodman decided that the process of reading was similar to the process of learning a language as conceptualized by Chomsky, and that literacy developed naturally as a consequence of experiences with print, just as language ability developed naturally as a consequence of experiences with language. 
Goodman concluded that attempts to teach rules (\"phonics\") to children for decoding words were inappropriate and not likely to succeed."} {"text":"After developing and researching the Whole Language model, Goodman presented his work to the American Educational Research Association (AERA) conference and published an article in the \"Journal of the Reading Specialist,\" in which he famously wrote that reading is a \"psycholinguistic guessing game.\" He retired from the University of Arizona in August 1998."} {"text":"Goodman's concept of written language development views it as parallel to oral language development. Goodman's theory was a basis for the whole language movement, which was further developed by Yetta Goodman, Regie Routman, Frank Smith and others. His concept of reading as an analogue to language development has been studied by brain researchers such as Sally Shaywitz, who rejected the theory on the grounds that reading does not develop naturally in the absence of instruction. Despite this, the theory continues to receive support from some scholars. Goodman's theory and strong convictions made him an icon of the whole language movement and a lightning rod for criticism from those who disagree with it. His book \"What's Whole in Whole Language\" sold over 250,000 copies in six languages."} {"text":"Goodman served in several important capacities, including as President of the International Reading Association, President of the National Conference on Research in Language and Literacy, and President of the Center for Expansion of Language and Thinking. He also worked extensively with the National Council of Teachers of English. He received a number of awards, including the James Squire Award from NCTE for contributions to the profession and NCTE (2007). Goodman published over 150 articles and book chapters as well as a number of books. 
In addition to \"What's Whole in Whole Language\", he also wrote \"Ken Goodman on Reading\" and \"Phonics Phacts\"; all were published by Heinemann. His book \"Scientific Realism in Studies of Education\" was published by Taylor and Francis in 2007."} {"text":"His last book was \"Reading- The Grand Illusion: How and Why People Make Sense of Print\", with contributions from linguist Peter H. Fries and neurologist Steven L. Strauss, and was published by Routledge in 2016."} {"text":"Goodman was inducted into the Reading Hall of Fame in 1989."} {"text":"1. \"A Communicative Theory of the Reading Curriculum,\" Elementary English, Vol. 40:3, March 1963, pp.\u00a0290\u2013298."} {"text":"2. and Yetta M. Goodman, \"Spelling Ability of a Self-Taught Reader,\" The Elementary School Journal, Vol. 64:3, December 1963, pp.\u00a0149\u2013154."} {"text":"3. \"The Linguistics of Reading,\" The Elementary School Journal, Vol. 64:8, April 1964, pp.\u00a0355\u2013361."} {"text":"Also in Durr, (ed.), Readings on Reading, Boston: Houghton, Mifflin, 1968."} {"text":"Also in Frost, (ed.), Issues and Innovations in the Teaching of Reading, Chicago: Scott, Foresman, 1967."} {"text":"4. \"A Linguistic Study of Cues and Miscues in Reading,\" Elementary English, Vol. 42:6, October 1965, pp.\u00a0639\u2013643."} {"text":"Also in Wilson and Geyer, (eds.), Reading for Diagnostic and Remedial Reading, Merrill, 1972, pp.\u00a0103\u2013110."} {"text":"Also in Gentile, Kamil, and Blanchard, (eds.), Reading Research Revisited, Columbus: Charles Merrill, 1983, pp.\u00a0129\u2013134."} {"text":"Also in Singer and Ruddell, (eds.), Theoretical Models and Processes of Reading, 3rd Edition, Newark: IRA, 1985."} {"text":"5. \"Dialect barriers to reading comprehension,\" Elementary English, Vol. 42:8, pp.\u00a0852\u201360, December 1965. 
Also in Linguistics and Reading, NCTE, 1966."} {"text":"Also in Dimensions of Dialect, NCTE, 1967."} {"text":"Also in Kosinski, (ed.), Readings on Creativity and Imagination in Literature and Language, NCTE, 1969."} {"text":"Also in Teaching Black Children to Read, Center for Applied Linguistics, Washington, 1969."} {"text":"Also in Kise, Binter, and Dalabalto, (eds.), Readings on Reading, Int. Book Co., pp.\u00a0241\u201351."} {"text":"Also in Caper, Green, Baker, Listening and Speaking in the English Classroom, Macmillan, 1971. Also in Shores, Contemporary English: Change and Variation, Lippincott, 1972."} {"text":"Also in Ruddell, (ed.), Resources in Reading Language Instruction, Prentiss Hall, 1972."} {"text":"Also in DeStefano, Editor, Language, Society and Education, Jones Co., Worthington, Ohio, 1973."} {"text":"6. and Yetta Goodman, \"References on Linguistics and the Teaching of Reading,\" Reading Teacher, Vol. 21:1, October, 1967, pp.\u00a022\u201323."} {"text":"7. \"Word Perception: Linguistic Bases,\" Education, Vol. 87, May 1967, pp.\u00a0539\u2013543."} {"text":"8. \"Reading: A Psycholinguistic Guessing Game,\" Journal of the Reading Specialist, Vol. 6:4, May 1967, pp.\u00a0126\u2013135."} {"text":"Also in Singer, H. and Ruddell, R.B., Theoretical Models and Processes of Reading, IRA, 1970, pp.\u00a0259\u2013272."} {"text":"Also in Gunderson, D., Language and Reading, Center for Applied Linguistics, Washington, 1970. Also in Harris, A.J. and Sipay, E.R., Readings on Reading Instruction, David McKay, 1972."} {"text":"Also in Karlin, Robert, Perspectives on Elementary Reading, Harcourt."} {"text":"Also in Comprehension and the Use of Context, Open University Press, London, pp.\u00a030\u201341, 1973. 
Also in Johnson, Nancy, (ed.), Current Topics in Language, Winthrop, 1976, pp.\u00a0370\u201383"} {"text":"Also in Reading Development, Open University Press, London, 1977."} {"text":"Also in The English Curriculum: Reading I, London: The English and Media Centre, 1990, pp.\u00a021\u201324."} {"text":"9. \"Linguistic Insights Teachers May Apply,\" Education, Vol. 88:4, April\u2013May 1968, pp.\u00a0313\u2013316."} {"text":"Also in What About Linguistics and the Teaching of Reading, Scott, Foresman, 1968."} {"text":"10. \"Reading Disability: A Challenge,\" The Michigan English Teacher, October\u2013November, 1968."} {"text":"11. \"Linguistics in a Relevant Curriculum,\" Education, April\u2013May, 1969, pp.\u00a0303\u2013307."} {"text":"Also in Savage, Linguistics For Teachers, SRA, 1973, pp.\u00a092\u201397."} {"text":"12. \"Building on Children's Language,\" The Grade Teacher, March 1969, pp.\u00a035\u201342."} {"text":"13. \"Let's Dump the Up-Tight Model in English,\" Elementary School Journal, October 1969, pp.\u00a01\u201313."} {"text":"Also in the Education Digest, December 1969, pp.\u00a045\u201348."} {"text":"Also in Linguistics for Teachers: Selected Readings, SRA."} {"text":"Also in Burns, Elementary School Language Arts, Selected Readings, 2nd Edition, Rand McNally. Also in Harris, J., Handbook of Standard and Non-Standard Communication, Alabama Assistance Center, University of Alabama, 1976."} {"text":"14. \"Language and the Ethnocentric Researcher,\" SRIS Quarterly, Summer, 1969."} {"text":"Also in The Reading Specialist, Spring, 1970."} {"text":"15. \"What's New In Curriculum: Reading,\" Nations Schools, 1969."} {"text":"Also in Smith, Frank, Psycholinguistics and Reading, Holt, 1972, pp.\u00a0158\u2013176."} {"text":"Also in Emans and Fishbein, Competence in Reading, SRA, 1972."} {"text":"German translation in Hofer, A., Lesenlernen: Theorie and Unterricht, Schwann: Dusseldorf, 1976, pp. 298-320."} {"text":"Also in Current Comments, Vol. 
21:6, February 6, 1989, p.\u00a020. (Cited as \"Classic Citation\" in Social Science Abstracts)"} {"text":"17. \"A Psycholinguistic Approach to Reading. Implications for the Mentally Retarded,\" The Slow Learning Child, (Australia), Summer 1969. Also in Simon and Schuster, Selected Academic Readings."} {"text":"18. \"On Valuing Diversity in Language: Overview,\" Childhood Education, 1969, pp.\u00a0123\u2013126."} {"text":"Also in Triplett and Funk, Language Arts in the Elementary School, Lippincott."} {"text":"Dutch translation in Kleuterwereld as \"Het Belag Van De Verscheidenbled in de Tall,\" April 1973, pp.\u00a0170\u2013171."} {"text":"Also in Harris, J., A Handbook of Standard and Non-Standard Communication, Alabama Assistance Center, 1976."} {"text":"19. and Carolyn L. Burke, \"When a Child Reads: A Psycholinguistic Analysis,\" Elementary English, January 1970, pp.\u00a0121\u2013129."} {"text":"Also in Harris and Smith, Individualizing Reading Instruction, Holt, 1972, pp.\u00a0231\u2013243."} {"text":"Also in Ruddell et al., Resources in Reading-Language Instruction, Prentiss Hall, 1973."} {"text":"20. \"Psycholinguistic Universals in the Reading Process,\" Journal of Typographic Research, Spring 1970, pp.\u00a0103\u2013110."} {"text":"Also in Pimslear and Quinn, (eds.), Papers on the Psychology of Second Language Learning, Cambridge University Press, 1971, pp.\u00a0135\u201342."} {"text":"Also in Smith, F., Psycholinguistics and Reading, Holt, 1972, pp.\u00a021\u201327."} {"text":"21. \"Dialect Rejection and Reading: A Response,\" Reading Research Quarterly, Summer 1970, pp.\u00a0600\u2013603. Also in Selected Academic Readings, Simon and Schuster."} {"text":"22. 
and Frank Smith, \"On the Psycholinguistic Method of Teaching Reading,\" Elementary School Journal, January, 1971, pp.\u00a0177\u2013181."} {"text":"Also in Ekwell, Psychological Factors in the Teaching of Reading, Merrill, pp.\u00a0303\u2013308."} {"text":"Also in Fox and DeStefano, Language and the Language Arts, Little Brown, Boston, 1973, pp.\u00a0239\u201343."} {"text":"Also in Smith, F., Psycholinguistics and Reading, Holt, 1973, pp.\u00a0177\u2013182."} {"text":"German translation in A. Hofer, Lesenlernen: Theorie und Unterricht, Schwann: Dusseldorf, 1976, pp. 232-237."} {"text":"23. \"Promises, Promises,\" The Reading Teacher, January, 1971, Vol. 24:4, pp.\u00a0356\u2013367."} {"text":"Also in Fox, Language and the Language Arts, Little Brown, 1972."} {"text":"Also in Malberger et al., Learning, Shoestring Press, 1972."} {"text":"24. \"Who Gave Us The Right?,\" The English Record, April, 1971, Vol. xxi, 4, pp.\u00a044\u201345."} {"text":"25. and D. Menosky, \"Reading Instruction: Let's Get It All Together,\" Instructor, March 1971, pp.\u00a044\u201345."} {"text":"26. \"Decoding -- From Code to What?\" Journal of Reading, April, 1971, Vol 14:7, pp.\u00a0455\u2013462."} {"text":"Also in Fox and DeStefano, Language and the Language Arts, Little Brown, 1973, pp.\u00a0230\u2013236. Also in Berry, Barrett, and Powell, Editors, Elementary Reading Instruction Selected Materials II, Allen & Bacon, 1974, pp.\u00a015\u201323."} {"text":"27. \"Oral Language Miscues,\" Viewpoints, Vol. 48:1, January, 1972, pp.\u00a013\u201328."} {"text":"28. \"Reading: The Key Is in Children's Language,\" The Reading Teacher, Vol. 25, March 1972, pp.\u00a0505\u2013508."} {"text":"Also in Reid, Jesse, and Harry Donaldson, (eds.), Reading: Problems and Practices, 2nd edition, London: Ward Lock Educational Limited, 1977, pp.\u00a0358\u2013362."} {"text":"29. \"Orthography in a Theory of Reading Instruction,\" Elementary English, December, 1972, Vol. 
49:8, pp.\u00a01254\u20131261."} {"text":"30. \"Up-Tight Ain't Right,\" School Library Journal, October, 1972, Vol. 19:2, pp.\u00a082\u201384."} {"text":"Also in Trends and Issues in Children's Literature, New York: Xerox, 1973."} {"text":"31. \"The 13th Easy Way to Make Learning to Read Difficult,\" A Reaction to Gleitman and Rozin, Reading Research Quarterly, Summer, 1973, VIII:4."} {"text":"32. with Catherine Buck, \"Dialect Barriers to Reading Comprehension Revisited,\" Reading Teacher, October 1973, Vol. 27:1, pp.\u00a06\u201312."} {"text":"Also in Mental Health Digest, December, 1973, Vol. 5:12, pp.\u00a020\u201323."} {"text":"Also in Johnson, Nancy, (ed.), Current Topics in Language, Winthrop, 1976, pp.\u00a0409\u2013417."} {"text":"Reprinted as classic article in The Reading Teacher, Volume 50, No. 6, March, 1997, pp.\u00a0454\u2013459."} {"text":"33. and Yetta M. Goodman and Carolyn L. Burke, \"Language in Teacher Education,\" Journal of Research and Development in Education, Fall, 1973, Vol. 7:1, pp.\u00a066\u201371."} {"text":"34. \"Military-Industrial Thinking Finally Captures the Schools,\" Educational Leadership, February, 1974, pp.\u00a0407\u2013411."} {"text":"35. \"Effective Teachers of Reading Know Language and Children,\" Elementary English, September 1974, Vol. 51:6, pp.\u00a0823\u2013828."} {"text":"36. \"Reading: You Can Get Back to Kansas Anytime You're Ready, Dorothy,\" English Journal, November, 1974, Vol. 63:8, pp.\u00a062\u201364."} {"text":"Also in Reading in Focus, NCTE Newsletter, Australia, October 1976."} {"text":"37. \"Do You Have to be Smart to Read? Do You Have to Read to be Smart?\" Reading Teacher, April 1975, pp.\u00a0625\u2013632."} {"text":"Also in Education Digest, Sept., 1975, Vol. 41, pp.\u00a041\u201344."} {"text":"Also in ABH Reading Pacesetter, Manilla, Philippines, 1975."} {"text":"Also Spanish translation in Enfoques Educacionales, Chile, No. 5, 1979, pp.\u00a040\u201347."} {"text":"38. 
\"Influence of the Visual Peripheral Field in Reading,\" Research in Teaching of English, Fall 1975, Vol. 9:2, pp.\u00a0210\u2013222."} {"text":"39. \"A Bicentenniel Revolution in Reading,\" Georgia Journal of Reading, Vol. 2:1, pp.\u00a013\u201319, Fall 1976."} {"text":"40. \"From the Strawman to the Tin Woodman, A Response to Mosenthal,\" Reading Research Quarterly, Vol. XII:4, pp.\u00a0575\u201385."} {"text":"41. \"And a Principled View from the Bridge\", Reading Research Quarterly, Vol. XII:4, p.\u00a0604."} {"text":"42. and Yetta M. Goodman, \"Lesenlernen - ein funktionaler Ansatz\" in Die Grundschule, Vol. 9:6, June 1977, pp.\u00a0263\u201367."} {"text":"43. and Y. Goodman, W. McGinnitie, Michio Namekawa, Eikkchi Kurasawa, Takashiko Sakamoto, \"Tokubetsu Zadankai: Eizo Jidai ni okero Dokusho Shido\" (Reading Instruction in the Era of Visual Imagery) Sogo Kyuiku Gijutso (Unified Educational Theory), Vol. 31.11, pp.\u00a0116\u201325, December 1976, Tokyo."} {"text":"44. and Yetta M. Goodman, \"Learning about Psycholinguistic Processes by Analyzing Oral Reading,\" Harvard Educational Review, Vol. 40:3, 1977, pp.\u00a0317\u201333."} {"text":"Also in Constance McCullough, Editor, Inchworm, Inchworm Persistent Problems in Reading Education, IRA, 1980, pp.\u00a0179\u2013201."} {"text":"Also in Thought and Language\/Language and Reading, (eds.), Harvard University Press, 1980."} {"text":"45. \"Acquiring Literacy is Natural: Who Skilled Cock Robin?,\" Theory Into Practice, December 1977, Vol. xvi:5, pp.\u00a0309\u2013314."} {"text":"Also in 25th Anniversary Issue, Theory Into Practice, December, 1987, pp.\u00a0368\u2013373."} {"text":"46. \"And Good Luck to Your Boy,\" Arizona English Bulletin, October, 1977, Vol. 20, pp.\u00a06\u201310."} {"text":"47. 
\"Open Letter to President Carter,\" SLATE, 3:2, March, 1978."} {"text":"Condensed in Ohio Reading Teacher, January, 1979;"} {"text":"Also in Michigan English Teacher, May, 1978;"} {"text":"Also in Wisconsin Reading Teacher, May 1979; Wisconsin Administration Bulletin, May 1979."} {"text":"48. \"Minimum Competencies: A Moral View,\" in International Reading Association, Minimum Competency Standards, Three Points of View, 1978."} {"text":"49. \"What is Basic About Reading,\" in Eisner, Elliot W., (ed.), Reading, The Arts and the Creation of Meaning, National Art Education Association, 1978, pp.\u00a055\u201370."} {"text":"50. \"Commentary: Breakthroughs and Lock-outs,\" Language Arts, November\u2013December, 1978, Vol. 55:8, pp.\u00a0911\u201320. Also in Connecticut Council of Teachers of English Newsletter, XII:2, December 1978."} {"text":"51. \"The Know-More and Know-Nothing Movements in Reading: A Personal Response,\" Language Arts, September, 1979, Vol. 56:8, pp.\u00a0657\u201363."} {"text":"Also in Georgia Journal of Reading, Vol. 5:2, Spring, 1980, pp.\u00a05\u201312."} {"text":"Translation in Danish in Laesepaedogogen 1981 and as Laese Rapport 4 under the title, \"Laesning efter mening-eller laesning som teknik,\" undated."} {"text":"52. and Yetta M. Goodman, \"Learning to Read is Natural,\" in L.B. Resnick and P.A. Weaver, (eds.), Theory and Practice of Early Reading, Hillsdale, NJ: Erlbaum, 1979, pp.\u00a0137\u201355."} {"text":"Translation in French in Apprentissage et Socialisation, Vol. 3:2, 1980, pp.\u00a0107\u201323."} {"text":"Translation in Spanish in Enfoques Educasionales, Chile, 1980."} {"text":"53. \"Revisiting Research Revisited,\" Reading Psychology, Summer, 1980, pp.\u00a0195\u201397."} {"text":"Also in Gentile, Kamil, and Blanchard, Editors, Reading Research Revisited, Columbus: Merrill, 1983."} {"text":"54. \"On The Ann Arbor Black English Case,\" English Journal, Vol. 69:6, September, 1980, p.\u00a072."} {"text":"55. and Frederick V. 
Gollasch, \"Word Omissions: Deliberate and Non-Deliberate,\" Reading Research Quarterly, XVI:1, 1980, pp.\u00a06\u201331. (See also occasional papers.)"} {"text":"56. with Yetta Goodman, \"Twenty Questions about Teaching Language,\" Educational Leadership, March 1981, Vol. 38:6, pp.\u00a0437\u201342."} {"text":"57. \"A Declaration of Professional Conscience for Teachers\" Childhood Education, March\u2013April 1981, pp.\u00a0253\u201355."} {"text":"Also in Learning from Children, by Edward Labinowicz, Addison-Wesley Publishing Company, 1984."} {"text":"Also in Goodman, K.S., Bird, L., and Goodman, Y., (eds.), The Whole Language Catalog, Santa Rosa, CA: American School Publishers, 1991, inside front cover."} {"text":"Also in Kaufmann, F. A. (ed.), Council-Grams, Vol. 54:4, Urbana, IL: NCTE, 1991, p.\u00a08."} {"text":"Also in Society for Developmental Education News, Vol. 2:2, Fall, 1992, p.\u00a06."} {"text":"Also in Into Teachers' Hands, D. Sumner, (ed.)Peterborough, NH: Society for Developmental Education, 1992, inside front cover."} {"text":"Also in Whole Teaching, Society for Developmental Education Sourcebook, 6th Edition, Peterborough, NH: Society for Developmental Education, 1993, inside front cover."} {"text":"58. and Y.M. Goodman, \"To Err is Human,\" NYU Education Quarterly, Summer, 1981, Vol. XII:4, pp.\u00a014\u201319."} {"text":"59. \"Response to Stott,\" Reading-Canada-Lecture, April, 1981, Vol. I:2, pp.\u00a018\u2013120."} {"text":"60. \"Lukemisprosessi: monikielinen, kehityksellinen nakokulma\" in Jasenlehti, No. 3, (Finland)1981, pp.\u00a08\u20139."} {"text":"61. \"Revaluing readers and reading,\" Topics in Learning and Learning Disabilities, Vol. I:4, January 1982, pp.\u00a087\u201393."} {"text":"62. and Yetta Goodman, \"Reading and Writing Relationships: Pragmatic Functions,\" Language Arts, May 1983, pp.\u00a0590\u201399."} {"text":"Also in J. 
Jensen, (ed.), Composing and Comprehending, Urbana, IL: NCRE\/ERIC, 1984, pp.\u00a0155\u201364."} {"text":"63. \"The Solution is the Risk: A Reply to the Report of the National Commission on Excellence in Education,\" SLATE, Vol. 9:1, September, 1983."} {"text":"64. and L. bird, \"On the Wording of Texts: A Study of Intra-text Word Frequency,\" Research in Teaching English, Vol. 18:2, May, 1984, pp.\u00a0119\u201345."} {"text":"65. Growing into Literacy\" Prospects, Education Quarterly of UNESCO, Vol. XV:I, 1985. (Also in French, Spanish, Arabic, and Russian translations)."} {"text":"66. Commentary: \"On Being Literate in an Age of Information,\" Journal of Reading, Vol. 28:5, February 1985, pp.\u00a0388\u201392."} {"text":"Also in Jean M. Eales, Language, Communication and Education, London: Open University and Croom Helm, November, 1986."} {"text":"67. Introduction to: \"A Glimpse At Reading Instruction In China\" by Shanye Jiang, Bo Li, The Reading Teacher, Vol. 38:8, April, 1985, pp.\u00a0762\u201366."} {"text":"68. \"Commentary: Chicago Mastery Learning Reading: A Program with 3 Left Feet,\" Education Week, October 9, 1985, p.\u00a020."} {"text":"69. \"Un programma olistico per l'apprendimento e lo sviloppo della lettura,\" Educazione e Scuola, (Italy) Vol.IV:15, September, 1985, pp.\u00a011\u201324. (Also see Occasional Paper No. 1 below)."} {"text":"70. \"Response to Becoming a Nation of Readers,\" Reading Today, October, 1985."} {"text":"71. \"Basal Readers: A Call for Action,\" Language Arts, April 1986."} {"text":"72. and Mira Beer-Toker, \"Questions about Children's Language and Literacy: an Interview with Kenneth S. Goodman,\" Mother Tongue Education Bulletin, (Quebec, Canada) Vol. l:2, Spring and Fall, 1986, pp.\u00a019\u201322."} {"text":"73. \"You and the Basals: Taking Charge of Your Classroom,\" Learning 87, Vol. 16:2, September, 1987, pp.\u00a062\u201365."} {"text":"Also in Manning, G. 
and M., (eds.), Whole Language: Beliefs and Practices, K-8, NEA: Washington, D.C., 1989, pp.\u00a0217\u201319."} {"text":"74. \"Determiners in Reading: Miscues on a Few Little Words,\" Language and Education, Vol. 1:l, 1987, pp.\u00a033\u201358."} {"text":"75. \"Who Can Be a Whole Language Teacher?,\" Teachers Networking, Vol.1:1, April, 1987, p.\u00a01."} {"text":"76. \"To My Professional Friends in New Zealand,\" Reading Forum NZ, June, 1987."} {"text":"77. \"The Reading Process: Ken Goodman's Comments,\" ARA Today, August, 1987."} {"text":"78. \"Look What They've Done to Judy Blume!: The `Basalization' of Children's Literature,\" The New Advocate, Vol. I:1, 1988, pp.\u00a029\u201341."} {"text":"79. \"Reflections: An Interview with Ken and Yetta Goodman,\" Reading - Canada - Lecture, Vol. 6:1, Spring, 1988, pp.\u00a046\u201353."} {"text":"80. On writing 'Reading Miscues - Windows on the Psycholinguistic Guessing Game',\" Current Comments, Vol. 21:6, February 6, 1989, p. 20."} {"text":"81. \"Whole Language is Whole: A Response to Heymsfeld,\" Educational Leadership, Vol. 46:6, March, 1989, pp.\u00a069\u201371."} {"text":"82. \"The Whole Language Approach: A Conversation with Kenneth Goodman,\" Writing Teacher, Vol. III:1, August\u2013September, 1989, pp.\u00a05\u20138."} {"text":"83. \"Access to Literacy: Basals and Other Barriers,\" Theory Into Practice, Guest Editors, Patrick Shannon and Kenneth S. Goodman, Vol. XXXVIIII:4, Autumn, 1989, pp.\u00a0300\u2013306."} {"text":"84. \"Latin American Conference is Successful,\" Reading Today, Vol. 7:3, December, 1989."} {"text":"85. and Ira E. Aaron, Jeanne S. Chall, Dolores Durkin, Dorothy S. Strickland, \"The Past, Present, and Future of Literacy Education: Comments from a Panel of Educators, Part I,\" The Reading Teacher, Vol. 43:4, January, 1990, pp.\u00a0302\u201315."} {"text":"86. \"Whole Language Research: Foundations and Development,\" Elementary School Journal, Vol. 
90:2, November 1989, pp.\u00a0207\u201321."} {"text":"Japanese translation by Takashi Kuwabara in Journal of Language Teaching, Vol. XVII, pp.\u00a099\u2013116, 1990."} {"text":"87. \"Managing the Whole Language Classroom,\" Instructor, Vol. 99:6, February, 1990, pp.\u00a026\u201329."} {"text":"88. \"A Rebuttal to Priscilla Vail,\" WLSIG Newsletter, Spring, 1990, p.\u00a04."} {"text":"89. \"El Linguaje Integral: Un Camino Facil para el Desarrollo del Lenguaje,\" Lectura Y Vida, Vol. IX:2, June 1990, pp.\u00a05\u201313."} {"text":"90. and Dorothy F. King \"Whole Language: Cherishing Learners and Their Language,\" LSHSS, Vol. 21:4, October, 1990."} {"text":"91. \"An Open Letter to President Bush,\" Whole Language Umbrella Newsletter, Summer, 1991, p.\u00a01-4."} {"text":"92. and Yetta M. Goodman, \"About Whole Language,\" Japanese (First Language) Education Research Monthly, No. 233, October, 1991, pp.\u00a064\u201371."} {"text":"93. and Diane de Ford, Irene Fountas, Yetta Goodman, Vera Milz, and Sharon Murphy \"Dialogue on Issues in Whole Language,\" Orbit, (Canada)Vol. 22:4, December, 1991, pp.\u00a01\u20133."} {"text":"94. and Richard J. Meyer, \"Whole Language: Principles for Principals,\" SAANYS Journal, Vol. 22:3, Winter, 1991\u201392, pp.\u00a07\u201310."} {"text":"95. \"Why Whole Language is Today's Agenda in Education,\" Language Arts, Vol. 69:5, September, 1992, pp.\u00a0354\u2013363."} {"text":"96. \"I Didn't Found Whole Language,\" The Reading Teacher, Vol. 46:3, November, 1992, pp."} {"text":"Also in The Education Digest, October, 1993, Vol. 59, No. 2, pp.\u00a064\u201367."} {"text":"97. \"Gurus, Professors, and the Politics of Phonics,\" Reading Today, December 1992\/January 1993, pp.\u00a08\u201310."} {"text":"98. \"Phonics Phacts,\" Nebraska Language Arts Bulletin, Vol. 5:2, January, 1993, pp.\u00a01\u20135."} {"text":"99. with Lisa Maras and Debbie Birdseye \"Look! Look! Who Stole the Pictures From the Picture Book?,\" The New Advocate, Volume 7, No. 
1, Winter 1994, pp.\u00a01\u201324."} {"text":"100. \"Standards, Not!\" Commentary, Education Week, September 7, 1994, pp.\u00a039 & 41."} {"text":"Also in The Council Chronicle, Volume 4, Number 2, November 1994, pp. back and 17."} {"text":"101. \"Deconstructing the rhetoric of Moorman, Blanton, and McLaughlin: A response,\" Reading Research Quarterly, Vol. 29, No. 4, Oct\/Nov\/Dec 1994, pp.\u00a0340\u2013346."} {"text":"102. \"Is whole-language instruction the best way to teach reading?, CQ Researcher, May 19, 1995, Volume 5, No. 19, pp. 457-461."} {"text":"103. \"Forced Choices in a Non-Crisis, A Critique of the Report of the California Reading Task Force\" Education Week, Vol. XV, Number 11, November, 1995, pp.\u00a039 & 42."} {"text":"104. with Elizabeth Noll \"Using a Howitzer to Kill a Butterfly\": Teaching Literature with Basals, The New Advocate, Volume 8, Number 4, Fall 1995, pp.\u00a0243\u2013254."} {"text":"105. with Yetta M. Goodman, Rev. of \"Possible Lives: The Promise of Public Education in America,\"Mike Rose, Rhetoric Review, Volume 14:2, Spring, 1996, pp. 420-424."} {"text":"106. \"An open letter to Richard Riley and Bill Clinton\", Reading Today, Volume 13, No. 6, June\/July, 1996, pp.\u00a039."} {"text":"107. \"The Reading Derby: An Open Letter to Wisconsin Teachers\" WSRA Journal, Volume 40, No. 3, Summer\/Fall 1996, pp.\u00a01\u20135."} {"text":"108. \"Educar, como se ense\ufffda a vivir\" Interview with Ken and Yetta Goodman, Para Ti, No. 3844, March 11, 1996, pp.\u00a092\u201393."} {"text":"109. \"Ken and Yetta Goodman: Exploring the Roots of Whole Language\" Interview with Ken and Yetta Goodman, by Jerome Harste and K. Short, Language Arts, Volume 73, Number 7, November, 1996, pp.\u00a0508\u2013519."} {"text":"111. 
\"The Reading Process: Insights from Miscue Analysis\" A Summary adapted from The 1996-97 Dean's Forum, The Advancement of Knowledge and Practice in Education Proceedings, University of Arizona, Tucson, January, 1997."} {"text":"112. \u201cCalifornia, Whole Language, and the NAEP\u201d CLIPS, Volume 3, Number 1, Spring, 1997, pp.\u00a053\u201356."} {"text":"113. \"Capturing 'America Reads' For a Larger Agenda?\" Education Week, Volume XVII, Number 4, September, 1997, pp.\u00a034\u201335."} {"text":"114. \u201cPutting Theory and Research in the Context of History\u201d Language Arts, Volume 74, Number 8, December, 1997, pp.\u00a0595\u2013599."} {"text":"115. \u201cParental Choice bill requires a state-mandated curriculum\u201d Guest Comment, Arizona Daily Star, February 20, 1998, Section A, p.\u00a011."} {"text":"116. \u201cGood News from a Bad Test: Arizona, California and the National Assessment\u201d Arizona Reading Journal, Vol XXV, No. 1 Spring\/Summer, 1998, pp.\u00a013\u201323."} {"text":"117. \u201cThe Phing Points, Volume 11, Number 2, April\/May, 2000, pp. 18-19."} {"text":"126. \u201cTeaching Amid the Rocket\u2019s Red Glare\u201d Minnesota English Journal, Fall, 2000, pp.\u00a0107\u2013110."} {"text":"127. \u201cDefending Teachers and Learners from Mandates\u201d Minnesota English Journal, Fall, 2000, pp.\u00a0111\u2013114."} {"text":"128. With Paulson, Eric J., \u201cInfluential Studies in Eye-Movement Research,\u201d Reading Online, The International Reading Association's Electronic Journal. December, 1998. www.readingonline.org\/research\/eyemove.html"} {"text":"129. \u201cOn Reading\u201d (6) The Science of Reading, Tokyo, Japan: The Japan Reading Association, Vol July, 2000, Japanese Translation, (Yokota,Rayco translator), pp.\u00a073\u201382."} {"text":"130. \u201cOn Reading\u201d (7) The Science of Reading, Tokyo, Japan: The Japan Reading Association, Vol. 44, No. 
3, October, 2000, Japanese Translation, (Yokota, Rayco translator), pp.\u00a083\u2013104."} {"text":"131 \u201cAims\u201d Tucson, AZ: Arizona Daily Star, Sunday, October 28, 2001 (Guest Opinion), p.B-11."} {"text":"132. \u201cOn Reading\u201d (8) The Science of Reading, Tokyo, Japan: The Japan Reading Association, Vol. 45, No. 3, October, 2001, Japanese Translation, (Yokota, Rayco translator), pp.\u00a0103\u2013125."} {"text":"133. \u201cA Declaration of Professional Conscience for Teacher Educators\u201d Practically Primary, Vol. 8, Number 3, October 2003, Australian Literacy Educators\u2019 Association, pp.\u00a05\u20136."} {"text":"134. \u201cIntroduction,\u201d Colombian Applied Linguistics Journal, Special Issue on Literacy Processes, Number 6, Sept. 2004, p.\u00a04-5."} {"text":"135. \u201cPerspectiva transaccional sociopsyicolinguistica de la lectura y la escritura,\u201d Revista Lectura y Vida, Textos en Contexto, No. 2, Buenos Aires, Argentina: Asociacion Internacional de Lectura, December, 2004."} {"text":"136. \u201cMaking Sense of Written Language: A liflong Journey\u201d, Journal of Literacy Research Vol 37 No. 1 Spring 2005"} {"text":"1. with Hans Olsen, Cynthia Colvin, Louis Vanderlinde, Choosing Materials to Teach Readings, Detroit: Wayne State University Press, 1966. Second edition, 1973."} {"text":"2. (ed.), The Psycholinguistic Nature of the Reading Process, Detroit: Wayne State University Press, 1968. Second Printing, 1973."} {"text":"Lead article translation in German, A. Hofer, Lesenlernen: Theorie und Unterricht, Schwann: Dusseldorf, 1976, pp. 139-51."} {"text":"3. and J. Fleming, (eds.), Psycholinguistics and the Teaching of Reading, Newark, International Reading Association, 1969."} {"text":"4. 
and Olive Niles, Reading: Process and Program, Champaign, IL: NCTE, 1969, (monograph)."} {"text":"Excerpt in Reading: Today and Tomorrow, London: Open University, 1972."} {"text":"Also in Singer and Ruddell, Theoretical Models and Processes in Reading, Second edition, Neward, DE: IRA, 1976."} {"text":"Also in German edition of Theoretical Models, M. Angermaier."} {"text":"5. with E. Brooks Smith, and Robert Meredith, Language and Thinking in the Elementary School, Holt, Rinehart and Winston, 1970."} {"text":"Chapter reprinted in \"Resources in Reading-Language Instruction,\" Ruddell, (ed.), Prentice Hall, 1973. 2nd edition of Language and Thinking in School, 1976."} {"text":"3rd edition, with E. B. Smith, R. Meredith, and Y. Goodman, Language and Thinking in School, A Whole-Language Curriculum, New York: Richard C. Owen, 1987."} {"text":"6. and Yetta M. Goodman, Annotated Bibliography on Linguistics, Psycholinguistics and the Teaching of Reading, Newark, DE: International Reading Association, 1972. 3rd edition, 1980."} {"text":"7. editor, Miscue Analysis: Applications to Reading Instruction, NCTE-ERIC, 1973."} {"text":"Excerpt in: Plackett, E. (ed.), The English Curriculum: Reading 2, Slow Readers, London: The English Centre, 1990, pp.\u00a079\u201383."} {"text":"8. Reading: A Conversation with Kenneth Goodman, Chicago: Scott, Foresman, 1976."} {"text":"Digest in TSI Repeater-Cable, Telesensory Systems, Palo Alto, September, 1977."} {"text":"9. with Yetta Goodman and Barbara Flores, Reading in the Bilingual Classroom: Literacy and Biliteracy, National Clearinghouse for Bilingual Education, Rosslyn, Virginia, 1979."} {"text":"10. Reading and Readers, (The 1981 Catherine Molony Memorial Lecture), New York, City College School of Education, Workshop Center for Open Education, 1981."} {"text":"11. Language and Literacy, The Selected Writings of Kenneth S. Goodman, Volume 1: Process, Theory, Research, (eds.), Frederick V. 
Gollasch, Boston & London: Routledge and Kegan Paul, 1982."} {"text":"12. Language and Literacy, The Selected Writings of Kenneth S. Goodman, Volume II: Reading, Language and the Classroom Teacher, (eds.), F.V. Gollasch, Boston and London Routledge and Kegan Paul, 1982."} {"text":"13. What's Whole in Whole Language, Richmond Hill, Toronto: Scholastic Lmtd., 1986, and Portsmouth, NH: Heinemann Educational."} {"text":"Spanish edition, Lenguaje Integral, Editorial Venezolana C.A., 1989;"} {"text":"French edition, Le Comment et Pourqois de la Language Integre, Scholastic, 1989;"} {"text":"Japanese edition Kyoku e no atarashi chosen: Eigo ken ni okeri zentai gengo kyoiku, Tokyo: Ozora Sha, 1990."} {"text":"Spanish edition, El lenguaje integral, Aique Grupo Editor, S.A.: Libro De Edici\ufffdn Argentina, 1995."} {"text":"Portuguese edition, Linguagem Integral, Traducao: Marcos A.G. Domingues, Porto Alegre: Artes Medicas, 1997."} {"text":"14. with Patrick Shannon, Yvonne Freeman and Sharon Murphy, Report Card on Basal Readers, Katonah, NY: R.C. Owen, 1988."} {"text":"15. with Yetta Goodman and Wendy Hood, The Whole Language Evaluation Book, Portsmouth, NH: Heineman, 1989."} {"text":"16. with Yetta Goodman and Wendy Hood, Organizing for Whole Language, Portsmouth, NH: Heinemann, 1991."} {"text":"17. with Lois Bird and Yetta Goodman, The Whole Language Catalog, American School Publishers, January, 1991."} {"text":"18. Eminent Scholar Conversation #15 by Rudine Sims Bishop, Ohio State University: Martha L. King Language and Literacy Center, 1991, pp.\u00a01\u201335."} {"text":"19. with Lois B. Bird and Yetta Goodman, The Whole Language Catalog: Authentic Assessment Supplement, Santa Rosa, CA: American School Publishers, May 1992."} {"text":"20. Ken Goodman Phonics Phacts, Richmond Hill, Ontario: Scholastic Canada, Ltd (Canada), and Heinemann (US) Portsmouth, NH, 1994."} {"text":"21. 
with Lois Bird and Yetta Goodman, The Whole Language Catalog: Forms for Authentic Assessment, New York, NY: SRA Division McMillan\/McGraw-Hill School Publishing Company, 1994."} {"text":"22. with Patrick Shannon, Basal Readers, A Second Look, Katonah, NY: Richard C. Owen, 1994"} {"text":"23. Ken Goodman On Reading, Richmond Hill, Ontario: Scholastic Canada, Ltd. (Canada), and Heinemann (US) Portsmouth, NH, 1996."} {"text":"24. with Joel Brown and Ann M. Marek, Studies in Miscue Analysis: An Annotated bibliography, Newark, DE: International Reading Association, 1996."} {"text":"25. In Defense of Good Teaching, Kenneth S. Goodman (ed.), York, ME: Stenhouse Publishers, 1998."} {"text":"26. Reflections and Connections: Essays in Honor of Kenneth S. Goodman's Influence on Language Education, Marek, Ann M. & Carole Edelsky (eds.), Cresskill, NJ: Hampton Press, Inc., 1998"} {"text":"27. On the Revolution of Reading, The Selected Writings of Kenneth S. Goodman (Alan D. Flurkey and Xu, Jingguo (Eds.), Portsmouth, NH: Heinemann, 2003."} {"text":"28. with. Shannon, P., Goodman, Y. and Rapoport, R. (eds.), Saving Our Schools, Berkeley, CA: RDR Books, 2004."} {"text":"29 . Examining DIBELS, Vermont Society for the Study of Education in press"} {"text":"1. \"A Psycholinguistic View of Reading Comprehension,\" New Frontiers in College-Adult Reading, 15th Yearbook of the National Reading Conference, Milwaukee, 1966."} {"text":"2. \"Elementary Education,\" Foundations of Education, revised edition, George Kneller, (ed.), New York: John Wiley and Sons, 1967, pp.\u00a0493\u2013521."} {"text":"3. \"Is the Linguistic Approach an Improvement in Reading Instruction: Pro,\" Nila Banton Smith, (ed.), Current Issues in Reading, Newark, DE: IRA, 1969, pp.\u00a0268\u2013276."} {"text":"4. \"Words and Morphemes in Reading,\" Psycholinguistics and the Teaching of Reading, Goodman and J. Fleming, (eds.), Newark, DE: IRA, 1969, pp.\u00a025\u201333."} {"text":"5. 
\"The Interrelationships Between Language Development and Learning to Read,\" The Impact of Society on Learning to Read, Miriam Schleich, (ed.), Hofstra University, 1970."} {"text":"6. \"Comprehension-Centered Reading Instruction,\" Proceedings of the 1970 Claremont Reading Conference, pp.\u00a0125\u2013135. Also in Ekwell, Psychological Factors in the Teaching of Reading, Merrill, 1972, pp.\u00a0292\u2013302."} {"text":"7. \"Psycholinguistics in Reading,\" Innovations in the Elementary School: An IDEA, occasional paper, Melbourne, FL, 1970."} {"text":"8. \"Urban Dialects and Reading Instruction,\" Kender, J.P., (ed.), Teaching Reading\u2014Not By Decoding Alone, Interstate: Danville, 1971, pp.\u00a061\u201375."} {"text":"9. \"The Search Called Reading,\" Coordinating Reading Instruction, Helen Robinson, (ed.), Scott Foresman, Glenview, 1971, pp.\u00a08\u201314."} {"text":"10. \"Children's Language and Experience: A Place to Begin,\" Coordinating Reading Instruction, Helen Robinson (ed.), Scott Foresman, Glenview, 1971, pp.\u00a046\u201352."} {"text":"11. \"Linguistics and Reading,\" Encyclopedia of Education, Lee C. Deighton, (ed.), Macmillan, 1971."} {"text":"12. \"Psycholinguistics and Reading,\" Proceedings of the Maryland Reading Institute, 1971."} {"text":"13. \"The Reading Process: Theory and Practice,\" Language and Learning to Read: What Teachers Should Know About Language, Hodges and Rudorf, (eds.), Houghton-Mifflin, 1972, pp.\u00a0143\u201359."} {"text":"14. \"Testing in Reading: A General Critique\" Accountability and Reading Instruction, Robert Ruddell, Editor, Urbana, IL: NCTE, 1973."} {"text":"15. \"Strategies for Increasing Comprehension in Reading,\" Improving Reading in the Intermediate Years, Robinson, H., (ed.), Glenview, IL: Scott Foresman and Co., 1973, pp.\u00a059\u201371"} {"text":"Also available as a separate monograph, Scott Foresman, 1974."} {"text":"16. 
\"The Reading Process\", Proceedings of the Sixth Western Symposium on Learning: Language and Reading, Bellingham, WA, 1975."} {"text":"17. \"Miscue Analysis: Theory and Reality in Reading,\" New Horizons in Reading, Proceedings of Fifth IRA World Congress on Reading, Merritt, John E., (ed.), Newark, DE: International Reading Association, 1976, pp.\u00a015\u201326."} {"text":"18. \"Linguistically Sound Research in Reading,\" Improving Reading Research, Farr, Roger, Weintraub, and Tone, (eds.), Newark, DE: IRA, 1976, pp.\u00a089\u2013100."} {"text":"19. \"What's Universal About the Reading Process,\" Proceedings of 20th Annual Convention of the Japan Reading Association, Tokyo, 1976."} {"text":"20. \"Manifesto for a Reading Revolution,\" Malcolm Douglas, (ed.), 40th Yearbook Claremont Reading Conference, 1976, pp.\u00a016\u201328."} {"text":"21. \"What We Know About Reading,\" Allen, P.D. and Watson, D., (eds.), Findings of Research in Miscue Analysis: Classroom Implications, ERIC-NCTE, 1976, pp.\u00a057\u201369."} {"text":"22. \"The Goodman Taxonomy of Reading Miscues,\" Allen, P.D. and Watson, D., (eds.), Findings of Research in Miscue Analysis: Classroom Implications, ERIC-NCTE, 1976, pp.\u00a0157\u2013244."} {"text":"23. and Yetta M. Goodman, \"Reading and Reading Instruction: Insights from Miscue Analysis,\" Watson, K.D. and Eagleson, R.D., (eds.), English in Secondary Schools: Today & Tomorrow, Sydney: English Teachers Association of New South Wales, 1977, pp.\u00a0254\u201359."} {"text":"24. and Carolyn Burke, \"Reading for Life: The Psycholinguistic Base\" Conference Proceedings Reading: Curriculum Demands-Towards Implementing the Bullock Report, London, England: Ward Lock Educational, 1977."} {"text":"25. \"Bridging the Gaps in Reading: Respect and Communication,\" Harste, J. and R. Carey, (eds.), New Perspectives on Comprehension, Bloomington, Indiana University, October, 1979."} {"text":"26. with Yetta M. 
Goodman and Barbara Flores, \"Reading in the Bilingual Classroom: Literacy and Biliteracy,\" Rosslyn, VA: National Clearinghouse for Bilingual Education, 1979."} {"text":"27. \"Needed for the '80's: Schools that Start Where Learners Are,\" Needs of Elementary and Secondary Education in the 1980s; Sub-committee on Elementary Secondary and Vocational Education House of Representatives, 96th Congress, Washington: GPO January 1980."} {"text":"28. \"El proceso lector en ninos normales,\" Bravo Valdiviesco, Luis, (ed.), El Nino con Dificultades para Aprender, Santiago de Chile: UNICEF\/Pontificia Universidad Catolica, 1980."} {"text":"29. \"Linguistic Diversity, Teacher Preparation and Professional Development,\" G. Smitherman, Editor, Black English and the Education of Black Children Youth, Center for Black Studies, Wayne State University, Detroit, MI, 1981, pp.\u00a0171\u201389."} {"text":"30. \"Miscue Analysis and Future Research Directions,\" Huddleson, Sarah, (ed.), Learning to Read in Different Languages, Linguistics and Literacy, Series: 1, Center for Applied Linguistics, Washington, 1981."} {"text":"31. \"Language Development: Issues, Insights, and Implementation\" Goodman, Haussler, and Strickland, (eds.), Oral and Written Language Development Research: Impact on the Schools, NCTE & IRA, 1982."} {"text":"32. \"El proceso de lectura: consideraciones a traves de las lenguas y del desarrollo,\" E. Ferreiro and M. Gomez Palacio, (eds.), Nuevas Perspectivas Sobre Los Procesos de Lectura y Escritura, Mexico Editorial Siglo XXI, 1982, pp.\u00a013\u201328."} {"text":"33. \"The Reading Process, A Multi-Lingual Developmental Perspective,\" K. Tuunainen and A. Chiaroni, (eds.), Full Participation, Proceedings of the Second European Conference on Reading, Joensuu Finland, 1982."} {"text":"34. and Yetta M. Goodman \"A Whole-Language Comprehension-Centered View of Reading Development,\" L. Reed and S. Ward, (eds.), Basic Skills: Issues and Choices, Vol. 2, St. 
Louis: Cemrel, 1982, pp.\u00a0125\u2013134."} {"text":"35. \"On Research and the Improvement of Reading,\" M. Douglas, (ed.), Forty-seventh Yearbook of the Claremont Reading Conference, Claremont Graduate School, 1983, pp.\u00a028\u201336."} {"text":"36. \"A Conversation with Kenneth Goodman,\" L. Rainsberry, (ed.), and Producer, Out of the Shadows, guide accompanying three program video series of the same name, Toronto: TV Ontario, 1983, pp.\u00a017\u201320."} {"text":"37. and Y. Goodman \"Everything You Wanted to Know But Didn't Have the Opportunity to Ask,\" L. Rainsberry, (ed.), and Producer, Out of the Shadows, guide accompanying three program video series of the same name, Toronto: TV Ontario, 1983, pp.\u00a028\u201344."} {"text":"38. \"Unity in Reading,\" Olives Niles and Alan Purves, (eds.), Becoming Readers in a Complex Society, 83rd Yearbook of the National Society for the Study of Education, 1984."} {"text":"Also in Singer and Ruddell, Theoretical Models & Processes of Reading, 3rd Edition, Newark, DE: International Reading Association, 1985."} {"text":"Also in Portuguese as \"Unidad na Leitura,\" in Letras de Hoje, 12\/1991, No. 86, pp.\u00a09\u201344."} {"text":"Also in German as \u201cLesen - ein transaktionaler ProzeB,\u201d Konstruktionen der verstandigung, Luneburg: Universitat Luneburg, 1997, pp.\u00a0103\u2013132."} {"text":"39. \"Literacy: for Whom and for What,\" Makhan L. Tickoo, (ed.), Language in Learning, Singapore: SEAMEO Regional Language Centre, 1986."} {"text":"40. \"A Holistic Model of Reading,\" Trondhjem, Editor, Aspects in Reading Processes, 12th Danavox Symposium, Klarskovgard, Denmark, 1986."} {"text":"41. \"Foreward\" Making Connections with Writing, Mary and Chisato Kitagawa, Portsmouth, NH: Heinemann, 1987."} {"text":"42. \"Teachers Detechnologizing Reading,\" Dorothy J. Watson, (ed.), Ideas and Insights, Urbana, IL: NCTE, 1987, pp. x-xi."} {"text":"43. 
\"Reading for Life: the Psycholinguistic Base,\" Reading Concerns: Selected Papers from UKRA Conferences 1972\u20131980, London: UKRA, 1988."} {"text":"44. \"Language and Learning: Toward a social-Personal View,\" Proceedings of the Brisbane Conference on Language and Learning, July, 1988."} {"text":"45. \"Afterword,\" Jane L. Davidson, (ed.), Counterpoint and Beyond, Urbana, IL: NCTE, 1988, pp.\u00a0105\u2013108."} {"text":"46. \"Language Development: Issues, Insights and Implementation,\" G. Pinnell and M. Matlin, (eds.), Teachers and Research: Language Learning in the Classroom, IRA, 1989, pp.\u00a0130\u2013141."} {"text":"47. with Yetta Goodman \"Vygotsky in a Whole Language Perspective,\" Vygotsky and Education, Luis Moll, (ed.), Cambridge University Press, 1990, pp.\u00a0223\u2013250."} {"text":"48. \"The Whole Language Curriculum,\" Hydrick, J, and N. Wildermuth, (eds.), Whole Language: Empowerment at the Chalk Face, New York: Scholastic, 1990, pp.\u00a0191\u2013211."} {"text":"49. with Yetta M. Goodman \"Our Ten Best Ideas for Reading Teachers,\" Fry, E., (ed.), 10 Best Ideas for Reading Teachers, Menlo Park, CA: Addison-Wesley, 1991, pp.\u00a060\u201364."} {"text":"50. \"Whole Language: What Makes it Whole,\" Power, B. and R. Hubbard, Literacy in Process, Portsmouth, NH: Heinemann, 1991."} {"text":"51. \"The Teacher Interview\" Toby Kahn Curry and Debra Goodman, An interview by Yetta Goodman, Commentary by Ken Goodman, Atwell, N., (ed.), Workshop 3: The Politics of Process, Portsmouth, NH: Heinemann, 1991, pp.\u00a081\u201393."} {"text":"52. with Yetta M. Goodman \"Whole Language: A Whole Educational Reform,\" Schools of Thought, Pathways to Educational Reform, Cleveland, OH: North American Montessori Teachers' Association, Vol. 16:2, Spring, 1991, pp.\u00a059\u201370."} {"text":"53. \"Whole Language Research: Foundations and Development,\" Samuels, S. Jay and A. 
Farstrup, (eds.), What Research Has To Say About Reading Instruction, 2nd edition, Newark, DE: International Reading Association, 1992."} {"text":"Also in Japanese: Horu Rangegi, (ed. & translated) by Takashi Kuwabara, Tokyo: Kokudo sha, 1992, pp.\u00a0112\u2013157."} {"text":"54. \"A Question About the Future,\" Questions & Answers About Whole Language, Orin Cochrine, (ed.), Katonah, NY: Richard C. Owen, 1992, pp.\u00a0137\u201340."} {"text":"55. \"Forward,\" Whitin, David J. and Sandra Wilde, Read Any Good Math Lately? Portsmouth, NH: Heinemann, 1992, pp. xi-xii."} {"text":"56. with D. Freeman \"What's Simple in Simplified Language?,\" Simplification: Theory and Application, M.L. Tickoo (ed.), Singapore:SEAMEO Regional Language Centre, 1993, pp.\u00a069\u201381."} {"text":"57. \"Ponencias Primero Conferencia\", Memorias Del Primer Congreso de las Americas sobre Lectoescritura, Maracaibo, Venezuela: Universidad de Los Andes, 1993, pp.\u00a04\u201315."} {"text":"58. \"El Lenguaje Integral Como Filosofia Educativa\", Memorias Del Primer Congreso de las Americas sobre Lectoescritura, Maracaibo, Venezuela: Universidad de Los Andes, 1993, pp.\u00a016\u201329."} {"text":"59. with Yetta M. Goodman \"Vygotsky desde la perspective del lenguaje total (whole-language)\" Vygotsky Y La Educaci\ufffdn, Luis Moll (ed.), M\ufffdndez de And\ufffds: Aique Grupo Editor S.A., 1993, pp.\u00a0263\u2013292. Spanish translation of \"Vygotsky in a Whole Language Perspective\" in Vygotsky and Education."} {"text":"60. with Yetta M. Goodman \"To Err Is Human: Learning about Language Processes by Analyzing Miscues,\" Theoretical Models and Processes of Reading, 4th Edition, Robert B. Ruddell, Ruddell, M.R., & Singer, H. (eds.), Neward, DE: International Reading Association, 1994."} {"text":"61. \"Reading, Writing and Written Texts: A Transactional Scociopsycholoinguistic View,\" Theoretical Models and Processes of Reading, 4th Edition, Robert B. Ruddell, Ruddell, M.R., & Singer, H. 
(eds.), Newark, DE: International Reading Association, 1994."} {"text":"62. \"Universals in Reading: A Transactional Socio-Psychoinguistic Model of Reading, Writing and Texts,\" A summary by Patrick Gallo, Singapore: Report of the Regional Seminar on Reading and Writing Research: Implications for Language Education, 1994, p.\u00a06."} {"text":"63. \"Forward: Lots of Changes, But Little Gained\" Basal Readers: A Second Look, Patrick Shannon & Goodman, K.S. (eds)., Katonah, NY: Richard C. Owen Publishers, 1994."} {"text":"64. with Lisa Maras and Debbie Birdseye \"Look! Look! Who Stole the Pictures From the Picture Book?,\" Basal Readers: A Second Look, Patrick Shannon & Goodman, K.S. (eds.), Katonah, NY: Richard C. Owen Publishers, 1994."} {"text":"65. with Yetta M. Goodman \"Preface,\" Leadership in Whole Language, The Principal's Role, York, ME: Stenhouse Publishers, 1995, pp.ix-xi."} {"text":"66. with Kathryn F. Whitmore \"Practicing What We Teach: The Principles That Guide Us,\" Whole Language Voices In Teacher Education, Kathryn F. Whitmore & Yetta M. Goodman (eds.), York, ME: Stenhouse Publishers, 1996, pp.\u00a01\u201316."} {"text":"67. with Richard Meyer and Yetta M. Goodman \"Continuous Evaluation in a Whole Language Preservice Program\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0256\u2013267."} {"text":"68. \"Lines of Print,\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0134\u2013135."} {"text":"69. \"The Boat in the Basement,\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0136\u2013137."} {"text":"70. \"Nonsense Texts to Illustrate the Three Cue Systems: \"A Mardsan Giberter for Farfie,\" \"Gloopy and Blit,\" and \"The Marlup,\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0138\u2013140."} {"text":"71. 
\"Real Texts to Illustrate the Three Cue Systems: Downhole Heave Compensator,\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0141\u2013143."} {"text":"72. \"Real Texts to Illustrate the Three Cue Systems: Poison,\" Whole Language Voices In Teacher Education, York, ME: Stenhouse Publishers, 1996, pp.\u00a0144\u2013145."} {"text":"73. \"Principles of Revaluing\" Retrospective Miscue Analysis, Katonah, NY: Richard C. Owen, 1996, pp.\u00a013\u201320."} {"text":"74. with Yetta M. Goodman \"Vygotsky em uma perspectiva da \"linguagem integral\" Vygotsky e a educa\u00e7\u00e3o, Luis Moll (ed.), Porto Alegre RS, Brazil: Artes M\u00e9dicas, 1996, pp.\u00a0219\u2013224. Portuguese translation of \"Vygotsky in a Whole Language Perspective\" in Vygotsky and Education."} {"text":"75. \"Preface\" Studies in Miscue Analysis An Annotated Bibliography, Newark, DE: International Reading Association, 1996, pp.iv-x."} {"text":"76. \"Oral and Written Language: Functions and Purposes\" Many Families, Many Literacies An International Declaration of Principles, Denny Taylor (ed.), Portsmouth, NH: Heinemann, 1997, pp.\u00a043\u201346."} {"text":"77. With Yetta Goodman, \u201cForeword\u201d multiple voices, multiple texts, Dornan, R., Rosen, L., and Wilson, M. Portsmouth, NH: Boynton\/Cook Publishers Heinemann, 1997, pp. ix-xi."} {"text":"78. \u00bfPor qu\u00e9 es importante el lenguaje? Una Historia Sin Fin. Crear Y Recrear Texto, Gabriela Yncl\u00e1n (ed.), M\u00e9xico, D.F., 1997, pp.\u00a015\u201317."} {"text":"79. With Yetta M. Goodman, \u201cTo Err Is Human: Learning about Language Processes by Analyzing Miscues,\u201d Reconsidering a Balanced Approach to Reading, Constance Weaver (ed.), Urbana, IL: National Council of Teachers of English, 1998, pp. 101-123."} {"text":"80. 
\u201cCalifornia, Whole Language, and National Assessment of Educational Progress (NAEP),\u201d Reconsidering a Balanced Approach to Reading, Constance Weaver (ed.), Urbana, IL: National Council of Teachers of English, 1998, pp.\u00a0467\u2013491."} {"text":"81. \u201cThe Phonics Scam: The Pedagogy of the Absurd,\u201d Perspectives on Reading Instruction, Alexandria, VA: Association for Supervision and Curriculum Development, 1998, pp.\u00a027\u201331."} {"text":"82. \u201cThe Reading Process,\u201d Encyclopedia of Language and Education, Volume 2, Viv Edwards and Corson, David (eds.), Dordrecht, The Netherlands: Kluwer Academic Publishers, 1997, pp.\u00a01\u20137."} {"text":"83. With Catherine Buck, \u201cDialect Barriers to Reading Comprehension Revisited,\u201d Literacy Instruction for Culturally and Linguistically Diverse Students, Newark, DE: International Reading Association, 1998, pp.\u00a0139\u2013145."} {"text":"84. \u201cI Didn't Found Whole Language,\u201d Distinguished Educators on Reading, Nancy Padak. . . (et al.), (eds.), Newark, DE: International Reading Association, 2000, pp.\u00a02\u201319."} {"text":"85. \u201cUpdate: Forward 8 Years and Back a Century,\u201d Distinguished Educators on Reading, Nancy Padak. . . (et al.), (eds.), Newark, DE: International Reading Association, 2000, pp.\u00a020\u201327."} {"text":"86. With Yetta Goodman and Prisca Martens, \u201cText Matters: Readers Who Learn with Decodable Texts\u201d 51st Yearbook of the National Reading Conference, Oak Creek, Wisconsin: National Reading Conference, Inc., 2002, pp.\u00a0186\u2013203."} {"text":"87. \u201cWhole Language and Whole-Language Assessment\u201d Literacy in America An Encyclopedia of History, Theory, and Practice, Vol. 2 N-Z, Barbara Guzzetti, (ed.), Santa Barbara, CA: ABC Clio, 2002, pp.\u00a0673\u2013677."} {"text":"88. With Yetta M. 
Goodman, \u201cTo Err Is Human: Learning About Language Processes by Analyzing Miscues\u201d Theoretical Models and Processes of Reading, 5th Edition, Robert B. Ruddell and Unrau, Norman J. (Eds.), Newark, DE: International Reading Association, 2004, pp.\u00a0620\u2013639."} {"text":"1. The Psychology of Language, Thought and Instruction, Readings by DeCecco in Journal of Reading, Vol. 11:8, May 1968, pp.\u00a0648\u201350."} {"text":"2. \"Research Critique: Oral Language of Kindergarten Children,\" Elementary English, Vol. 43:8, December, 1966, pp.\u00a0897\u2013900."} {"text":"3. Buros, \"Reading Tests and Reviews,\" American Educational Research Journal, January, 1971, pp.\u00a0169\u201371."} {"text":"4. Linguistics in Language Arts and Reading, Journal of Reading, November, 1972."} {"text":"5. Williams, Hopper, and Natalicio, The Sounds of Children, Reading Teacher, Vol. 31:5, February, 1978, pp.\u00a0578\u201380."} {"text":"Co-author, Scott Foresman Reading Systems: Scott Foresman, Levels 1-21 (Grades K-6), 1971\u201373. Levels 22\u201327, 1974. Revised Edition, Chicago: Reading Unlimited, Levels 1-27, 1976."} {"text":"1. A Study of Children's Behavior While Reading Orally, Final Report, Project No. S-425, Contract No. OE-6-10-136, U.S. Department of Health, Education and Welfare, Office of Education, Bureau of Research."} {"text":"2. A Study of Oral Reading Miscues that Result in Grammatical Re-Transformations, Final Report, Project No. 7-E-219, Contract No. OEG-O-8-070219-2806 (010), U.S. Department of Health, Education and Welfare, Office of Education, Bureau of Research."} {"text":"3. Theoretically Based Studies of Patterns of Miscues in Oral Reading Performance, Final Report, Project No. 9-0775, Grant No. OEG-0-9-320375-4269, U.S. Department of Health, Education and Welfare, Office of Education, Bureau of Research, May, 1973. Abstracted in ERIC."} {"text":"4. 
with William Page, Reading Comprehension Programs: Theoretical Bases of Reading Comprehension Instruction in the Middle Grades, Contract No. NIE C-74-0140, National Institute of Education, U.S. Department of Health, Education and Welfare, August, 1976."} {"text":"5. Reading of American Children Whose Reading is a Stable, Rural Dialect of English or Language Other Than English, Grant No. NIE-C-00-3-0087, National Institute of Education, U.S. Department of Health Education and Welfare, August, 1978."} {"text":"6. with Suzanne Gespass, Analysis of Text Structures as They Relate to Patterns of Oral Reading Miscues, Project NIE-G-80-0057, National Institute of Education, Department of Health, Education and Welfare, February, 1982."} {"text":"with Janet Emig and Yetta M. Goodman, Interrelationships of Reading and Writing, NCTE No. 7250R."} {"text":"with Barbara Bonder and Jean Malmstrom, Psycholinguistics and Reading, NCTE No. 73276R."} {"text":"and Yetta M. Goodman, Reading for Meaning: The Goodman Model, Sydney: Film Australia, 1977."} {"text":"with DeWayne Triplett and Frank Greene, The Right Not To Read, NCTE No. 71311R."} {"text":"with Yetta M. Goodman, Watching Children Reading, BBC, London, 1986."} {"text":"What's Whole in Whole Language?, ASCD, Alexandria, Virginia, 1992."} {"text":"with Constance Kamii, Constructivism & Whole Language, ASCD, Alexandria, Virginia, 1993."} {"text":"No. 1 with Yetta Goodman, A Whole-Language Comprehension Centered View of Reading Development, February, 1981."} {"text":"No. 2 with F.V. Gollasch, Word Omissions in Reading Deliberate and Non-Deliberate: Implications and Applications, March, 1981."} {"text":"No. 3 with Bess Altwerger, Studying Text Difficulty Through Miscue Analysis, June, 1981."} {"text":"No. 6 with Lois Bridges Bird, On the Wording of Texts: A Study of Intra-Text Word Frequency, March, 1982."} {"text":"No. 7 with Suzanne Gespass, Text Features as they Relate to Miscues: Pronouns, March, 1983."} {"text":"No. 
8 Text Features as they Relate to Miscues: Determiners, July, 1983."} {"text":"No. 15 with G. Williams and J. David, Revaluing Troubled Readers, February, 1986."} {"text":"No. 16 with Brown, J. and Marek, A. Annotated Chronological Miscue Analysis Bibliography, August, 1994."} {"text":"The critical period hypothesis is the subject of a long-standing debate in linguistics and language acquisition over the extent to which the ability to acquire language is biologically linked to age. The hypothesis claims that there is an ideal time window to acquire language in a linguistically rich environment, after which further language acquisition becomes much more difficult and effortful. The critical period hypothesis was first proposed by Montreal neurologist Wilder Penfield and co-author Lamar Roberts in their 1959 book \"Speech and Brain Mechanisms\", and was popularized by Eric Lenneberg in 1967 with \"Biological Foundations of Language.\""} {"text":"The critical period hypothesis states that the first few years of life are the crucial time in which an individual can acquire a first language if presented with adequate stimuli, and that first-language acquisition relies on neuroplasticity. If language input does not occur until after this time, the individual will never achieve a full command of language."} {"text":"The critical period hypothesis is derived from the concept of a critical period in the biological sciences, which refers to a set period in which an organism must acquire a skill or ability, or said organism will not be able to acquire it later in life. 
Strictly speaking, the experimentally verified critical period relates to a time span during which \"damage\" to the development of the visual system can occur, for example if animals are deprived of the necessary binocular input for developing stereopsis."} {"text":"The discussion of a critical period for language is complicated by the subjectivity of determining native-like competence in language, which includes things like pronunciation, prosody, syllable stress, timing and articulatory setting. Some aspects of language, such as phoneme tuning, grammar processing, articulation control, and vocabulary acquisition, have weak critical periods and can be significantly improved by training at any age. Other aspects of language, such as prefrontal synthesis, have strong critical periods and cannot be acquired after the end of the critical period."} {"text":"The theory has often been extended to a critical period for second-language acquisition (SLA), although this is much less widely accepted. David Singleton states that in learning a second language, \"younger = better in the long run\", but points out that there are many exceptions, noting that five percent of adult bilinguals master a second language even though they begin learning it when they are well into adulthood\u2014long after any critical period has presumably come to a close. Jane H. Hill posited that much research into SLA has focused on monolingual communities, whereas multilingual communities are more of a global norm, and this impacts the standard of competence that the SLA speaker is judged by."} {"text":"The critical period hypothesis in SLA follows a \"use it then lose it\" approach, which dictates that as a person ages, excess neural circuitry used during L1 learning is essentially broken down. If these neural structures remained intact, they would cost unnecessary metabolic energy to maintain. The structures necessary for L1 use are kept. 
On the other hand, a second \"use it or lose it\" approach dictates that if an L2 user begins to learn at an early age and continues on through their life, then their language-learning circuitry should remain active. This approach is also called the \"exercise hypothesis\"."} {"text":"There is much debate over the timing of the critical period with respect to SLA, with estimates ranging between 2 and 13 years of age. However, some studies have shown that \"even very young L2 beginners diverge at the level of fine linguistic detail from native speakers.\""} {"text":"Some writers have argued that the critical period hypothesis does not apply to SLA, and that second-language proficiency is determined by the time and effort put into the learning process, and not the learner's age. Others have observed that factors other than age may be even more significant in successful second-language learning, such as personal motivation, anxiety, input and output skills, and the learning environment. A combination of these factors often leads to individual variation in second-language acquisition experiences."} {"text":"On reviewing the published material, Bialystok and Hakuta (1994) conclude that second-language learning is not necessarily subject to biological critical periods, but \"on average, there is a continuous decline in ability [to learn] with age.\""} {"text":"Other work has challenged the biological approach; Krashen (1975) re-analysed clinical data used as evidence and concluded that cerebral specialisation occurs much earlier than Lenneberg calculated. Therefore, if a CP exists, it does not coincide with lateralisation. 
Despite concerns with Lenneberg's original evidence and the dissociation of lateralisation from the language CP idea, the concept of a CP remains a viable hypothesis, which later work has better explained and substantiated."} {"text":"Contrary to biological views, behavioural approaches assert that languages are learned as any other behaviour, through conditioning. Skinner (1957) details how operant conditioning forms connections with the environment through interaction and, alongside O. Hobart Mowrer (1960), applies the ideas to language acquisition. Mowrer hypothesises that languages are acquired through rewarded imitation of \u2018language models\u2019; the model must have an emotional link to the learner (e.g. parent, spouse), as imitation then brings pleasant feelings which function as positive reinforcement. Because new connections between behaviour and the environment are formed and reformed throughout life, it is possible to gain new skills, including language(s), at any age."} {"text":"Chomsky asserts that environmental factors must be relatively unimportant for language emergence, as so many different factors surround children acquiring L1. Instead, he claims language learners possess innate principles building a 'language acquisition device' (LAD) in the brain. These principles denote restricted possibilities for variation within the language, and enable learners to construct a grammar out of 'raw input' collected from the environment. Input alone cannot explain language acquisition because it is degraded by characteristic features such as stutters, and lacks corrections from which learners discover incorrect variations."} {"text":"Singleton and Newport (2004) demonstrate the function of UG in their study of 'Simon'. Simon learned ASL as his L1 from parents who had learned it as an L2 after puberty and provided him with imperfect models. 
Results showed Simon learned normal and logical rules and was able to construct an organised linguistic system, despite being exposed to inconsistent input. Chomsky developed UG to explain L1 acquisition data, but maintains it also applies to L2 learners who achieve near-native fluency not attributable solely to input and interaction."} {"text":"Although it does not describe an optimal age for SLA, the theory implies that younger children can learn languages more easily than older learners, as adults must reactivate principles developed during L1 learning and forge an SLA path: children can learn several languages simultaneously as long as the principles are still active and they are exposed to sufficient language samples (Pinker, 1995). The parents of Singleton and Newport's (2004) patient also had linguistic abilities in line with these age-related predictions; they learned ASL after puberty and never reached complete fluency."} {"text":"Problems within UG theory for L2 acquisition."} {"text":"This suggests that L2 may be qualitatively different from L1 due to its dissociation from the 'normal' language brain regions; thus the extrapolation of L1 studies and theories to SLA is placed in question. A further disadvantage of UG is that supporting empirical data are taken from a limited sample of syntactic phenomena: a general theory of language acquisition should cover a larger range of phenomena. Despite these problems, several other theorists have based their own models of language learning on it. These models are supported by empirical evidence, which in turn supports Chomsky's ideas. Due to this support and its descriptive and explanatory strength, many theorists regard UG as the best explanation of language, and particularly grammar, acquisition."} {"text":"A key question about the relationship of UG and SLA is: is the language acquisition device posited by Chomsky and his followers still accessible to learners of a second language? 
The critical period hypothesis suggests that it becomes inaccessible at a certain age, and learners increasingly depend on explicit teaching. In other words, although all of language may be governed by UG, older learners might have great difficulty in gaining access to the target language's underlying rules from positive input alone."} {"text":"Although Krashen (1975) also criticises this theory, he does not deny the importance of age for second-language acquisition. Krashen (1975) proposed theories for the close of the CP for L2 at puberty, based on Piaget's cognitive stage of formal operations beginning at puberty, as the \u2018ability of the formal operational thinker to construct abstract hypotheses to explain phenomena\u2019 inhibits the individual's natural ability for language learning."} {"text":"The term \"language acquisition\" became commonly used after Stephen Krashen contrasted it with formal and non-constructive \"learning.\" Today, most scholars use \"language learning\" and \"language acquisition\" interchangeably, unless they are directly addressing Krashen's work. However, \"second-language acquisition\" or \"SLA\" has become established as the preferred term for this academic discipline."} {"text":"Though SLA is often viewed as part of applied linguistics, it is typically concerned with the language system and learning processes themselves, whereas applied linguistics may focus more on the experiences of the learner, particularly in the classroom. Additionally, SLA has mostly examined \"naturalistic\" acquisition, where learners acquire a language with little formal training or teaching."} {"text":"Virtually all research findings on SLA to date build on data from literate learners. Researchers find significantly different results when replicating standard SLA studies with low literate L2 learners. 
Specifically, learners with lower alphabetic literacy levels are significantly less likely to notice corrective feedback on form or to perform elicited imitation tasks accurately. These findings are consistent with research in cognitive psychology showing significant differences in phonological awareness between literate and illiterate adults. An important direction for SLA research must therefore involve the exploration of the impact of alphabetic literacy on cognitive processing in second-language acquisition."} {"text":"Empirical research has attempted to account for variables detailed by SLA theories and provide an insight into L2 learning processes, which can be applied in educational environments. Recent SLA investigations have followed two main directions: one focuses on pairings of L1 and L2 that render L2 acquisition particularly difficult, and the other investigates certain aspects of language that may be maturationally constrained. One line of work looked at bilingual dominance to evaluate two explanations of L2 performance differences between bilinguals and monolingual-L2 speakers, i.e. a maturationally defined CP or interlingual interference."} {"text":"One study investigated whether the age at which participants learned English affected dominance in Italian-English bilinguals, and found the early bilinguals were English (L2) dominant and the late bilinguals Italian (L1) dominant. Further analysis showed that dominant Italian bilinguals had detectable foreign accents when speaking English, but early bilinguals (English dominant) had no accents in either language. This suggests that, though interlingual interference effects are not inevitable, their emergence, and bilingual dominance, may be related to a CP."} {"text":"Other researchers also studied bilinguals and highlight the importance of early language exposure. 
They looked at vocabulary processing and representation in Spanish-Catalan bilinguals exposed to both languages simultaneously from birth in comparison to those who had learned L2 later and were either Spanish- or Catalan-dominant. Findings showed 'from birth bilinguals' had significantly more difficulty distinguishing Catalan words from non-words differing in specific vowels than Catalan-dominants did (measured by reaction time)."} {"text":"These difficulties are attributed to a phase around age eight months where bilingual infants are insensitive to vowel contrasts, whatever language they hear most. This affects how words are later represented in their lexicons, highlighting this as a decisive period in language acquisition and showing that initial language exposure shapes linguistic processing for life. These researchers also indicate the significance of phonology for L2 learning; they believe learning an L2 once the L1 phonology is already internalised can reduce individuals\u2019 abilities to distinguish new sounds that appear in the L2."} {"text":"Most studies into age effects on specific aspects of SLA have focused on grammar, with the common conclusion that it is highly constrained by age, more so than semantic functioning. B. Harley compared attainment of French learners in early and late immersion programs. She reports that after 1000 exposure hours, late learners had better control of French verb systems and syntax. However, comparing early immersion students (average age 6.917 years) with age-matched native speakers identified common problem areas, including third person plurals and polite \u2018vous\u2019 forms. This suggests grammar (in L1 or L2) is generally acquired later, possibly because it requires abstract cognition and reasoning."} {"text":"B. Harley also measured eventual attainment and found the two age groups made similar mistakes in syntax and lexical selection, often confusing French with the L1. 
The general conclusion from these investigations is that different aged learners acquire the various aspects of language with varying difficulty. Some variation in grammatical performance is attributed to maturation; however, all participants began immersion programs before puberty and so were too young for a strong critical period hypothesis to be directly tested."} {"text":"This corresponds to Noam Chomsky\u2019s UG theory, which states that while language acquisition principles are still active, it is easy to learn a language, and the principles developed through L1 acquisition are vital for learning an L2."} {"text":"Other researchers also suggest that learning some syntactic processing functions and lexical access may be limited by maturation, whereas semantic functions are relatively unaffected by age. They studied the effect of late SLA on speech comprehension by German immigrants to the US and American immigrants to Germany. They found that native-English speakers who learned German as adults were disadvantaged on certain grammatical tasks but performed at near-native levels on lexical tasks."} {"text":"It is commonly believed that children are better suited to learning a second language than are adults. However, general second-language research has failed to support the critical period hypothesis in its strong form (i.e., the claim that full language acquisition is impossible beyond a certain age)."} {"text":"Another aspect worth considering is that bilingual children often code-switch, which does not mean that the child is not able to separate the languages. The reason for code switching is the child's lack of vocabulary in a certain situation. The acquisition of a second language in early childhood broadens children's minds and enriches them more than it harms them. Thus they are not only able to speak two languages in spite of being very young but they also acquire knowledge about the different cultures and environments. 
It is possible for one language to dominate. This depends on how much time is spent on learning each language."} {"text":"In order to provide evidence for the evolutionary functionality of the critical period in language acquisition, Hurford generated a computer simulation of plausible conditions of evolving generations, based on three central assumptions:"} {"text":"According to Hurford's evolutionary model, language acquisition is an adaptation that has survival value for humans, and knowing a language correlates positively with an individual's reproductive advantage. This finding is in line with the views of other researchers such as Chomsky. For example, Steven Pinker and Paul Bloom argue that because a language is a complex design that serves a specific function that cannot be replaced by any other existing capacity, the trait of language acquisition can be attributed to natural selection."} {"text":"However, while arguing that language itself is adaptive and \"did not 'just happen'\" (p.\u00a0172), Hurford suggests that the critical period is not an adaptation, but rather a constraint on language that emerged due to a lack of selection pressures that reinforce acquiring more than one language. In other words, Hurford explains the existence of a critical period with genetic drift, the idea that when there are no selection pressures on multiple alleles acting on the same trait, one of the alleles will gradually diminish through evolution. Because the simulation reveals no evolutionary advantage of acquiring more than one language, Hurford suggests that the critical period evolved simply as a result of a lack of selection pressure."} {"text":"Later researchers supported Hurford's model, yet pointed out that it was limited in the sense that it did not take into account the costs of learning a language. 
Therefore, they created their own algorithmic model, with the following assumptions:"} {"text":"Age of acquisition (AOA or AoA) is a psycholinguistic variable referring to the age at which a word is typically learned. For example, the word 'penguin' is typically learned at a younger age than the word 'albatross'. Studies in psycholinguistics suggest that age of acquisition has an effect on the speed of reading both simple and complex words. It is a particularly strong variable in predicting the speed of picture naming. It has been generally found that words that are more frequent, shorter, more familiar and refer to concrete concepts are learned earlier than more complex words."} {"text":"Sets of normative values for age of acquisition for large sets of words have been developed."} {"text":"It has been disputed whether age of acquisition has an effect on word tasks on its own or by virtue of its covariance with other variables such as word frequency. Alternatively, it has been suggested that the age of acquisition effect is related to the fact that an earlier learned word has been encountered more often. These issues were partially resolved in an article by Ghyselinck, Lewis and Brysbaert."} {"text":"Alternatively, there have been discussions of the effect that the age of acquisition has on learning a second language."} {"text":"The Modular Online Growth and Use of Language"} {"text":"The Modular Online Growth and Use of Language (MOGUL) project is the cover term for any research on language carried out using the Modular Cognition Framework (MCF)."} {"text":"The word chain can be displayed using the following abbreviations always using 'S' for '"} {"text":"Processing works in both directions, depending on where the initial input comes from, and then proceeds in both directions in principle until the overall best fit is found. 
In other words, processing is parallel, incremental and bidirectional."} {"text":"Linguists may note that what is conventionally thought of as the scope of \"phonetics\" is expressed here as the domain of auditory structure. Similarly, what is conventionally thought of as the scope of \"semantics\" and \"pragmatics\" falls within the scope of conceptual structure. None of these linguistic areas is treated here as the domain of one or other of the two linguistic systems: the term \"linguistic\" is reserved for the two above-mentioned systems that process and store linguistic structure."} {"text":"As we match various types of cognitive structure available to us in order to find the best fit for unfamiliar input from the environment, new connections are developed, initially with the relevant structures possessing a low resting level of activation. This means they will have a relatively poor chance of selection for future instances of the same input. However, the more they are selected, the more they will show up in the observable behaviour of the individual concerned."} {"text":"The cognitive systems involved in language work in two directions. Production involves a physical response to internal events, the creation of a message to be conveyed. This requires articulation of different parts of the body, following the commands of motor structures. As mentioned earlier, meanings in the conceptual processor are matched with syntactic structures which in turn are matched with phonological structures; this structural chain continues to be built following different routes according to the selected mode of articulation. 
The required motor structures that drive the articulation of speech will be different from those involved in writing or signing."} {"text":"To take a simple example, the word \"horse\" can be discussed or pondered; all that is needed for this is an auditory structure (the sound of the word) and its visual structure (representing its orthographic, written form), both of which are matched up with its meaning. Discussing the word as a word also draws on conceptual structure consisting of metalinguistic concepts such as \"word, syllable, noun, definition\" and the like. These concepts are required for any analytic thinking about language and may vary widely in degree and complexity, depending on an individual's metalinguistic sophistication. In any case, the linguistic systems are not directly implicated in any explicit discussion (or explicit thinking) about what is actually a linguistic form. They are simply activated at lower levels to support the ongoing thought processes (Sharwood Smith, 2020)."} {"text":"In psychology, the transposed letter effect is a test of how a word is processed when two letters within the word are switched."} {"text":"Priming is an effect of implicit memory where exposure to a certain stimulus, event, or experience affects responding to a different stimulus. Typically, the event causes the stimulus to become more salient. The transposed letter effect can be used as a form of priming."} {"text":"With any priming task, the purpose is to test the initial stages of processing in order to better understand more complex processing. Psychologists use transposed-letter priming to test how people comprehend word meanings. From these findings, researchers can begin to understand how people learn, develop and understand language. Transposed-letter priming is used in a wide array of experiments and the reasons for using this method can depend on the particular hypothesis."} {"text":"Switching the position of adjacent letters in the base word is a close transposition. 
This type of transposition creates the greatest priming effect. For example, an effective prime for the word \"computer\" would be a TL non-word in which two adjacent letters are switched."} {"text":"Forming a prime word by switching the position of nonadjacent letters in the base word is a distant transposition. There is significantly less priming effect in a distant transposition than a close transposition, no matter how distant the two letters are from each other."} {"text":"The first study to test transposed-letter effects was conducted by Bruner and O\u2019Dowd (1958). However, their experiment did not use priming. They showed participants a word that had two letters switched either at the beginning, in the middle or at the end of the word, and participants had to determine what the English word was. Response time was measured. Bruner and O\u2019Dowd found that the error at the beginning created the slowest response time, the end was the next slowest and the middle was the fastest. The conclusion from these data was that the beginning and the end were more important for word recognition than the middle. From there, the transposed-letter effect was used to test how people process and recognize words using many tasks."} {"text":"Theories challenged by effects of transposed-letter priming."} {"text":"There are a number of theories that have been challenged by the effects shown with transposed-letter priming. These theories mainly have to do with how letters are used to process words."} {"text":"The parallel distributed processing model proposed by Seidenberg and McClelland (1989) also uses portions of words, but instead of single letters it uses small groups of letters in the same order as in the word. For example, the word \u201cjudge\u201d would be represented by such letter groupings. 
This predicts that if parts of two words match, there will be some priming, but this model still depends on the position of the letters to some extent, so it is not compatible with results from transposed-letter priming."} {"text":"Theories supported by effects of transposed-letter priming."} {"text":"There are a number of theories that are supported by the results shown by the transposed-letter effect."} {"text":"The SERIOL model (sequential encoding regulated by inputs to oscillations within letter units) described by Whitney (2001) explains processing of words as five levels, or nodes: retinal level, feature level, letter level, bigram level and word level. In the bigram level, the letters detected are turned into a number of pairs. For example, the word \u201ccart\u201d has the bigrams ca, ar, rt, cr, at and ct. The bigrams that more closely represent the location of letters in the word are given more weight. The pairs are then used to form the word. Within this model, letter location is still a factor but is not a defining feature of word processing, so the transposed-letter effect is consistent with this model."} {"text":"In the SOLAR model (self-organizing lexical acquisition and recognition) described by Davis (1999) each letter is associated with its own level of activation. The first letter in the word has the highest level of activation and so on until the last letter has the lowest level of activation. In this model, position does describe the level of activation for that particular letter, but because the activation is successive, two letters beside each other would have a similar activation level. 
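This position-based activation gradient can be sketched numerically; the linear step size below is an illustrative assumption, not a parameter from Davis (1999):

```python
def solar_activations(word, step=0.1):
    """Toy version of the SOLAR position gradient: the first letter gets
    the highest activation, and each later letter a slightly lower one."""
    return [(letter, round(1.0 - i * step, 2)) for i, letter in enumerate(word)]

print(solar_activations("cart"))
# [('c', 1.0), ('a', 0.9), ('r', 0.8), ('t', 0.7)]
# Swapping two adjacent letters changes each of their activations by only
# one step, so the overall pattern stays close to that of the base word.
```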
The SOLAR model is consistent with the results of transposed-letter priming because experiments have shown priming when two adjacent letters are switched but not when two letters farther apart in the word are switched."} {"text":"Transposed-letter priming was used by Christianson, Johnson and Rayner (2005) on compound words to test the role of morphemes in word processing. They switched the letters either within the morphemes (for example, snowball to snowblal) or between morphemes (for example, snowball to snobwall) in the primes and found a greater priming effect within the morphemes than between. This supported the theory that morphemes are used during the processing of compound words, because the priming effect was reduced only when the letters were switched across the morpheme boundary, preventing the word from being separated into its constituent morphemes."} {"text":"The classic version of the model focused on competition during sentence processing, crosslinguistic competition in bilingualism, and the role of competition in language acquisition."} {"text":"The Competition Model was initially proposed as a theory of cross-linguistic sentence processing. The model suggests that people interpret the meaning of a sentence by taking into account various linguistic cues contained in the sentence context, such as word order, morphology, and semantic characteristics (e.g., animacy), to compute a probabilistic value for each interpretation, eventually choosing the interpretation with the highest likelihood. According to the model, cue weights are learned inductively on the basis of the extent to which the cues are available and reliable guides to meanings in comprehension and to forms in production."} {"text":"The model holds that cues both compete and cooperate during processing. Sometimes cues cooperate or converge by pointing to the same interpretation or production. 
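The cue-combination idea can be sketched as simple weighted evidence pooling; the cue names and weights below are invented for illustration, not measured cue validities:

```python
def choose_interpretation(cue_votes):
    """Sum the weighted support each interpretation receives from its cues
    and return the interpretation with the highest total support."""
    support = {}
    for interpretation, weight in cue_votes:
        support[interpretation] = support.get(interpretation, 0.0) + weight
    return max(support, key=support.get)

# For "The dog chased the cats": word order and animacy converge on 'dog'
# as agent, while a weaker (hypothetical) cue points the other way.
votes = [("dog is agent", 0.7),   # preverbal position
         ("dog is agent", 0.5),   # animacy
         ("cats is agent", 0.2)]  # a misleading, low-weight cue
print(choose_interpretation(votes))  # dog is agent
```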
Sometimes, cues compete by pointing to conflicting interpretations or productions."} {"text":"The application of the model to child language acquisition focuses on the role that cue availability and reliability play in determining the order of acquisition of grammatical structures. The basic finding is that children first learn the most available cue(s) in their language. If the most available cue is not also the most reliable, then children slowly shift from depending on the available cue to depending on the more reliable cue."} {"text":"The classic Competition Model accounts well for many of the basic features of sentence processing and cue learning. It relies on a small set of assumptions regarding cues, validity, reliability, competition, transfer, and strength\u2014each of which could be investigated directly.\u00a0 However, the model is limited in several important ways."} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 Brain Structure: The classic model makes no contact with what we now know about the organization of language in the brain. 
As a result, it provides only incomplete understanding of patterns of language disorder and loss."} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 Critical Period: The classic model fails to come to grips with the idea that there is a biologically-determined critical period for language acquisition."} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 Motivation: The classic model provides no role for social and motivational factors governing language learning, preference, code-switching, and attrition."} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 Mental Models: The classic model fails to include a role for mental model construction during comprehension and formulation during production."} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 Microgenesis: The classic model does not provide a microgenetic account for the course of item acquisition, fluency development, and cue strength learning."} {"text":"Extending the classic model to deal with these challenges involves borrowing insights from related theories. The resultant broader theory is called the Unified Competition Model or UCM, because it seeks to unify a variety of independent theoretical frameworks into a single overall model. The transition from the classic version of the model to the unified version worked to bring the model into fuller accord with the theory of emergentism, as developed in the biological (West-Eberhard, 2003), social (Kontopoulos, 1993) and physical sciences (von Bertalanffy, 1968)."} {"text":"Unifying the L1 and L2 Learning Models."} {"text":"A major challenge facing an emergentist, functionalist, non-nativist model such as the UCM involves dealing with age-related changes in the outcome of second language (L2) acquisition.\u00a0 It is widely accepted that children end up acquiring a second language more completely than adults. 
One account proposes that this \"fundamental difference\" (Bley-Vroman, 2009) between child and adult L2 learning arises from the expiration of a biologically-based critical period for natural language learning. In contrast, the framework of the Competition Model emphasizes that all forms of language acquisition make use of the same set of cognitive and social processes, although they differ in the relative reliance on specific processes and the extent to which these processes interact with other learning."} {"text":"Specifically, the UCM holds that adults are more challenged than children by a set of four risk factors that can impede L2 acquisition."} {"text":"Adults can counterbalance these four risk factors through an emphasis on four protective or preventive factors."} {"text":"All of these processes can impact both children and adults. What differs across age is the relative social status of the person and the degree to which they have already consolidated L1."} {"text":"Structural linguistic analysis (Harris, 1951) distinguishes the levels of input phonology, output phonology, lexicon, semantics, morphology, syntax, mental models, and interaction. Processing on these levels can be analyzed in terms of the related theories of statistical learning (input phonology), gating and fluency (output phonology), embodied cognition and hub-and-spoke theory (semantics), DevLex (lexicon), item-based patterns (syntax), perspective theory (mental models), and CA theory (interaction). The theories for lexicon, syntax, and mental models have been elaborated in specific ways that help unify the approach. These elaborations include specifically the theory of item-based patterns and the theory of perspective shifting."} {"text":"The levels distinguished by structural analysis are richly interconnected. 
This means that, although they are partially decomposable (Simon, 1962), they are not modular in the sense of Fodor (1983), but rather interactive in the sense of Rumelhart and McClelland (1987). In order to achieve gating and activation, processing levels must be interconnected in a way that permits smooth coordination. The UCM assumes that these interconnections rely on methods of topological organization, i.e. tonotopic (Wessinger, Buonocore, Kussmaul, & Mangun, 1997) or somatotopic (Hauk, Johnsrude, & Pulvermuller, 2004), that are used throughout the cortex."} {"text":"Structural analysis has many important consequences for our understanding of relations between first and second language learning. Age-related first language entrenchment operates in very different ways in different cortical areas (Werker & Hensch, 2014). In second language production, contrasts and timing relations between the levels of conceptualization, formulation, and articulation (Levelt, 1989) produce marked effects on language performance (Skehan, 2009), although similar effects can also be found in first language acquisition (Snow, 1999). The details of this analysis can be found in MacWhinney (2017)."} {"text":"The classic version of the Competition Model emphasized the ways in which cue reliability shaped cue strength. These effects were measured in highly structured sentence processing experiments. To address certain limitations of this research, the Unified Competition Model sought to account in greater detail for age-related facts in the comparison between child and adult second language learning. 
Within the classic model, the only mechanism that could account for these effects was competition between L1 and L2 patterns, as expressed through negative transfer. Although transfer plays a major role as a risk factor for difficulties in adult L2 learning, it is not the only risk factor."} {"text":"Looking more closely at the variety of L2 learning outcomes across structural levels and timeframes, it became evident that we needed to construct a more complex account for variable outcomes in L2 learning. This account required a deeper integration of emergentist theory into the UCM framework. The resultant account is now able to address each of the limitations of the classic model mentioned earlier. Specifically,"} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 by linking linguistic structures to particular brain regions, the model is increasingly grounded neurolinguistically (MacWhinney, 2019),"} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 by delineating a set of risk and protective factors, the model deals more accurately with age-related patterns in L2 learning,"} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 by providing a time\/process frames account of social and motivational factors, the model accounts better for variation in L2 outcomes across social groups and work environments, as well as for patterns of code-switching and language attrition,"} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 by linking in the theory of perspective-switching, we have a fuller understanding of online sentence processing, and"} {"text":"\u00b7 \u00a0 \u00a0 \u00a0 by developing corpus (MacWhinney, 2019) and online experimental (eCALL) methods (MacWhinney, 2017), the model now provides a fuller microgenetic account of the growth of fluency."} {"text":"By addressing each of these issues within the context of analyses of L2 learning, the current version of the UCM allows us to better understand not only L2 learning, but also language evolution (MacWhinney, 2005), language change, child language 
development (MacWhinney, 2015), language disorders (Presson & MacWhinney, 2011), and language attrition (MacWhinney, 2018)."} {"text":"Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. The search process is initiated by labeling a set of source nodes (e.g. concepts in a semantic network) with weights or \"activation\" and then iteratively propagating or \"spreading\" that activation out to other nodes linked to the source nodes. Most often these \"weights\" are real values that decay as activation propagates through the network. When the weights are discrete, this process is often referred to as marker passing. Activation may originate from alternate paths, identified by distinct markers, and terminate when two alternate paths reach the same node. However, brain studies show that several different brain areas play an important role in semantic processing."} {"text":"Spreading activation models are used in cognitive psychology to model the fan out effect."} {"text":"Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and terms contained in those documents."} {"text":"When a word (the target) is preceded by an associated word (the prime) in word recognition tasks, participants respond more quickly to the target. For instance, subjects respond faster to the word \"doctor\" when it is preceded by \"nurse\" than when it is preceded by an unrelated word like \"carrot\". 
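A minimal sketch of how such priming can arise from spreading activation: in the toy network below, activating \"nurse\" pre-activates \"doctor\" but leaves \"carrot\" untouched. The node names, link weights, decay factor, and firing threshold are all illustrative assumptions:

```python
def spread_activation(nodes, links, sources, decay=0.85, threshold=0.1, steps=3):
    """Iteratively spread activation from source nodes along weighted links.
    links maps (source, target) -> weight in [0.0, 1.0]."""
    activation = {n: 0.0 for n in nodes}
    for s in sources:
        activation[s] = 1.0
    fired = set()  # each node fires (propagates) at most once
    for _ in range(steps):
        ready = [n for n in nodes if activation[n] >= threshold and n not in fired]
        for node in ready:
            for (src, dst), weight in links.items():
                if src == node:
                    activation[dst] = min(1.0, activation[dst]
                                          + activation[node] * weight * decay)
            fired.add(node)
    return activation

net = {("nurse", "doctor"): 0.8, ("nurse", "hospital"): 0.6}
result = spread_activation(["nurse", "doctor", "hospital", "carrot"], net, ["nurse"])
print(result["doctor"] > result["carrot"])  # True: "doctor" is pre-activated
```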
This semantic priming effect with words that are close in meaning within the cognitive network has been seen in a wide range of tasks given by experimenters, ranging from sentence verification to lexical decision and naming."} {"text":"As another example, if the original concept is \"red\" and the concept \"vehicles\" is primed, subjects are much more likely to say \"fire engine\" instead of something unrelated to vehicles, such as \"cherries\". If instead \"fruits\" were primed, subjects would likely name \"cherries\" and continue on from there. The activation of pathways in the network has everything to do with how closely linked two concepts are by meaning, as well as how a subject is primed."} {"text":"A directed graph is populated by Nodes[1...N], each having an associated activation value A[i], which is a real number in the range [0.0 ... 1.0]. A Link[i, j] connects source Node[i] with target Node[j]. Each link has an associated weight W[i, j], usually a real number in the range [0.0 ... 1.0]."} {"text":"Language coordination is the tendency of people to mimic the language of others. The coordination occurs when one person responds to another using similar vocabulary, or word or sentence structure. Language coordination can also apply to individuals who linguistically coordinate with a group. As suggested by the communication accommodation theory, this is often used as a way to reduce social distance (convergence). Language coordination often occurs unconsciously."} {"text":"A propositional attitude is a mental state held by an agent toward a proposition."} {"text":"Linguistically, propositional attitudes are denoted by a verb (e.g. \"believed\") governing an embedded \"that\" clause, for example, 'Sally believed that she had won'."} {"text":"Propositional attitudes are often assumed to be the fundamental units of thought and their contents, being propositions, are true or false from the perspective of the person. 
An agent can have different propositional attitudes toward the same proposition (e.g., \"S believes that her ice-cream is cold,\" and \"S fears that her ice-cream is cold\")."} {"text":"Propositional attitudes have directions of fit: some are meant to reflect the world, others to influence it."} {"text":"One topic of central concern is the relation between the modalities of assertion and belief, perhaps with intention thrown in for good measure. For example, we frequently find ourselves faced with the question of whether or not a person's assertions conform to his or her beliefs. Discrepancies here can occur for many reasons, but when the departure of assertion from belief is intentional, we usually call that a \"lie\"."} {"text":"Other comparisons of multiple modalities that frequently arise are the relationships between belief and knowledge and the discrepancies that occur among observations, expectations, and intentions. Deviations of observations from expectations are commonly perceived as \"surprises\", phenomena that call for \"explanations\" to reduce the shock of amazement."} {"text":"In logic, the formal properties of verbs like \"assert\", \"believe\", \"command\", \"consider\", \"deny\", \"doubt\", \"imagine\", \"judge\", \"know\", \"want\", \"wish\", and a host of others that involve attitudes or intentions toward propositions are notorious for their recalcitrance to analysis."} {"text":"One of the fundamental principles governing identity is that of \"substitutivity\", also known as fungibility\u00a0\u2014 or, as it might well be called, that of \"indiscernibility of identicals\". It provides that, \"given a true statement of identity, one of its two terms may be substituted for the other in any true statement and the result will be true\". It is easy to find cases contrary to this principle. 
For example, the statements:"} {"text":"are true; however, replacement of the name 'Giorgione' by the name 'Barbarelli' turns (2) into the falsehood:"} {"text":"Quine's example here refers to Giorgio Barbarelli's sobriquet \"Giorgione\", an Italian name roughly glossed as \"Big George.\" The basis of the paradox here is that while the two names signify the same individual (the meaning of the first statement), the names are not themselves identical; the second statement refers to an attribute (origin) that they do not share."} {"text":"What sort of name shall we give to verbs like 'believe' and 'wish' and so forth? I should be inclined to call them 'propositional verbs'. This is merely a suggested name for convenience, because they are verbs which have the \"form\" of relating an object to a proposition. As I have been explaining, that is not what they really do, but it is convenient to call them propositional verbs. Of course you might call them 'attitudes', but I should not like that because it is a psychological term, and although all the instances in our experience are psychological, there is no reason to suppose that all the verbs I am talking of are psychological. There is never any reason to suppose that sort of thing. (Russell 1918, 227)."} {"text":"What a proposition is, is one thing. How we feel about it, or how we regard it, is another. We can accept it, assert it, believe it, command it, contest it, declare it, deny it, doubt it, enjoin it, exclaim it, expect it. Different attitudes toward propositions are called \"propositional attitudes\", and they are also discussed under the headings of \"intentionality\" and \"linguistic modality\"."} {"text":"Many problematic situations in real life arise from the circumstance that many different propositions in many different modalities are in the air at once. 
In order to compare propositions of different colours and flavours, as it were, we have no basis for comparison but to examine the underlying propositions themselves. Thus we are brought back to matters of language and logic. Despite the name, propositional attitudes are not regarded as psychological attitudes proper, since the formal disciplines of linguistics and logic are concerned with nothing more concrete than what can be said in general about their formal properties and their patterns of interaction."} {"text":"Bilingual lexical access is an area of psycholinguistics that studies the activation or retrieval process of the mental lexicon for bilingual people."} {"text":"Bilingual lexical access can be understood as all aspects of word processing, including all of the mental activity from the time when a word from one language is perceived to the time when all its lexical knowledge from the target language is available. Research in this field seeks to fully understand these mental processes. Bilingual individuals have two mental lexical representations for an item or concept and can successfully select words from one language without significant interference from the other language. It is the field's goal to understand whether these dual representations interact or affect one another."} {"text":"Bilingual lexical access researchers focus on the control mechanisms bilinguals use to suppress the language not in use when in a monolingual mode and the degree to which the related representations within the language not in use are activated. For example, when a Dutch-English bilingual is asked to name a picture of a dog in English, he or she will come up with the English word \"dog\". Bilingual lexical access is the mental process that underlies this seemingly simple task: the process that makes the connection between the idea of a dog and the word \"dog\" in the target language. 
While activating the English word \"dog\", its Dutch equivalent (\"hond\") is most likely also in a state of activation."} {"text":"Early research on bilingual lexical access was based on theories of monolingual lexical access. These theories relied mainly upon generalizations without specifying how lexical access works."} {"text":"Subsequent advancement in medical science has improved understanding of psycholinguistics, resulting in more detailed research and a deeper understanding of language production. \"Many early studies of second language acquisition focused on the morphosyntactic development of learners and the general finding was that bound morphemes appear in the same order in the first and second language\"."} {"text":"Knowledge of monolingual access led to the question of bilingual lexical access. Early models of bilingual lexical access shared similar characteristics with these monolingual lexical access models; the bilingual models began by focusing on whether bilingual lexical access differs from monolingual access. In addition to studying the activation process within each language, they investigated whether lexical activation would be processed in a parallel fashion for both languages or selectively processed for the target language. The bilingual models also study whether the bilingual system has a single lexicon combining words from both languages or separate lexicons for words in each language."} {"text":"Language-selective access is the exclusive activation of information in the contextually appropriate language system. It implies that when a bilingual encounters a spoken or written word, the activation is restricted to the target-language subsystem that contains the input word."} {"text":"Language-nonselective access is the automatic co-activation of information in both linguistic systems. 
It implies that when a bilingual encounters a spoken or written word, the activation happens in parallel in both contextually appropriate and inappropriate linguistic subsystems. Also, there is evidence that bilinguals take longer than monolinguals to detect non-words in both bilingual and monolingual modes, providing evidence that bilinguals do not fully deactivate their other language while in a monolingual mode."} {"text":"Once bilinguals acquire the lexical information from both languages, bilingual lexical access comes into play in language comprehension. \"Lexical access in comprehension\" is the process by which people make contact with the lexical representations in their mental lexicon that contain the information enabling them to understand words or sentences. Word recognition is the most essential process of bilingual lexical access in language comprehension, in which researchers investigate the selective or non-selective recognition of isolated words. At the same time, sentence processing also plays an important role in language comprehension, where researchers can investigate whether the presence of words in a sentence context restricts lexical access only to the target language."} {"text":"The term \"word recognition\" is used in both narrow and broad senses. When it is used in the narrow sense, it means the moment when a match occurs between a printed word and its orthographic word-form stored in the lexicon, or a match between a spoken word and its phonological word-form. Only after this match has taken place do all the syntactic and morphological information of the word and the meaning of the word become accessible for further processing. In the broader sense, lexical access refers to the entire period from this matching process to the retrieval of lexical information. 
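The narrow-sense matching step, under selective versus nonselective access, can be sketched with toy lexicons; the word lists and function below are hypothetical illustrations, not part of any published model:

```python
# Hypothetical mini-lexicons; "room" is an interlingual homograph
# (an English word that is also the Dutch word for "cream").
LEXICONS = {"english": {"dog", "room", "angel"},
            "dutch": {"hond", "room", "engel"}}

def matches(word, mode="nonselective", target="english"):
    """Return the languages whose stored word-forms match the input form.
    Selective access searches only the target language; nonselective
    access searches both lexicons in parallel."""
    searched = LEXICONS if mode == "nonselective" else {target: LEXICONS[target]}
    return {lang for lang, forms in searched.items() if word in forms}

print(matches("room"))                    # matched in both lexicons
print(matches("room", mode="selective"))  # only the target language
```

An interlingual homograph makes contact in both systems under nonselective access, which is one reason cognates and interlingual homographs are useful experimental markers.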
In the research of bilingual lexical access, word recognition uses single, out-of-context words from both languages to investigate all the aspects of bilingual lexical access."} {"text":"In word recognition studies, the cognate or interlingual homograph effects are most often used with the following tasks:"} {"text":"Models of bilingual lexical access in word recognition."} {"text":"Most current models of word recognition assume that bilingual lexical access is nonselective; they also take into account task demands and the context-dependence of processing."} {"text":"The IC model is complementary to the BIA model. It focuses on the importance of task demands and of the regulation that happens during language processing by modifying the levels of activation of items in the language network. In this model, a key concept is the language task schema, which specifies the mental processing steps that bilinguals take to perform a particular language task. The language task schema regulates the output from the word identification system by altering the activation levels of representations within that system and by inhibiting outputs from it. For example, when a bilingual switches from one language to another in translation, a change in the language schema corresponding to the languages must take place."} {"text":"In the language mode framework, language processing mechanisms and languages as a whole can be activated to different extents. The relative activation state of a language is called the language mode, and it is influenced by many factors, such as the person being spoken or listened to, users\u2019 language proficiency, the non-linguistic context and so on. Language users can be in a bilingual mode if they are talking to other bilinguals or reading text with mixed languages. However, if they listen to someone who is monolingual or is speaking just one language, the activation state would switch to a more monolingual mode. 
Based on this model, the bilinguals' language mode depends on the language users' expectations and on the language environment."} {"text":"The BIA+ model is an extension and adaptation of the BIA model. The BIA+ model includes not only an orthographic representation and language nodes, but also phonological and semantic representations. All these representations are assumed to be part of a word identification system that provides output to a task\/decision system. The information flow in bilingual lexical processing proceeds exclusively from the word identification system toward a task\/decision system, without any influence of this task\/decision system on the activation state of words."} {"text":"Most current studies of bilingual lexical access are based on the comprehension of isolated words, without considering whether contextual information affects lexical access in bilinguals. However, in everyday communication, words are most often encountered in a meaningful context and not in isolation (e.g. in a newspaper article). Research done by D\u00e9prez (1994) has shown that mixed utterances in children are not limited to the lexical level but extend to morphology, syntax, and pronunciation. Researchers also began to investigate the cognitive nature of bilingual lexical access in context by examining word recognition in sentences."} {"text":"The main methodological tasks in sentence processing."} {"text":"In sentence processing, a number of online measuring techniques are exploited to detect cognitive activity at the very moment it takes place or only slightly after. 
Cognates and interlingual homographs are often used as markers that are inserted in test sentences with the following tasks:"} {"text":"Studies of bilingual lexical access in sentence processing."} {"text":"Although most studies on bilingual sentence processing focus on L2 processing, a few studies have investigated cross-language activation during native-language (L1) reading. For example, van Assche et al. replicated the cognate effect in L1 with Dutch\u2013English bilinguals, and found that a non-dominant language may affect native-language sentence reading, both at early and at later reading stages. Titone et al. observed this cross-language activation in English-French bilinguals at early reading stages only when the L2 was acquired early in life. They also concluded that the semantic constraint provided by a sentence can attenuate cross-language activation at later reading stages."} {"text":"Polysemy (from Greek \u03c0\u03bf\u03bb\u03cd- \"many\" and \u03c3\u1fc6\u03bc\u03b1 \"sign\") is the capacity for a word or phrase to have multiple meanings, usually related by contiguity of meaning within a semantic field. Polysemy is thus distinct from homonymy\u2014or homophony\u2014which is an accidental similarity between two (or even more) words (such as \"bear\" the animal, and the verb \"to bear\"); while homonymy is a mere linguistic coincidence, polysemy is not. In deciding between polysemy or homonymy, it might be necessary to look at the history of the word to see if the two meanings are historically related. Dictionary writers often list polysemes under the same entry; homonyms are defined separately."} {"text":"In linear or vertical polysemy, one sense of a word is a subset of the other. These are examples of hyponymy and hypernymy, and are sometimes called autohyponyms. For example, 'dog' can be used for 'male dog'. 
Alan Cruse identifies four types of linear polysemy:"} {"text":"In non-linear polysemy, the original sense of a word is used figuratively to provide a different way of looking at the new subject. Alan Cruse identifies three types of non-linear polysemy:"} {"text":"There are several tests for polysemy, but one of them is zeugma: if one word seems to exhibit zeugma when applied in different contexts, it is likely that the contexts bring out different polysemes of the same word. If the two senses of the same word do not seem to \"fit,\" yet seem related, then it is likely that they are polysemous. This test again depends on speakers' judgments about relatedness, which means that it is not infallible, but merely a helpful conceptual aid."} {"text":"The difference between homonyms and polysemes is subtle. Lexicographers define polysemes within a single dictionary lemma, numbering different meanings, while homonyms are treated in separate entries (or lemmata). Semantic shift can separate a polysemous word into separate homonyms. For example, \"check\" as in \"bank check\" (or \"cheque\"), \"check\" in chess, and \"check\" meaning \"verification\" are considered homonyms, although they originated as a single word derived from chess in the 14th century. Psycholinguistic experiments have shown that homonyms and polysemes are represented differently within people's mental lexicon: while the different meanings of homonyms (which are semantically unrelated) tend to interfere or compete with each other during comprehension, this does not usually occur for the polysemes that have semantically related meanings. Results for this contention, however, have been mixed."} {"text":"For Dick Hebdige, polysemy means that, \"each text is seen to generate a potentially infinite range of meanings,\" making, according to Richard Middleton, \"any homology, out of the most heterogeneous materials, possible. 
The idea of \"signifying practice\"\u2014texts not as communicating or expressing a pre-existing meaning but as 'positioning subjects' within a \"process\" of semiosis\u2014changes the whole basis of creating social meaning\"."} {"text":"Charles Fillmore and Beryl Atkins' definition stipulates three elements: (i) the various senses of a polysemous word have a central origin, (ii) the links between these senses form a network, and (iii) understanding the 'inner' one contributes to understanding of the 'outer' one."} {"text":"One group of polysemes consists of those in which a word meaning an activity, perhaps derived from a verb, acquires the meanings of those engaged in the activity, or perhaps the results of the activity, or the time or place in which the activity occurs or has occurred. Sometimes only one of those meanings is intended, depending on context, and sometimes multiple meanings are intended at the same time. Other types are derivations from one of the other meanings that lead to a verb or activity."} {"text":"This example shows the specific polysemy where the same word is used at different levels of a taxonomy. Example 1 contains 2, and 2 contains 3."} {"text":"The different meanings can be combined in a single sentence, e.g. \"John used to work for the newspaper that you are reading.\""} {"text":"A lexical conception of polysemy was developed by B. T. S. Atkins, in the form of lexical implication rules. These are rules that describe how words, in one lexical context, can then be used, in a different form, in a related context. A crude example of such a rule is the pastoral idea of \"verbizing one's nouns\": that certain nouns, used in certain contexts, can be converted into a verb, conveying a related meaning."} {"text":"Another clarification of polysemy is the idea of predicate transfer\u2014the reassignment of a property to an object that would not otherwise inherently have that property. 
Thus, the expression \"I am parked out back\" transfers the meaning of \"parked\" from \"car\" to the property of \"I possess a car\". This avoids incorrect polysemous interpretations of \"parked\": that \"people can be parked\", or that \"I am pretending to be a car\", or that \"I am something that can be parked\". This is supported by the morphology: \"We are parked out back\" does not mean that there are multiple cars; rather, that there are multiple passengers (having the property of being in possession of a car)."} {"text":"Cognitive shifting is the mental process of \"consciously\" redirecting one's attention from one fixation to another. In contrast, if this process happens \"unconsciously\", it is referred to as task switching. Both are forms of cognitive flexibility."} {"text":"In the general framework of cognitive therapy and awareness management, cognitive shifting refers to the conscious choice to take charge of one's mental habits\u2014and redirect one's focus of attention in helpful, more successful directions. In the term's specific usage in corporate awareness methodology, cognitive shifting is a performance-oriented technique for refocusing attention in more alert, innovative, charismatic and empathic directions."} {"text":"In cognitive therapy, as developed by its founder Aaron T. Beck and others, a client is taught to shift his or her cognitive focus from one thought or mental fixation to a more positive, realistic focus\u2014thus the descriptive origins of the term \"cognitive shifting\". In \"third wave\" ACT therapy as taught by Steven C. Hayes and his associates in the Acceptance and Commitment Therapy movement, cognitive shifting is employed not only to shift from negative to positive thoughts, but also to shift into a quiet state of mindfulness. 
Cognitive shifting is also employed prominently in the meditative-health procedures of medical and stress-reduction researchers such as Jon Kabat-Zinn at the University of Massachusetts Medical School."} {"text":"Cognitive shifting has become a common term among therapists, especially on the West Coast, and more recently in discussions of mind management methodology. As noted above, the term has also begun appearing regularly in medical and psychiatric journals."} {"text":"\"In research\": The term has become fairly common in psychiatric research, used in the following manner: \"Neuropsychological findings in obsessive-compulsive disorder (OCD) have been explained in terms of reduced cognitive shifting ability as a result of low levels of frontal inhibitory activity.\""} {"text":"\"In therapy\": In therapy (as in the work of Steven Hayes and associates), a client is taught first to identify and accept a negative thought or attitude, and then to allow the cognitive shifting process to re-direct attention away from the negative fixation, toward a chosen aim or goal that is more positive\u2014thus the \"accept and choose act\" sequence from which the ACT therapy name derives. Cognitive studies of the elderly refer to \"...Impaired cognitive shifting in Parkinsonian patients on anticholinergic therapy...\" and the like."} {"text":"\"Everyday usage\": Books such as \"The Way Of The Tiger\" by Lance Secretan and \"The Creative Manager\" by Peter Russell have shown how cognitive shifting principles apply to everyday life. Decades ago, Rollo May taught the process of conscious choosing and cognitive shifting at Princeton in his psychology lectures. And in books such as \"The Emotional Brain\", Joseph LeDoux clarified the power of consciously shifting from a negative to a more positive emotional focus. 
In John Selby's writings, most notably in \"Quiet Your Mind\", the term appears frequently."} {"text":"\"In meditation\": The Hindu Upanishads are probably the first written documentation of the meditative process of redirecting one's focus of attention in particular disciplined directions (the term \"cognitive\" itself is relatively new). Cognitive shifting is the core process of all meditation, especially in Kundalini meditation but also in Zen meditation and even in Christian mysticism, where the mind's attention is re-directed (or shifted) toward particular theologically-determined focal points. Recent books have spoken directly of cognitive shifting as a meditative procedure."} {"text":"In a recent NPR interview with Michael Toms, and elsewhere in his writings, John Selby attributes his initial introduction to the process of cognitive shifting to Jiddu Krishnamurti, whom he considers his early spiritual teacher, and also to his training with Rollo May at Princeton. In the NPR interview, Selby says he is almost certain that he first heard the actual term from the 1960s philosopher Alan Watts during his \"Expanding Christianity\" lectures at the San Francisco Theological Seminary in 1972."} {"text":"The primary cognitive technology that is used for cognitive shifting is called \"focus phrase\" methodology. This term has emerged from the actual process in which cognitive shifting is encouraged or even provoked in a client or any other person. The person states clear intent through a specially-worded focus phrase\u2014and then experiences the inner shift that the focus phrase elicits."} {"text":"Another term sometimes used for focus phrases is \"elicitor statements\". In some methodologies focus phrases are said as a set of 4 to 7 statements, fairly quickly and to oneself. 
In other techniques a single focus phrase is held in the mind during a whole morning or day, and perhaps changed each new day during the week."} {"text":"Formulation of the theory is credited to the Belgian psychologist Albert Michotte and Fabio Metelli, an Italian psychologist, with their work developed in recent years by E.S. Reed and the Gestaltists."} {"text":"Modal completion is a similar phenomenon in which a shape is perceived to be occluding other shapes even when the shape itself is not drawn. Examples include the triangle that appears to be occluding three disks and an outlined triangle in the Kanizsa triangle, and the circles and squares that appear in different versions of the Koffka cross."} {"text":"Graphical perception is the human capacity for visually interpreting information on graphs and charts. Both quantitative and qualitative information can be said to be encoded into the image, and the human capacity to interpret it is sometimes called decoding. The importance of human graphical perception (what we discern easily versus what our brains have more difficulty decoding) is fundamental to good statistical graphics design, where clarity, transparency, accuracy and precision in data display and interpretation are essential if the translation of data into a graph is to clarify and interpret the science."} {"text":"Graphical perception is achieved in dimensions or steps of discernment by:"} {"text":"Cleveland and McGill's experiments to elucidate which graphical elements humans \"detect\" most accurately are a fundamental component of good statistical graphics design principles. In practical terms, graphs that display data as positions along a common scale are decoded most accurately and are therefore most effective. A graph type that utilizes this element is the dot plot. Conversely, angles are perceived with less accuracy; an example is the pie chart. Humans do not naturally order color hues. 
Only a limited number of hues can be discriminated in one graphic."} {"text":"Graphic designs that utilize pre-attentive visual processing in their \"assembly\" are why a picture can be worth a thousand words: they exploit the brain's ability to perceive patterns. Not all graphs are designed to consider pre-attentive processing. For example, in the attached figure, the design feature of table look-up requires the brain to work harder and take longer to decode than a graph that utilizes our ability to discern patterns."} {"text":"Graphic design that readily answers the scientific questions of interest will include appropriate \"estimation\". Details for choosing the appropriate graph type for continuous and categorical data and for grouping have been described. Graphics principles for accuracy, clarity and transparency have been detailed and key elements summarized."} {"text":"Compartmentalization is a subconscious psychological defense mechanism used to avoid cognitive dissonance, or the mental discomfort and anxiety caused by a person having conflicting values, cognitions, emotions, beliefs, etc. within themselves."} {"text":"Compartmentalization allows these conflicting ideas to co-exist by inhibiting direct or explicit acknowledgement and interaction between separate compartmentalized self-states."} {"text":"Psychoanalysis considers that whereas isolation separates thoughts from feeling, compartmentalization separates different (incompatible) cognitions from each other. As a secondary, intellectual defense, it may be linked to rationalization. 
It is also related to the phenomenon of neurotic typing, whereby everything must be classified into mutually exclusive and watertight categories."} {"text":"Otto Kernberg has used the term \"bridging interventions\" for the therapist's attempts to straddle and contain contradictory and compartmentalized components of the patient's mind."} {"text":"Compartmentalization may lead to hidden vulnerabilities in those who use it as a major defense mechanism."} {"text":"Those suffering from borderline personality disorder will often divide people into all good versus all bad, to avoid the conflicts that removing the compartments would inevitably bring, and use denial or indifference to protect against any indication of contradictory evidence."} {"text":"Indifference towards a better viewpoint is a normal and common example of this. It can arise when someone holds multiple compartmentalized ideals and is uncomfortable modifying them, at the risk of being found incorrect. This often causes double standards and bias."} {"text":"Conflicting social identities may be dealt with by compartmentalizing them and dealing with each only in a context-dependent way."} {"text":"In his novel \"The Human Factor\", Graham Greene has one of his corrupt officials use the rectangular boxes of Ben Nicholson's art as a guide to avoiding moral responsibility for bureaucratic decision-making\u2014a way to compartmentalize oneself within one's own separately colored box."} {"text":"Doris Lessing considered that the essential theme of \"The Golden Notebook\" was \"that we must not divide things off, must not compartmentalise. 'Bound. Free. Good. Bad. Yes. No. Capitalism. Socialism. Sex. Love...'\"."} {"text":"The psychology of programming (PoP) is the field of research that deals with the psychological aspects of writing programs (often computer programs). The field has also been called the empirical studies of programming (ESP). 
It covers research into computer programmers' cognition, tools and methods for programming-related activities, and programming education."} {"text":"Psychologically, computer programming is a human activity which involves cognitions such as reading and writing computer language, learning, problem solving, and reasoning."} {"text":"It is desirable that a program meet its specifications, be completed on schedule, be adaptable for the future, and run efficiently. Being able to satisfy all these goals at a low cost is a difficult and common problem in software engineering and project management. By understanding the psychological aspects of computer programming, we can better understand how to achieve higher programming performance and assist programmers in producing better software with fewer errors."} {"text":"Some methods which one can use to study the psychological aspects of computer programming include introspection, observation, experiment, and qualitative research."} {"text":"Models of consciousness are used to illustrate and aid in understanding and explaining distinctive aspects of consciousness. Sometimes the models are labeled theories of consciousness. Anil Seth defines such models as those that relate brain phenomena such as fast irregular electrical activity and widespread brain activation to properties of consciousness such as qualia. Seth allows for different types of models including mathematical, logical, verbal and conceptual models."} {"text":"The neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined as the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. 
In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system."} {"text":"Timothy Leary introduced, and Robert Anton Wilson and Antero Alli elaborated, the eight-circuit model of consciousness, a hypothesis that \"suggested eight periods [circuits] and twenty-four stages of neurological evolution\"."} {"text":"Daniel Dennett proposed a physicalist, information-processing-based multiple drafts model of consciousness, described more fully in his 1991 book, Consciousness Explained."} {"text":"The Dehaene\u2013Changeux model (DCM), also known as the global neuronal workspace or the global cognitive workspace model, is a computer model of the neural correlates of consciousness programmed as a neural network. Stanislas Dehaene and Jean-Pierre Changeux introduced this model in 1986. It is associated with Bernard Baars's global workspace theory for consciousness."} {"text":"Clouding of consciousness, also known as brain fog or mental fog, is a term used in medicine denoting an abnormality in the regulation of the overall level of consciousness that is mild and less severe than a delirium. It is part of a model in which the brain regulates its \"overall level\" of consciousness, with aspects responsible for \"arousal\" or \"wakefulness\" and for awareness of oneself and of the environment."} {"text":"Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics. Some electromagnetic theories are also quantum mind theories of consciousness; examples include quantum brain dynamics (QBD)."} {"text":"The orchestrated objective reduction (Orch-OR) model is based on the hypothesis that consciousness in the brain originates from quantum processes inside neurons, rather than from connections between neurons (the conventional view). 
The mechanism is held to be associated with molecular structures called microtubules. The hypothesis was advanced by Roger Penrose and Stuart Hameroff and has been the subject of extensive debate."} {"text":"In a 2010 paper, Min proposed a thalamic reticular networking model of consciousness. The model describes consciousness as a \"mental state embodied through TRN-modulated synchronization of thalamocortical networks\". In this model the thalamic reticular nucleus (TRN) is proposed as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity."} {"text":"Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role \u2013 that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs."} {"text":"Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social."} {"text":"Levels of consciousness models aim to give an overview of the evolution of human consciousness and of possible life experiences."} {"text":"The model of hierarchical complexity (MHC) is a framework for scoring how complex a behavior is, such as verbal reasoning or other cognitive tasks. It quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. This model was developed by Michael Commons and Francis Richards in the early 1980s."} {"text":"The model of hierarchical complexity (MHC) is a formal theory and a mathematical psychology framework for scoring how complex a behavior is. 
Developed by Michael Lamport Commons and colleagues, it quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. Its forerunner was the general stage model."} {"text":"Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the hierarchical complexity of task accomplishment in any domain. It is based on the very simple notions that higher order task actions:"} {"text":"It is cross-culturally and cross-species valid. The reason it applies cross-culturally is that the scoring is based on the mathematical complexity of the hierarchical organization of information. Scoring does not depend upon the content of the information (e.g., what is done, said, written, or analyzed) but upon how the information is organized."} {"text":"The MHC is a non-mentalistic model of developmental stages. It specifies 16 orders of hierarchical complexity and their corresponding stages. It is different from previous proposals about developmental stages applied to humans; instead of attributing behavioral changes across a person's age to the development of mental structures or schema, this model posits that sequences of task behaviors form hierarchies that become increasingly complex. Because less complex tasks must be completed and practiced before more complex tasks can be acquired, this accounts for the developmental changes seen, for example, in individual persons' performance of complex tasks. (For example, a person cannot perform arithmetic until the numeral representations of numbers are learned. 
A person cannot operationally multiply the sums of numbers until addition is learned)."} {"text":"The creators of the MHC claim that previous theories of stage have confounded the stimulus and response in assessing stage by simply scoring responses and ignoring the task or stimulus. The MHC separates the task or stimulus from the performance. The participant's performance on a task of a given complexity represents the stage of developmental complexity."} {"text":"Development of Hierarchical Complexity and Relationship to the Traditional Stage Theory."} {"text":"The traditional stage theory is the idea that an action\u2019s complexity is determined by how frequently specific sub-actions occur. This differs from the theory of hierarchical complexity, in which the complexity of an action is determined by the non-arbitrary organization of sub-actions. In other words, the primary difference is that the traditional stage theory (TST) counts repeated sub-actions, whereas the theory of hierarchical complexity (THC) organizes them."} {"text":"The traditional stage theory was unsatisfying to Commons and Richards, as they felt it did not so much demonstrate the existence of stages as describe sequential changes in human behavior. This led them to identify two concepts they felt a successful developmental theory should address: (1) the hierarchical complexity of the task to be solved and (2) the psychology, sociology, and anthropology of the task performance (and the development of the performance)."} {"text":"One major basis for this developmental theory is task analysis. The study of ideal tasks, including their instantiation in the real world, has been the basis of the branch of stimulus control called psychophysics. Tasks are defined as sequences of contingencies, each presenting stimuli and each requiring a behavior or a sequence of behaviors that must occur in some non-arbitrary fashion. 
The complexity of behaviors necessary to complete a task can be specified using the horizontal complexity and vertical complexity definitions described below. Behavior is examined with respect to the analytically-known complexity of the task."} {"text":"Every task contains a multitude of subtasks. When the subtasks are carried out by the participant in a required order, the task in question is successfully completed. Therefore, the model asserts that all tasks fit in some configured sequence of tasks, making it possible to precisely determine the hierarchical order of task complexity. Tasks vary in complexity in two ways: either as \"horizontal\" (involving classical information); or as \"vertical\" (involving hierarchical information)."} {"text":"Hierarchical complexity refers to the number of recursions that the coordinating actions must perform on a set of primary elements. Actions at a higher order of hierarchical complexity: (a) are defined in terms of actions at the next lower order of hierarchical complexity; (b) organize and transform the lower-order actions (see Figure 2); (c) produce organizations of lower-order actions that are qualitatively new and not arbitrary, and cannot be accomplished by those lower-order actions alone. Once these conditions have been met, we say the higher-order action coordinates the actions of the next lower order."} {"text":"To illustrate how lower actions get organized into more hierarchically complex actions, let us turn to a simple example. Completing the entire operation 3 \u00d7 (4 + 1) constitutes a task requiring the distributive act. That act non-arbitrarily orders adding and multiplying to coordinate them. The distributive act is therefore one order more hierarchically complex than the acts of adding and multiplying alone; it indicates the singular proper sequence of the simpler actions. Although simply adding results in the same answer, people who can do both display a greater freedom of mental functioning. 
Additional layers of abstraction can be applied. Thus, the order of complexity of the task is determined through analyzing the demands of each task by breaking it down into its constituent parts."} {"text":"The hierarchical complexity of a task refers to the number of concatenation operations it contains, that is, the number of recursions that the coordinating actions must perform. An order-three task has three concatenation operations. A task of order three operates on one or more tasks of vertical order two, and a task of order two operates on one or more tasks of vertical order one (the simplest tasks)."} {"text":"Stage theories describe human organismic and\/or technological evolution as systems that move through a pattern of distinct stages over time. Here development is described formally in terms of the model of hierarchical complexity (MHC)."} {"text":"Actions are defined inductively, and so is the function \"h\", known as the order of hierarchical complexity. To each action \"A\", we wish to associate a notion of that action's hierarchical complexity, \"h(A)\". Given a collection of actions A and a participant \"S\" performing A, the \"stage of performance\" of \"S\" on A is the highest order of the actions in A completed successfully at least once, i.e., it is: stage(\"S\", A) = max{\"h(A)\" | \"A\" \u2208 A and \"A\" completed successfully by \"S\"}. Thus, the notion of stage is discontinuous, having the same transitional gaps as the orders of hierarchical complexity. 
This is in accordance with previous definitions."} {"text":"Because MHC stages are conceptualized in terms of the hierarchical complexity of tasks rather than in terms of mental representations (as in Piaget's stages), the highest stage represents successful performances on the most hierarchically complex tasks rather than intellectual maturity."} {"text":"The following table gives descriptions of each stage in the MHC."} {"text":"The hierarchical complexity model builds directly on both Piaget\u2019s and Kohlberg\u2019s theories. Because of this, it is considered by many to be neo-Piagetian, as it supposes that the Piagetian model is correct but holds that there are several stages above it that normal human adults can achieve (which are explained and described in the theory of hierarchical complexity)."} {"text":"There are some commonalities between the Piagetian and Commons' notions of stage and many more things that are different. In both, one finds:"} {"text":"What Commons et al. (1998) have added includes:"} {"text":"This makes it possible for the model's application to meet real-world requirements, including the empirical and analytic. In the Piagetian theory, by contrast, arbitrary organization of lower-order actions remains possible despite the hierarchical definition structure, which leaves the functional correlates of the interrelationships between tasks of differing complexity ill-defined."} {"text":"Moreover, the model is consistent with the neo-Piagetian theories of cognitive development. According to these theories, progression to higher stages or levels of cognitive development is caused by increases in processing efficiency and working memory capacity. 
That is, higher-order stages place increasingly higher demands on these functions of information processing, so that their order of appearance reflects the information processing possibilities at successive ages."} {"text":"The following dimensions are inherent in the application:"} {"text":"More complex behaviors characterize multiple system models. The four highest stages in the MHC are not represented in Piaget's model. The higher stages of the MHC have extensively influenced the field of positive adult development. Some adults are said to develop alternatives to, and perspectives on, formal operations; they use formal operations within a \"higher\" system of operations. Some theorists call the more complex orders of cognitive tasks \"postformal thought\", but other theorists argue that these higher orders cannot exactly be labelled as postformal thought."} {"text":"Jordan (2018) argued that unidimensional models such as the MHC, which measure level of complexity of some behavior, refer to only one of many aspects of adult development, and that other variables are needed (in addition to unidimensional measures of complexity) for a fuller description of adult development."} {"text":"The MHC has a broad range of applicability. Its mathematical foundation permits it to be used by anyone examining task performance that is organized into stages. It is designed to assess development based on the order of complexity which the actor utilizes to organize information. The model thus allows for a standard quantitative analysis of developmental complexity in any cultural setting. 
Other advantages of this model include its avoidance of mentalistic explanations, as well as its use of quantitative principles which are universally applicable in any context."} {"text":"The following practitioners can use the MHC to quantitatively assess developmental stages:"} {"text":"In one representative study, Commons, Goodheart, and Dawson (1997) found, using Rasch analysis (Rasch, 1980), that the hierarchical complexity of a given task predicts the stage of performance, the correlation being r = 0.92. Correlations of similar magnitude have been found in a number of studies. The following are examples of tasks studied using the model of hierarchical complexity or Kurt W. Fischer's similar skill theory:"} {"text":"As of 2014, people and institutes from all the major continents of the world, except Africa, have used the model of hierarchical complexity. Because the model is very simple and is based on analysis of tasks and not just performances, it is dynamic. With the help of the model, it is possible to quantify the occurrence and progression of transition processes in task performances at any order of hierarchical complexity."} {"text":"The descriptions of stages 13\u201315 have been criticized as insufficiently precise."} {"text":"An object of the mind is an object that exists in the imagination, but which, in the real world, can only be represented or modeled. Some such objects are abstractions, literary concepts, or fictional scenarios."} {"text":"Closely related are intentional objects, which are what thoughts and feelings are about, even if they are not about anything real (such as thoughts
However, intentional objects may coincide with real objects (as in thoughts about horses, or a feeling of regret about a missed appointment)."} {"text":"Mathematics and geometry describe abstract objects that sometimes correspond to familiar shapes, and sometimes do not. Circles, triangles, rectangles, and so forth describe two-dimensional shapes that are often found in the real world. However, mathematical formulas do not describe individual physical circles, triangles, or rectangles. They describe ideal shapes that are objects of the mind. The incredible precision of mathematical expression permits a vast applicability of mental abstractions to real life situations."} {"text":"Many more mathematical formulas describe shapes that are unfamiliar, or do not necessarily correspond to objects in the real world. For example, the Klein bottle is a one-sided, sealed surface with no inside or outside (in other words, it is the three-dimensional equivalent of the M\u00f6bius strip). Such objects can be represented by twisting and cutting or taping pieces of paper together, as well as by computer simulations. To hold them in the imagination, abstractions such as extra or fewer dimensions are necessary."} {"text":"If-then arguments posit logical sequences that sometimes include objects of the mind. For example, a counterfactual argument proposes a hypothetical or subjunctive possibility which \"could\" or \"would\" be true, but \"might not\" be false. Conditional sequences involving subjunctives use intensional language, which is studied by modal logic, whereas classical logic studies the extensional language of necessary and sufficient conditions."} {"text":"In general, a logical antecedent is a sufficient condition, and a logical consequent is a necessary condition (or the contingency) in a logical conditional. 
But logical conditionals accounting only for necessity and sufficiency do not always reflect everyday if-then reasoning, and for this reason they are sometimes known as material conditionals. In contrast, indicative conditionals, sometimes known as non-material conditionals, attempt to describe if-then reasoning involving hypotheticals, fictions, or counterfactuals."} {"text":"Truth tables for if-then statements identify four unique combinations of premises and conclusions: true premises and true conclusions; false premises and true conclusions; true premises and false conclusions; false premises and false conclusions. Material conditionals assign a positive truth-value to every case except the case of a true premise and a false conclusion. This is sometimes regarded as counterintuitive, but makes more sense when false conditions are understood as objects of the mind."} {"text":"A false antecedent is a premise known to be false, fictional, imaginary, or unnecessary. In a conditional sequence, a false antecedent may be the basis for any consequence, true or false."} {"text":"The subjects of literature are sometimes false antecedents: for instance, the contents of false documents, the origins of stand-alone phenomena, or the implications of loaded words, as well as artificial sources, personalities, events, and histories. False antecedents are sometimes referred to as \"nothing\", or \"nonexistent\", whereas nonexistent referents are not referred to."} {"text":"Art and acting often portray scenarios without any antecedent other than an artist's imagination: for example, mythical heroes, legendary creatures, gods and goddesses."} {"text":"A false consequent, in contrast, is a conclusion known to be false, fictional, imaginary, or insufficient. In a conditional statement, a fictional conclusion is known as a non sequitur, which literally means \"it does not follow\". 
A conclusion that is out of sequence is not contingent on any premises that precede it, and it does not follow from them, so such a sequence is not conditional. A conditional sequence is a connected series of statements. A false consequent cannot follow from true premises in a connected sequence. A false consequent can, however, follow from a false antecedent."} {"text":"As an example, the name of a team, a genre, or a nation is a collective term applied ex post facto to a group of distinct individuals. None of the individuals on a sports team is the team itself, nor is any musical chord a genre, nor any person America. The name is an identity for a collection that is connected by consensus or reference, but not by sequence. A different name could equally follow, but it would have different social or political significance."} {"text":"In metaphysics and ontology, Austrian philosopher Alexius Meinong advanced an account of nonexistent objects in the 19th and 20th centuries within his \"theory of objects\". He was interested in intentional states which are directed at nonexistent objects. He started from the \"principle of intentionality\": mental phenomena are intentionally directed towards an object. People may imagine, desire or fear something that does not exist. Other philosophers concluded that intentionality is not a real relation and therefore does not require the existence of an object, while Meinong concluded there is an object for every mental state whatsoever\u2014if not an existent then at least a nonexistent one."} {"text":"In philosophy of mind, mind\u2013body dualism is the doctrine that mental activities exist apart from the physical body, notably posited by Ren\u00e9 Descartes in \"Meditations on First Philosophy\"."} {"text":"Many objects in fiction follow the example of false antecedents or false consequents. For example, \"The Lord of the Rings\" by J.R.R. Tolkien is based on an imaginary book. 
In the \"Appendices\" to \"The Lord of the Rings\", Tolkien's characters name the \"Red Book of Westmarch\" as the source material for \"The Lord of the Rings\", which they describe as a translation. But the \"Red Book of Westmarch\" is a fictional document that chronicles events in an imaginary world. One might imagine a different translation, by another author."} {"text":"Social reality is composed of many standards and inventions that facilitate communication, but which are ultimately objects of the mind. For example, money is an object of the mind which currency represents. Similarly, languages signify ideas and thoughts."} {"text":"Objects of the mind are frequently involved in the roles that people play. For example, acting is a profession which predicates real jobs on fictional premises. Charades is a game people play by guessing imaginary objects from short play-acts."} {"text":"Imaginary personalities and histories are sometimes invented to enhance the verisimilitude of fictional universes, and\/or the immersion of role-playing games. To the extent that they exist independently of extant personalities and histories, they are fictional characters and fictional time frames."} {"text":"Science fiction is abundant with future times, alternate times, and past times that are objects of the mind. For example, in the novel \"Nineteen Eighty-Four\" by George Orwell, the number 1984 represented a year that had not yet passed."} {"text":"Calendar dates also represent objects of the mind, specifically, past and future times. In \"The Transformers: The Movie\", which was released in 1986, the narration opens with the statement, \"It is the year 2005.\" In 1986, that statement was futuristic. During the year 2005, that reference to the year 2005 was factual. Now, \"The Transformers: The Movie\" is retro-futuristic. The number 2005 did not change, but the object of the mind that it represents did change."} {"text":"Deliberate invention also may reference an object of the mind. 
The intentional invention of fiction for the purpose of deception is usually referred to as lying, in contrast to invention for entertainment or art. Invention is also often applied to problem solving. In this sense, the physical invention of materials is associated with the mental invention of fictions."} {"text":"The theoretical posits of one era's scientific theories may be demoted to mere objects of the mind by subsequent discoveries: some standard examples include phlogiston and Ptolemaic epicycles."} {"text":"This raises questions, in the debate between scientific realism and instrumentalism, about the status of current posits such as black holes and quarks. Are they still merely intentional, even if the theory is correct?"} {"text":"The situation is further complicated by the existence in scientific practice of entities which are explicitly held not to be real, but which nonetheless serve a purpose\u2014convenient fictions. Examples include field lines, centers of gravity, and electron holes in semiconductor theory."} {"text":"A reference that names an imaginary source is in some sense also a self-reference. A self-reference automatically makes a comment about itself. Premises that name themselves as premises are premises by self-reference; conclusions that name themselves as conclusions are conclusions by self-reference."} {"text":"In their respective imaginary worlds the \"Necronomicon\", \"The Hitchhiker's Guide to the Galaxy\", and the \"Red Book of Westmarch\" are realities, but only because they are referred to as real. Authors use this technique to invite readers to pretend or to make-believe that their imaginary world is real. In the sense that the stories that quote these books are true, the quoted books exist; in the sense that the stories are fiction, the quoted books do not exist."} {"text":"Object permanence is the understanding that objects continue to exist even when they cannot be seen, heard, or otherwise sensed. 
This is a fundamental concept studied in the field of developmental psychology, the subfield of psychology that addresses the development of young children's social and mental capacities. There is not yet scientific consensus on when the understanding of object permanence emerges in human development."} {"text":"Jean Piaget, the Swiss psychologist who first studied object permanence in infants, argued that it is one of an infant's most important accomplishments, as, without this concept, objects would have no separate, permanent existence. In Piaget's theory of cognitive development, infants develop this understanding by the end of the \"sensorimotor stage\", which lasts from birth to about two years of age. Piaget thought that an infant's perception and understanding of the world depended on their motor development, which was required for the infant to link visual, tactile and motor representations of objects. According to this view, it is through touching and handling objects that infants develop object permanence."} {"text":"Piaget concluded that some infants are too young to understand object permanence. A lack of object permanence can lead to A-not-B errors, where children reach for an object at the location where it was previously hidden (A) rather than at the location where they last saw it hidden (B). Older infants are less likely to make the A-not-B error because they are able to understand the concept of object permanence more than younger infants. However, researchers have found that A-not-B errors do not always show up consistently. They concluded that this type of error might be due to a failure in memory or the fact that infants usually tend to repeat a previous motor behavior."} {"text":"In Piaget's formulation, there are six stages of object permanence."} {"text":"In more recent years, the original Piagetian object permanence account has been challenged by a series of infant studies suggesting that much younger infants do have a clear sense that objects exist even when out of sight. 
Bower showed object permanence in 3-month-olds. This goes against Piaget's coordination of secondary circular reactions stage because infants are not supposed to understand that a completely hidden object still exists until they are eight to twelve months old."} {"text":"There are primarily four challenges to Piaget's framework."} {"text":"One criticism of Piaget's theory is that culture and education exert stronger influences on a child's development than Piaget maintained. These factors depend on how much practice a child's culture provides in developmental processes, such as conversational skills."} {"text":"Experiments in non-human primates suggest that monkeys can track the displacement of invisible targets, that invisible displacement is represented in the prefrontal cortex, and that development of the frontal cortex is linked to the acquisition of object permanence. Evidence from human infants is consistent with this. For example, formation of synapses in the frontal cortex peaks during human infancy, and recent experiments using near infrared spectroscopy to gather neuroimaging data from infants suggest that activity in the frontal cortex is associated with successful completion of object permanence tasks."} {"text":"One of the areas of focus on object permanence has been how physical disabilities (blindness, cerebral palsy and deafness) and intellectual disabilities (Down syndrome, etc.) affect the development of object permanence. In a study that was performed in 1975\u201376, the results showed that the only area where children with intellectual disabilities performed more weakly than children without disabilities was social interaction. Other tasks, such as imitation and causality tasks, were performed more weakly by the children without disabilities. 
However, object permanence was still acquired similarly because it was not related to social interaction."} {"text":"The language of thought hypothesis (LOTH), sometimes known as thought ordered mental expression (TOME), is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing \"language-like\" or compositional structure (sometimes known as \"mentalese\"). On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts. In its most basic form, the theory states that thought, like language, has syntax."} {"text":"Using empirical evidence drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage-point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only 'remotely plausible' when expressed as a system of representations that is \"tokened\" by a linguistic or semantic structure and operated upon by means of a combinatorial syntax. Linguistic tokens used in mental language describe elementary concepts which are operated upon by logical rules establishing causal connections to allow for complex thought. Syntax as well as semantics have a causal effect on the properties of this system of mental representations."} {"text":"These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. The LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. 
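The combinatorial picture just described, atomic concept tokens operated upon by a syntax so that the resulting complex thoughts can be evaluated, can be illustrated with a deliberately toy sketch in Python (every name and structure here is a hypothetical illustration for this article, not Fodor's notation or any standard formalism):

```python
# Toy "mentalese": atomic concept tokens combined by one syntactic rule
# (predication) into complex thoughts whose truth is fixed compositionally.
# All names here are illustrative assumptions, not Fodor's own notation.

# Semantic values of the atomic tokens: an entity and a one-place predicate.
ENTITIES = {"j": "John"}
PREDICATES = {"T": lambda x: x in {"John"}}  # toy extension of "is tall"

def evaluate(thought):
    """Evaluate a complex thought built by the predication rule.

    A thought is a pair (predicate_token, entity_token); its truth value
    is determined by the semantic values of its sub-parts.
    """
    pred, ent = thought
    return PREDICATES[pred](ENTITIES[ent])

# The thought "John is tall", expressed as the predication T(j):
print(evaluate(("T", "j")))  # True
```

Because the single predication rule applies to any predicate-entity pair of tokens, the sketch also gestures at the systematicity LOTH appeals to: a system that can token T(j) can equally token the same predicate applied to any other entity token.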
It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate."} {"text":"The hypothesis applies to thoughts that have propositional content, and is not meant to describe everything that goes on in the mind. It appeals to the representational theory of thought to explain what those tokens actually are and how they behave. There must be a mental representation that stands in some unique relationship with the subject of the representation and has specific content. Complex thoughts get their semantic content from the content of the basic thoughts and the relations that they hold to each other. Thoughts can only relate to each other in ways that do not violate the syntax of thought. The syntax by means of which the sub-parts of a thought are combined can be expressed in first-order predicate calculus."} {"text":"The thought \"John is tall\" is clearly composed of two sub-parts, the concept of John and the concept of tallness, combined in a manner that may be expressed in first-order predicate calculus as a predicate 'T' (\"is tall\") that holds of the entity 'j' (John). A fully articulated proposal for a LOT would have to take into account greater complexities such as quantification and propositional attitudes (the various attitudes people can have towards statements; for example, I might \"believe\" or \"see\" or merely \"suspect\" that John is tall)."} {"text":"The language of thought hypothesis has been both controversial and groundbreaking. Some philosophers reject the LOTH, arguing that our public language \"is\" our mental language\u2014a person who speaks English \"thinks\" in English. But others contend that complex thought is present even in those who do not possess a public language (e.g. 
babies, aphasics, and even higher-order primates), and therefore some form of mentalese must be innate."} {"text":"The notion that mental states are causally efficacious diverges from behaviorists like Gilbert Ryle, who held that there is no separation between the cause of a mental state and the effect of a behavior. Rather, Ryle proposed that people act in some way because they are in a disposition to act in that way, without appeal to representational mental states as causes. An objection to this point comes from John Searle in the form of biological naturalism, a non-representational theory of mind that accepts the causal efficacy of mental states. Searle divides intentional states into low-level brain activity and high-level mental activity. The lower-level, nonrepresentational neurophysiological processes have causal power in intention and behavior rather than some higher-level mental representation."} {"text":"Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress of homunculi. Each explanatory homunculus is \u201cstupider\u201d or more basic than the homunculus it explains, and the regress bottoms out at a basic level that is so simple that it does not need interpretation. John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols."} {"text":"LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning). If LOTH cannot show that the mind knows that it is following the particular set of rules in question, then the mind is not computational because it is not governed by computational rules. Critics also point to the apparent incompleteness of this set of rules in explaining behavior. Many conscious beings behave in ways that are contrary to the rules of logic. 
Yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not accord with this set of rules."} {"text":"Another objection within representational theory of mind has to do with the relationship between propositional attitudes and representation. Dennett points out that a chess program can have the attitude of \u201cwanting to get its queen out early,\u201d without having a representation or rule that explicitly states this. A multiplication program on a computer computes in the computer language of 1\u2019s and 0\u2019s, yielding representations that do not correspond with any propositional attitude."} {"text":"Susan Schneider has recently developed a version of LOT that departs from Fodor's approach in numerous ways. In her book, The Language of Thought: a New Philosophical Direction, Schneider argues that Fodor's pessimism about the success of cognitive science is misguided, and she outlines an approach to LOT that integrates LOT with neuroscience. She also stresses a LOT that is not wedded to the extreme view that all concepts are innate. She fashions a new theory of mental symbols, and a related two-tiered theory of concepts, in which a concept's nature is determined by its LOT symbol type and its meaning."} {"text":"Since connectionist models can change over time, supporters of connectionism claim that it can solve the problems that LOTH brings to classical AI. These problems are those that show that machines with a LOT syntactical framework very often are much better at solving problems and storing data than human minds, yet much worse at things that the human mind is quite adept at, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures. 
Fodor defends LOTH by arguing that a connectionist model is just some realization or implementation of the classical computational theory of mind and therein necessarily employs a symbol-manipulating LOT."} {"text":"Connectionists have responded to Fodor and Pylyshyn by denying that connectionism uses LOT, by denying that cognition is essentially a function that uses representational input and output, or by denying that systematicity is a law of nature that rests on representation. Some connectionists have developed implementational connectionist models that can generalize in a symbolic fashion by incorporating variables."} {"text":"Since LOTH came to be, it has been tested empirically; not all experiments have confirmed the hypothesis."} {"text":"Strategy (from Greek \u03c3\u03c4\u03c1\u03b1\u03c4\u03b7\u03b3\u03af\u03b1 \"strat\u0113gia\", \"art of troop leader; office of general, command, generalship\") is a general plan to achieve one or more long-term or overall goals under conditions of uncertainty. In the sense of the \"art of the general\", which included several subsets of skills including military tactics, siegecraft, logistics etc., the term came into use in the 6th century C.E. in Eastern Roman terminology, and was translated into Western vernacular languages only in the 18th century. From then until the 20th century, the word \"strategy\" came to denote \"a comprehensive way to try to pursue political ends, including the threat or actual use of force, in a dialectic of wills\" in a military conflict, in which both adversaries interact."} {"text":"Strategy is important because the resources available to achieve goals are usually limited. Strategy generally involves setting goals and priorities, determining actions to achieve the goals, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources). 
Strategy can be intended or can emerge as a pattern of activity as the organization adapts to its environment or competes. It involves activities such as strategic planning and strategic thinking."} {"text":"Henry Mintzberg from McGill University defined strategy as a pattern in a stream of decisions to contrast with a view of strategy as planning, while Henrik von Scheel defines the essence of strategy as the activities to deliver a unique mix of value \u2013 choosing to perform activities differently or to perform different activities than rivals. Max McKeown (2011) argues that \"strategy is about shaping the future\" and is the human attempt to get to \"desirable ends with available means\". Dr. Vladimir Kvint defines strategy as \"a system of finding, formulating, and developing a doctrine that will ensure long-term success if followed faithfully.\" Complexity theorists define strategy as the unfolding of the internal and external aspects of the organization that results in actions in a socio-economic context."} {"text":"Professor Richard P. Rumelt described strategy as a type of problem solving in 2011. He wrote that good strategy has an underlying structure he called a \"kernel\". 
The kernel has three parts: 1) A \"diagnosis\" that defines or explains the nature of the challenge; 2) A \"guiding policy\" for dealing with the challenge; and 3) Coherent \"actions\" designed to carry out the guiding policy."} {"text":"President Kennedy illustrated these three elements of strategy in his Cuban Missile Crisis Address to the Nation of 22 October 1962."} {"text":"Rumelt wrote in 2011 that three important aspects of strategy include \"premeditation, the anticipation of others' behavior, and the purposeful design of coordinated actions.\" He described strategy as solving a design problem, with trade-offs among various elements that must be arranged, adjusted and coordinated, rather than a plan or choice."} {"text":"Strategy typically involves two major processes: \"formulation\" and \"implementation\". \"Formulation\" involves analyzing the environment or situation, making a diagnosis, and developing guiding policies. It includes such activities as strategic planning and strategic thinking. \"Implementation\" refers to the action plans taken to achieve the goals established by the guiding policy."} {"text":"Bruce Henderson wrote in 1981 that: \"Strategy depends upon the ability to foresee future consequences of present initiatives.\" He wrote that the basic requirements for strategy development include, among other factors: 1) extensive knowledge about the environment, market and competitors;"} {"text":"2) ability to examine this knowledge as an interactive dynamic system; and"} {"text":"3) the imagination and logic to choose between specific alternatives. 
Henderson wrote that strategy was valuable because of: \"finite resources, uncertainty about an adversary's capability and intentions; the irreversible commitment of resources; necessity of coordinating action over time and distance; uncertainty about control of the initiative; and the nature of adversaries' mutual perceptions of each other.\""} {"text":"In military theory, strategy is \"the utilization during both peace and war, of all of the nation's forces, through large scale, long-range planning and development, to ensure security and victory\" (\"Random House Dictionary\")."} {"text":"The father of Western modern strategic study, Carl von Clausewitz, defined military strategy as \"the employment of battles to gain the end of war.\" B. H. Liddell Hart's definition put less emphasis on battles, defining strategy as \"the art of distributing and applying military means to fulfill the ends of policy\". Hence, both gave pre-eminence to political aims over military goals. U.S. Naval War College instructor Andrew Wilson defined strategy as the \"process by which political purpose is translated into military action.\" Lawrence Freedman defined strategy as the \"art of creating power.\""} {"text":"Eastern military philosophy dates back much further, with examples such as \"The Art of War\" by Sun Tzu, dating from around 500 B.C."} {"text":"Modern business strategy emerged as a field of study and practice in the 1960s; prior to that time, the words \"strategy\" and \"competition\" rarely appeared in the most prominent management literature."} {"text":"Alfred Chandler wrote in 1962 that: \"Strategy is the determination of the basic long-term goals of an enterprise, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals.\" Michael Porter defined strategy in 1980 as the \"...broad formula for how a business is going to compete, what its goals should be, and what policies will be needed to carry out those goals\" and the 
\"...combination of the \"ends\" (goals) for which the firm is striving and the \"means\" (policies) by which it is seeking to get there.\""} {"text":"Henry Mintzberg described five definitions of strategy in 1998."} {"text":"In game theory, a \"strategy\" refers to the rules that a player uses to choose between the available actionable options. Every player in a non-trivial game has a set of possible strategies to use when choosing what moves to make."} {"text":"A strategy may recursively look ahead and consider what actions can happen in each contingent state of the game\u2014e.g. if the player takes action 1, then that presents the opponent with a certain situation, which might be good or bad, whereas if the player takes action 2 then the opponents will be presented with a different situation, and in each case the choices they make will determine their own future situation."} {"text":"Strategies in game theory may be random (mixed) or deterministic (pure). Pure strategies can be thought of as a special case of mixed strategies, in which only probabilities 0 or 1 are assigned to actions."} {"text":"Strategy-based games generally require a player to think through a sequence of solutions to determine the best way to defeat the opponent."} {"text":"Divide and rule (Latin: \"divide et impera\"), or divide and conquer, in politics and sociology is gaining and maintaining power by breaking up larger concentrations of power into pieces that individually have less power than the one implementing the strategy."} {"text":"The use of this technique is meant to empower the sovereign to control subjects, populations, or factions of different interests, who collectively might be able to oppose his rule. Niccol\u00f2 Machiavelli identifies a similar application to military strategy, advising in Book VI of \"The Art of War\" (1521) (\"L'arte della guerra\"): a Captain should endeavor with every art to divide the forces of the enemy. 
Machiavelli advises that this should be achieved either by making the enemy suspicious of the men he trusts, or by giving him cause to separate his forces and thereby become weaker."} {"text":"The maxim divide et impera has been attributed to Philip II of Macedon. It was utilised by the Roman ruler Julius Caesar and the French emperor Napoleon (together with the maxim \"divide ut regnes\")."} {"text":"The strategy, but not the phrase, applies in many ancient cases: Aulus Gabinius, for example, partitioned the Jewish nation into five conventions, as reported by Flavius Josephus in Book I, 169\u2013170 of \"The Jewish War\" (\"De bello Judaico\"). Strabo also reports in \"Geographica\", 8.7.3 that the Achaean League was gradually dissolved under the Roman possession of the whole of Macedonia, owing to the Romans not dealing with the several states in the same way, but wishing to preserve some and to destroy others."} {"text":"In \"Perpetual Peace\" by Immanuel Kant (1795), Appendix One, \"Divide et impera\" is the third of three political maxims, the others being \"Fac et excusa\" (Act now, and make excuses later) and \"Si fecisti, nega\" (If you commit a crime, deny it)."} {"text":"Historically, this strategy was used in many different ways by empires seeking to expand their territories."} {"text":"Immanuel Kant was an advocate of this tactic, noting that \"the problem of setting up a state can be solved even by a nation of devils\" so long as they possess an appropriate constitution which pits opposing factions against each other with a system of checks and balances."} {"text":"The concept is also mentioned as a strategy for market action in economics to get the most out of the players in a competitive market."} {"text":"Divide and rule can be used by states to weaken enemy military alliances. This usually happens when propaganda is disseminated within the enemy states in an attempt to raise doubts about the alliance. 
Once the alliance weakens or dissolves, a vacuum will allow the state to achieve military dominance."} {"text":"In politics, the concept refers to a strategy that breaks up existing power structures, and especially prevents smaller power groups from linking up, causing rivalries and fomenting discord among the people to prevent a rebellion against the elites or the people implementing the strategy. The goal is either to pit the lower classes against themselves to prevent a revolution, or to provide a desired solution to the growing discord that strengthens the power of the elites."} {"text":"The principle \"divide et impera\" is cited as a common principle in politics by Traiano Boccalini in \"La bilancia politica\"."} {"text":"Clive R. Boddy found that \"divide and conquer\" was a strategy commonly used by corporate psychopaths as a smokescreen to help consolidate and advance their grip on power in the corporate hierarchy."} {"text":"Harry G. Broadman opined in Forbes regarding President Donald Trump: \"[a]s in his campaign, the President has been successfully\u2014at least to date\u2014pursuing a divide and conquer strategy domestically and internationally to try to achieve his goals. The result is an absence of a robust set of checks and balances to ensure that the best economic interests of the U.S. and the world will be served.\""} {"text":"Examples of this strategy are not entirely limited to deliberate efforts by U.S. political candidates. Political division is a systemic problem of two-party politics itself; across many countries, there are numerous examples of competing politicians' divided interests sowing division among the populace they are meant to represent. 
The effects of 'divide and rule' strategies are not always the result of a deliberate effort to control the population."} {"text":"The disruptive solutions process (DSP) is a concept for innovation execution applied to the mishap prevention part of the combat operations process, often at the tactical or operational level, primarily in Air National Guard applications. However, it has been used successfully in other government agencies and the private sector. At its core is the notion of iterative, low-cost, first-to-market development. The term 'disruptive' was borrowed from the marketing term disruptive technologies. DSP was created in 2005 by fighter pilot and United States Air Force\/Air National Guard Colonel Edward Vaughan."} {"text":"The typical defense-industry bureaucratic approach to problem-solving involves exquisite enterprise solutions requiring long lead times, the establishment of large, standing teams, and relative inflexibility. The long development cycles and lead times associated with this approach sometimes result in fielding a solution that is no longer relevant. Recent attempts to resolve inefficiencies may include overwhelming the problem with superior funding, resources, and manpower, as in any major weapon-systems development such as a new fighter jet or IT system. Conversely, when resources are constrained, bureaucratic staff adopt a tactic of continuous process improvement, similar to that espoused in Kaizen, total quality management, and Lean Six Sigma. 
This further discourages innovation and perpetuates low-value programs and work teams that should be eliminated altogether rather than \"improved\"."} {"text":"Because most preventable \"safety\" mishaps are caused by human factors (83% of the Fiscal Year 2007 Air Force major mishap costs due to human factors per AF Safety Center) and can be traced to human cultural and behavioral issues, according to DSP, safety can and should uniquely apply a \"disruptive\" solution set to address the issues. Such a disruptive, iterative approach may not be appropriate in otherwise hardware-centric, large-budget programs, such as aircraft procurement and production."} {"text":"To address the safety cultural issues associated with mishap prevention in a large bureaucracy, the Air National Guard safety directorate pursued a disruptive approach in requirement definition, problem identification, solution vetting, funding, and procurement. DSP was created using Boyd's Observe, Orient, Decide, Act (OODA) loop to assess the efficiency and effectiveness of the process. However, taking on a bureaucracy is not without its downside. Fiefdoms and stovepipes within the system attempt to protect their \"turf\" and \"lanes\" with rules, regulations, and non-stop administrative delays and paperwork. All this requires a commitment to a long-term solution set, while constantly changing the solution itself in order to work through the bureaucratic hurdles."} {"text":"The DSP approach is both persistent and adaptive, which makes it entrepreneurial, according to Christopher Gergen and Gregg Vanourek in their article \"Fending off the Recession with 'Adaptive Persistence',\" published in Harvard Business Review, April 2009. They write... \"Persistence is about refusing to give up even in the face of adversity. Adaptation is about shortening the time to success through ingenuity and flexibility. 
'Adaptive persistence' entails alternating between anticipation, changing course, and sticking with it, deftly navigating that paradox with aplomb.\""} {"text":"The \"process\" is executed similarly to a venture capitalist's portfolio of projects in that the team invests small amounts of resources in many disruptive ideas. Steps in the process are not rigid and may be eliminated, combined, or reordered as appropriate to the desired outcome. The team then assesses initial demonstrations and validations (DEM\/VAL) of those solutions, choosing to fully develop only those that show success and a return on the investment. Within the simplified OODA (Observe, Orient, Decide, Act) model, step 1 corresponds to Observe, steps 2 and 3 combine to form Orient, steps 4 and 5 are Decide, and step 6 is Act."} {"text":"Essentially, DSP is a six-step process that runs counter to the backward-looking military mantra of being \"requirements-driven\"; instead, it focuses on projecting future market needs that will eventually become formal requirements but are not currently identified as such. This is accomplished by looking at front-line problem-solving activity and scaling those solutions up. These six steps, when applied rapidly, can get ahead of recognition, providing viable solutions at the point and time of need:"} {"text":"1. POLL FIELD\u2014IDEA MINING: Use a network of professionals at the field-unit level to identify best-practice mishap prevention, education, mishap investigation, procurement, and other tools. Project unpublished requirements by including end-use customers in the idea-mining process. Look for full and partial solutions."} {"text":"2. CONSOLIDATE \/ RACK AND STACK: Heuristically sort the list of ideas into groups based on resource requirements, proven record, technology leveraging, mission accomplishment, and Department of Defense, Air Force Instruction, and National Guard Bureau identified needs. 
Based on the chosen development cycle (monthly, quarterly, etc.), rank-order all projects by overall value to the force using the DSP assessment algorithm (citation forthcoming after public release of the algorithm)."} {"text":"3. ELIMINATE BAD FITS: Scrub the list for items requiring major hardware, Air Force Major Command-level funding, or other special, difficult-to-acquire funding or processes. Enterprise-level and\/or exquisite programs are anathema to this innovative process. Additionally, remove from consideration solutions that duplicate or compete directly with future programmed or existing military programs unless the cost savings are significant. Eliminate those programs that are not scalable in scope."} {"text":"4. SELECT AND DEM\/VAL: Consider resource requirements and rapidly source field-unit funding or headquarters seed monies in the sub-$50K range to perform a limited DEM\/VAL of the concept. Many technology solutions can be demonstrated with little or no initial funding. The Air National Guard safety office has a presentation on creative funding without a budget. Use rapid contracting mechanisms through the government contracting office, primarily employing SBA set-asides, blanket purchase agreements, or previously procured assets that may be re-roled into current use. This requires expert contracting officers and staff trained in the basic functions of a government contracting officer's representative or contracting officer's technical representative. The key is to remove barriers to execution that typically delay other military efforts."} {"text":"5. ITERATE FOR RESULTS: Establish a definition of success at the outset; it should be measurable and reportable. Demonstrate measurable results within six months and seek further external and scalable funding from sources such as DARPA, the Defense Safety Oversight Council (DSOC), other services, other government agencies, etc. Match requirements to resources and solutions."} {"text":"6. 
LEAD AND MARKET: Lead the effort on behalf of the United States Department of Defense, Joint, Interagency, etc., and tighten the OODA loop down to nothing, essentially creating an agile, continuous loop so tight that Boyd might describe it as an OODA point. Market the solution intensely and seek buy-in by returning the solution to the same experts who initially proposed it. Identify capable project leaders to run with the project."} {"text":"More recently, DSP has been used in the ANG and USAF to create and field mishap prevention programs. Safety programs created, executed, or developed using DSP include:"} {"text":"SEE AND AVOID\u00a0\u2013 Joint DOD and Interagency with AOPA, EAA, and FAA. It is a web-based civilian-military midair collision avoidance program created by then-Lt Col Ed Vaughan and led by the ANG Safety directorate from 2005 to 2009, and it is considered a best practice. ACC is a partner; AFCENT requested Iraq and Afghanistan coverage, now under contract; the program is currently led and funded by the FAA and ANG."} {"text":"WingmanDay.org: Originally fielded as RealBase across the Air National Guard, this comprehensive commander\u2019s toolkit identifies safety issues and resiliency subject matter and provides tools for commanders, leaders, and care practitioners to address them; it was created by the ANG Safety directorate after the 2007 Safety Stand Down Day to provide one-stop shopping for commanders and leaders. The RealBase web portal ran through 2009, when IT officials at the National Guard Bureau suspended it. In 2011, the program was relaunched as Wingman Day. The Air Force Safety Center took the RealBase Toolkit concept and developed one-stop-shopping online tool kits hosted on the secure Air Force Portal."} {"text":"Maintenance Resource Management (MRM): Joint DOD-wide. Originated by Lt Col Doug Slocum (AZ ANG); see Maintenance Resource Management. The ANG included it in DSP and took it DOD-wide with ANG and DOD funding; it is now an Air Force program mandated by Air Force Instruction 21\u2013101. 
The Air Force Safety Center will propose a way ahead on ORM revitalization and the role of CRM\/MRM."} {"text":"FlyAwake: ANG-wide, soon to be DOD-wide Joint Service. The 201st Airlift Squadron (DC ANG), under the command of Col Woody Akins, originated the basic concept for a web-based fatigue risk management tool that returns a quantitative fatigue analysis for a given flight schedule. This tool was based on the algorithm contained within FAST. Under the direction of program manager Captain Lynn Lee, the ANG integrated it into the DSP and took it ANG-wide, then DOD-wide."} {"text":"Wingman Project: The Wingman Project was created by Lt Col Edward Vaughan, chief of aviation safety at the Air National Guard, in August 2007. The Wingman Project is an ANG suicide intervention initiative that shows, rather than tells, family and friends of distressed Airmen how to intervene to save a life, using a validated model known as ACE (Ask, Care, Escort). The Wingman Project provides training and awareness through media outreach in 54 U.S. states and territories."} {"text":"dBird bird mortality model: Created and developed as an interagency program combining partners from the CDC, Smithsonian, NSF, USDA, DHS, and NOAA under ANG leadership to track, target, and predict movements of pathogen-infected bird flocks using BASH resources such as BAM\/AHAS, the NEXRAD radar system, and others."} {"text":"BASH: The ANG has a comprehensive, full-service BASH assessment and plan-writing program, with MIPRs and contracts from the ANG to the USDA and the world\u2019s leading expert in avian wildlife biology, Dr. 
Russ DeFusco."} {"text":"Air Reserve Component Chief of Safety Course (ARCCOS)\u00a0\u2013 Created by the ANG safety directorate in 2005, ARCCOS is tailored to ANG\/AFRC needs; the syllabus was designed and the course is taught by the ANG, which is well represented at active-duty mishap investigation courses."} {"text":"Low Altitude Deconfliction Program\u00a0\u2013 Deconflict.org is an online scheduling function that works with the FAA's MADE program to provide collision avoidance for military aircraft operating in the low-altitude environment."} {"text":"Ready 54\u00a0\u2013 Ready54.org is an online joint resiliency outreach and education tool with associated mobile apps. Ready 54 is a joint endeavor between the Air and Army National Guard."} {"text":"On September 25, 2009, Dr. John Ohab of the American Forces Press Service, host of Armed With Science, interviewed Lt Col Edward Vaughan about the Disruptive Solutions Process. An article about that interview was published by the Defense News Service."} {"text":"Strategic planning is an organization's process of defining its strategy, or direction, and making decisions on allocating its resources to pursue this strategy."} {"text":"It may also extend to control mechanisms for guiding the implementation of the strategy. Strategic planning became prominent in corporations during the 1960s and remains an important aspect of strategic management. It is executed by strategic planners or strategists, who involve many parties and research sources in their analysis of the organization and its relationship to the environment in which it competes."} {"text":"Strategy has many definitions, but generally involves setting strategic goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources). The senior leadership of an organization is generally tasked with determining strategy. 
Strategy can be planned (intended) or can be observed as a pattern of activity (emergent) as the organization adapts to its environment or competes."} {"text":"Strategy includes processes of formulation and implementation; strategic planning helps coordinate both. However, strategic planning is analytical in nature (i.e., it involves \"finding the dots\"); strategy formation itself involves synthesis (i.e., \"connecting the dots\") via strategic thinking. As such, strategic planning occurs around the strategy formation activity."} {"text":"Michael Porter wrote in 1980 that formulation of competitive strategy includes consideration of four key elements:"} {"text":"The first two elements relate to factors internal to the company (i.e., the internal environment), while the latter two relate to factors external to the company (i.e., the external environment). These elements are considered throughout the strategic planning process."} {"text":"Data is gathered from a variety of sources, such as interviews with key executives, review of publicly available documents on the competition or market, primary research (e.g., visiting or observing competitor places of business or comparing prices), industry studies, etc. This may be part of a competitive intelligence program. Inputs are gathered to help support an understanding of the competitive environment and its opportunities and risks. Other inputs include an understanding of the values of key stakeholders, such as the board, shareholders, and senior management. These values may be captured in an organization's vision and mission statements."} {"text":"Strategic planning activities include meetings and other communication among the organization's leaders and personnel to develop a common understanding regarding the competitive environment and what the organization's response to that environment (its strategy) should be. 
A variety of strategic planning tools (described in the section below) may be used as part of strategic planning activities."} {"text":"The organization's leaders may have a series of questions they want answered in formulating the strategy and gathering inputs, such as:"} {"text":"The output of strategic planning includes documentation and communication describing the organization's strategy and how it should be implemented, sometimes referred to as the strategic plan. The strategy may include a diagnosis of the competitive situation, a guiding policy for achieving the organization's goals, and specific action plans to be implemented. A strategic plan may cover multiple years and be updated periodically."} {"text":"The organization may use a variety of methods of measuring and monitoring progress towards the strategic objectives and measures established, such as a balanced scorecard or strategy map. Companies may also plan their financial statements (i.e., balance sheets, income statements, and cash flows) for several years when developing their strategic plan, as part of the goal-setting activity. The term operational budget is often used to describe the expected financial performance of an organization for the upcoming year. Capital budgets very often form the backbone of a strategic plan, especially as it increasingly relates to Information and Communications Technology (ICT)."} {"text":"Whilst the planning process produces outputs, as described above, strategy implementation or execution of the strategic plan produces outcomes. These outcomes will invariably differ from the strategic goals. How close they are to the strategic goals and vision will determine the success or failure of the strategic plan. Unintended outcomes will also arise, and these need to be attended to and understood for strategy development and execution to be a true learning process."} {"text":"A variety of analytical tools and techniques are used in strategic planning. 
These were developed by companies and management consulting firms to help provide a framework for strategic planning. Such tools include:"} {"text":"Simply extending financial statement projections into the future without consideration of the competitive environment is a form of financial planning or budgeting, not strategic planning. In business, the term \"financial plan\" is often used to describe the expected financial performance of an organization for future periods. The term \"budget\" is used for a financial plan for the upcoming year. A \"forecast\" is typically a combination of actual performance year-to-date plus expected performance for the remainder of the year, and so is generally compared against the plan or budget and against prior performance. The financial plans accompanying a strategic plan may include 3\u20135 years of projected performance."} {"text":"McKinsey & Company developed a capability maturity model in the 1970s to describe the sophistication of planning processes, with strategic management ranked the highest. The four stages include:"} {"text":"Categories 3 and 4 are strategic planning, while the first two categories are non-strategic or essentially financial planning. Each stage builds on the previous stages; that is, a stage 4 organization completes activities in all four categories."} {"text":"For Michael C. Sekora, Project Socrates founder in the Reagan White House, during the Cold War the economically challenged Soviet Union was able to keep up with Western military capabilities by using technology-based planning while the U.S. 
was slowed by finance-based planning, until the Reagan administration launched the Socrates Project, which, he argues, should be revived to keep up with China as an emerging superpower."} {"text":"Strategic planning has been criticized for attempting to systematize strategic thinking and strategy formation, which Henry Mintzberg argues are inherently creative activities involving synthesis or \"connecting the dots\" which cannot be systematized. Mintzberg argues that strategic planning can help coordinate planning efforts and measure progress on strategic goals, but that it occurs \"around\" the strategy formation process rather than within it. Further, strategic planning functions remote from the \"front lines\" or contact with the competitive environment (i.e., in business, facing the customer where the effect of competition is most clearly evident) may not be effective at supporting strategy efforts."} {"text":"The Campaign Between the Wars (Hebrew: \u05d4\u05de\u05e2\u05e8\u05db\u05d4 \u05d1\u05d9\u05df \u05d4\u05de\u05dc\u05d7\u05de\u05d5\u05ea or \u05de\u05d1\"\u05dd lit. the military campaign between wars) refers to the targeted covert inter-war campaign waged by the State of Israel. It is waged through the IDF and the Israeli Intelligence Community, which detect and selectively destroy emerging threats to Israel's security in order to prevent Israel's enemies, whoever they may be, from developing capabilities that would enable them to upset Israel's balance of deterrence."} {"text":"Among the activities attributed to Israel are the 2007 strike on a suspected nuclear reactor in Syria (Operation Outside the Box) and the assassinations of Syrian General Muhammad Suleiman (not publicly attributed to Israel, though Israel was reportedly consulted on the assassination), of Imad Mughniyah, the military commander of Hezbollah, and his son, and of Mahmoud al-Mabhouh. 
Also attributed to Israel are the 2009 attack in Sudan during Operation Cast Lead, the January 2013 attack on a Syrian arms convoy in the Rif Dimashq region around Damascus, the May 2013 attacks on Iranian arms shipments to Hezbollah in Damascus, the February 2014 attack on a Syrian arms convoy to Hezbollah in Baalbek, activities against the Iranian nuclear program, and the delivery of about 800 bombs against 200 targets across Syria in 2017\u20132018."} {"text":"An organizing principle is a core assumption from which everything else by proximity can derive a classification or a value. It is like a central reference point that allows all other objects to be located, and it is often used in a conceptual framework. Having an organizing principle might help one simplify and get a handle on a particularly complicated domain or phenomenon. On the other hand, it might create a deceptive prism that colors one's judgment."} {"text":"Geostrategy, a subfield of geopolitics, is a type of foreign policy guided principally by geographical factors as they inform, constrain, or affect political and military planning. As with all strategies, geostrategy is concerned with matching means to ends\u2014in this case, a country's resources (whether they are limited or extensive) with its geopolitical objectives (which can be local, regional, or global). Strategy is as intertwined with geography as geography is with nationhood, or as Colin S. Gray and Geoffrey Sloan put it, \"[geography is] the mother of strategy.\""} {"text":"Geostrategists, as distinct from geopoliticians, approach geopolitics from a nationalist point of view. Geostrategies are relevant principally to the context in which they were devised: the strategist's nation, the historically rooted national impulses, the strength of the country's resources, the scope of the country's goals, the political geography of the time period, and the technological factors that affect military, political, economic, and cultural engagement. 
Geostrategy can function prescriptively, advocating foreign policy based on geographic and historical factors; analytically, describing how foreign policy is shaped by geography and history; or predictively, projecting a country's future foreign policy decisions and outcomes."} {"text":"Many geostrategists are also geographers, specializing in subfields of geography, such as human geography, political geography, economic geography, cultural geography, military geography, and strategic geography. Geostrategy is most closely related to strategic geography."} {"text":"Especially following World War II, some scholars divide geostrategy into two schools: the uniquely German organic state theory, and the broader Anglo-American geostrategies."} {"text":"Most definitions of geostrategy below emphasize the merger of strategic considerations with geopolitical factors. While geopolitics is ostensibly neutral, examining the geographic and political features of different regions, especially the impact of geography on politics, geostrategy involves comprehensive planning, assigning means for achieving national goals or securing assets of military or political significance."} {"text":"The term \"geo-strategy\" was first used by Frederick L. Schuman in his 1942 article \"Let Us Learn Our Geopolitics.\" It was a translation of the German term \"Wehrgeopolitik\" as used by German geostrategist Karl Haushofer. Previous translations had been attempted, such as \"defense-geopolitics\". Robert Strausz-Hup\u00e9 had coined and popularized \"war geopolitics\" as another alternate translation."} {"text":"As a science, or a science-based political practice, geostrategy uses factual and empirical analysis; theoretical formulations in geostrategy therefore rely heavily on an empirical base, although the relations between facts and values, and the conclusions drawn from them, are viewed differently by different and\/or competing geostrategic approaches. 
Geostrategic conceptions that stem from the theory become the basis for a country's foreign and international policies. Geostrategic conceptions are also historically acquired, or even inherited from one country by another, owing to common history, relations between the countries, culture, and even propaganda."} {"text":"The geostrategy of location includes river valleys, inland seas, the world ocean, the world island, and so on. For instance, the start of Western civilization was located in the river valleys of the Nile in Egypt and the Tigris and Euphrates in Mesopotamia. The Nile and the Tigris and Euphrates not only provided the fertile soil for crop production but also brought the floods that taxed the ingenuity of the inhabitants. The climate of the area was conducive to an existence based primarily upon agriculture. The rivers also provided the avenues of trade in a period when the muscles of man and the winds of the sky were the motive power of ships. The river valleys became a unifying factor in the political development of the people."} {"text":"As early as Herodotus, observers saw strategy as heavily influenced by the geographic setting of the actors. In \"History\", Herodotus describes a clash of civilizations between the Egyptians, Persians, Scythians, and Greeks\u2014all of which he believed were heavily influenced by the physical geographic setting."} {"text":"Dietrich Heinrich von B\u00fclow proposed a geometrical science of strategy in his 1799 work \"The Spirit of the Modern System of War.\" His system predicted that the larger states would swallow the smaller ones, resulting in eleven large states. Mackubin Thomas Owens notes the similarity between von B\u00fclow's predictions and the map of Europe after the unification of Germany and of Italy."} {"text":"Between 1890 and 1919 the world became a geostrategist's paradise, leading to the formulation of the classical geopolitical theories. 
The international system featured rising and falling great powers, many with global reach. There were no new frontiers for the great powers to explore or colonize\u2014the entire world was divided between the empires and colonial powers. From this point forward, international politics would feature the struggles of state against state."} {"text":"Two strains of geopolitical thought gained prominence: an Anglo-American school, and a German school. Alfred Thayer Mahan and Halford J. Mackinder outlined the American and British conceptions of geostrategy, respectively, in their works \"The Problem of Asia\" and \"The Geographical Pivot of History\". Friedrich Ratzel and Rudolf Kjell\u00e9n developed an organic theory of the state which laid the foundation for Germany's unique school of geostrategy."} {"text":"The most prominent German geopolitician was General Karl Haushofer. After World War II, during the Allied occupation of Germany, the United States investigated many officials and public figures to determine whether they should face charges of war crimes at the Nuremberg trials. Haushofer, primarily an academic, was interrogated by Father Edmund A. Walsh, a professor of geopolitics from the Georgetown School of Foreign Service, at the request of the U.S. authorities. Despite Haushofer's involvement in crafting one of the justifications for Nazi aggression, Fr. Walsh determined that he ought not stand trial."} {"text":"After the Second World War, the term \"geopolitics\" fell into disrepute because of its association with Nazi \"geopolitik\". Virtually no books published between the end of World War II and the mid-1970s used the word \"geopolitics\" or \"geostrategy\" in their titles, and geopoliticians did not label themselves or their works as such. German theories prompted a number of critical examinations of \"geopolitik\" by American geopoliticians such as Robert Strausz-Hup\u00e9, Derwent Whittlesey, and Andrew Gyorgy."} {"text":"As the Cold War began, N.J. 
Spykman and George F. Kennan laid down the foundations for the U.S. policy of containment, which would dominate Western geostrategic thought for the next forty years."} {"text":"Alexander de Seversky argued that airpower had fundamentally changed geostrategic considerations and thus proposed a \"geopolitics of airpower.\" His ideas had some influence on the administration of President Dwight D. Eisenhower, but the ideas of Spykman and Kennan would exercise greater weight. Later during the Cold War, Colin Gray would decisively reject the idea that airpower changed geostrategic considerations, while Saul B. Cohen examined the idea of a \"shatterbelt\", which would eventually inform the domino theory."} {"text":"After the Cold War ended, states began to prefer low-cost management of space over its expansion by military force. Using military force to secure space not only places a great burden on a country but also draws severe criticism from international society, as interdependence between countries continuously increases. As a new form of space management, countries either created regional institutions related to a given space or formed regimes on specific issues to allow intervention in that space. Such mechanisms let countries exert indirect control over space. Indirect space management reduces the capital required and at the same time provides justification and legitimacy for the management, so that the countries involved do not have to face criticism from international society."} {"text":"Since the fall of the Berlin Wall, for most NATO or former Warsaw Pact countries, geopolitical strategies have generally followed the course of either solidifying security obligations or access to global resources; however, the strategies of other countries have not been as palpable."} {"text":"The geostrategists below were instrumental in founding and developing the major geostrategic doctrines in the discipline's history. 
While there have been many other geostrategists, these have been the most influential in shaping and developing the field as a whole."} {"text":"Alfred Thayer Mahan was an American naval officer and president of the U.S. Naval War College. He is best known for his \"Influence of Sea Power upon History\" series of books, which argued that naval supremacy was the deciding factor in great-power warfare. In 1900, Mahan's book \"The Problem of Asia\" was published. In this volume he laid out the first geostrategy of the modern era."} {"text":"\"The Problem of Asia\" divides the continent of Asia into three zones:"} {"text":"The Debated and Debatable zone, Mahan observed, contained two peninsulas on either end (Anatolia and the Korean Peninsula), the Suez Canal, Palestine, Syria, Mesopotamia, two countries marked by their mountain ranges (Iran and Afghanistan), the Pamir Mountains, the Himalayas, the Yangtze, and Japan. Within this zone, Mahan asserted that there were no strong states capable of withstanding outside influence or capable even of maintaining stability within their own borders. So whereas the political situations to the north and south were relatively stable and determined, the middle remained \"debatable and debated ground.\""} {"text":"North of the 40th parallel, the vast expanse of Asia was dominated by the Russian Empire. Russia possessed a central position on the continent, and a wedge-shaped projection into Central Asia, bounded by the Caucasus Mountains and Caspian Sea on one side and the mountains of Afghanistan and Western China on the other side. To prevent Russian expansionism and achievement of predominance on the Asian continent, Mahan believed pressure on Asia's flanks could be the only viable strategy pursued by sea powers."} {"text":"South of the 30th parallel lay areas dominated by the sea powers \u2013 the United Kingdom, the United States, Germany, and Japan. 
To Mahan, the possession of India by the United Kingdom was of key strategic importance, as India was best suited for exerting balancing pressure against Russia in Central Asia. The United Kingdom's predominance in Egypt, China, Malaysia, Australia, Canada, and South Africa was also considered important."} {"text":"The strategy of sea powers, according to Mahan, ought to be to deny Russia the benefits that come from sea commerce. He noted that both the Turkish Straits and the Danish Straits could be closed by a hostile power, thereby denying Russia access to the sea. Further, this disadvantageous position would reinforce Russia's proclivity toward expansionism in order to obtain wealth or warm-water ports. Natural geographic targets for Russian expansionism in search of access to the sea would therefore be the Chinese seaboard, the Persian Gulf, and Asia Minor."} {"text":"In this contest between land power and sea power, Russia would find itself allied with France (a natural sea power, but in this case necessarily acting as a land power), arrayed against Germany, Britain, Japan, and the United States as sea powers. Further, Mahan conceived of a unified, modern state composed of Turkey, Syria, and Mesopotamia, possessing an efficiently organized army and navy, to stand as a counterweight to Russian expansion."} {"text":"Further dividing the map by geographic features, Mahan stated that the two most influential lines of division would be the Suez Canal and the Panama Canal. As most developed nations and resources lay above the North\u2013South divide, politics and commerce north of the two canals would be of much greater importance than those occurring south of the canals. As such, the great progress of historical development would not flow from north to south, but from east to west, in this case leading toward Asia as the locus of advance."} {"text":"Halford J. 
Mackinder's major work, \"Democratic Ideals and Reality: A Study in the Politics of Reconstruction\", appeared in 1919. It presented his theory of the Heartland, made a case for fully taking geopolitical factors into account at the Paris Peace conference, and contrasted (geographical) reality with Woodrow Wilson's idealism. The book's most famous quote was: \"Who rules East Europe commands the Heartland; Who rules the Heartland commands the World Island; Who rules the World Island commands the World.\""} {"text":"This message was composed to convince the world statesmen at the Paris Peace conference of the crucial importance of Eastern Europe; as the strategic route to the Heartland, Eastern Europe was interpreted as requiring a strip of buffer states to separate Germany and Russia. These were created by the peace negotiators but proved to be ineffective bulwarks in 1939 (although this may be seen as a failure of other, later statesmen during the interbellum). The principal concern of his work was to warn of the possibility of another major war (a warning also given by economist John Maynard Keynes)."} {"text":"Mackinder was anti-Bolshevik, and as British High Commissioner in Southern Russia in late 1919 and early 1920, he stressed the need for Britain to continue her support to the White Russian forces, which he attempted to unite."} {"text":"Mackinder's work paved the way for the establishment of geography as a distinct discipline in the United Kingdom. His role in fostering the teaching of geography is probably greater than that of any other single British geographer."} {"text":"Whilst Oxford did not appoint a professor of Geography until 1934, both the University of Liverpool and the University of Wales, Aberystwyth, established professorial chairs in Geography in 1917. 
Mackinder himself became a full professor of Geography at the University of London (London School of Economics) in 1923."} {"text":"Mackinder is often credited with introducing two new terms into the English language: \"manpower\" and \"heartland\"."} {"text":"The Heartland Theory was enthusiastically taken up by the German school of Geopolitik, in particular by its main proponent Karl Haushofer. Geopolitik was later embraced by the German Nazi regime in the 1930s. The German interpretation of the Heartland Theory is referred to explicitly (without mentioning the connection to Mackinder) in \"The Nazis Strike\", the second of Frank Capra's \"Why We Fight\" series of American World War II propaganda films."} {"text":"The Heartland Theory, and more generally classical geopolitics and geostrategy, were extremely influential in the making of US strategic policy during the Cold War."} {"text":"Evidence of Mackinder's Heartland Theory can be found in the works of the geopolitician Dimitri Kitsikis, particularly in his geopolitical model of the \"Intermediate Region\"."} {"text":"Influenced by the works of Alfred Thayer Mahan, as well as the German geographers Carl Ritter and Alexander von Humboldt, Friedrich Ratzel would lay the foundations for \"geopolitik\", Germany's unique strain of geopolitics."} {"text":"Ratzel wrote on the natural division between land powers and sea powers, agreeing with Mahan that sea power was self-sustaining, as the profit from trade would support the development of a merchant marine. However, his key contributions were the concept of \"raum\" and the organic theory of the state. He theorized that states were organic and growing, and that borders were only temporary, representing pauses in their natural movement. 
\"Raum\" was the land, spiritually connected to a nation (in this case, the German peoples), from which the people could draw sustenance, find adjacent inferior nations which would support them, and which would be fertilized by their \"kultur\" (culture)."} {"text":"Ratzel's ideas would influence the works of his student Rudolf Kjell\u00e9n, as well as those of General Karl Haushofer."} {"text":"Rudolf Kjell\u00e9n was a Swedish political scientist and student of Friedrich Ratzel. He first coined the term \"geopolitics.\" His writings would play a decisive role in influencing General Karl Haushofer's \"geopolitik\", and indirectly the future Nazi foreign policy."} {"text":"His writings focused on five central concepts that would underlie German \"geopolitik\":"} {"text":"Karl Haushofer's geopolitik expanded upon that of Ratzel and Kjell\u00e9n. While the latter two conceived of geopolitik as the state-as-an-organism-in-space put to the service of a leader, Haushofer's Munich school specifically studied geography as it related to war and designs for empire. The behavioral rules of previous geopoliticians were thus turned into dynamic normative doctrines for action on lebensraum and world power."} {"text":"Haushofer defined geopolitik in 1935 as \"the duty to safeguard the right to the soil, to the land in the widest sense, not only the land within the frontiers of the Reich, but the right to the more extensive Volk and cultural lands.\" Culture itself was seen as the most conducive element to dynamic expansion. Culture provided a guide as to the best areas for expansion, and could make expansion safe, whereas solely military or commercial power could not."} {"text":"To Haushofer, the existence of a state depended on living space, the pursuit of which must serve as the basis for all policies. Germany had a high population density, whereas the old colonial powers had a much lower density: a virtual mandate for German expansion into resource-rich areas. 
A buffer zone of territories or insignificant states on one's borders would serve to protect Germany."} {"text":"Closely linked to this need was Haushofer's assertion that the existence of small states was evidence of political regression and disorder in the international system. The small states surrounding Germany ought to be brought into the vital German order. These states were seen as being too small to maintain practical autonomy (even if they maintained large colonial possessions) and would be better served by protection and organization within Germany. In Europe, he saw Belgium, the Netherlands, Portugal, Denmark, Switzerland, Greece and the \"mutilated alliance\" of Austria-Hungary as supporting his assertion."} {"text":"Haushofer and the Munich school of geopolitik would eventually expand their conception of lebensraum and autarky well past a restoration of the German borders of 1914 and \"a place in the sun.\" They set as goals a New European Order, then a New Afro-European Order, and eventually a Eurasian Order. This concept became known as a pan-region, taken from the American Monroe Doctrine and the idea of national and continental self-sufficiency. This was a forward-looking refashioning of the drive for colonies, something that geopoliticians did not see as an economic necessity, but more as a matter of prestige, and of putting pressure on older colonial powers. The fundamental motivating force was not economic, but cultural and spiritual."} {"text":"Beyond being an economic concept, pan-regions were a strategic concept as well. Haushofer acknowledged the strategic concept of the Heartland put forward by Halford Mackinder. If Germany could control Eastern Europe and subsequently Russian territory, it could control a strategic area to which hostile sea power could be denied. 
Allying with Italy and Japan would further augment German strategic control of Eurasia, with those states becoming the naval arms protecting Germany's insular position."} {"text":"Nicholas J. Spykman was a Dutch-American geostrategist, known as the \"godfather of containment.\" His geostrategic work, \"The Geography of the Peace\" (1944), argued that the balance of power in Eurasia directly affected United States security."} {"text":"George F. Kennan, U.S. ambassador to the Soviet Union, laid out the seminal Cold War geostrategy in his \"Long Telegram\" and \"The Sources of Soviet Conduct\". He coined the term \"containment\", which would become the guiding idea for U.S. grand strategy over the next forty years, although the term would come to mean something significantly different from Kennan's original formulation."} {"text":"Kennan advocated what was called \"strongpoint containment.\" In his view, the United States and its allies needed to protect the productive industrial areas of the world from Soviet domination. He noted that of the five centers of industrial strength in the world\u2014the United States, Britain, Japan, Germany, and Russia\u2014the only contested area was that of Germany. Kennan was concerned about maintaining the balance of power between the U.S. and the USSR, and in his view, only these few industrialized areas mattered."} {"text":"Here Kennan differed from Paul Nitze, whose seminal Cold War document, NSC 68, called for \"undifferentiated or global containment,\" along with a massive military buildup. Kennan saw the Soviet Union as an ideological and political challenger rather than a true military threat. There was no reason to fight the Soviets throughout Eurasia, because those regions were not productive, and the Soviet Union was already exhausted from World War II, limiting its ability to project power abroad. Therefore, Kennan disapproved of U.S. 
involvement in Vietnam, and later spoke out critically against Reagan's military buildup."} {"text":"Henry Kissinger pursued two geostrategic objectives while in office: a deliberate move to shift the polarity of the international system from bipolar to tripolar, and the designation of regional stabilizing states in connection with the Nixon Doctrine. In Chapter 28 of his book \"Diplomacy\", Kissinger discusses the \"opening of China\" as a deliberate strategy to change the balance of power in the international system, taking advantage of the split within the Sino-Soviet bloc. The regional stabilizers were pro-American states which would receive significant U.S. aid in exchange for assuming responsibility for regional stability. Among the regional stabilizers designated by Kissinger were Zaire, Iran, and Indonesia."} {"text":"Zbigniew Brzezinski laid out his most significant contribution to post-Cold War geostrategy in his 1997 book \"The Grand Chessboard\". He defined four regions of Eurasia and the ways in which the United States ought to design its policy toward each region in order to maintain its global primacy. The four regions (echoing Mackinder and Spykman) are:"} {"text":"In his subsequent book, \"The Choice\", Brzezinski updates his geostrategy in light of globalization, 9\/11, and the intervening six years between the two books."} {"text":"In his article \"America's New Geostrategy\", he discusses the need for a shift in America's geostrategy to avoid the collapse that many scholars predict. He points out that:"} {"text":"A strategy is a long-term plan of action designed to achieve a particular goal."} {"text":"The most basic way to evaluate one's position is to count the total value of pieces on both sides. The point values used for this purpose are based on experience. Usually pawns are considered to be worth one point, knights and bishops three points each, rooks five points, and queens nine points. 
The fighting value of the king in the endgame is approximately four points. These basic values are modified by other factors such as the \"position of the pieces\" (e.g. advanced pawns are usually more valuable than those on their starting squares), \"coordination between pieces\" (e.g. a bishop pair usually coordinates better than a bishop plus a knight), and the \"type of position\" (knights are generally better in closed positions with many pawns, while bishops are more powerful in open positions)."} {"text":"Another important factor in the evaluation of chess positions is the pawn structure or pawn skeleton. Since pawns are the most immobile and least valuable of the chess pieces, the pawn structure is relatively static and largely determines the strategic nature of the position. Weaknesses in the pawn structure, such as isolated, doubled, or backward pawns and holes, once created, are usually permanent. Care must therefore be taken to avoid them unless they are compensated by another valuable asset, such as the possibility to develop an attack."} {"text":"A material advantage applies both strategically and tactically. Generally, more pieces or an aggregate of more powerful pieces means greater chances of winning. A fundamental strategic and tactical rule is to capture opponent pieces while preserving one's own."} {"text":"Bishops and knights are called \"minor pieces\". A knight is about as valuable as a bishop, but less valuable than a rook. Rooks and the queen are called \"major pieces\". Bishops are usually considered slightly better than knights in open positions, such as toward the end of the game when many of the pieces have been captured, whereas knights have an advantage in closed positions. Having two bishops (the bishop pair) is a particularly powerful weapon, especially if the opposing player lacks one or both of their bishops."} {"text":"Three pawns are likely to be more useful than a knight in the endgame, but in the middlegame, a knight is often more powerful. 
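The point values above can be expressed as a simple material count. The following Python sketch (the piece letters and the `material_balance` helper are illustrative conventions, not a standard API) scores each side and returns White's advantage in pawns:

```python
# Standard point values, as given in the text (kings are omitted:
# they cannot be captured, so they do not enter a material count).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(white_pieces, black_pieces):
    """Return White's material advantage in pawns.

    Each argument is a string of piece letters, e.g. "QRRBNPPPP".
    """
    def total(pieces):
        return sum(PIECE_VALUES[p] for p in pieces)
    return total(white_pieces) - total(black_pieces)

# White: queen, rook, four pawns = 18; Black: two rooks, five pawns = 15.
print(material_balance("QRPPPP", "RRPPPPP"))  # prints 3
```

As the text notes, such a count is only a baseline; piece placement, coordination, and the type of position all modify it.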
Two minor pieces are stronger than a single rook, and two rooks are slightly stronger than a queen. The king's bishop is slightly more valuable in the opening, as it can attack the vulnerable f7\/f2-square. A rook is more valuable when connected with another rook or queen; consequently, doubled rooks are worth more than two unconnected rooks."} {"text":"One commonly used simple scoring system is:"} {"text":"Other things being equal, the side that controls more space on the board has an advantage. More space means more options, which can be exploited both tactically and strategically. A player who has all pieces developed and no tactical tricks or promising long-term plan should try to find a move that enlarges their influence, particularly in the center. However, in some openings, one player accepts less space for a time, to set up a counterattack in the middlegame. This is one of the concepts behind hypermodern play."} {"text":"The easiest way to gain space is to push the pawn skeleton forward. However, one must be careful not to overextend. If the opponent succeeds in getting a protected piece behind enemy lines, this piece can become such a serious problem that a piece with a higher value might have to be exchanged for it."} {"text":"Larry Evans gives a method of evaluating space. The method (for each side) is to count the number of squares attacked or occupied on the opponent's side of the board. In this diagram from the Nimzo-Indian Defense, Black attacks four squares on White's side of the board (d4, e4, f4, and g4). White attacks seven squares on Black's side of the board (b5, c6, e6, f5, g5, and h6 \u2013 counting b5 twice) and occupies one square (d5). White has a space advantage of eight to four and Black is cramped."} {"text":"Control of the center consists of placing pieces so that they attack the central four squares of the board. 
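Evans's counting method lends itself to automation. The sketch below uses the third-party python-chess package (an assumption; any move-generation library would do) and counts, for each square in the opponent's half of the board, every attack on it (so a square attacked by two pieces counts twice, as b5 does in the example above) plus one for each such square occupied by one's own piece:

```python
import chess  # third-party package, installed with: pip install python-chess

def space_count(board, color):
    """Evans's space measure: attacks on squares in the opponent's
    half of the board, counted with multiplicity, plus one for each
    such square occupied by one of `color`'s pieces."""
    # Ranks 5-8 (indices 4-7) are White's target half; ranks 1-4 are Black's.
    if color == chess.WHITE:
        half = [sq for sq in chess.SQUARES if chess.square_rank(sq) >= 4]
    else:
        half = [sq for sq in chess.SQUARES if chess.square_rank(sq) <= 3]
    count = 0
    for sq in half:
        count += len(board.attackers(color, sq))  # each attacker counts once
        piece = board.piece_at(sq)
        if piece is not None and piece.color == color:
            count += 1  # occupation counts as well
    return count

board = chess.Board()
print(space_count(board, chess.WHITE))  # 0: no piece yet reaches Black's half
board.push_san("e4")
print(space_count(board, chess.WHITE))  # 5: d5, f5 (pawn), b5, a6 (bishop), h5 (queen)
```

This is a sketch of the counting idea only; it says nothing about how much a space edge is worth, which, as with material, depends on the rest of the position.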
However, a piece being placed on a central square does not necessarily mean it controls the center, e.g., a knight on a central square does not attack any central squares. Conversely, a piece does not have to be on a central square to control the center. For example, the bishop can control the center from afar."} {"text":"Control of the center is generally considered important because tactical battles often take place around the central squares, from where pieces can access most of the board. Center control allows more movement and more possibility for attack and defense."} {"text":"Chess openings try to control the center while developing pieces. Hypermodern openings are those that control the center with pieces from afar (usually from the flank, such as with a fianchetto); the older Classical openings control it with pawns."} {"text":"The initiative belongs to the player who can make threats that cannot be ignored, such as checking the opponent's king. They thus put their opponent in the position of having to use their turns responding to threats rather than making their own, hindering the development of their pieces. The player with the initiative is generally attacking and the other player is generally defending."} {"text":"It is important to defend one's pieces even if they are not directly threatened. This helps forestall possible future attacks from the opponent. If a defender must be added at a later time, this may cost a tempo or even be impossible due to a fork or discovered attack. The approach of always defending one's pieces has an antecedent in the theory of Aron Nimzowitsch, who referred to it as \"overprotection.\" Similarly, if one spots undefended enemy pieces, one should immediately look to take advantage of their weakness."} {"text":"Even a defended piece can be vulnerable. If the defending piece is also defending something else, it is called an overworked piece, and may not be able to fulfill its task. 
When there is more than one attacking piece, the number of defenders must also be increased, and their values taken into account. In addition to defending pieces, it is also often necessary to defend key squares, open files, and the position of the king. These situations can easily occur if the pawn structure is weak."} {"text":"To exchange pieces means to capture a hostile piece and then allow a piece of the same value to be captured. As a rule of thumb, exchanging pieces eases the task of the defender, who typically has less room to operate in."} {"text":"Exchanging pieces is usually desirable to a player with an existing advantage in material, since it brings the endgame closer and thereby leaves the opponent with less ability to recover ground. In the endgame even a single pawn advantage may be decisive. Exchanging also benefits the player who is being attacked, the player who controls less space, and the player with the better pawn structure."} {"text":"When playing against stronger players, many beginners attempt to constantly exchange pieces \"to simplify matters\". However, stronger players are often relatively stronger in the endgame, whereas errors are more common during the more complicated middlegame."} {"text":"Note that \"the exchange\" may also specifically mean a rook exchanged for a bishop or knight. The phrase \"going up the exchange\" means capturing a rook in exchange for a bishop or knight, as that is a materially better trade. Conversely, \"going down an exchange\" means losing a rook but capturing a bishop or knight, a materially worse trade."} {"text":"In the endgame, passed pawns, unhindered by enemy pawns on their way to promotion, are strong, especially if advanced or protected by another pawn. A passed pawn on the sixth rank is roughly as strong as a knight or bishop and often decides the game. 
(Also see isolated pawn, doubled pawns, backward pawn, connected pawns.)"} {"text":"Since knights can easily be chased away by pawn moves, it is often advantageous for knights to be placed in \"holes\" in the enemy position as outposts\u2014squares where they cannot be attacked by pawns. Such a knight on the fifth rank is a strong asset. The ideal position for a knight is the opponent's third rank, when it is supported by one or two pawns. A knight at the edge or corner of the board controls fewer squares than one on the board's interior, thus the saying: \"A knight on the rim is dim!\""} {"text":"A king and one knight is not sufficient material to checkmate an opposing lone king. A king and two knights can checkmate a lone king, but checkmate cannot be forced (see Two knights endgame)."} {"text":"A bishop always stays on squares of the color it started on, so once one of them is gone, the squares of that color become more difficult to control. When this happens, pawns moved to squares of the other color do not block the bishop, and enemy pawns directly facing them are stuck on the vulnerable color."} {"text":"A \"fianchettoed\" bishop, e.g. at g2 after pawn g2\u2013g3, can provide a strong defense for the castled king on g1 and often exert pressure on the long diagonal h1\u2013a8. After a fianchetto, giving up the bishop can leave the holes in the pawn chain weak; doing so in front of the castled king may thus affect its safety."} {"text":"In general, a bishop is of roughly equal value to a knight. In certain circumstances, one can be more powerful than the other. If the game is \"closed\" with many interlocked pawn formations, the knight tends to be stronger, because it can hop over the pawns while they block the bishop. A bishop is also weak if it is restricted by its own pawns, especially if they are blocked and on the bishop's color. 
Once one bishop is lost, the remaining bishop is considered weaker, since the opponent can now plan to play on the squares of the lost bishop's color."} {"text":"In an open position with action on both sides of the board, the bishop tends to be stronger because of its long range. This is especially true in the endgame; if passed pawns race on opposite sides of the board, the player with a bishop usually has better winning chances than a player with a knight."} {"text":"A king and a bishop is not sufficient material to checkmate an opposing lone king, but two bishops and a king can checkmate an opposing lone king easily."} {"text":"Rooks have more scope of movement on half-open files (ones with no pawns of one's own color). Rooks on the seventh rank can be very powerful, as they attack pawns that can only be defended by other pieces, and they can restrict the enemy king to its back rank. A pair of rooks on the player's seventh rank is often a sign of a winning position."} {"text":"In middlegames and endgames with a passed pawn, Tarrasch's rule states that rooks, both friend and foe of the pawn, are usually strongest \"behind\" the pawn rather than in front of it."} {"text":"A king and a rook is sufficient material to checkmate an opposing lone king, although it is a little harder than checkmating with king and queen; thus the rook's distinction as a major piece above the knight and bishop."} {"text":"Queens are the most powerful pieces. They have great mobility and can make many threats at once. They can act as a rook and as a bishop at the same time. For these reasons, checkmate attacks involving a queen are easier to achieve than those without one. Although powerful, the queen is also easily harassed. Thus, it is generally wise to delay developing the queen until after the knights and bishops, to prevent the queen from being attacked by minor pieces and losing tempo. 
When a pawn is promoted, most of the time it is promoted to a queen."} {"text":"During the middlegame, the king is often best protected in a corner behind its pawns; such a position is usually achieved by castling. If the rooks and queen leave the first rank (commonly called that player's \"back rank\"), however, an enemy rook or queen can checkmate the king by invading the first rank, commonly called a back-rank checkmate. Moving one of the pawns in front of the king (making luft) can give it an escape square, but may otherwise weaken the king's safety. One must therefore weigh these trade-offs carefully."} {"text":"Castling helps protect the king and \"connects\" the player's two rooks so that they may protect each other. This reduces the threat of a back-rank skewer, in which the king is skewered and the rook behind it captured."} {"text":"The king can become a strong piece in the endgame. With reduced material, a quick checkmate becomes less of a concern, and moving the king towards the center of the board gives it more opportunities to make threats and actively influence play."} {"text":"Considerations for a successful long-term deployment."} {"text":"Chess strategy consists of setting and achieving long-term goals during the game\u2014for example, where to place different pieces\u2014while tactics concentrate on immediate maneuver. 
These two parts of chess thinking cannot be completely separated, because strategic goals are mostly achieved by means of tactics, while tactical opportunities are based on the previous strategy of play."} {"text":"Because of different strategic and tactical patterns, a game of chess is usually divided into three distinct phases: the opening, usually the first 10 to 25 moves, when players develop their armies and set the stage for the coming battle; the middlegame, the developed phase of the game; and the endgame, when most of the pieces are gone and the kings start to take an active part in the struggle."} {"text":"A chess opening is the group of initial moves of a game (the \"opening moves\"). Recognized sequences of opening moves are referred to as \"openings\" and have been given names such as the Ruy Lopez or Sicilian Defence. They are catalogued in reference works such as the \"Encyclopaedia of Chess Openings\". For anyone but a master, it is usually better to follow a standard opening than to invent a new variation."} {"text":"There are dozens of different openings, varying widely in character from quiet positional play (e.g. the R\u00e9ti Opening) to very aggressive play (e.g. the Latvian Gambit). In some opening lines, the exact sequence considered best for both sides has been worked out to 30\u201335 moves or more. Professional players spend years studying openings, and continue doing so throughout their careers, as opening theory continues to evolve."} {"text":"The fundamental strategic aims of most openings are similar:"} {"text":"During the opening, some pieces have a recognized optimum square they try to reach. Hence, an optimum deployment could be to advance the king's and queen's pawns two squares each, then develop the knights so that they protect the center pawns and give additional control of the center. One can then deploy the bishops, protected by the knights, to pin the opponent's knights and pawns. 
The optimum opening ends with castling, moving the king to safety and deploying for a strong back rank, with a rook on an open or central file."} {"text":"Apart from these fundamentals, other strategic plans or tactical sequences may be employed in the opening."} {"text":"Most players and theoreticians consider that White, by virtue of the first move, begins the game with a small advantage. Black usually strives to neutralize White's advantage and achieve equality, or to develop counterplay in an unbalanced position."} {"text":"The middlegame is the part of the game when most pieces have been developed. Because opening theory has ended, players have to assess the position, form plans based on its features, and at the same time take into account the tactical possibilities in the position."} {"text":"Typical plans or strategic themes (for example the minority attack, that is, the advance of pawns against an opponent who has more pawns on the queenside) are often appropriate only for some pawn structures, resulting from a specific group of openings. The study of openings should therefore be connected with the preparation of plans typical for the resulting middlegames."} {"text":"The middlegame is also the phase in which most combinations occur. Middlegame combinations are often connected with an attack against the opponent's king; some typical patterns have their own names, for example Boden's Mate or the Lasker\u2013Bauer combination."} {"text":"Another important strategic question in the middlegame is whether and how to reduce material and transform into an endgame (i.e. simplification). For example, minor material advantages can generally be transformed into victory only in an endgame, and therefore the stronger side must choose an appropriate way to achieve an ending. 
Not every reduction of material is good for this purpose; for example, if one side keeps a light-squared bishop and the opponent has a dark-squared one, the transformation into a \"bishops and pawns\" ending is usually advantageous only for the weaker side, because an endgame with opposite-colored bishops is likely to be a draw, even with an advantage of one or two pawns."} {"text":"The endgame (or \"end game\" or \"ending\") is the stage of the game when there are few pieces left on the board. There are three main strategic differences between the earlier stages of the game and the endgame:"} {"text":"Endgames can be classified according to the type of pieces that remain on the board. Basic checkmates are positions where one side has only a king and the other side has one or two pieces and can checkmate the opposing king, with the pieces working together with their king. For example, king and pawn endgames involve only kings and pawns on one or both sides, and the task of the stronger side is to promote one of the pawns. Other more complicated endings are classified according to the pieces on the board other than kings, e.g. \"rook and pawn versus rook endgame\"."} {"text":"A strategist is a person with responsibility for the formulation and implementation of a strategy. Strategy generally involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources). The senior leadership of an organization is generally tasked with determining strategy. Strategy can be intended or can emerge as a pattern of activity as the organization adapts to its environment or competes. 
It involves activities such as strategic planning and strategic thinking."} {"text":"The strategy role exists in a variety of organizations and fields of study."} {"text":"In large corporations, strategic planners or corporate financial planning and analysis (FP&A) personnel are involved in the formulation and implementation of the organization's strategy. The strategy is typically set by business leaders such as the chief executive officer and key business or functional leaders, and is reviewed by the board of directors."} {"text":"An AI strategist uses evidence and reason to make circumstance-dependent decisions that shape the development of AI towards a set of desired outcomes. The scope of AI development can range from small organizations to the global landscape."} {"text":"A design strategist combines the innovative, perceptive and holistic insights of a designer with the pragmatic and systemic skills of a planner to guide strategic direction in the context of business needs, brand intent, design quality and customer values."} {"text":"An economic strategist is a person who can create a sustainable commercial advantage by applying innovative and quantitative ideas and systems at a sell-side financial institution."} {"text":"A political strategist is a multi-disciplinary strategist who works within political campaigns. Also known as a political consultant, the political strategist advises a campaign on a range of activities such as media, resourcing, opposition research, opinion polling and engagement strategy."} {"text":"A sports strategist is a professional who performs scouting and analysis of the players involved in an upcoming competitive match. 
Sports strategists typically analyze film footage, organize video libraries, and recommend offensive and defensive strategies in order to capitalize on an opponent's weaknesses."} {"text":"Working closely with investment managers, a principal investment strategist contributes revenue by providing principal investment analytics and alternative product structuring."} {"text":"A sales strategist develops innovative trade ideas and assists in the marketing of those trades to buy-side clients."} {"text":"A banking strategist partners with investment bankers and capital market experts on corporate finance and capital structure analyses to identify and execute banking transactions."} {"text":"A trading strategist contributes revenue to the business in which their team is embedded by developing and delivering innovative trade ideas, models and analytic systems to the trading desk."} {"text":"Within the financial services industry, strategists are known as \u201cstrats\u201d."} {"text":"A military strategist develops strategies in the field of warfare with the objective of outmaneuvering their opponent."} {"text":"An IT strategist develops an IT strategy aligned with the business strategy, implementing systems that make business processes more efficient and productive and can therefore yield a competitive advantage."} {"text":"People with a strategist mindset are generally capable of doing well in many fields because of the traits they possess. Strategists tend to follow career paths that challenge them mentally, and seek to work with people of the same caliber in intelligence and competency. 
Because people with a strategist mindset tend to be single-minded and may not appreciate others' efforts, it is crucial for them to work in a suitable environment."} {"text":"Common careers that strategists tend to choose are:"} {"text":"Strategists can have a variety of backgrounds, such as journalism, speechwriting, data analysis, or telemarketing. People with a background in public relations or advertising can be hired as strategists because of their experience in market research and message delivery."} {"text":"Carl von Clausewitz (1780\u20131831) was a Prussian military theorist and strategist known for the originality of his ideas, influenced mainly by the Napoleonic Wars. Clausewitz's most famous work, \u201cOn War,\u201d was unfinished and published posthumously."} {"text":"Sir Winston Churchill (1874\u20131965) was Prime Minister of the United Kingdom from 1940 to 1945 and from 1951 to 1955. Churchill was known for his leadership during World War II; however, a number of controversial incidents caused his reputation as a strategist to waver between savior and scapegoat. The Battle of Gallipoli, which began on April 25, 1915, was one of the major setbacks of Churchill's military career: the campaign he had pressed for resulted in over 200,000 Allied casualties."} {"text":"Napoleon Bonaparte (August 15, 1769 \u2013 May 5, 1821) was a military general who established the French Empire in 1804, becoming emperor. Napoleon rose to prominence during the French Revolution. Bonaparte led the French army to victory at the Battle of Marengo, fought on June 14, 1800. 
His strategic thinking and planning allowed the French to win despite being outnumbered and short of resources."} {"text":"Other notable political strategists include Roger Ailes, Bill Moyers, Bob Shrum, Ben Rhodes, Kellyanne Conway, David Plouffe, and James A. Baker III, more recent strategists who worked on presidential campaigns, including those of Donald J. Trump, Barack Obama, George H. W. Bush, Hillary Clinton, and Bill Clinton."} {"text":"A strategic reserve is the reserve of a commodity or items that is held back from normal use by governments, organisations, or businesses in pursuance of a particular strategy or to cope with unexpected events."} {"text":"Another definition, issued by the US Department of Defense in 2005, describes a strategic reserve as follows: \"An external reinforcing force which is not committed in advance to a specific Major Subordinate Command, but which can be deployed to any region for a mission decided at the time by the Major NATO Commander.\""} {"text":"There are several national and international projects aiming to preserve the existing natural wealth and diversity in case of mass extinction or a global catastrophe. The Svalbard Global Seed Vault facility, opened in 2008, focuses on collecting duplicate samples of plant seeds from all around the world and currently contains close to 1 million different agricultural seed samples. Its final storage capacity is said to be 4.5 million seed samples. 
Another such institution, the Frozen Ark, concentrates on preserving the DNA of endangered animal species for future generations."} {"text":"Certain countries create their own unusual strategic reserves of the food or commodities they value most, such as cotton, butter, pork, raisins or even maple syrup."} {"text":"\"Strategic reserve is a volume-based capacity mechanism in which a centrally established capacity is kept outside of the electricity market and is only used if the market participants do not offer enough generation to meet short-term demand.\""} {"text":"A slasher is a basketball player who primarily drives (slashes) to the basket when on offense. A slasher is typically a guard, but can also be a forward. Slashers are fast, athletic players who attempt to get close to the basket for a layup, dunk or teardrop shot. This style of high-percentage two-point play is commonly referred to as slashing."} {"text":"Slashers usually take more free throws than other players because of the contact they draw as they constantly and aggressively attack the basket. Many slashers earn extra free throws by \"drawing fouls\", that is, deliberately causing contact with a defending player. They may spend many hours working on their free-throw percentage."} {"text":"Many players who begin as slashers develop other parts of their game (especially the jump shot) as age and injuries make them less effective as slashers; for example, Michael Jordan and Kobe Bryant both developed a fadeaway jump shot as they got older."} {"text":"Line defense is a strategy used in basketball. It is referred to as the \"line defense\" because of its formation on the court, which consists of two lines of defense: three players at the front (at the half-court center line) and two players behind (between the center line and the team's own key). The line was the first zone concept to be used in basketball. 
The line defense was developed to counter the fast break plays being developed and adopted at the time, and it was the catalyst for the later 3-2 zone defense."} {"text":"Teams started to break down the line defense when they were able to get one or more offensive players behind the line before the defending team could set it up. This helped to create plays such as the fly fast break, the fast break, and the 2-out fast break."} {"text":"A back screen is a basketball maneuver involving two players, called a cutter and a screener. The screener remains stationary on the court while the cutter moves toward the basket and attempts to use the screener to separate himself from his defender."} {"text":"The screener positions himself with his back to the basket on the same side of the court as the cutter. The cutter positions himself outside of and above the screener: \"outside\" means that the cutter is closer to the sideline than the screener, and \"above\" means that the cutter is closer to the midcourt line. Neither player has the ball. With the screener completely stationary, the cutter moves toward the basket and passes close enough to the screener that they almost touch shoulders. If the cut is properly made, the player defending the cutter will be disrupted by the screener (who has not moved while setting the screen) and the cutter will have an opportunity to receive a pass very near the basket."} {"text":"A back screen becomes effective when the cutter is defended very closely. An over-playing defender often has their back turned to the basket and cannot see the screen being set. Without time to adjust, the defender will collide with the screener."} {"text":"The 5-man weave is a basketball drill introduced at Lindsey Wilson College in Columbia, Kentucky, in 1991. Assistant coach Ed Yuhas introduced it as a pre-season conditioning drill. 
The initial drill consisted of 5 players spaced evenly along the baseline, with the middle player holding the ball. On the smack of the ball, players pass the ball repeatedly to the nearest player while traveling up the court, then run behind two players; hence the terminology \"pass and go behind two\"."} {"text":"Upon reaching the other end of the court, the drill turns into a 3-on-2 drill, with the player who shot the layup and the last passer returning to play defense. The ballhandler among the group of 3 retreats to the other end after attacking the goal, and the 2 defenders attack the single defender, resulting in a 2-on-1 to the other side. These remaining 3 players then execute a 3-man weave to the far baseline."} {"text":"This drill became very popular among high school and small-college coaches throughout the South and Midwest as Coach Yuhas introduced it on the summer camp circuit. He even introduced it at Coach Mike Dunleavy's Los Angeles Lakers camp in 1992. Today the drill is used in programs of all sizes across the country."} {"text":"This drill was featured in the \"Hoops and Caroms International Playbook\", authored by Ed Yuhas in 1992, as well as in \"More Five-Star Basketball Drills\" by Howard Garfinkel in 2003."} {"text":"Zone defense is a type of defense, used in team sports, which is the alternative to man-to-man defense; instead of each player guarding a corresponding player on the other team, each defensive player is given an area (a zone) to cover."} {"text":"A zone defense can be used in many sports where defensive players guard players on the other team. Zone defenses and zone principles are commonly used in basketball, American football, association football, ice hockey, lacrosse, Australian rules football, netball and ultimate, among others."} {"text":"The names given to zone defenses start with the number of players at the front of the zone (farthest from the goal) followed by the numbers of players in the rear zones. 
For example, in a 2\u20133 zone, two defenders cover areas at the top of the zone (near the top of the key) while three defenders cover areas near the baseline."} {"text":"Match-up zone is a hybrid man-to-man and zone defense in which players apply man-to-man defense to whichever opposing player enters their area. John Chaney, former head coach of Temple University, is the most famous proponent of this defense. Hybrid defenses also include the box-and-one, in which four defenders are in a 2\u20132 zone and one defender guards a specific player on the offense. A variant of this is the triangle-and-two, in which three defenders are in a 2\u20131 zone and two defenders guard two specific offensive players."} {"text":"Zone defenses are common in international, college, and youth competition. In the National Basketball Association, zone defenses were prohibited until the 2001\u20132002 season, and most teams do not use them as a primary defensive strategy. The NBA has a defensive three-second violation rule, which makes it more difficult for teams to play zone, since such defenses usually position a player in the middle of the key to stop penetration. The Dallas Mavericks under coach Rick Carlisle are an example of an NBA team that has regularly used zone defenses."} {"text":"Frank Lindley, basketball coach at Newton High School in Newton, Kansas, from 1914 to 1945, was among the first to use the zone defense and other innovations in the game, and he authored numerous books about basketball. He finished his career with a record of 594\u2013118 and guided the Railroaders to ten state titles and seven second-place finishes. Jim Boeheim, coach of the Syracuse Orange men's basketball team, is famous for using a 2\u20133 zone that is among the best in the NCAA. 
His zone, which typically features athletic, disruptive, and aggressive defenders, has become a prototype for other teams, including the United States men's national basketball team, where he has spent time as an assistant coach."} {"text":"Some of the reasons for using a zone defense are:"} {"text":"While strategies for countering zone defenses vary and often depend on the strengths and weaknesses of both the offensive and defensive teams, there are some general principles that offensive teams typically use when facing a zone."} {"text":"A zone defense in American football is a type of \"pass coverage\". See American football defensive strategy and zone blocking."} {"text":"The zone defence tactic, borrowed from basketball, was introduced into Australian football in the late 1980s by Robert Walls and revolutionized the game. It was used most effectively by Essendon Football Club coach Kevin Sheedy."} {"text":"Another kick-in technique is the \"huddle\", often used before the zone, which involves all of the players from the non-kicking team huddling together and then breaking in different directions. The kicker typically aims in whichever direction the designated target (typically the ruckman) runs."} {"text":"In ice hockey, players defend zones in the neutral zone trap and left wing lock."} {"text":"In lacrosse, a zone defense is not used as often as the standard man-to-man defense. It has been used effectively at the D-III level by schools such as Wesleyan University, which almost always uses a 6-man \u201cbacker\u201d zone: three players up top and three down low, each staying in his zone and rotating as little as possible. 
When teams are a man down, many employ a \u201cbox and one\u201d zone defense, in which the four outside players stay in their designated zones while the fifth player stays on the crease man."} {"text":"Netball is a sport similar to basketball, with similar strategies and tactics employed, although with seven players per team. Zone defense is one of the main defensive strategies employed by teams, along with one-on-one defense. Common variants include the center-court block, box-and-two zone, diamond-and-two zone, box-out zone and split-circle zone."} {"text":"Ultimate allows for a number of zone defence tactics, usually employed in poor (such as windy, rainy or snowy) conditions, to discourage long passes and slow the progress of the opposition's movement."} {"text":"The Death Lineup was a lineup of smaller basketball players on the Golden State Warriors of the National Basketball Association (NBA) from 2014 to 2019. Developed under head coach Steve Kerr, it began during the 2014\u201315 run that led to an NBA championship. Unlike typical small-ball units, this group of Warriors was versatile enough to defend larger opponents, while also aiming to create mismatches on offense with their shooting and playmaking skills."} {"text":"The Death Lineup was considered to be indicative of a larger overall trend in the NBA towards \"positionless\" basketball, in which traditional position assignments and roles have less importance."} {"text":"The Death Lineup ended after the 2018\u201319 season, when Kevin Durant left the Warriors for the Brooklyn Nets and Andre Iguodala was traded to the Memphis Grizzlies."} {"text":"After the 2018\u201319 season, the free agent Durant announced that he would sign with the Brooklyn Nets, while Klay Thompson agreed to re-sign with Golden State. 
Eyeing a replacement for Thompson while he recovered from his injury, the Warriors traded Iguodala to the Memphis Grizzlies to free salary cap space to acquire All-Star guard D'Angelo Russell in a sign-and-trade package with Brooklyn for Durant. After Durant's and Iguodala's departures, Warriors CEO Joe Lacob announced his intention to eventually retire their numbers."} {"text":"In 2019\u201320, the Warriors moved into their new arena, Chase Center, which includes a hallway featuring drawings of each member of the Hamptons Five. Golden State finished with a league-worst 15\u201350 record. Thompson missed the entire season rehabbing his injury, and Stephen Curry was limited to five games all season after breaking his left hand in October. The Warriors' season ended prematurely due to the COVID-19 pandemic. As for Russell, he was eventually traded to the Minnesota Timberwolves for former first-overall draft pick Andrew Wiggins."} {"text":"The UCLA High Post Offense is an offensive strategy in basketball used by John Wooden, head coach at the University of California, Los Angeles. Due to UCLA's immense success under Wooden's guidance, the UCLA High Post Offense has become one of the most popular offensive tactics, and elements of it are commonly used at all levels of basketball, including the NBA. Wooden sought the advice of Press Maravich, then coach of NC State, on whether to implement it into his offense."} {"text":"The UCLA High Post offense can be run to both sides of the court and has a variety of options or \"reads\". It is a near relative of Tex Winter's triangle offense, featuring a three-man triangle game on the strong side and a two-man game on the weak side. Its strengths include simplicity, superb offensive rebounding coverage, a weak-side attack, consistent spacing, flexibility based on personnel and the ability to penetrate the defense. 
However, due to the presence of a strong-side high-low-wing triangle formation, the ability to penetrate with the dribble is highly limited."} {"text":"The four corners offense, technically the four corner stall, is an offensive strategy for stalling in basketball. Four players stand in the corners of the offensive half-court while the fifth dribbles the ball in the middle. Most of the time the point guard stays in the middle, but the middle player would periodically switch, temporarily, with one of the corner players. It was a strategy used in college basketball before the shot clock was instituted."} {"text":"The team running the offense typically would seek to score, but only on extremely safe shots. The players in the corners might try to make backdoor cuts, or the point guard could drive the lane."} {"text":"Even if the team wanted to hold the ball until the end of the game, some such strategy was necessary because the rules did not (and still do not) let a player hold the ball for more than five seconds while closely guarded. Some mechanism to facilitate safe passes was therefore needed, which the four corners provided. There were other slowdown strategies, but the four corners was the best known."} {"text":"It was most frequently used to retain a lead by holding on to the ball until the clock ran out. The trailing team would be forced to spread its defense in hopes of getting a steal, which often allowed easy drives to the basket. Sometimes it was used throughout the game to reduce the number of possessions in hopes of getting an upset against a stronger team."} {"text":"The \"5 seconds closely guarded\" rule was originally introduced partly to prevent stalling, and other rule changes were made to the college rules through the 1970s in hopes of eliminating stalling without using a shot clock, as the National Basketball Association had since the 1954\u201355 season. 
(Thus, the four corners has always been a strategy of high school and college basketball.) There was a perception that the NBA shot clock did not allow time to work the ball for a good shot, and that it would reduce the opportunity for varied styles of play."} {"text":"The offense was created by head coach John McClendon (though some credit Neal Baisi of WV Tech in the mid-1950s) and popularized at the Division I level by longtime University of North Carolina at Chapel Hill head coach Dean Smith in the early 1960s. Smith used it to great effect with point guard Phil Ford; it was during Ford's career that some writers referred to the offense as the \"Ford Corners.\""} {"text":"However, by the 1980s, fans were fed up. In the nationally televised 1982 ACC championship game between the University of North Carolina Tar Heels and the University of Virginia Cavaliers, UNC held the ball for roughly the last seven minutes of the second half to nurse a small lead, eventually winning 47\u201345. This style of offense was so distinctive that a local restaurant-bar in Chapel Hill, NC, was called Four Corners in homage to Smith, a local hero."} {"text":"The next year, the ACC and other conferences experimentally introduced a shot clock, along with a three-point line to force the defense to spread out. 
In 1985, the National Collegiate Athletic Association adopted a shot clock nationally, and it added the 3-pointer a year later."} {"text":"On February 21, 2015, the Tar Heels, coached by Smith prot\u00e9g\u00e9 Roy Williams, successfully ran the offense on the opening possession against the Georgia Tech Yellow Jackets as a tribute to the recently deceased Smith."} {"text":"The dribble drive motion is an offensive strategy in basketball, developed by former Pepperdine head coach Vance Walberg during his time as a California high school coach and at Fresno City College."} {"text":"The offense was popularized at the major college level by John Calipari while at the University of Memphis, and was sometimes called the \"Memphis Attack\". Originally called 'AASAA' by Walberg (for \"Attack, Attack, Skip, Attack, Attack\"), the offense is also sometimes known as the 'Walberg offense' or abbreviated to DDM, and has been described as \"Princeton on steroids\"."} {"text":"The offense focuses on spreading the offensive players in the half court, so that helping on dribble penetration or skip passes becomes difficult for the defense, because the help will leave an offensive player open without any defenders near him. For example, a guard can drive through the defensive gaps for a layup or dunk, or pass out to the perimeter if the defense collapses onto him."} {"text":"Like most motion-type offenses, the dribble drive is predicated on reading the defense rather than on set plays, as it relies on the speed and decision making of its players. \"I feel we're teaching kids how to play basketball instead of how to run plays,\" says Walberg of the offense. Coaches who rely on the offense have said that they do most of their coaching work in practices rather than games. However, the offense contains a lot of initial entry sets, which are used as starting points. 
The sets serve as a way to give the defense different looks, to feature a certain player, or to exploit a defensive weakness."} {"text":"In 1997 Vance Walberg developed the offense, which he named the AASAA, meaning \"Attack-Attack-Skip-Attack-Attack\", while coaching at Clovis West High School in Fresno, California. Walberg devised the offense to take advantage of the skills of his point guard Chris Hernandez, later the starting point guard at Stanford. After several years of tweaking the system, he took it with him to Fresno City College, where he coached from 2002 to 2006."} {"text":"While at dinner with Memphis coach John Calipari in October 2003, Walberg described the basic principles of the offense. Calipari implemented the offense for the 2005\u20132006 season at Memphis, for which it is sometimes known as the \"Memphis Attack\" offense. After implementing the offense, Calipari took the Memphis Tigers to great success: his teams made 3 consecutive Elite Eight appearances in the NCAA Tournament and reached the NCAA Men's Basketball Championship Game in 2008. 
That same season, Calipari's Tigers set an NCAA single-season record for most victories, with 38, though the season would later be expunged from the record books under NCAA sanctions imposed on Memphis."} {"text":"In 2012 Calipari's Kentucky Wildcats won the NCAA Championship utilizing the dribble drive offense."} {"text":"By the 2007\u20132008 basketball season, at least 224 junior high, high school, college, and professional teams were using some form of the dribble drive motion."} {"text":"During the 2012\u201313 NBA season, the Denver Nuggets, led by coach George Karl, implemented a version of the dribble drive offense behind point guards Ty Lawson and Andre Miller, leading them to the highest-ranked offense in the NBA by points scored and the 3rd seed in the Western Conference, while winning a franchise-best 57 games."} {"text":"Filipino coach Chot Reyes has used the dribble-drive motion offense for his Talk 'N Text Tropang Texters team of the Philippine Basketball Association (PBA), which has resulted in his team winning four PBA championships. In his stint with the Philippines men's national basketball team, Reyes also used the dribble-drive offense, which proved effective at the international level as well: the Philippines placed second in the 2013 FIBA Asia Championship and qualified for the 2014 FIBA Basketball World Cup."} {"text":"A continuity offense is one of two main categories of basketball offenses, the other being motion offense. Continuity offenses are characterized by a pattern of movement, cuts, screens and passes which eventually leads back to the starting formation. At this point the pattern of movement is repeated, hence the name continuity offense. 
The best-known continuity offenses are the shuffle offense, flex offense, wheel offense and John Wooden's UCLA High Post Offense."} {"text":"A full-court press is a basketball term for a defensive style in which the defense applies pressure to the offensive team for the entire length of the court, before and after the inbound pass. Pressure may be applied man-to-man, or via a zone press using a zone defense. Some presses attempt to deny the initial inbounds pass and trap ball handlers either in the backcourt or at midcourt."} {"text":"Defenses not employing a full-court press generally allow the offensive team to get halfway down the court (a half-court press) or near the basket before applying strong defensive pressure."} {"text":"Effective press breaks employ quick passing more often than dribbling to advance the ball up the floor. Short, quick passes are less prone to turnovers than either long passes or dribbling. Another effective way to break a man-to-man press is to pass to the center. Most presses keep a \"last man back\" (usually the center) whose job is to disrupt a potential fast break resulting from the press; this may leave the offensive center unguarded and able to receive a pass near midcourt or near the basket for an easy score."} {"text":"In the 1950s, the full-court press style of play was invented by John McLendon, an American basketball coach who is recognized as the first African American basketball coach at a predominantly white university and the first African American head coach in any professional sport. McLendon is often not credited because he invented it within the African American college league. Due to segregation, African American teams could only compete against other African American teams. For years, his style of play went unnoticed by white society and was dismissed as unrefined until white coaches adopted it. 
McLendon's contributions to the game of basketball also include an increase in tempo and the four corners offense."} {"text":"Gene Johnson, head coach at Wichita University (now called Wichita State University), is credited with creating the full-court press."} {"text":"In the 1960s, Ralph Tasker, boys' basketball coach at Hobbs High School in New Mexico, began using a man-to-man pressure defense from baseline to baseline, buzzer to buzzer. This defensive strategy resulted in numerous turnovers and scoring opportunities for his teams. The 1969-70 Hobbs Eagles team scored 100 points or more in 14 consecutive games, a national record that stood for 40 years. Tasker's teams set the New Mexico scoring record for most points scored in a game with 170 points against Carlsbad High in 1970 and with 176 points against Roswell High in 1978, and scored above 150 points in three games in 1981."} {"text":"Arkansas coach Nolan Richardson observed and adopted Tasker's up-tempo pressure defense while coaching at the high school level. He called his version of full-court pressure \"40 minutes of Hell.\" VCU's former coach Shaka Smart calls his form of full-court pressure \"Wreaking Havoc\" or \"Havoc Ball\"."} {"text":"The Serbian coach \u0110or\u0111e Andrija\u0161evi\u0107 was the first to use this technique in Europe. His zone press was an adapted and improved version of Gene Johnson's full-court press. He used it for the first time with the French team JA Vichy in 1965. This defensive style was then reproduced by other French squads and quickly became popular in other European leagues."} {"text":"The flex is a type of continuity offense, similar to (and in fact derived from) the earlier shuffle offense."} {"text":"Gonzaga University runs a modified version of the simple flex offense. The University of Maryland ran a modified version of the flex offense under previous head coach Gary Williams. 
Maryland's version of the flex allowed for closer shots at the basket and was less focused on obtaining open perimeter jump shots. Boston College under coach Al Skinner also ran the flex; the BC version was very compact, creating an extremely physical game and limiting the defense's ability to help because of how collapsed the floor is."} {"text":"Variations of the flex include the 5-man flex, utilizing all 5 players in the cutting and screening action, and the 4-man flex, which utilizes 4 players. Since this offense is classified as a continuity offense, in which players repeat specific actions, some teams will build options into the offense to keep defenses from anticipating a particular cut or screen."} {"text":"This strategy has also sometimes been employed against other prolific scoring guards. The Jordan Rules were an instrumental aspect of the rivalry between the \"Bad Boys\" Pistons and Jordan's Chicago Bulls in the late 1980s and early 1990s. This style of defense prevented players such as Jordan from entering the paint and was carried out by Dennis Rodman and Bill Laimbeer."} {"text":"This strategy was later used by the New York Knicks from 1992 to 1998. However, the Knicks were not as successful as Detroit in containing Jordan and the Bulls. Jordan faced New York in the NBA Playoffs in 1991, 1992, 1993, and 1996. The Bulls eliminated the Knicks and captured NBA titles in all four of those seasons."} {"text":"In an interview with \"Sports Illustrated\", then Detroit Pistons coach Chuck Daly described the Jordan Rules as:"} {"text":"In an ESPN \"30 for 30\" documentary, Joe Dumars said that,"} {"text":"In basketball, small ball is a style of play that sacrifices height, physical strength and low-post offense\/defense in favor of a lineup of smaller players for speed, agility and increased scoring (often from the three-point line). 
It is closely tied to the concepts of pace and space, which push the speed of the offense and spread out the defense with extra shooters on the court. Many small-ball lineups feature a non-traditional center who offers skills not normally found in players at that position."} {"text":"Teams often move a physically dominant player who would typically play the small forward position into the power forward position. Examples of players who have been used in this role include Kevin Durant, Carmelo Anthony, and LeBron James. That individual plays alongside either a traditional power forward (shifted into the center position) or a center."} {"text":"While the style of play does have advantages, there are several disadvantages. The addition of speed and agility comes at the cost of strength and height; the lack of traditional \"big men\" can make it more difficult to guard the space under the basket while on defense and can also deprive the team of a low-post offensive threat when attacking. Rebounding is often sacrificed; for example, in the 2012\u201313 season, the Miami Heat, playing small ball, had the most wins in the league but were the worst team in the NBA in rebounding."} {"text":"The Golden State Warriors in 2014\u201315 used small ball to a greater extent in the NBA Finals than any prior champion, swapping big man Andrew Bogut out of the starting lineup for Andre Iguodala, who would eventually be named Finals MVP. The Warriors' small lineup came to be known as the Death Lineup. The Warriors attained a historic level of success, winning three NBA titles and setting the NBA wins record in the period from 2014 to 2017. 
The success of the Warriors' small-ball lineups has caused some analysts to consider small ball the future of basketball, eschewing traditional lineups in favor of a brand of \"positionless\" basketball that allows teams to play small."} {"text":"The triangle offense is an offensive strategy used in basketball. Its basic ideas were initially established by Hall of Fame coach Sam Barry at the University of Southern California. His system was further developed by former Houston Rockets and Kansas State University basketball head coach Tex Winter, who played for Barry in the late 1940s. Winter later served as an assistant coach for the Chicago Bulls in the 1980s and 1990s and for the Los Angeles Lakers in the 2000s, mostly under head coach Phil Jackson."} {"text":"The system's most important feature is the sideline triangle created by the center, who stands at the low post, the forward at the wing, and the guard at the corner. The team's other guard stands at the top of the key and the weak-side forward is at the weak-side high post, together forming the \"two-man game\". The goal of the offense is to fill those five spots, which creates good spacing between players and allows each one to pass to four teammates. Every pass and cut has a purpose and everything is dictated by the defense."} {"text":"It has been claimed that the triangle offense is the optimal way for five players to space the floor on the basketball court."} {"text":"The offense starts when a guard passes to the wing and cuts to the strong-side corner. The triangle is created from a post player on the strong-side block, the strong-side corner, and the extended strong-side wing, who gains possession on the first pass. The desired initial option in the offense is to pass to the strong-side post player on the block, who is in good scoring position. 
From there the player has the option of looking to score, passing to one of the perimeter players exchanging between the strong-side corner and wing, hitting a dive cut down the lane, or finding the opposite wing flashing to the top of the key, which initiates another common option known as the \"pinch post\"."} {"text":"If the strong-side wing-to-guard pass is not possible, the third option is for the weak-side forward to flash to the strong-side elbow, take the pass, and cut to the basket on the trademark backdoor play of the offense. Meanwhile, the wing and corner guard exchange on a down screen. The forward with the ball can pass to the cutting guard or to the corner guard coming off the wing's screen. If nothing is available, he can shoot the ball himself."} {"text":"Head coach Phil Jackson, with help from assistant coach Tex Winter, won 11 NBA championships with the triangle offense. Jackson coached the Bulls from 1989 to 1998. He then served as head coach of the Lakers twice, first from 1999 to 2004 and again from 2005 to 2011. The Chicago Bulls under Jackson won six championships in the 1990s playing in the triangle. His first three title-winning teams in Chicago featured superstars Michael Jordan and Scottie Pippen. Jackson's later three titles with the Bulls came with Jordan, Pippen, and fellow superstar Dennis Rodman. Jackson's Los Angeles Lakers won five championships employing the triangle. His first three Lakers championship squads fielded superstars Shaquille O'Neal and Kobe Bryant, while his last two title teams saw him pair Bryant with fellow All-Star Pau Gasol."} {"text":"The triangle offense was used very effectively by the Bulls during the 1995\u201396 season. Jordan, back at the helm in his first full season since coming out of retirement, won his fourth NBA MVP award. He also finished the season as the league's leading scorer for the 8th time. 
The Bulls recorded a then NBA-record 72\u201310 season en route to what was then their fourth NBA championship. Jackson won his first (and only) NBA Coach of the Year Award for his efforts during his team's record-breaking season. Overall, the Bulls won six NBA titles during the 1990s, and the team is considered to be one of the NBA's greatest dynasties."} {"text":"When Phil Jackson retired as a head coach at the end of the 2010\u20132011 season, he finished with over 1,000 victories, regular-season and playoff games combined. Jackson, Jordan, Pippen, Rodman, and O'Neal are all Hall of Famers. Kobe Bryant will be posthumously inducted into the Hall of Fame in the 2020 class. Tex Winter earned induction into the Hall of Fame in 2011 for his contributions to basketball involving the triangle offense. He was an assistant for both the Bulls and Lakers on the first nine of Jackson's 11 championship teams, and served as a consultant to the Lakers on the final two."} {"text":"Tim Cone, the current head coach of Barangay Ginebra San Miguel, carried the triangle offense forward and brought it to the PBA in 1989, helping him win a league-record 23 championships with three different franchises."} {"text":"The Blocker-Mover or Wheel offense is an offensive scheme used in basketball, primarily college basketball. The offense was popularized by Dick Bennett when he was the coach at Wisconsin-Green Bay, Wisconsin, and Washington State."} {"text":"Now used by teams like Virginia and San Diego State, the Blocker-Mover offense consists of two \"blockers\" and three \"movers.\" The scheme usually works in pairs: a blocker is paired with a mover, but the blockers must stay on their own sides of the floor. One blocker stays on the strong side of the floor between the key and the three-point line, while the other blocker remains in the same area on the opposite side of the floor. 
The movers use the screens to create separation and find open shots, or drive to the hoop."} {"text":"The blockers are usually forwards and centers. Blockers typically operate inside the three-point line (on the wings) and are always looking to set screens for movers. By setting a screen for a mover, the blocker seeks to free their teammate for a shot. However, the screener usually gets open themselves when their defender is forced to help on the screen. Because of this over-help by the opposing defenders, blockers often get easy points near the basket off post-ups or slips."} {"text":"The movers are usually guards, since they must be well-conditioned: they are in constant motion, moving all over the court in search of scoring opportunities. The movers are usually the team\u2019s best scorers. They utilize screens in hopes of breaking free from defenders for open shots, and must be agile, intelligent cutters who can read the opposing defense properly. Rarely, movers may also screen for other movers."} {"text":"The pindown screen usually creates a lot of space and open shots for the movers on the wings. If the blocker can shoot three-pointers, they can pop out after setting a pindown screen for a wide-open attempt if the defense over-commits to the pass. The flare screen doesn't create as much separation as the pindown screen, but it can still be effective off the boomerang pass concept."} {"text":"The scheme doesn't work well when the players don't know who they're paired up with. It also doesn't work well when the movers take bad angles off the screens, resulting in confusion and turnovers. A well-coached team can counter the Blocker-Mover by switching everything, but this works only if the defenders are of similar size, so that switching creates no mismatches."} {"text":"Box-and-one defense is a type of defense used in basketball. 
The box-and-one defense is a hybrid between a man-to-man defense (in which each defensive player is responsible for marking a player on the other team) and a zone defense (in which each defensive player is responsible for guarding an area of the court)."} {"text":"In a box-and-one defense, four players play zone defense, and align themselves in a box protecting the basket, with typically the two larger (or frontcourt) players playing directly under the basket, and the two smaller (or backcourt) players playing towards the foul line. The fifth defensive player in a box-and-one defense plays man-to-man defense, typically marking the best offensive player on the other team."} {"text":"A box-and-one defense is usually used against teams with one dominant scoring threat. The idea is to try to shut that player down by forcing them to score against a dedicated man-to-man defender and a supporting zone. Players such as Allen Iverson and Ray Allen often faced box-and-one defensive schemes while competing for Georgetown University and the University of Connecticut, respectively."} {"text":"One variation is the \"diamond-and-one defense\", where the four players in the box are arranged in a diamond pattern (one under the basket, two between the basket and foul line, and the fourth at the foul line). Another variation is the triangle-and-two defense, in which three defenders play zone defense while the remaining two play man-to-man defense."} {"text":"The biggest weakness of a box-and-one defense is its vulnerability to a pass to the middle of the \"box\". As there is no defensive player responsible for this area of the court, offensive teams are able to exploit the gap. A pass to the middle of the box or to the top of the box will generally yield a short-range shot from inside the key. Alternatively, it will \u201ccollapse\u201d the box, drawing the four zone defenders into the key and, upon a second pass, yielding a wide-open, uncontested look from the perimeter. 
It is for this reason that the box-and-one defense is not often seen in professional leagues."} {"text":"The Raptors used this defense again in the 2020 NBA Playoffs during the Eastern Conference semifinals against the Boston Celtics. During the series, the Raptors employed the defense against Jayson Tatum in Game 4 and Kemba Walker in Game 6, winning both games. The box-and-one was used again in Game 7, but the Raptors ultimately lost the deciding game."} {"text":"This strategy is also used in man-down situations in lacrosse. When a team has a penalty and is down a man, it will send out a long-stick midfielder to join the three long-stick defensemen; there is also one short-stick midfielder. The long sticks form a tight box in front of the goal with the short stick on the crease. The four long sticks play zone defense, with the man closest to the ball playing man-to-man and the farthest splitting two offensemen. Every time the ball is passed, the formation rotates to the next man. The short stick plays man-to-man if there is an attackman on the crease; otherwise he joins the rotation."} {"text":"This strategy is also used in the sport of ultimate frisbee, whose defenses are sometimes similar to basketball's in their mixture of man and zone formats. In ultimate, the box-and-one defense is usually incorporated into a defensive strategy called the \"cup\", where three other players play a zone around the player in possession of the disc. If the boxed player is a handler (similar to a point guard in basketball) in possession of the disc, the cup will temporarily include the boxed player in its zone."} {"text":"A motion offense is a category of offensive scheme used in basketball. 
Motion offenses use player movement, often as a strategy to exploit the quickness of the offensive team or to neutralize a size advantage of the defense."} {"text":"Motion offenses are different from continuity offenses in that they follow no fixed repeating pattern. Instead, a motion offense is free-flowing and relatively unrestricted, though following a set of rules. Some examples of basic rules that are commonly used are:"} {"text":"Instead of relying on set plays, Bob Knight's offense is designed to react to the defense. His motion emphasized post players setting screens and perimeter players passing the ball until a teammate becomes open for an uncontested lay-up or jump shot. Players are required to be unselfish and disciplined and must be effective in setting and using screens to get open."} {"text":"Fast break is an offensive strategy in basketball and handball. In a fast break, a team attempts to move the ball up court and into scoring position as quickly as possible, so that the defense is outnumbered and does not have time to set up. The various styles of the fast break, derivatives of the original created by Frank Keaney, are seen as the best method of providing action and quick scores. A fast break may result from cherry picking."} {"text":"In a typical fast-break situation, the defending team obtains the ball and passes it to the fastest player, who sets up the fast break. That player (usually the smaller point guard, in the case of basketball) then speed-dribbles the ball up the court with several players trailing on the wings. He then either passes it to another player for quick scoring or takes the shot himself. If contact is made between him and a defender from behind while on a fast break, an unsportsmanlike foul is called. 
Recognition, speed, ball-handling skills, and decision making are critical to the success of a fast break."} {"text":"In basketball, fast breaks often result from good defensive play, such as a steal, a blocked shot recovered by the defense, or a rebound of a missed shot by the opposing team, where the defending team takes possession of the ball before the other team has adjusted."} {"text":"A fast break can sometimes lead to an alley-oop if there are more offensive players than defenders."} {"text":"In basketball, if the fast break does not lead to a basket but an offensive rebound is obtained and put back quickly, this is called a secondary break."} {"text":"A fly fast break (also known as a one out fast break, the technical term for the play) is a basketball move in which, after a shot is attempted, the player who is guarding the shooter does not box out or rebound but instead runs down the court looking for a pass from a rebounding teammate for a quick score."} {"text":"How to play the Fly fast break."} {"text":"The coach designates a certain guard or guards to carry out the Fly fast break. This is often the guard that defends the opponents' shooting guard. When the designated opposing guard attempts a shot, the defending guard (referred to as the 'Fly') will contest the shot but then sprint down the court to the other team's key. When the defending team obtains the rebound or has to inbound the ball (after a made basket), they throw the ball into the other team's key, knowing that there is a 'Fly' waiting to catch the ball and score."} {"text":"Breaking down the Fly fast break can be done in two ways:"} {"text":"The term 'Fly' comes from fly fishing, in which the actions involved are similar to those of the basketball player in the Fly fast break."} {"text":"Wheel offense is an offensive strategy in basketball, developed in the late 1950s by Garland F. Pinholster at Oglethorpe University. 
It is a kind of continuity offense in which players move around in a circular pattern to create good scoring opportunities. The wheel offense is a popular offensive play, frequently used by teams from middle school to college levels because it can effectively work against any defense, including zone defense and man-to-man defense."} {"text":"There are various ways to run the wheel offense. The original form of the wheel offense developed by Garland Pinholster starts with a 2-1-2 formation, where two players stand side by side at the free throw lane."} {"text":"The wheel offense offers several advantages. First, it is very flexible and easy to set up. All the positions in the wheel offense are interchangeable (i.e. the point guard doesn\u2019t have to be the first cutter). This enables the ball-handler to start the wheel offense from either wing without the other players changing their positions. With its flexibility, the wheel offense blends well with both half-court attack and fast break."} {"text":"Once the play is set up, the wheel offense can work effectively against both man-to-man defense and zone defense. The various cuts and double screens will create open shot opportunities if the defense fails to react quickly. Even if the defensive players manage to cover all the cutting offensive players, they are forced to switch match-ups. Switching match-ups often causes mismatches between offensive players and defensive players, and when a mismatch happens, the offensive team can often take advantage of it to score. Also, when a team runs the wheel offense, their game tempo will be very hard to disrupt."} {"text":"The wheel offense can also integrate other offensive plays. Pinholster's Oglethorpe team would often run some concealment plays before starting the wheel offense. This made the wheel offense very hard to detect, and they could catch the defense off guard. 
During the play, when the wheel offense is in the 1-3-1 formation, it can also switch to other plays based on the same formation. This greatly increases the variation of the wheel offense, making it very hard to defend."} {"text":"Moreover, the wheel offense is very helpful for team-building. Because the wheel offense demands that every player have good ball-handling and shooting skills, each player is forced to develop more fully. In the wheel offense, five players play as a team rather than as individuals. Thus practicing and running the wheel offense is very helpful for developing a team spirit among the players."} {"text":"There are a few conditions that need to be satisfied before using the wheel offense. Some are listed below."} {"text":"Player tracking refers to technologies used to track players and the ball (if applicable) in various sports. The National Basketball Association (NBA) first tracked all games at the start of the 2013-14 NBA season. Second Spectrum is the current Official Optical Tracking Provider of the NBA and began league-wide tracking in the 2017-18 NBA season, replacing STATS SportVU, which previously held the league-wide contract."} {"text":"The NBA (via Second Spectrum) uses an optical tracking system that leverages multiple cameras placed in the catwalks in all 29 NBA arenas. The cameras capture and update data at a rate of 25 frames per second. The cameras feed the data into proprietary software, where computer vision algorithms extract positional data for all players on the court and the ball."} {"text":"The NBA provides a variety of statistics based on the data produced by player tracking to the public on its website. This includes information for players covering categories such as drives, defensive impact, catch and shoot, passing, touches, pull up shooting, rebounding, shooting efficiency, speed, and post ups among others. 
Similar information is available for teams."} {"text":"In addition, more sophisticated and detailed tools are available to teams and broadcasters that are not currently available to the public."} {"text":"Player tracking systems introduce many new statistics, automate the collection of data, and provide precision which would be impossible without the use of camera technology and tracking software."} {"text":"Statistics collected and available to view during the game and throughout the season include (all statistics are per player):"} {"text":"Match-up zone defense is a type of defense used in the game of basketball. It is commonly referred to as a \"combination\" defense, as it combines certain aspects of man-to-man defense and zone defense."} {"text":"College head coaches Jim Boeheim and John Chaney were advocates of the match-up zone defense."} {"text":"With the match-up zone defense, the on-ball defender will play tight as if he were playing man-to-man. At the same time, the zone away from the ball will resemble \"help-side\" man-to-man defense. This creates one of the advantages of the match-up zone, as it may confuse the opponent as to what defense is actually being played. The match-up zone also resembles a \"switching man-to-man\" defense, where the big men stay down low in the post and the guards stay around the perimeter. When asked to describe Chaney's match-up zone, Saint Joseph's Hawks coach Phil Martelli replied: \"In college basketball, there's the Pete Carril Princeton offense, the John Chaney Match up Zone defense, then everything else. 
Those are the only two truly unique styles designed and being used today.\""} {"text":"The triangle-and-two defense is a particular type of defense used in basketball."} {"text":"The triangle-and-two defense is a hybrid between a man-to-man defense, in which each defensive player is responsible for marking a player on the other team, and a zone defense, in which each defensive player is responsible for guarding an area of the court."} {"text":"In a triangle-and-two defense, three players play zone defense, and align themselves in a triangle protecting the basket, with typically the power forward and center playing directly under the basket, and the small forward playing towards the foul line."} {"text":"The shooting guard and point guard in a triangle-and-two defense play man-to-man defense, typically marking the opposing team's best offensive players on the perimeter."} {"text":"The biggest weakness of a triangle-and-two defense is its vulnerability to cutters through the lane, and also to good passing from the forward spots. Teams with good passers on the floor are often able to easily find flaws in this defense."} {"text":"A related hybrid is the diamond-and-one defense, in which four zone defenders are arranged in a diamond pattern (one under the basket, two between the basket and foul line, and the fourth at the foul line) while the fifth defender plays man-to-man. Another variation is the box-and-one defense, in which four defenders play zone defense in a box shape around the key, while the remaining defender plays man-to-man defense."} {"text":"Main reasons a team would want to play man-to-man are:"} {"text":"Some risks and downsides of playing it:"} {"text":"Man-to-man defense is still the primary defensive scheme in the NBA, and some coaches use it exclusively."} {"text":"The shuffle offense is an offensive strategy in basketball, developed in the early 1950s by Bruce Drake at the University of Oklahoma. 
It was later used by Bob Spear, who was the first head basketball coach of the United States Air Force Academy, serving from 1957 to 1971. The shuffle offense has all five players rotate through each of the five shuffle positions. This offense would be an option for a team that has good ball-handlers but is not blessed with height or a strong dominant post player (which may be why Spear used it at Air Force, which has a height restriction)."} {"text":"Coach Dean Smith of the University of North Carolina at Chapel Hill also taught the shuffle offense."} {"text":"The pick and roll (also called a ball screen or screen and roll) in basketball is an offensive play in which a player sets a screen (pick) for a teammate handling the ball and then moves toward the basket (rolls) to receive a pass. In the NBA, the play came into vogue in the 1990s and has developed into the league's most common offensive action. There are, however, many ways in which the defense can counter the offensive screen."} {"text":"The pick and roll is often employed by a shorter guard handling the ball and a taller forward or center setting the screen; if the taller defender switches to guard the ballhandler, then the offensive team can have favorable mismatches. The shorter guard has a speed advantage over the taller defender, while the taller forward\/center has a size advantage over the shorter defender."} {"text":"A successful pick and roll play may result in the screener being in position to receive a pass with a clear path for an easy shot, with the chance of drawing a foul as other defenders move toward the play to try to prevent penetration. 
It may alternatively lead to the ballhandler being momentarily without a defender, and thus free to pass to any open teammate, or take an uncontested shot, which greatly improves the chance of scoring, again with the chance of drawing a foul as the screened defender hurries to get back into the play."} {"text":"The success of the strategy depends largely on the ballhandler, who must recognize the situation quickly and decide whether to take the shot, pass to the screener who is rolling (if the defender switches), or pass to another open teammate (if other defenders come to help). The screener also must recognize the open spaces of the court to roll to and be alert to receive the pass and finish the play."} {"text":"Variations of the pick and roll are the \"pick and pop\" (or \"pick and fade\"), where the screener moves for an open jump shot instead of rolling to the basket, or the \"pick and slip\", where the screener fakes setting a screen before slipping behind the defender to accept the pass."} {"text":"The pick and pop is an offensive play that is a derivative of the classic pick and roll. Instead of rolling toward the basket, however, the player setting the pick moves (\"pops\") to an open area of the court to receive a pass from the ballhandler for a jump shot."} {"text":"The premise of the two plays is the same: a ballhandler uses a teammate's pick to attract the attention of two defensive players to free his teammate for a scoring opportunity. 
A successful pick and pop relies on a ballhandler who demands constant defensive attention and a teammate with an accurate jump shot, or the ability to finish a layup if near the rim."} {"text":"According to Synergy Sports Technology, use of the pick and roll in the NBA rose from 15.6% of total plays in the 2004\u201305 NBA season to 18.6% in the 2008\u201309 NBA season."} {"text":"The pick and roll is also used extensively in box lacrosse, the sport played in the National Lacrosse League."} {"text":"Nellie Ball is an offensive strategy in basketball developed by NBA head coach Don \"Nellie\" Nelson. It is a fast-paced run-and-gun offense relying on smaller, more athletic players who can create mismatches by outrunning their opponents. A true center is usually not needed to run this type of offense. A large volume of three-point attempts is also a feature of Nellie Ball. This offense is most effective against teams that do not have the athleticism or shooting ability to keep up with the fast pace."} {"text":"While coaching the Dallas Mavericks, Nelson employed Nellie Ball once again, utilizing the All-Star trio of Steve Nash, Michael Finley, and Dirk Nowitzki. Nelson often played Nowitzki, a natural power forward, at the center position, placing him at the three-point line in order to stretch out the defense. Nelson's trio of star players spearheaded the Mavericks' transformation into a promising young franchise capable of reaching the NBA Playoffs."} {"text":"Avery Johnson, Nelson's prot\u00e9g\u00e9 and successor in Dallas, abandoned Nellie Ball in favor of a more traditional offensive lineup, which reached the 2006 NBA Finals. En route to the finals, Johnson's Mavericks defeated Mike D'Antoni's Phoenix Suns, the latter using an up-tempo style centered on former Mavs superstar and two-time NBA MVP Steve Nash. 
Although the Mavericks lost to the Miami Heat in the NBA Finals that year, Johnson won the 2006 NBA Coach of the Year Award for making Dallas a better defensive team while still keeping their up-tempo style of offense."} {"text":"Further validation of the Nellie Ball formula came when the Golden State Warriors, a team Nelson had coached twice, won the 2015 NBA Championship. The Warriors, by then led by head coach Steve Kerr, successfully closed out the 2015 NBA Finals against the Cleveland Cavaliers using a Nellie Ball-style \"Death Lineup\" of Stephen Curry, Klay Thompson, Andre Iguodala, Harrison Barnes and Draymond Green. In 2017 and 2018, Golden State won back-to-back NBA titles. This time, the high-scoring Warriors were powered by Curry, Thompson, Green, and fellow superstar Kevin Durant. Additionally, multiple teams have adopted different variations of Nellie Ball, with point forwards orchestrating some of the most prolific offenses in the current NBA."} {"text":"The Princeton offense is an offensive basketball strategy which emphasizes constant motion, back-door cuts, picks on and off the ball, and disciplined teamwork. It was used and perfected at Princeton University by Pete Carril, though its roots may be traced back to Franklin \u201cCappy\u201d Cappon, who coached Princeton in the late 1930s, and Bernard \"Red\" Sarachek, who coached at Yeshiva University from 1938 to 1977."} {"text":"The offense is designed for a unit of five players who can each pass, shoot, and dribble at an above-average level. These players hope to isolate and exploit a mismatch using these skills. Positions become less important and on offense there is no point guard, shooting guard, small forward, or power forward. However, there are certain rules that players running this offense are expected to follow."} {"text":"The offense usually starts out with four players outside the three-point arc with one player at the top of the key. 
The ball is kept in constant motion through passing until either a mismatch allows a player to cut to the basket or a player without the ball cuts toward the unoccupied area under and around the basket, and is passed the ball for a layup. The post player is a very important player in the offense. He sets up in the high post and draws attention with his positioning. When the ball is received in the post, the player's main objective is to find backdoor cutters or teammates whose defenders have fallen asleep on the weak side."} {"text":"The hallmark of the offense is the backdoor pass, where a player on the wing suddenly moves in towards the basket, receives a bounce pass from a guard on the perimeter, and (if done correctly) finds himself with no defenders between him and a layup. Alternatively, when the defensive team attempts to pack the paint to prevent backdoor cuts, the offense utilizes three-point shots from the perimeter. All five players in the offense\u2014including the center\u2014should be competent at making a three-point attempt, further spreading the floor, and not allowing the defense to leave any player unattended."} {"text":"The offense is often a very slowly developing one, relying on a high number of passes, and is often used in college basketball by teams facing opponents with superior athletic talent in order to maintain a low-scoring game (believing that a high-scoring game would favor the athletically superior opponent). As a result, Princeton has led the nation in scoring defense 19 times, including every year from 1989 to 2000."} {"text":"During his tenure as head coach of Princeton (1967\u20131996), Pete Carril compiled a 514\u2013261 record, a .658 winning percentage. His teams won 13 Ivy League championships during his 29-year tenure with the Tigers, and received 11 NCAA Tournament bids and two National Invitation Tournament berths. Princeton captured the NIT title in 1975. 
Perhaps Carril's greatest win was a final upset victory, sealed by a backdoor cut, that gave Princeton a 43\u201341 win over UCLA, the defending 1995 NCAA champion. The win delayed Coach Carril's retirement by one game and is ranked as one of the best NCAA upsets of all time. Former Princeton coach Sydney Johnson and his predecessors Bill Carmody, John Thompson III, and Joe Scott have all employed the Princeton offense."} {"text":"After his retirement from Princeton in 1996, Pete Carril served as an assistant coach for the National Basketball Association's Sacramento Kings until 2006. During his time with Sacramento, Carril helped Rick Adelman, who became the Kings' head coach in 1998, implement the Princeton offense. Carril returned to the Kings during the 2008\u20132009 season as a consultant."} {"text":"The Cleveland Cavaliers, Los Angeles Lakers, New Orleans Hornets, New Jersey Nets, and Washington Wizards have also run versions of the Princeton offense in the National Basketball Association. Rick Adelman introduced a modified version of Pete Carril's system to the Houston Rockets during the 2007\u20132008 season. Coach Alvin Gentry also implemented an altered version of it that shows similarities to the triangle offense during the Phoenix Suns' 2012\u201313 season. 
Eddie Jordan implemented the Princeton offense as coach of the Washington Wizards from 2003 to 2008 and of the Philadelphia 76ers from 2009 to 2010."} {"text":"Besides Princeton, some of the NCAA Division I college basketball teams best known for using the offense are:"} {"text":"NCAA Division II colleges that have used the Princeton offense include:"} {"text":"NCAA Division III colleges that have used the Princeton offense include:"} {"text":"NAIA colleges that have used the Princeton offense include:"} {"text":"High school basketball teams that have used the Princeton offense include:"} {"text":"Amateur Athletic Union, Youth Basketball of America, and United States Basketball Association teams that have used the Princeton offense include:"} {"text":"Hack-a-Shaq is a basketball defensive strategy used in the National Basketball Association (NBA), in which Dallas Mavericks coach Don Nelson adapted the tactic of committing intentional fouls (originally a clock-management strategy) to the purpose of lowering opponents' scoring. He directed players to commit personal fouls throughout the game against selected opponents who shot free throws poorly."} {"text":"Nelson initially used the strategy against Dennis Rodman, a star power forward for the Detroit Pistons, San Antonio Spurs, and Chicago Bulls. However, the strategy acquired its name from Nelson's subsequent use of it against Hall of Fame center Shaquille O'Neal."} {"text":"The term was coined when O'Neal played at LSU and during his NBA tenure with the Orlando Magic. At that time, the term referred simply to especially physical defense against O'Neal. Teams sometimes defended him by bumping, striking or pushing him \"after\" he received the ball to deny him an easy layup or slam dunk. Because of O'Neal's poor free throw shooting, teams did not fear the consequences of committing personal fouls. 
However, once Nelson's off-the-ball fouling strategy became prevalent, the term \"Hack-a-Shaq\" was applied to this new tactic, and the original usage was largely forgotten."} {"text":"The name is sometimes altered to reflect the player being fouled, for example \"Hack-a-Howard\" when used against Dwight Howard, or \"Hack-a-DJ\" for DeAndre Jordan."} {"text":"Committing repeated intentional personal fouls is a longstanding defensive strategy used by teams that are trailing near the end of the game. Basketball, unique among major world sports, permits intentional fouling to gain a strategic advantage; in other sports, it is considered an unfair act or professional foul."} {"text":"Once the fouling team enters the penalty situation, the fouled team is awarded free throws. The typical NBA player makes a high enough percentage of his free throws that, over time, opponents' possessions that end with free throws will yield more points than possessions in which the opponents try to score a field goal. Even the highest-scoring NBA teams average only about 1.1 points per possession, while even the poorest free throw shooting teams make around 70% of their free throws; given two free throws on each possession, such a team would score about 1.4 points per possession. So intentional fouling tends not to reduce the opponent's score."} {"text":"However, fouls stop the game clock. If a team is trailing with time running out, intentional fouling may be the only hope. In normal game play, the opponents will stall and run out the clock, even at the expense of failing to score, to the extent that the shot clock allows. The trailing team fouls intentionally to end the opponents' possession as soon as possible, leaving more time on the clock for itself to respond to any score. 
It may also hope that fatigue and pressure affect the ability of the free-throw shooter."} {"text":"When this strategy was originally employed in the NBA, the trailing team often made a point of fouling the opposition player who was the poorest free throw shooter in the game at that time, even if that player did not possess the ball. However, fouling \"off the ball\" became a problem for the league when Wilt Chamberlain\u2014a player of superstar caliber but an atrocious free throw shooter\u2014entered the NBA."} {"text":"Wilt Chamberlain and the off-the-ball foul rule."} {"text":"Wilt Chamberlain was such a dominant player that he was sure to be on the floor near the end of any close game. However, he was such a poor free throw shooter (51%) as to be the natural target of a strategy of intentional fouling. The opposition was eager to send Chamberlain to the free throw line, which Chamberlain was equally eager to avoid. This led to a game of tag developing away from the ball, with players chasing Chamberlain as he tried to avoid being fouled."} {"text":"The NBA enacted a new rule on off-the-ball fouls\u2014personal fouls against an offensive player who neither has the ball nor is trying to obtain it. On such fouls within the last two minutes of the game or in overtime, the offensive team is awarded the usual number of free throws and then possession of the ball. The new rule removed the benefit of fouling to gain possession of the ball and limited late-game intentional fouls to the ball handler."} {"text":"The current version of the rule contains an additional disincentive to off-the-ball fouls: the free throws need not be attempted by the player who was fouled; the fouled team can choose as shooter any player on the court at the time."} {"text":"The reason they have that rule is that fouling someone off-the-ball looks foolish . . . Some of the funniest things I ever saw were players that used to chase [Wilt Chamberlain] like it was hide-and-seek. 
Wilt would run away from people, and the league changed the rule based on how silly that looked."} {"text":"There are several late-game situations where committing an isolated intentional foul makes sense. For a team trailing late in the game, stopping the clock is a higher priority than keeping the opponents from scoring. In other situations, intentional fouling does not make sense because it typically lets the opponents score more points."} {"text":"Intentional fouling every time the opponents get the ball was an innovation of Don Nelson in the late 1990s as coach of the Dallas Mavericks. He theorized that, if the opponents played an especially bad free throw shooter, intentionally fouling him might hold down his team's points per possession, compared to a conventional defense against them. Nelson used the strategy throughout the game, when the late-game penalties for off-the-ball fouls (such as the ball being given back to the fouled team) did not apply."} {"text":"Nelson did not invent the strategy; his innovation was to take a strategy whose primary purpose had always been simply stopping the clock, and use it instead primarily to minimize the opposition's scoring."} {"text":"Nelson first used the strategy in 1997 against Dennis Rodman of the Chicago Bulls, who was making 38% of his free throws on the season. He could not use the strategy on every Bulls possession, as a player committing his sixth foul is disqualified from the game. He used the strategy selectively, and chose a little-used player, whose absence the team could tolerate, to commit the fouls. He believed that Rodman's horrific foul shooting would result in the Mavericks actually giving up fewer total points during those Bulls possessions than they would give up by playing a standard defense against the Bulls' efficient offense, led by Michael Jordan and Scottie Pippen."} {"text":"In that game, Rodman shot 9-for-12 from the free throw line, defeating the strategy, and the Bulls won the game. 
The strategy was thus largely forgotten, except that Mavericks player Bubba Wells, who had been assigned to foul Rodman, set the all-time NBA record for fewest minutes played (3) before fouling out of a game."} {"text":"Nelson used the strategy again in 1999, this time against Shaquille O'Neal, a career 52% free throw shooter. Other NBA coaches also did so to defend against O'Neal. So, even though it had first been used two years earlier against Rodman, the strategy became known for its use against O'Neal."} {"text":"As with Chamberlain decades earlier, intentional off-the-ball fouls against O'Neal became controversial. During the 2000 NBA Playoffs, both the Portland Trail Blazers and the Indiana Pacers relentlessly used the Hack-a-Shaq defense against the Lakers. The NBA discussed expanding the off-the-ball foul rule to cover more than just the final two minutes of the game, or making another rule change that would discourage the use of Hack-a-Shaq. Ultimately, though, the NBA did not change any rules to discourage the strategy. An effective rebuttal was that the Lakers won both of the games in which Hack-a-Shaq was most notorious, suggesting that the strategy was too ineffective to require remediation."} {"text":"In subsequent seasons, fans and media remained displeased with the continued use of the strategy, particularly in high-profile playoff games. In 2008, the NBA Competition Committee again considered rule changes but did not achieve consensus. According to a 2016 ESPN study, a team's offensive efficiency when the Hack-a-Shaq strategy was used against it was higher than that of the Golden State Warriors. NBA commissioner Adam Silver announced that the competition committee would look into changing the rule before the start of the 2016\u20132017 season due to the extended length of games. 
Three or more Hack-a-Shaq fouls can add 11 minutes to the length of a game, and at the time such fouls were being committed four times as often as in the prior season."} {"text":"The Hack-a-Shaq strategy is most effective against a player who shoots free throws poorly, but who is so effective in other areas that the coach is reluctant to simply remove them from the game. Few players other than O'Neal meet those criteria."} {"text":"Ben Wallace shot only 42% from the free throw line over his career, the worst percentage in the history of the NBA among players with 1000 attempts. Bruce Bowen was also among the game's best defenders but among its worst free throw shooters. Because of their struggles at the free throw line, each man has at times become a target of the Hack-a-Shaq strategy."} {"text":"On May 29, 2012, the Oklahoma City Thunder used a so-called hack-a-Splitter strategy on Tiago Splitter, who made 5 of 10 free throw attempts from these fouls, during Game 2 of the Western Conference Finals of the 2012 NBA Playoffs."} {"text":"On April 10, 2015, the San Antonio Spurs reportedly used this strategy on Josh Smith to keep the ball away from a red-hot James Harden, and the Spurs won the regular-season game 104\u2013103."} {"text":"During the 2015 NBA Playoffs, Howard, then with the Houston Rockets, was again targeted often by opponents, particularly during round 2 against the Los Angeles Clippers. During Game 2, Howard attempted 21 of the Rockets' 64 free throws, converting 8. In turn, the Rockets targeted DeAndre Jordan, who had been a victim of \"Hack-a-Jordan\" or \"Hack-a-DJ\" since 2014, and in particular was fouled five times in two minutes during the previous playoff round against the San Antonio Spurs. 
In Game 4, Jordan broke O'Neal's record for most free throw attempts in a half, with 28."} {"text":"On January 20, 2016, the Houston Rockets used Hack-a-Drummond against Detroit Pistons center Andre Drummond, and Drummond went 13 for 36 from the free throw line. Those 23 misses are an NBA record for most free throws missed by a player in a game. However, the Pistons still won the game 123-114."} {"text":"In the 2016\u201317 playoffs, Oklahoma City Thunder forward Andr\u00e9 Roberson was a victim of this strategy in the first-round series against the Houston Rockets. Roberson shot 3\/21 in the series."} {"text":"On November 29, 2017, the Washington Wizards used what a newspaper called a \"hack-a-Ben Simmons strategy\" when trailing the Philadelphia 76ers by 24 points in the third quarter. The Wizards repeatedly fouled 76ers point guard Ben Simmons, giving him 29 free throws, 24 of them in the fourth quarter. Simmons was a notoriously bad free throw shooter and had entered the game with a 56% free throw percentage. He shot even worse in this game, making 15\/29 (52%). However, the 76ers held on to win the game, 118-113. Simmons' 31 points were a career high for him at the time."} {"text":"Detractors argue that deliberate fouling makes the game unpleasant to watch, violates the spirit or disrupts the rhythm of the game, puts the fouling team too quickly into the penalty situation, and disparages the team's defensive abilities."} {"text":"\"All that did was allow us to set our defense. I think that's disrespectful to their players. Basically, they were telling their players that they couldn't guard us.\""} {"text":"Many coaches have heeded these criticisms and doubted the effectiveness of the strategy in minimizing scoring. One imponderable is the psychological effect on a player who is deliberately fouled in the belief that he will not make his free throws. 
Some believe that frequently sending O'Neal to the foul line risked putting him \"into a rhythm\" and temporarily making him a better shooter."} {"text":"These factors, and the fact that only a handful of players satisfy the criteria for Hack-a-Shaq, mean that the strategy is uncommon in the NBA. A rule change starting in the 2016\u201317 NBA season put an additional constraint on deliberate fouling: Away-from-the-ball fouls now award the fouled team a free throw and possession of the ball in the final 2 minutes of \"each\" quarter, extended from the prior rule affecting only the final 2 minutes of the 4th quarter. The rule change sought to eliminate cases where teams would intentionally foul off the ball in order to gain the final possession of a quarter."} {"text":"In basketball, run and gun is a fast, freewheeling style of play that features a high number of field goal attempts, resulting in high-scoring games. The offense typically relies on fast breaks while placing less emphasis on set plays. A run-and-gun team typically allows many points on defense as well."} {"text":"In the National Basketball Association (NBA), the run and gun was at its peak in the 1960s, when teams scored an average of 115 points a game. By around 2003, the average had dropped to 95. The Boston Celtics were a run-and-gun team in the 1950s and 1960s while winning 11 NBA championships, as were the five-time champion Los Angeles Lakers during their Showtime era in the 1980s."} {"text":"Although the run and gun is believed by many to de-emphasize defense, the Celtics of the '60s had Bill Russell, and the Lakers of the '80s had Kareem Abdul-Jabbar, as defensive stoppers. Coach Doug Moe, who ran the run and gun with the Denver Nuggets in the 1980s, believed the high scores surrendered were more indicative of the fast pace of the game than of a low level of defense. Still, his teams sometimes appeared to give up baskets in order to score one. 
Though his offensive strategy led to high scores, Moe's Denver teams were never adept at running fast breaks."} {"text":"Paul Westhead coached the Loyola Marymount men's basketball team in the late 1980s using a version of the run and gun."} {"text":"While run and gun basketball is often thought of as a system of offense, Westhead's system uses a combined offensive and defensive philosophy. Offensively, the team moves the ball forward as quickly as possible and takes the first available shot, often a three-pointer. Westhead's teams try to shoot the ball in less than seven seconds. The aim is to shoot before the defense is able to get set. Defensively, the team applies constant full-court pressure. Generally, the team is willing to gamble on giving away easy baskets for the sake of maintaining a high tempo."} {"text":"Loyola Marymount successfully used the system in 1990, when it advanced to the Elite 8 of the NCAA Basketball Tournament, beating defending champion Michigan 149\u2013115 along the way. The style has been used by some other teams. Coach Westhead tried, rather unsuccessfully, to implement the system in the NBA with the Denver Nuggets in the early 1990s. They averaged a league-best 119.9 points per game in 1990-91, but also surrendered an NBA-record 130.8 points per game. They also allowed 107 points in a single half to the Phoenix Suns, which also remains an NBA record."} {"text":"Westhead's system has been imitated by other college teams, including Grinnell College. David Arseneault, the architect of the Grinnell System, added to Westhead's system by substituting players in three waves of five players, similar to an ice hockey shift. 
The highest-scoring game in NCAA history came in 1992, when Troy and DeVry-Atlanta, both employing Westhead's system for the entire game, combined for 399 points."} {"text":"The Grinnell System, sometimes referred to as The System, is a fast-tempo style of basketball developed by coach David Arseneault at Grinnell College. It is a variation of the run-and-gun system popularized by coach Paul Westhead at Loyola Marymount University in the late 1980s. The Grinnell System relies on shooting three-point field goals, applying constant pressure with a full-court press, and substituting players frequently."} {"text":"Under the system, Grinnell guard Jack Taylor scored an NCAA-record 138 points in a 2012 game, and 109 in a 2013 game. Previously, Grinnell players Jeff Clement (77) and Griffin Lentsch (89) held the Division III scoring record."} {"text":"The main tenets of the system are:"} {"text":"To keep his players fresh and get more individuals involved, Arseneault added to Westhead's system by substituting players in three waves of five players, similar to an ice hockey shift. A 15-man roster is divided into three groups of five, and new shifts are substituted every 45 to 90 seconds. Each shift plays at full speed and then rests while the next group does the same. Players rarely play more than 20 minutes a game."} {"text":"Arseneault and the Grinnell program have been criticized for using the system to run up the score and set records, especially against overmatched opponents."} {"text":"Other college and high school programs have also adopted the Grinnell System. David Arseneault Jr., the coach's son, ran a modified version of The System after being named head coach of the Reno Bighorns of the NBA Development League in 2014\u201315. 
Limited to a 10-man roster and subject to the D-League's high roster turnover, Arseneault adjusted the system, abandoning its hockey-style substitutions and full-court press."} {"text":"The M-drop is a play in the sport of water polo, mainly used when the opposing team has a strong offensive set player. The defense sets up in an M-shape, hence the name \"M-Drop\"."} {"text":"A basic offense looks like an umbrella, with five offensive players in a half-circle around the goal and one player in the middle, called the set. One defensive player guards each of the offensive players. From left to right, moving around the circle, the offensive players are named 1, 2, 3, 4, 5, and the set is 6. The defensive players are named D1, D2, D3, D4, D5, and D6, with D6 guarding the set."} {"text":"If the opposing team has a very strong set player, the D3 player will drop back and help the D6 player. This play leaves offensive player 3 open, so defensive players D2 and D4 will \u201csplit\u201d while swimming in between their player and offensive player 3. They also split in between the set and their player, hence the name M-drop, from the \u2018M\u2019 shape created."} {"text":"In water polo, the goalkeeper occupies a position as the last line of defense between the opponent's offence and their own team's goal."} {"text":"The goalkeeper is different from the other players on their team; they possess certain privileges and are subject to different restrictions from those of field players. They must also possess different skills from those of the fielders."} {"text":"Goalkeepers often have longer playing careers than field players because they swim far less."} {"text":"In water polo, the goalkeeper is commonly known as the \"goalie\" or \"keeper\" and may also be known as \"the man\/woman in the cage\"."} {"text":"The position of the goalkeeper has existed since the game of water polo originated. 
At that time, the object of the game was to touch the ball at the opponent's end of the pool. The goalkeeper would wait at the end of the pool until an opposing player approached the goal, and would then try to stop that player, for example by dunking their head."} {"text":"The game and the role of the goalkeeper changed in the 1880s, when the Scots reduced the size of the scoring area by placing rugby posts, spaced about 10 feet apart, at each end of the pool. At the same time, the rules were changed to allow goalkeepers to stand on the pool deck and leap onto the head of an opposing player who approached the goal. This change in the rules was brief: to prevent the serious injuries that resulted from this method of goalkeeping, the rules were revised again to require the goalie to remain in the water."} {"text":"The basic functions of the goalie position have changed little over the last century, but there have been changes affecting the style of play. In the 1940s, Hungary introduced a new technique called the eggbeater kick that enables goalkeepers to maintain a stable balance in the water."} {"text":"Inside the area, the goalkeeper is the only person on the team permitted to touch the ball with two hands, touch the bottom of the pool, and punch the ball with a clenched fist. Although the goalkeeper may not advance beyond the half-way line, they may attempt shots at the other goal."} {"text":"Any goalkeeper who aggressively fouls an attacker in position to score can be penalized with a penalty shot for the other team. The goalkeeper can also be temporarily ejected from the game for twenty seconds if they prevent a likely goal (for example, by splashing). If the goalkeeper pushes the ball under the water in the area, it is a penalty rather than a free throw for the other team. 
A penalty is also awarded to the other team if the goalkeeper pulls down on the crossbar of the goal to prevent a goal."} {"text":"Except for reserves, all goalkeepers' caps are numbered 1 and contrast with their team's colour to distinguish their position. Reserve goalkeepers have differently numbered caps depending on the governing body; they are shown in the table below."} {"text":"Below is a table showing the major differences in rules and regulations for water polo goalkeepers between the three largest governing bodies: FINA, NCAA and NFHS."} {"text":"In water polo, field players possess entirely different skills and responsibilities from the goalkeeper."} {"text":"The primary role of the goalkeeper is to block shots at the goal. After saving the ball, the goalkeeper has the responsibility to keep possession of the ball in order to stop opposing players regaining it. They must make sure that whenever the opposition appears to be ready to make a shot on goal, their hands are near or above the surface of the water. They are also responsible for passing the ball down the pool accurately in order to retain possession, often starting the team's counterattack."} {"text":"The goalkeeper is the only player who may block a penalty, and because 63.7% of penalties are goals, the goalkeeper has a major role in this area; however, failure to be in the correct position at a penalty is an exclusion foul. At a penalty shootout, the goalkeeper's job is critical and will largely determine the outcome of the match. If the goalkeeper is excluded during the course of the penalty shootout, then one of the other five players in the pool may take their place. The goalkeeper's hips should be high at a penalty shot to give them extra height. The goalkeeper should do one of two things at a penalty shot:"} {"text":"Moreover, goalkeepers should show leadership. 
They should give field players information, such as the location of unmarked players and the time on the game clock, and give instructions to the field players. Because of this, they may sometimes be known as \"the coach in the water\"."} {"text":"When their team is a man down, goalkeepers have extra responsibility. It is easier for the other team to keep shooting, which can wear the goalkeeper out. Platanou said that with a man down the goalkeeper faced \"The highest possible intensity\"."} {"text":"Most of the time, goalkeepers do low-intensity work (treading water without too much effort), but when they do work (for example, when they are a man down or in the ready position) it is very intense."} {"text":"Goalkeepers must be able to perform the eggbeater kick to a high standard. Before the eggbeater kick, goalkeepers used the breaststroke kick, which meant that they could not stay up for very long and players found it easier to score. By using the eggbeater kick, goalkeepers can raise their bodies high enough in the water to block shots at the goal for longer periods of time. This can be used in conjunction with sculling, in which the goalkeeper keeps their hands closed (with the fingers together) and moves them forwards and backwards."} {"text":"The easiest way for the goalkeeper to block shots is with their hands or arms. Longer arms help the goalkeeper reach the ball, and are thus an advantage. Sports involving quick reactions may be helpful, as these sharpen the goalkeeper's reflexes, which are a crucial skill."} {"text":"There are a variety of drills designed to improve the goalkeeper's skills."} {"text":"To start with, there are drills to help improve the goalkeeper in the water. 
These range from simple exercises (such as jumping as high out of the water as possible with two hands) to drills not specifically used in water polo but intended to improve the goalkeeper's core muscles (such as catching a ball dropped from the side into the water)."} {"text":"As the goalkeeper must be able to swim quickly for short distances, they can practice exercises such as swimming quickly and then immediately stopping without touching the sides. It is important for the goalkeeper to swim both breaststroke and freestyle, the breaststroke helping with the eggbeater kick and the freestyle helping with the swimming in a match."} {"text":"Moreover, before the start of the game it is vital for the goalkeeper to stretch for 15 minutes."} {"text":"As blocking the ball is the primary role of the goalkeeper, they must have good knowledge of blocking techniques."} {"text":"As the goalkeeper has the choice of how many hands to use, they must decide which is appropriate for each save. A shot should be stopped with two hands if it is weak or close to the goalkeeper's body, and normally with one hand in other circumstances, because one-handed saves can reach the ball faster."} {"text":"Most shots are saved with the arms or the hands, but goalkeepers have been known to save shots with their faces and even feet."} {"text":"Hole set is an offensive position in the game of water polo. It can be referred to as either just the \"hole\" position or the \"set\". Because this player is typically positioned on the two meter (2M) marker, in front of the center of the opposing team's goal, the position can also be called the two-meter or simply 2M. Other names for this position include center forward, due to its similarity to the corresponding basketball position, as well as the pit-man. 
The defensive player guarding the hole set can be called the hole-D (where D stands for defense), the two-meter defender, or 2M-D."} {"text":"Track and field racers have a variety of options in how they can choose to pace their races."} {"text":"Even-splitting is a strategy in which the racer attempts to hit the same split in every lap of the race. The racer tries to run an \"even\" pace during the entire race. In long-distance events, this can often be an optimal strategy."} {"text":"Positive-splitting is a racing strategy that involves completing the first half of a race faster than the second half. Typically, the runner goes out at a pace faster than he or she can maintain for the entire race, leading to a slower end of the race. Positive-splitting can be employed as a tactic, or can simply be a byproduct of an overambitious early pace."} {"text":"Negative-splitting is a racing strategy that involves completing the second half of a race faster than the first half. The racer runs slowly in the beginning, and gradually runs faster as the race progresses. This is typically seen as a conservative racing strategy, but in distance events, many world records have been run with a slight negative split."} {"text":"Sit-and-kick, a strategy related to negative-splitting, is one in which the racer typically sits in the pack, not taking the lead or going very fast, and then attempts to \"kick\", or sprint past the other racers, during the last laps of the race. The sit-and-kick can be employed by individual runners or, in the case of many championship races, the entire field may attempt to sit-and-kick, leading to drastically slow times for the first few laps and faster-than-normal times for the last laps."} {"text":"While all of the above strategies can be employed, certain pacing strategies, for physiological reasons, will yield the fastest times."} {"text":"For the 100m and 200m events, pacing is not a factor. 
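The split strategies defined above reduce to a simple comparison of the two half-race times, which can be sketched as a small classifier. This is a minimal illustration; the half-race times used below are hypothetical examples, not actual race data.

```python
# Minimal sketch of the pacing terminology above. A "positive split"
# means the first half is run faster (in less time) than the second;
# a "negative split" is the reverse. Times below are hypothetical.

def split_type(first_half_secs: float, second_half_secs: float) -> str:
    """Classify a race by comparing the times of its two halves."""
    if first_half_secs < second_half_secs:
        return "positive split"
    if first_half_secs > second_half_secs:
        return "negative split"
    return "even split"

print(split_type(50.0, 52.0))  # fast start, slower finish
print(split_type(65.0, 63.0))  # conservative start, faster finish
```

An even split is simply the boundary case where both halves take the same time.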
Because the race is so short, racers simply run at their top speed for the duration of the race. However, the 400m at the elite level is almost uniformly run with a positive-split strategy: runners run the first 200m faster than the final 200m."} {"text":"In the 800 meters, the fastest times have almost always been achieved with a positive-split strategy. A study of 26 world-record 800m races from 1912 to 1997 showed that in 92% of the fastest 800m races, the first half of the race was run faster than the second half. This implies that the optimal strategy for the 800m is to positive-split."} {"text":"In the 400 meters, the strategy proven to be the most effective is starting off at a 70-75% pace and working up to 100%, known as the threshold pace strategy. Examples of this race plan are Michael Johnson\u2019s former world record of 43.18 in 1999 and Cathy Freeman\u2019s Olympic gold medal run in 2000; both runners benefited from this type of pacing strategy."} {"text":"In the 5000 meters and 10000 meters, the optimal strategy shifts to even-splitting. An analysis of world-record performances in these events shows a clear pattern: relatively even pacing throughout most of the race, with a slight increase in speed in the last 1000m of both the 5000m and 10000m. While one could interpret this concluding increase in speed as evidence of a sit-and-kick strategy, the increase in speed observed in these performances is not nearly as dramatic or pronounced as what is typically observed in a sit-and-kick race."} {"text":"Although \"swindling\" in general usage is synonymous with cheating or fraud, in chess the term does not imply that the swindler has done anything unethical or unsportsmanlike. 
There is nonetheless a faint stigma attached to swindles, since players feel that one who has outplayed one's opponent for almost the entire game \"is 'morally' entitled to victory\" and a swindle is thus regarded as \"rob[bing] the opponent of a well-earned victory\". However, the best swindles can be quite artistic, and some are widely known."} {"text":"There are ways that a player can maximize their chances of pulling off a swindle, including playing actively. Although swindles can be effected in many different ways, themes such as stalemate, perpetual check, and surprise mating attacks are often seen."} {"text":"The ability to swindle one's way out of a lost position is a useful skill for any chess player, and according to Graham Burgess is \"a major facet of practical chess\". Frank Marshall may be the only top player to have become well known as a frequent swindler. Marshall was proud of his reputation for swindles, and in 1914 wrote a book entitled \"Marshall's Chess Swindles\"."} {"text":"Frank Marshall, a gifted player who was one of the world's strongest in the early 20th century, has been called \"the most renowned of swindlers\". To Marshall, the term 'swindle' \"meant a particularly imaginative method of rescuing a difficult, if not lost, position.\" The phrase \"Marshall swindle\" was coined because Marshall \"was famed for extricating himself from hopeless positions by such means\"."} {"text":"Perhaps the most celebrated of his many \"Marshall swindles\" occurred in Marshall\u2013Marco, Monte Carlo 1904. Marshall wrote of the position in the leftmost diagram, \"White's position has become desperate, as the hostile b-pawn must queen.\" White could play 45.Rxc7+, but Black would simply respond 45...Kb8, winning. Many players would resign here, but Marshall saw an opportunity for \"a last 'swindle'\". He continued 45.c6. Now Black could have played 45...bxc6!, but disdained it because White could then play 46.Rxc7+ Kb8 47.Rb7+! 
Kxb7 48.Nc5+, winning Black's rook and temporarily stopping Black's pawn from advancing."} {"text":"Black should have played this line, however, because he still wins after 48...Ka7 49.Nxa4: while there are many ways to win from the resulting position, the quickest would be to play Bd4, trapping the knight, and after 50.Kf3 Ka6 51.Ke4 Ka5 52.Kxd4 Kxa4 53.Kc3 Ka3, Black's pawn queens after all. Instead, Marco played 45...Be5, mistakenly thinking that this would put an end to Marshall's tricks. The game continued 46.cxb7+ Kb8 (46...Kxb7? 47.Nc5+ wins the rook) 47.Nc5! Ra2+ 48.Kh3 b2 49.Re7! Ka7 (not 49...b1=Q 50.Re8+ Ka7 51.Ra8+ Kb6 52.b8=Q+, winning Black's newly created queen) 50.Re8 c6 51.Ra8+ Kb6 52.Rxa2! b1=Q (rightmost diagram)."} {"text":"White's resources finally seem to be at an end, but now Marshall reveals his deeply hidden point: 53.b8=Q+ Bxb8 54.Rb2+! Qxb2 55.Na4+ Kb5 56.Nxb2. Marshall has caught Black's pawn after all, and is now a pawn up in a position where it is Black who is fighting for a draw. Fred Reinfeld and Irving Chernev commented, \"Marshall's manner of extricating himself from his difficulties is reminiscent of an end-game by Rinck or Troitsky!\" Marshall eventually won the game after a further mistake by Black."} {"text":"International Master (IM) Simon Webb in his book \"Chess for Tigers\" identified five \"secrets of swindling\":"} {"text":"Grandmaster (GM) John Nunn adds a caveat: when in a bad position, one must decide between two strategies, which he calls \"grim defence\" and \"create confusion\". \"Grim defence\" involves finding some way to hang on, often by liquidating to an ending. \"Create confusion\" entails trying to \"gain the initiative, even at material cost, hoping to stir up complications and cause the opponent to go wrong.\" Nunn cautions that, \"If you decide to go for 'create confusion' then you should press the panic button sufficiently early to give yourself a reasonable chance of success. 
However, you should be sure that your position is really bad enough to warrant such drastic measures. In my experience, it is far more common to panic too early than too late.\""} {"text":"Negi also notes that the prospective swindler should \"keep enough options on the board so your opponent has a chance to see ghosts and lose his bearings. The closer he gets to winning, the less he wants to work \u2013 exploit that state of mind!\""} {"text":"Such play-acting can be carried to extremes. GM Nikolai Krogius writes that Najdorf, in his game against Gligori\u0107 at the 1952 Helsinki Olympiad, \"left a pawn \"en prise\" in time trouble, and then desperately clutched his head and reached out as if wanting to take the pawn back. ... Gligori\u0107 took the pawn, and soon thereafter lost the game. It transpired that Najdorf had staged the whole pantomime to blunt his opponent's watchfulness. This can hardly be called ethical.\""} {"text":"Swindles can occur in myriad different ways, but as illustrated below certain themes are often seen."} {"text":"One classic way of saving a draw in a losing position is by stalemate. Almost every master has at some point spoiled a won game by falling into a stalemate trap. The defender often achieves the stalemate by sacrificing all of his or her remaining mobile pieces, with check, in such a way that they must be captured, leaving the defender with only a king (and sometimes also pawns and\/or pieces) with no legal moves."} {"text":"Another well-known Marshall swindle is Marshall\u2013MacClure, New York 1923 (diagram at above left). Marshall, a rook down, played 1.Rh6! Rxh6 2.h8=Q+! Rxh8 3.b5! A very unusual position has arisen: Black is now up \"two\" rooks with the move, but the only way to avoid stalemate is 3...Rd7 4.cxd7 (threatening 5.d8=Q+, forcing stalemate) 4...c5?? 5.bxc6 Kb8 6.Kxb6, when White even wins. Decades later, someone pointed out an alternative draw with 1.Rg6! 
fxg6 2.h8=Q+ Rxh8 3.b5 or 1...Re8 2.Rg8 Rb8 3.b5."} {"text":"In Chigorin\u2013Schlechter, Ostend 1905 (diagram at above right), a game between two of the leading players of the day, an unusual combination of stalemate and \"zugzwang\" enabled the great Schlechter to rescue a desperate position. Schlechter, in extreme time pressure, played 44...Qc7+! Chigorin, thinking Schlechter had blundered, responded 45.Qb6+??, seemingly forcing the trade of queens. Schlechter's 45...Ka8! forced an immediate draw: 46.Qxc7 is stalemate, and 46.Ka6 Qc8+! 47.Ka5 allows a draw with either 47...Qc7! (\"zugzwang\"), when White cannot make progress, or 47...Qc3+! 48.Ka6 Qc8+! with a perpetual check."} {"text":"In Kasparov\u2013McDonald, simultaneous exhibition, Great Britain 1986 (left-most diagram), the world champion had a winning advantage, which he could have converted with 54.Qd6+ Kg7 55.c6! Instead, he played 54.Bxe4??, allowing 54...Rxg3+! 55.Kxg3 Qe5+!, since the forced 56.Qxe5 gives stalemate (right-most diagram). Note that 55.Kh4 (instead of 55.Kxg3), with the strong threat of 56.Qh7#, would have been met by 55...Rg4+! 56.Kxg4 (forced) Qd7+! 57.Qxd7 with a different stalemate."} {"text":"For further examples of swindles based on stalemate, see Stalemate; Desperado (chess); Congdon\u2013Delmar, New York 1880; Post\u2013Nimzowitsch, Barmen Masters 1905; Schlechter\u2013Wolf, Nuremberg 1906; Znosko-Borovsky\u2013Salwe, Ostend B 1907; Walter\u2013Nagy, Gy\u0151r 1924; Janowski\u2013Gr\u00fcnfeld, Marienbad 1925; Heinicke\u2013Rellstab, German Championship 1939; Bernstein\u2013Smyslov, Groningen 1946; Horowitz\u2013Pavey, U.S. Championship 1951; Fichtl\u2013F. Blatny, Czechoslovakia 1956; Portisch\u2013Lengyel, M\u00e1laga 1964; Matulovi\u0107\u2013Suttles, Palma de Mallorca Interzonal 1970; Fuller\u2013Basin, Michigan Open 1992; Boyd\u2013Glimbrant, Alicante 1992; and Pein\u2013de Firmian, Bermuda 1995."} {"text":"This stunning reversal had a major impact on the match. 
Staunton had won seven and drawn one of the first eight games, and believed that St. Amant would have resigned the match if he had lost. Instead, St. Amant was able to continue the match for three more weeks, winning another five games, before finally succumbing."} {"text":"Draw by perpetual check is another often-seen way of swindling a draw from a lost position."} {"text":"The position at left is from Ivanchuk\u2013Moiseenko, Russian Team Championship, Sochi 2005. Black is down two pawns against the world's sixth highest-rated player. Worse, Ivanchuk's pieces dominate the board. IM Malcolm Pein notes that after almost any sensible move, for example 30.Qc2, Black would be completely lost. White would then threaten 31.Rd6, pinning the knight to the queen, and neither 30...Nf6 31.Bxf6 gxf6 32.Qxh7# nor 30...Nc5 31.Ree7 is an adequate response. 30.Qc2 would also guard against a possible ...Qd1+, the significance of which becomes apparent after seeing the game continuation."} {"text":"Moiseenko met Ivanchuk's 30.Rb7?? with 30...Nf8!! This not only threatens 31...Nxe6, but also enables Black to meet 31.Rxb8 with 31...Qd1+ 32.Kh2 Qh5+ 33.Kg1 Qd1+, drawing by perpetual check. The perpetual check is based on White's weak back rank combined with his slightly compromised king position (no h-pawn). Note how pieces that are well placed for attacking purposes may be misplaced for defensive purposes. White's rook on e6 was well placed when White had the initiative, but is of no use in stopping the threatened perpetual check. (Similarly, in Rhine\u2013Nagle, Black's rook on g5 was an excellent attacking piece, but was poorly placed to defend Black's back rank or stop White's passed c-pawn.)"} {"text":"White tried 31.Rh6, but could not avoid the perpetual: 31...Rxb7 32.Qxb7 Qd1+ 33.Kh2 Rh5+ 34.Rxh5 (34.Kg3!?, hoping for 34...Rxh6?? 35.Qxg7#, is met by 34...Rg5+!, when White must repeat moves with 35.Kh2! Rh5+, since 35.Kh3?? Qh1#, 35.Kh4?? Qg4#, and 35.Kf4?? Qg4# all lead to mate) 
34...Qxh5+ 35.Kg3 Qg5+ 36.Kf3 Qf5+ \u00bd\u2013\u00bd, since White cannot escape the perpetual check."} {"text":"David Bronstein, in his immortal losing game, valiantly but unsuccessfully tried to swindle Bogdan \u015aliwa with a surprise mating attack."} {"text":"Sometimes a player who is behind in material may achieve a draw by exchanging off, or sacrificing for, all of the opponent's pawns, leaving a position (for example, two knights versus lone king) where the superior side still has a material advantage but cannot force checkmate. (Properly speaking, this may or may not be a \"swindle\", depending on whether the superior side missed a clear win earlier.) The inferior side is also sometimes able to achieve an ending that is theoretically still lost, but where the win is difficult and may be beyond the opponent's abilities\u2014for example, bishop and knight versus lone king; queen versus rook; two knights versus pawn, which is sometimes a win for the knights; or two bishops versus knight."} {"text":"White drew similarly in Parr\u2013Farrand, England 1971. From the diagram at above right, play continued 1.Rd5 Bf6 2.Rxf5! On 2...gxf5 3.Kf4, White's king will capture Black's f-pawn, then retreat to h1, reaching a drawn bishop and wrong rook pawn ending. Instead, Black tried 2...Ke7 3.Rb5 Ke6, \"but he soon had to admit that the draw was inevitable.\""} {"text":"Schmidt\u2013Schaefer, Rheinhessen 1997 (diagram at above left), is another straightforward example. Black has connected passed pawns, but if White can sacrifice his knights for them he can reach the drawn two knights versus lone king ending. Thus, 50.Nfe4! threatened to capture both pawns with the knights. 50...dxe4 51.Nxe4 Kd5 52.Nxc5! would also achieve that goal. Black tried 50...d4, but agreed to a draw after 51.Nxc5+ Kd6 52.Nb5+! Kxc5 53.Nxd4! 
However, Chandler\u2013Susan Polgar, Biel 1987 (diagram at above right), is a \"bona fide\" swindle. Polgar has just played 53...Nh6!? (from g8), transparently playing for a rook pawn and wrong-colored bishop draw. GM Chandler obligingly played 54.gxh6+??, expecting 54...Kxh6 55.Kf6! when he will win because Black cannot get her king to h8. Polgar, however, responded 54...Kh8! with the standard draw. White's possession of a second h-pawn is immaterial, and the game concluded 55.Bd5 Kh7 56.Kf7 Kh8! \u00bd\u2013\u00bd"} {"text":"The position above left, the conclusion of an endgame study by the American master Frederick Rhine, provides a more complicated example of forcing a draw by material insufficiency. White draws with 5.Nxc4+! Nxc4 (if 5...Kc6, then 6.Nxb6 Kxb6 7.Rxb2+, and White's rook draws easily against Black's knight and bishop) 6.Rxb6+. Now Black's best try is 6...Kd5! or 6...Ke7!, when the endgame of rook against two knights and a bishop is a well-established theoretical draw. The more natural 6...Nxb6+ leads to a surprising draw after 7.Kd8! (diagram above), when any bishop move stalemates White, and any other move allows 8.Kxe8, when the two knights cannot force checkmate."} {"text":"Building a fortress is another method of saving an otherwise lost position. It is often seen in the endgame, for example in endings with bishops of opposite colors (see above)."} {"text":"In Ivanov\u2013Dolmatov, Novosibirsk 1976 (left-most diagram), Black, an exchange down in the endgame, seemingly had a hopeless position. In desperation, he tried 1...e3! White replied 2.Rxb4?? Amatzia Avni wrote, \"Amazingly, this greedy collecting of further material gains throws away the win. After 2.fxe3 Black would probably resign.\" There followed 2...e2 3.Re4 Bxf5 4.gxf5 h4!! (right-most diagram). Despite White's extra rook, the position is drawn: his rook must stay on the e-file to stop Black's pawn from queening, while his king is trapped in the corner. 5.Rg4+ can be met by 5...Kf7 (not 5...Kh6?? 
6.Rxh4+) 6.Re4 and now 6...h3, or any king move, holds the draw."} {"text":"\"Zugzwang\", though most often used by the superior side, is sometimes available as a swindling technique to the inferior side. Chigorin\u2013Schlechter above is one such instance."} {"text":"In the position at left, the natural 1...Kb4 would be a fatal blunder, turning a win into a loss after 2.Kd5!, reaching the noted \"tr\u00e9buchet\" position (diagram at right), where whoever is on move loses, a situation described as \"full-point mutual zugzwang.\" Instead, 1...Kb3! 2.Kd5 Kb4 wins."} {"text":"In the movie Tower Heist, Arthur Shaw (played by Alan Alda) mentions \"the Marshall Swindle\" in a scene where Shaw is playing chess alone, and the main character, Kovaks (played by Ben Stiller), and others are asking where their money is. Shaw specifically mentions the 1912 Masters Tournament game between Levitsky and Marshall and the swindle in that game, which he describes as \"the greatest move in the history of chess\". Kovaks later names this move as Shaw is arrested for fraud at the end of the film."} {"text":"The exchange in chess refers to a situation in which one player exchanges a minor piece (i.e. a bishop or knight) for a rook. The side that wins the rook is said to have \"won the exchange\", while the other player has \"lost the exchange\", since the rook is usually more valuable. Alternatively, the side that has won the rook is \"up the exchange\", and the other player is \"down the exchange\". The opposing captures often happen on consecutive moves, although this is not strictly necessary. It is generally detrimental to lose the exchange, although occasionally one may find reason to purposely do so; the result is an \"exchange sacrifice\" (see below). 
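The arithmetic behind "winning the exchange" can be made concrete with the conventional piece values (pawn 1, knight and bishop 3, rook 5, queen 9). The following snippet is my own illustration, not anything from the sources cited here:

```python
# Conventional textbook piece values (kings are never counted).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(pieces):
    """Sum the conventional values of one side's pieces."""
    return sum(PIECE_VALUES[p] for p in pieces)

# Both sides start with identical material.
white = ["R", "R", "B", "B", "N", "N", "Q"] + ["P"] * 8
black = list(white)

# White "wins the exchange": gives up a knight, captures a rook.
white.remove("N")
black.remove("R")

print(material(white) - material(black))  # prints 2
```

On this scale, winning the exchange is worth about two pawns, which is consistent with the endgame assessments discussed in the text (an extra pawn or two for the minor piece narrows or closes the gap).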
The \"minor exchange\" is an uncommon term for the exchange of a bishop and knight."} {"text":"\"The exchange\" differs from the more general \"exchange\" or \"an exchange\", which refers to the loss and subsequent gain of arbitrary pieces, for example to \"exchange queens\" would mean that each side's queen is captured."} {"text":"In the middlegame, the advantage of an exchange is usually enough to win the game if the side with the rook has one or more pawns. In an endgame without pawns, the advantage of the exchange is normally not enough to win (see pawnless chess endgame). The most common exceptions when there are no pawns are (1) a rook versus a bishop in which the defending king is trapped in a corner of the same color as his bishop, (2) a knight separated from its king that may be cornered and lost, and (3) a king and knight that are poorly placed."} {"text":"In the endgame of a rook and a pawn versus a knight and a pawn, if the pawns are passed the rook is much stronger and should win. If the pawns are not passed, the side with the knight has good drawing chances if its pieces are well placed."} {"text":"In the endgame of a rook and a pawn versus a bishop and a pawn, if the pawns are on the same file, the bishop has good chances to draw if the pawns are blocked and the opposing pawn is on a square the bishop can attack; otherwise the rook usually wins. If the pawns are passed, the rook normally wins. If the pawns are not passed and are on adjacent files, the position is difficult to assess, but the bishop may be able to draw."} {"text":"In an endgame with more pawns on the board (i.e. a rook and pawns versus a minor piece with the same number of pawns), the rook usually wins. This position is typical. The superior side should bear a few standard guidelines in mind."} {"text":"If the minor piece has an extra pawn (i.e. one pawn for the exchange), the rook should win, but with difficulty. 
If the minor piece has two extra pawns, the endgame should be a draw."} {"text":"In this 2004 game between Ivan Sokolov and World Champion Vladimir Kramnik, White gave up the exchange for a pawn in order to create two strong connected passed pawns. The game continued:"} {"text":"and White won on move 41."} {"text":"Tigran Petrosian, the World Champion from 1963 to 1969, was well known for his especially creative use of this device. When asked what his favourite piece was, he once responded (only half jokingly), \"The rook, because I can sacrifice it for minor pieces!\" In the game Reshevsky versus Petrosian at the 1953 Candidates Tournament in Zurich, he sacrificed the exchange on move 25, only for his opponent to sacrifice it in return on move 30. This game is perhaps the most famous and most frequently taught example of the exchange sacrifice."} {"text":"There are no open files in this position for the rooks to exploit. Black sacrificed the exchange with"} {"text":"With the rook not on e7, the black knight will be able to get to a strong outpost on d5. From there the knight will be attacking the pawn on c3, and if the white bishop on b2 does not move to d2, it will be of little use. In addition, it will be practically impossible to break Black's defense on the white squares. The next few moves were:"} {"text":"The game was drawn on move 41."} {"text":"The tenth game of the 1966 World Chess Championship, between defending champion Tigran Petrosian and challenger Boris Spassky, contained two exchange sacrifices by White. Black had just moved"} {"text":"White had no choice: 21.Rf2 Rxf4 22.Rxf4 Qg5+, etc. The game continued:"} {"text":"Black is helpless, despite being two exchanges ahead. White won back an exchange on move 29. On move 30 White forced the win of the other rook and the exchange of queens. Black resigned because the position was a winning endgame for White (two knights and five pawns versus one knight and four pawns). 
Petrosian won the match by one game to retain his title."} {"text":"In a 1994 game between World Champion Garry Kasparov and Alexei Shirov, White sacrificed a pure exchange (rook for a bishop) with the move 17. Rxb7!!. As compensation for the sacrifice, Black became weak on the white squares, which were dominated by White's bishop. The exchange sacrifice also deprived Black of the bishop pair, and his remaining bishop was a bad bishop. During the game, many spectating grandmasters were sceptical about whether White's compensation was enough. Black returned the exchange on move 28, making the material equal, but White had a strong initiative. Black missed a better 28th move, after which White could have forced a draw but would have had no clear advantage. White won the game on move 38."} {"text":"The minor exchange refers to the capture of the opponent's bishop for the player's knight (or, more recently, the exchange of the stronger minor piece for the weaker one). Bobby Fischer used the term, but it is rarely used."} {"text":"In most chess positions, a bishop is worth slightly more than a knight because of its longer range of movement. As a chess game progresses, pawns tend to get traded, removing support points from the knight and opening up lines for the bishop. This generally leads to the bishop's advantage increasing over time. In general, bishops have relatively higher value in an open position, and knights have relatively higher value in a closed position."} {"text":"Traditional chess theory espoused by masters such as Wilhelm Steinitz and Siegbert Tarrasch puts more value on the bishop than the knight. In contrast, the hypermodern school favored the knight over the bishop. 
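The bishop's "longer range of movement" mentioned above can be quantified with a simple mobility count on an empty board. This is my own illustrative sketch (using 0-indexed file/rank coordinates, an assumption made for the example):

```python
def on_board(f, r):
    """True if file f and rank r (both 0-7) are on the board."""
    return 0 <= f < 8 and 0 <= r < 8

def bishop_moves(f, r):
    """Squares a bishop reaches from (f, r) on an otherwise empty board."""
    moves = []
    for df, dr in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nf, nr = f + df, r + dr
        while on_board(nf, nr):          # slide until the edge
            moves.append((nf, nr))
            nf, nr = nf + df, nr + dr
    return moves

def knight_moves(f, r):
    """Squares a knight reaches from (f, r)."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(f + df, r + dr) for df, dr in jumps if on_board(f + df, r + dr)]

# From d4 (file 3, rank 3) the bishop reaches 13 squares, the knight 8.
print(len(bishop_moves(3, 3)), len(knight_moves(3, 3)))  # prints 13 8
```

Of course raw mobility on an empty board is only a first approximation; as the surrounding text notes, the real comparison depends on pawn structure and whether the position is open or closed.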
Modern theory is that it depends on the position, but that there are more positions where the bishop is better than where the knight is better."} {"text":"Occasions when a knight can be worth more than a bishop are frequent, so this exchange is not necessarily made at every opportunity."} {"text":"In chess, a backward pawn is a pawn that is behind all pawns of the same color on the adjacent files and cannot be safely advanced. In the diagram, the black pawn on the c6-square is backward."} {"text":"Backward pawns are usually a positional disadvantage because they are unable to be defended by pawns. Also, the opponent can place a piece, usually a knight, on the hole in front of the pawn without any risk of a pawn driving it away. The backward pawn also prevents its owner's rooks and queen on the same file from attacking the piece placed on the hole."} {"text":"If the backward pawn is on a half-open file, as in this case, the disadvantage is even greater, as the pawn can be attacked more easily by an opponent's rook or queen on the c-file. Pieces can become weak when they are devoted to protecting a backward pawn, since their obligation to defend the pawn keeps them from being deployed for other uses."} {"text":"Modern opening theory features several openings in which one of the players deliberately incurs a backward pawn in exchange for some other advantage such as the initiative or better piece activity. An excellent example is the Sveshnikov Variation of the Sicilian Defence."} {"text":"After the moves 1. e4 c5 2. Nf3 Nc6 3. d4 cxd4 4. Nxd4 Nf6 (or 4...e5 5.Nb5 d6 \u2013 the Kalashnikov Variation) 5. Nc3 e5!? 6. Ndb5 d6 (see diagram), Black has a backward pawn on d6, but White now has to endure a displacement of his knights and an undermining of his queenside after 7. Bg5 a6 8. Na3 b5 9. Bxf6 gxf6 10. Nd5 (dodging the threatened pawn-fork of the knights) 10... f5! (or 10...Bg7 11.c3 [allowing the knight on a3 to return to the center via Na3\u2013c2\u2013e3] 11...f5!) 11. 
c3 Bg7, and so on."} {"text":"In chess, an isolated pawn is a pawn that has no friendly pawn on an adjacent file. Isolated pawns are usually a weakness because they cannot be protected by other pawns. The square in front of the pawn may become a good outpost or otherwise a good square for the opponent to anchor pieces. Isolated pawns most often become weaker in the endgame, as there are fewer pieces available to protect the pawn."} {"text":"Isolated pawns can, however, provide improved development and associated opportunities for attack that offset or even outweigh their weaknesses. The files adjacent to the isolated pawn are either open or half-open, providing two lanes of attack for the rooks and the queen. The absence of pawns adjacent to the isolated pawn may also mobilize the player's knights and bishops."} {"text":"An isolated pawn on the d-file is called an isolated queen pawn or simply an isolani. In addition to the open or half-open c- and e-files, the isolated queen pawn can provide a good outpost on the c- and e-file squares diagonally forward of the pawn, which are especially favorable for the player's knights. The isolated queen pawn position favors an attack, freeing both the light- and dark-squared bishops due to the absence of friendly pawns on the c- and e-files. Isolated queen pawns suffer, however, from the same weaknesses as other isolated pawns."} {"text":"Many \"textbook\" openings lead to isolated pawns, such as the French Defence, Nimzo-Indian Defence, Caro\u2013Kann and Queen's Gambit."} {"text":"In the endgame, isolated pawns are a weakness in pawn structure because they cannot be defended by other pawns as connected pawns can. In this diagram, the white pawn on the e4-square and the black pawn on a7 are isolated."} {"text":"Isolated pawns are weak for two reasons. First, the pieces attacking them usually have more flexibility than those defending them. 
In other words, the attacking pieces enjoy greater freedom to make other threats (win pieces, checkmate, etc.), while the defending pieces are restricted to the defense of the pawn. This is because a piece that is attacking a pawn can give up the attack to do something else, whereas the defending piece must stay rooted to the spot until the attacking piece has moved. The defending piece is thus said to be \"tied down\" to the pawn."} {"text":"The second reason is that the square immediately in front of the isolated pawn is weak, since it is immune to attack by a pawn (often providing an excellent outpost for a knight), and the enemy piece located in this square cannot be attacked by rooks because the isolated pawn blocks the file it is on. Thus an isolated pawn provides a typical example of what Wilhelm Steinitz called \"weak squares\"."} {"text":"An isolated queen pawn (IQP), called an \"isolani\", is often a special case. An isolated queen pawn is one on the queen's file (the d-file). The weakness of such a pawn's isolation arises from two factors associated with the absence of both neighboring pawns."} {"text":"The presence of open or half-open king (e-) and queen's bishop (c-) files, as well as the outposts (for White) at e5 and c5, nevertheless gives the player with the IQP favourable attacking chances in the middlegame. Once the game reaches the endgame, the pawn's isolation becomes more of a weakness than a strength. Therefore, the player with the IQP must take advantage of the temporary strength before an endgame is reached. It has been proposed that with four minor pieces each, an IQP is an advantage; with three minor pieces each, it is about even; and with two or fewer minor pieces each, it is a disadvantage. 
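The isolated-pawn test described above (no friendly pawn on an adjacent file) is simple to express in code. A minimal sketch of mine, assuming pawns are represented just by their file letters, which is a convention invented for this example:

```python
FILES = "abcdefgh"

def isolated_pawns(pawn_files):
    """Return the files of one side's isolated pawns.

    A pawn is isolated when no friendly pawn stands on an adjacent file.
    """
    present = set(pawn_files)
    isolated = []
    for f in pawn_files:
        i = FILES.index(f)
        neighbors = set()
        if i > 0:
            neighbors.add(FILES[i - 1])   # file to the left
        if i < 7:
            neighbors.add(FILES[i + 1])   # file to the right
        if not (neighbors & present):
            isolated.append(f)
    return isolated

# White pawns on the a-, b-, d-, g- and h-files: the d-pawn is an isolani.
print(isolated_pawns(["a", "b", "d", "g", "h"]))  # prints ['d']
```

Ranks are irrelevant to the definition, which is why the sketch only needs files; a fuller model would keep ranks as well for the related backward-pawn test.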
Sacrificing the pawn is a common theme for both White and Black."} {"text":"The diagram shows some of the optimum piece placements for both sides in an IQP position."} {"text":"Making use of this arrangement of pieces, White may plan either to advance thematically with d4\u2013d5, opening the position and dissolving the IQP, or to use the greater activity of his pieces to launch an attack, probably making use of the e5-square for a knight and possibly lifting a rook to the kingside. Typically there may also be a sacrifice on e6 or f7. It is important that White try to use the IQP to support an attack or dissolve it before the endgame, as the pawn would then become weak. The advance d4\u2013d5, or a tactic forcing Black to capture a piece on e5 and then recapturing with the d4-pawn, would be typical ways of achieving this."} {"text":"The exchange of a rook for a bishop or knight is an \"uneven exchange\" because a rook is generally more valuable than a bishop or knight. A \"minor exchange\" is a less commonly used term which refers to the exchange of a bishop for a knight."} {"text":"A \"forced exchange\" is an exchange in a position where one of the players is required to initiate or undergo an exchange, either because no alternative play is allowed by the chess rules or because the consequence of not making the exchange would be unacceptably detrimental to that player's game. Many exchanges can be offered, but they are not forced. In such cases, the player presented with the possibility of an exchange may decide to make the initial capture, may decline to make the initial capture, or may even move to avoid the exchange. The player can weigh the advantages and disadvantages of each move to decide. For a prospective uneven exchange, the values of the pieces are often the deciding factor."} {"text":"Chess positions are often set up where a player's piece on a certain square is defended by one or more of his other pieces. 
This typically means that if an opponent's piece captures the defended piece, the capturing piece would be subject to recapture by a defending piece (\"defender\"). An opponent's piece in a position to capture a given piece could be considered an attacking piece (\"attacker\"). Positions could develop where a player's piece on a square has one or more attackers and one or more defenders. This is a common way in which exchanges could occur, although there are other ways also."} {"text":"In such positions, a player with the attacking piece(s) may decide whether it is worthwhile to initiate a capture likely to result in a recapture; the decision usually turns on the values of the pieces to be taken in the ensuing exchange. Pinned pieces often cannot be counted on as attackers or defenders."} {"text":"In chess, a sacrifice is the deliberate giving up of a piece by a player, allowing or forcing an opponent to capture the piece or exchange it for a lower-valued piece."} {"text":"In a desperado situation, a trapped piece which would inevitably be lost can sometimes be exchanged for another piece, even one of lower value, in order to minimize the net material loss for the player whose piece is trapped."} {"text":"Exchanges of pieces are commonly involved in chess tactics and strategy."} {"text":"Exchanges are often made to try to improve a position from a strategic point of view. Since positional advantages are often smaller than those due to differences in material value, exchanges to gain a positional advantage are commonly even exchanges in terms of material."} {"text":"If a player gains material superiority in a game, a strategy can involve making even exchanges to eliminate other pieces and so make the superiority more decisive. 
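The attacker/defender value reasoning described above is essentially what engine programmers call static exchange evaluation. Here is a deliberately simplified sketch of mine (it ignores pins, x-rays, and discovered attacks, and assumes the cheapest attacker always captures next; either side may decline to capture):

```python
def exchange_gain(target, attackers, defenders):
    """Best net gain for the side to move on a square (0 = don't capture).

    target    -- value of the piece currently on the square
    attackers -- values of the side-to-move's pieces bearing on the square
    defenders -- values of the opponent's pieces bearing on the square
    """
    if not attackers:
        return 0                     # nothing can capture: stand pat
    piece = min(attackers)           # capture with the cheapest attacker
    rest = list(attackers)
    rest.remove(piece)
    # The opponent now faces the mirror-image decision: recapture our
    # piece or not. We never accept a result worse than not capturing.
    reply = exchange_gain(piece, defenders, rest)
    return max(0, target - reply)

# A knight (3) attacked by a knight and a rook, defended by one rook:
# capturing is safe, because the defending rook dare not recapture.
print(exchange_gain(3, [3, 5], [5]))  # prints 3

# A knight (3) defended by a pawn (1), attacked only by a rook (5):
# RxN PxR loses material, so the best choice is not to capture at all.
print(exchange_gain(3, [5], [1]))  # prints 0
```

Real engines refine this with batteries and pin detection, which is exactly the caveat the text raises about pinned pieces not being reliable attackers or defenders.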
The opponent with less material may try to avoid exchanges, but then the player with more material may try to force exchanges anyway."} {"text":"Strong players commonly play a materially even game with each other, often clearing out their pieces with even exchanges to transition from the middlegame to the endgame."} {"text":"An exchange variation is a type of opening in which there is an early, voluntary exchange of pawns and\/or other pieces."} {"text":"An open file in chess is a file with no pawns of either color on it. In the diagram, the e-file is an open file. An open file can provide a line of attack for a rook or queen. Having rooks or queens on open files or half-open files is considered advantageous, as it allows a player to attack more easily, since a rook or queen can move down the file to penetrate the opponent's position."} {"text":"A common strategic objective for a rook or queen on an open file is to reach its seventh or eighth rank (the opponent's second or first rank). Controlling the seventh rank is generally worth at least a pawn, as it threatens all the opponent's yet-unmoved pawns to some degree. Controlling the eighth rank is likely to force the opposing king into a more exposed position and puts pressure on any remaining pieces, or, if the rank is already clear, allows unobstructed movement behind the enemy forces. Aron Nimzowitsch first recognized the power of a rook on an open file, writing in his famous book \"My System\" that the main objective of a rook or queen on an open file is \"the eventual occupation of the 7th or 8th rank\"."} {"text":"Many games are decided based on this strategy. In the game Anand\u2013Ivanchuk, Amber 2001, Anand sacrificed a pawn to open the d-file. White then used the open file to deploy his rooks to the seventh and eighth ranks and win the game by exploiting the weakness of Black's a-pawn. 
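The open/half-open/closed distinction drawn above reduces to a simple classification over the pawns on each file. A small sketch of mine (the `(file, rank)` pawn representation is an assumption made for this example):

```python
FILES = "abcdefgh"

def classify_files(white_pawns, black_pawns):
    """Map each file to 'open', 'half-open', or 'closed'.

    Pawns are given as (file, rank) pairs; only the files matter here.
    """
    w = {f for f, _ in white_pawns}
    b = {f for f, _ in black_pawns}
    result = {}
    for f in FILES:
        if f not in w and f not in b:
            result[f] = "open"        # no pawns of either color
        elif f not in w or f not in b:
            result[f] = "half-open"   # pawns of only one color
        else:
            result[f] = "closed"
    return result

files = classify_files(
    white_pawns=[("a", 2), ("b", 2), ("d", 4)],
    black_pawns=[("a", 7), ("c", 5), ("d", 5)],
)
print(files["e"], files["b"], files["d"])  # prints open half-open closed
```

Note that a file is "half-open" from the point of view of the side with no pawn on it, which is the side that can use it for rook or queen pressure.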
White's dominance on the d-file allowed him to maneuver his rooks to aggressive posts deep within Black's defense."} {"text":"In chess, the fortress is an endgame drawing technique in which the side behind in material sets up a zone of protection that the opponent cannot penetrate. This might involve keeping the enemy king out of one's position, or a zone the enemy cannot force one out of (e.g. see the opposite-colored bishops example). An elementary fortress is a theoretically drawn position with reduced material in which a passive defense will maintain the draw."} {"text":"Fortresses pose a problem for computer chess: computers often fail to recognize fortress-type positions, continuing to claim a winning advantage even though no winning progress can be made."} {"text":"Perhaps the most common type of fortress, often seen in endgames with only a few pieces on the board, is where the defending king is able to take refuge in a corner of the board and cannot be chased away or checkmated by the superior side. These two diagrams furnish two classic examples. In both cases, Black simply shuffles his king between a8 and the available square adjacent to a8 (a7, b7, or b8, depending on the position of the white king and pawn). White has no way to dislodge Black's king, and can do no better than a draw by stalemate or some other means."} {"text":"Note that the bishop and wrong rook pawn ending (i.e. where the pawn is a rook pawn whose promotion square is the color opposite to that of the bishop) in the diagram is a draw even if the pawn is on the seventh rank or further back on the a-file. Heading for a bishop and wrong rook pawn ending is a fairly common drawing resource available to the inferior side."} {"text":"The knight and rook pawn position in the diagram, however, is a draw only if White's pawn is already on the seventh rank, making this drawing resource available to the defender much less frequently. 
White wins if the pawn is not yet on the seventh rank and is protected by the knight from behind. With the pawn on the seventh rank, Black has a stalemate defense with his king in the corner."} {"text":"A fortress is often achieved by a sacrifice, such as a piece for a pawn. In the game between Grigory Serper and Hikaru Nakamura, in the 2004 U.S. Chess Championship, White would lose after 1.Nd1 Kc4 or 1.Nh1 Be5 or 1.Ng4 Bg7. Instead he played"} {"text":"Heading for h1. After another 10 moves the position in the following diagram was reached:"} {"text":"Black has no way of forcing White's king away from the corner, so he played"} {"text":"and after 13.h4 gxh4 the game was drawn by stalemate."} {"text":"The back-rank defense in some rook and pawn versus rook endgames is another type of fortress in a corner (see diagram). The defender perches his king on the pawn's queening square, and keeps his rook on the back rank (on the \"long side\" of the king, not, e.g., on h8 in the diagram position) to guard against horizontal checks. If 1.Rg7+ in the diagram position, Black heads into the corner with 1...Kh8! Note that this defense works \"only\" against rook pawns and knight pawns."} {"text":"In the ending of a rook versus a bishop, the defender can form a fortress in the \"safe\" corner\u2014the corner that is not of the color on which the bishop resides (see diagram). White must release the potential stalemate, but he cannot improve his position."} {"text":"In this position from de la Villa, White draws if his king does not leave the corner. It is also a draw if the bishop is on the other color, so it is not a case of the wrong bishop."} {"text":"In the diagram, Black draws by moving his rook back and forth between the d6- and f6-squares, or by moving his king when checked, staying behind the rook and next to the pawn. 
This fortress works when all of these conditions are met:"} {"text":"The white king is not able to cross the rank of the black rook, and the white queen is unable to do anything useful."} {"text":"Positions such as these (when the defending rook and king are near the pawn and the opposing king cannot attack from behind) are drawn (see diagram)."} {"text":"In this position, with Black to move, Black can reach a drawing fortress."} {"text":"and now 3...Ka3 and several other moves reach the fortress. In the actual game, Black made the weak move 3...Rd3? and lost."} {"text":"In this 1959 game between Whitaker and Ferriz, White sacrificed a rook for a knight in order to exchange a pair of pawns and reach this position, and announced that it was a draw because (1) the queen cannot mate alone, and (2) the black king and pawn cannot approach to help. However, endgame tablebase analysis shows Black to have a forced win in 19 moves starting with 50... Qc7+ (the only winning move), taking advantage of the fact that the rook is currently unprotected \u2013 again illustrating how tablebases are refining traditional endgame theory."} {"text":"From the diagram, in Salov vs. Korchnoi, Wijk aan Zee 1997, White was able to hold a draw with a rook versus a queen, even with the sides having an equal number of pawns. He kept his rook on the fifth rank, blocking in Black's king, and was careful not to lose his rook to a fork or allow a queen sacrifice for the rook in circumstances where that would win for Black. The players agreed to a draw after:"} {"text":"In endings with bishops of opposite colors (i.e. where one player has a bishop that moves on light squares, while the other player's bishop moves on dark squares), it is often possible to establish a fortress, and thus hold a draw, when one player is one, two, or occasionally even three pawns behind. A typical example is seen in the diagram. 
White, although three pawns behind, has established a drawing fortress, since Black has no way to contest White's stranglehold over the light squares. White simply keeps his bishop on the h3\u2013c8 diagonal."} {"text":"In an endgame with opposite-colored bishops, positional factors may be more important than material. In this position, Black sacrifices a pawn (leaving him three pawns down) to reach a fortress."} {"text":"After 4...Be2 5.Kh6 Bd1 6.h5 Black just waits by playing 6...Be2."} {"text":"Here are drawing fortresses with two minor pieces versus a queen. Usually the defending side will not be able to reach one of these positions."} {"text":"The bishop and knight fortress is another type of fortress in a corner. If necessary, the king can move to one of the squares adjacent to the corner, and the bishop can retreat to the corner. This gives the inferior side enough tempo moves to avoid zugzwang. For example:"} {"text":"In the two-bishop versus queen ending, the queen wins if the Lolli position is not reached, but some winning lines take up to seventy-one moves to either checkmate or win a bishop, so the fifty-move rule comes into play. From the diagram:"} {"text":"and White cannot prevent ...Bb6, which gets back to the Lolli position."} {"text":"In the two knights fortress, the knights are next to each other and their king should be between them and the attacking king. The defender must play accurately, though."} {"text":"There are several drawing positions with two knights against a queen. The best way is to have the knights adjacent to each other on a file or rank, with their king between them and the enemy king. This is not a true fortress since it is not static. The position of the knights may have to change depending on the opponent's moves. 
In this position (Lolli, 1763),"} {"text":"and Black has an ideal defensive position."} {"text":"If the knights cannot be adjacent to each other on a file or rank, the second best position is if they are next to each other diagonally (see diagram)."} {"text":"The third type of defensive formation is with the knights protecting each other, but this method is more risky ."} {"text":"Sometimes the two minor pieces can achieve a fortress against a queen even where there are pawns on the board. In"} {"text":"Ree-Hort, Wijk aan Zee 1986 (first diagram), Black had the material disadvantage of rook and bishop against a queen. Dvoretsky writes that Black would probably lose after the natural 1...Bf2+? 2.Kxf2 Rxh4 because of 3.Kg3 Rh7 4.Kf3, followed by a king march to c6, or 3.Qg7!? Rxf4+ 4.Kg3 Rg4+ 5.Kf3, threatening 6.Qf6 or 6.Qc7 . Instead, Hort forced a draw with 1... Rxh4!! 2. Kxh4 Bd4! (imprisoning White's queen) 3. Kg3 Ke7 4.Kf3 Ba1 (second diagram), and the players agreed to a draw. White's queen has no moves, all of Black's pawns are protected, and his bishop will shuttle back and forth on the squares a1, b2, c3, and d4."} {"text":"At the great New York City 1924 tournament, former world champion Emanuel Lasker was in trouble against his namesake Edward Lasker, but surprised everyone by discovering a new endgame fortress . Despite having only a knight for a rook and pawn, White draws by moving his knight back and forth between b2 and a4. Black's only real winning try is to get his king to c2. However, to do so Black has to move his king so far from the pawn that White can play Ka3\u2013b2 and Nc5xb3, when the rook versus knight ending is an easy draw. The game concluded:"} {"text":"If 99...Ke2, 100.Nc5 Kd2 101.Kb2! (101.Nxb3+?? 
Kc2 and Black wins) and 102.Nxb3 draws."} {"text":"Bishop versus rook and bishop pawn on the sixth rank."} {"text":"A bishop can make a fortress versus a rook and a bishop pawn on the sixth rank, if the bishop is on the color of the pawn's seventh-rank square and the defending king is in front of the pawn. In this position, White would win if he had gotten the king to the sixth rank ahead of the pawn. Black draws by keeping the bishop on the diagonal from \"a2\" to \"e6\", except when giving check. The bishop keeps the white king off \"e6\" and checks him if he goes to \"g6\", to drive him away. A possible continuation:"} {"text":"2.f7 is an interesting attempt, but then Black plays 2...Kg7! and then 3...Bxf7, with a draw. 2...Kg7 prevents 3.Kf6, which would win."} {"text":"The only move to draw, since the bishop must be able to check the king if it goes to g6."} {"text":"If 7.f7 Bxf7!: the pawn can safely be captured when the white king is on h6."} {"text":"Draw, because White cannot make progress."} {"text":"A \"defense perimeter\" is a drawing technique in which the side behind in material or otherwise at a disadvantage sets up a perimeter, largely or wholly composed of a pawn chain, that the opponent cannot penetrate. Unlike other forms of fortress, a defense perimeter can often be set up in the middlegame with several pieces remaining on the board."} {"text":"The above example may seem fanciful, but Black achieved a similar defense perimeter in"} {"text":"Here are some other drawing fortresses."} {"text":"This game between J\u00f3zsef Pint\u00e9r and David Bronstein demonstrates the human play of the endgame. The defender has two ideas: (1) keep the king off the edge of the board and (2) keep the knight close to the king. White reaches the semi-fortress after 71. Nb2!, which falls after 75... Kb5!. White gets to a semi-fortress again in another corner after 90. Ng2+. After 100. Ke3 White cannot hold that semi-fortress any longer, but forms one in another corner after 112. 
Nb7!. On move 117 White claimed a draw by the fifty-move rule."} {"text":"A \"positional draw\" is a concept most commonly used in endgame studies and describes an impasse other than stalemate. It usually involves the repetition of moves in which neither side can make progress or safely deviate. Typically a material advantage is balanced by a positional advantage. Fortresses and perpetual check are examples of positional draws. Sometimes they salvage a draw from a position that seems hopeless because of a material deficit. Grandmaster John Nunn describes a positional draw as a position in which one side has enough material to win normally and is not under direct attack, but some special feature of the position (often a blockade) prevents him from winning."} {"text":"A simple example is shown in the game between Lajos Portisch and Lubomir Kavalek. White could have won easily with 1.Be1 Kc6 2.b4. However, play continued 1. b4? Nb8 2. b5 Nc6+! The only way to avoid the threatened 3...Nxa5 is 3.bxc6 Kxc6, but the resultant position is a draw because the bishop is on the wrong color to be able to force the promotion (see above, wrong bishop, and wrong rook pawn)."} {"text":"Lud\u011bk Pachman cites the endgame position in the diagram as a simple example of a positional draw. White on move simply plays waiting moves with the bishop (Bb1\u2013c2\u2013d3). As for Black, \"If he is unwilling to allow the transition to the drawn ending of Rook versus Bishop, nothing else remains for him but to move his Rook at [e5] continuously up and down the [e-file].\" Pachman explains, \"The indecisive result here contradicts the principles concerning the value of the pieces and is caused by the bad position of the black pieces (pinned rook at [e4]).\""} {"text":"This position from a game between Mikhail Botvinnik and Paul Keres in the 1951 USSR Championship is drawn because the black king cannot get free and the rook must stay on the c-file. 
The players agreed to a draw four moves later ."} {"text":"The first diagram shows a position from a game between former World Champion Mikhail Tal and future World Champion Bobby Fischer from the 1962 Candidates Tournament in Cura\u00e7ao. After 41 moves Tal had the advantage but Fischer sacrificed the exchange (a rook for a knight). The game was drawn on the 58th move ."} {"text":"In this position from a game between Pal Benko and International Master Jay Bonin, White realized that the blockade cannot be broken and the game is a draw despite the extra material ."} {"text":"Can White stop the h-pawn from queening? The position looks lost for White but he does have a defence which seems to defy the rules of logic. White will calmly construct a \"fortress\" which will hide his pieces from attack. The only weakness in White's \"fortress\" is the g-pawn. This pawn has to be defended by the bishop and the only square where this can be done safely is from h6."} {"text":"White threatens to stop the advance of the h-pawn with ...Be5+;"} {"text":"building the fortress immediately does not work: 1.f6? h2 2.Kf8 h1=Q 3.Kg7 (3.Kg8 Qg2 4.Bf8 Qa8 5.Kg7 Kd7 6.Kg8 Ke6 7.Kg7 Kf5 8.Kg8 Bb3 9.Kg7 Qh1\u2212+) 3...Kd7 4.Bb4 Ke6 5.Bd2 Kf5 6.Be3 Qf3 7.Bd2 Qe2 8.Bc1 Qd1 9.Be3 Qd3 10.Bc1 Qc3\u2212+;"} {"text":"2.fxg6? This move destroys the fortress 2...fxg6 3.Be7+ Kc6\u2212+. Chess computer programs have difficulty assessing \"fortress\" positions because the normal values for the pieces do not apply."} {"text":"White can draw in another way without the need of a \"fortress\": 3.fxg6 fxg6 4.Bd8 Kd6 5.Nf6! h2 6.Ne4+ Ke6 7.Nf2 Bd5 8.Bf6 h1=Q 9.Nxh1 Bxh1=;"} {"text":"White has achieved the closing of the long diagonal a8\u2013h1. The only way to avoid this would be for Black to repeat moves. Now White can build his \"fortress\" without the worry of the queen getting to the back rank via the long diagonal."} {"text":"5. f6! h2 6. Bf8! h1=Q 7. Bh6!"} {"text":"with the idea of 8.Kf8 and 9.Kg7. 
White will be safe behind the barrier of pawns. It is a positional draw."} {"text":"In chess, luft (the German word for \"air\", sometimes also \"space\" or \"breath\") designates the space or square left by a pawn move into which a king (usually a castled one) may then retreat, especially such a space made intentionally to avoid back-rank checkmate. A move leaving such a space is often said to \"give the king some luft\". The term \"luft\", \"lufting\", or \"lufted\" may also be used (as an English participle) to refer to the movement of the relevant pawn creating luft."} {"text":"Preventing an opponent from lufting a pawn (for example by pinning it or moving a piece to the square in front of it) is a tactic that may lead to checkmate. A king's access to his luft might also be denied by the opponent subjecting the space or square to attack."} {"text":"The German \"luft\" is a close cognate of the English \"lift\", a word also used in chess, as in \"rook lift\"."} {"text":"In the diagram, \"X\"s mark \"luft\" to which the king can escape back-rank checkmate delivered by the queen. Theoretical enemy knights in the indicated positions deny the king access to his \"luft\". Black dots indicate areas where threats emanating from enemy pieces capable of capturing diagonally could also deny access. The pawn structure seen in Black's position is weakening, but it is a risk commonly accepted in order to fianchetto."} {"text":"Being up a queen, Black will win unless he overlooks the threat of Ng6 (which sets up checkmate via Rh8#). Black wouldn't be able to capture the knight or create luft because his f-pawn is pinned by White's bishop, and his g-pawn cannot advance if a piece is on g6 blockading it. White's king is temporarily safe from check in his luft. 
(Black can neutralize the threat of Ng6 by playing Qb8: Ng6 can then be met by the discovered check Nf5+, winning the checkmate-threatening h4 rook after White responds to the check.)"} {"text":"An outpost is a square on the fourth, fifth, sixth, or seventh rank which is protected by a pawn and which cannot be attacked by an opponent's pawn. Such a square is a hole for the opponent. In the figure to the right, c4 is an outpost, occupied by White's knight. It cannot be attacked by Black's pawns \u2013 there is no pawn on the d-file, and Black's pawn on the b-file is too far advanced."} {"text":"Outposts are a favourable position from which one can launch an attack, particularly using a [[Knight (chess)|knight]]. It is usually a good idea, other things being equal, to post knights on squares of the opposite color to that of the opponent's single bishop."} {"text":"Knights are most efficient when they are close to the enemy's stronghold. This is because of their short reach, something not true of [[Bishop (chess)|bishops]], [[rook (chess)|rooks]] and [[queen (chess)|queens]]. They are also more effective in the centre of the board than on the edges. Therefore, the ideal to be aimed at is a knight on an outpost in one of the central (c-, d-, e- or f-) files in an advanced position (e.g. the sixth rank). Knowledge of outposts and their effectiveness is crucial in exploiting situations involving an [[Isolated pawn|isolated queen's pawn]]."} {"text":"On the other hand, [[Aron Nimzowitsch|Nimzowitsch]] argued that when the outpost is in one of the flank (a-, b-, g- and h-) files, the ideal piece to make use of the outpost is a rook. This is because the rook can put pressure on all the squares along the rank."} {"text":"In chess, connected pawns are two or more pawns of the same color on adjacent files, as distinct from isolated pawns. 
These pawns are instrumental in creating pawn structure because, when diagonally adjacent, like the two rightmost white pawns, they form a pawn chain, in which the pawn behind protects the one in front. When attacking these chains, the weak spot is the backmost pawn, because it is not protected."} {"text":"Connected pawns that are both passed, i.e., without any enemy pawns in front of them on the same file or adjacent files, are referred to as connected passed pawns. Such pawns can be very strong in the endgame, especially if supported by other pieces. Often the opponent must sacrifice material to prevent one of the pawns from promoting."} {"text":"Connected passed pawns are usually superior to other passed pawns. An exception is in an opposite-colored bishops endgame with a bishop and two pawns versus a bishop on the opposite color. If the pawns are connected and not beyond their fifth rank, the position is a theoretical draw, whereas widely separated pawns would win."} {"text":"Two connected pawns on the same rank without any friendly pawns on adjacent files are called hanging pawns."} {"text":"There is a saying that two connected passed pawns on the sixth rank are stronger than a rook. This is true if the other side has nothing but a rook to defend against the pawns (and the defender cannot immediately capture one of the pawns). In this diagram, White wins:"} {"text":"A pawn storm is a chess strategy in which several pawns are moved in rapid succession toward the opponent's defenses."} {"text":"A pawn storm usually involves adjacent pawns on one side of the board, the queenside (a-, b-, and c-files) or the kingside (f-, g-, and h-files)."} {"text":"A pawn storm will often be directed toward the opponent's king after it has castled toward one side (e.g. Fischer\u2013Larsen, 1958). 
Successive advances of the pawns on that side might rapidly cramp and overwhelm the opponent's position."} {"text":"A pawn storm might also be directed at queening a passed pawn; the diagram is taken from a game in which Tigran Petrosian was playing the black pieces against Bobby Fischer. Over the next fourteen moves, Petrosian storms his twin pawns down the a- and b-files, forcing Fischer's resignation."} {"text":"In chess, the pawn structure (sometimes known as the pawn skeleton) is the configuration of pawns on the chessboard. Since pawns are the least mobile of the chess pieces, the pawn structure is relatively static and thus largely determines the strategic nature of the position."} {"text":"Weaknesses in the pawn structure, such as isolated, doubled, or backward pawns and holes, once created, are usually permanent. Care must therefore be taken to avoid them (but there are exceptions\u2014for instance see \"Boleslavsky hole\" below). In the absence of these structural weaknesses, it is not possible to assess a pawn formation as good or bad\u2014much depends on the position of the pieces. The pawn formation does determine the overall strategies of the players to a large extent, however, even if arising from unrelated openings. Pawn formations symmetrical about a vertical line (such as the \"e5 Chain\" and the \"d5 Chain\") may appear similar, but they tend to have entirely different characteristics because of the propensity of the kings to castle on the kingside."} {"text":"Pawn structures often transpose into one another, such as the \"Isolani\" into the \"Hanging pawns\", and vice versa. Such transpositions must be considered carefully and often mark shifts in game strategy."} {"text":"In his 1995 book \"Pawn Structure Chess\", Andrew Soltis classified the major pawn formations into 17 categories. In 2015, the book \"Chess Structures\", by Mauricio Flores Rios, further studied the subject, subdividing pawn structures into the 28 most important. 
For a formation to fall into a particular category, it need not have a pawn position identical to the corresponding diagram, but only close enough that the character of the game and the major themes are unchanged. It is typically the central pawns whose position influences the nature of the game the most."} {"text":"Structures with mutually attacking pawns are said to have \"tension\". They are ordinarily unstable and tend to transpose into a stable formation with a pawn advance or exchange. Play often revolves around making the transposition happen under favorable circumstances. For instance, in the Queen's Gambit Declined, Black waits until White develops the king's bishop before making the d5xc4 capture, transposing to the Slav formation (see below)."} {"text":"Openings: Primary: Caro\u2013Kann. Other: French, Scandinavian, Trompowsky (colors reversed), Alekhine's."} {"text":"Themes for White: Outpost on e5, kingside space advantage, d4\u2013d5 break, possibility of a queenside majority in the endgame (typically after the exchange of White's d-pawn for Black's c-pawn)."} {"text":"Themes for Black: Weakness of the d4-pawn, ...c6\u2013c5 and ...e6\u2013e5 breaks. The latter break is usually preferable, but harder for Black to achieve."} {"text":"Openings: Primary: Slav. Other: Catalan, Queen's Gambit Accepted, Queen's Gambit Declined, Nimzo-Indian, Colle System (with colors reversed), London System (with colors reversed), Trompowsky (colors reversed)."} {"text":"Themes for White: Pressure on the c-file, weakness of Black's c-pawn (either after Black's ...b7\u2013b5 or after d4\u2013d5xc6 in response to ...e6\u2013e5), the d4\u2013d5 break."} {"text":"Themes for Black: ...e6\u2013e5 and ...c6\u2013c5 breaks."} {"text":"Openings: Primary: Sicilian (Najdorf, Richter\u2013Rauzer and Sozin variations), Sicilian Scheveningen, and several other Sicilian variations. 
Other: King's English (colors reversed)."} {"text":"Themes for White: Pressure on the d-file, space advantage, e4\u2013e5 break (often prepared with f2\u2013f4), f2\u2013f4\u2013f5 push, g2\u2013g4\u2013g5 blitz (see Keres Attack)."} {"text":"Themes for Black: Pressure on the c-file, counterplay on the queenside, pressure on White's pawn on e4 or e5, ...d6\u2013d5 break, ...e6\u2013e5 transposing into the Boleslavsky hole (see below)."} {"text":"It is often unwise for White to exchange a piece on c6 allowing the recapture bxc6, because the phalanx of Black's center pawns becomes very strong."} {"text":"Openings: Primary: Sicilian Dragon. Other: (with colors reversed)."} {"text":"Character: Either a razor-sharp middlegame with opposite-side castling or a moderately sharp game with same-side castling. The Sicilian Dragon requires a high level of opening memorization to play properly. This is especially true of the Yugoslav Attack, in which White plays the moves Be3, f3, Qd2 and 0-0-0. Other variations include the following: the Classical Dragon, where White plays Be2 and 0-0; the Tal Attack, defined by Bc4 and 0-0; and the Fianchetto Defense, where White plays g3, Bg2 and 0-0. These less common variations lead to less tactical positions, with a potentially technical endgame."} {"text":"Themes for White: Outpost on d5, kingside attack (either f2\u2013f4\u2013f5 with kingside castling or h2\u2013h4\u2013h5 with queenside castling), weakness of Black's queenside pawns in the endgame."} {"text":"Themes for Black: Pressure on the long diagonal, queenside counterplay, exploiting White's often overextended kingside pawns in the endgame."} {"text":"Opening Lines: The most common variation of the Sicilian Dragon is the Yugoslav Attack. 1. e4 c5 2. Nf3 d6 3. d4 cxd4 4. Nxd4 Nf6 5. Nc3 g6 6. Be3 (the defining move of the Yugoslav Attack) 6... Bg7 7. Qd2 0-0 8. 
f3 (necessary to prevent Black from playing 8...Ng4 to attack White's dark-squared bishop; 8.f3 also gives e4 extra defense and prepares to launch a pawn storm with the move g4) 8... Nc6 9. 0-0-0 (9.Bc4 is also a very common move in this position) 9... d5 (the main line; other ideas include 9...Nxd4 and 9...Bd7)."} {"text":"Openings: Primary: Sicilian Najdorf, Classical, Sveshnikov, Kalashnikov. Other: Sicilian Prins, Moscow, O'Kelly (2... a6), (with colors reversed)."} {"text":"Themes for White: taking control of the d5-square, exploiting the backward d6-pawn, f2\u2013f4 break."} {"text":"Themes for Black: ...d6\u2013d5 break, queenside minority attack, the c4-square."} {"text":"It is a paradoxical idea that Black can strive for equality by voluntarily creating a hole on d5. The entire game revolves around control of the d5-square. Black must play very carefully or White will place a knight on d5 and obtain a commanding positional advantage. Black almost always equalizes, and might even obtain a slight edge, if the d6\u2013d5 break can be made. Black has two options for the light-squared bishop: on e6 or on b7 (after ...a7\u2013a6 and ...b7\u2013b5). Unusually for an open formation, bishops become inferior to knights because of the overarching importance of d5: White will often exchange Bg5xf6, and Black usually prefers to give up their queen's bishop rather than a knight in exchange for a white knight if it gets to d5."} {"text":"When White castles queenside, Black often delays castling because their king is quite safe in the center."} {"text":"Openings: Primary: Sicilian, King's Indian Defence. 
Other: Symmetrical English, King's English (with colors reversed), Queen's Indian Defence, Nimzo-Indian Defence."} {"text":"Themes for White: Nd4\u2013c2\u2013e3, fianchettoing one or both bishops, the Mar\u00f3czy hop (Nc3\u2013d5 followed by e4xd5 with terrific pressure on the e-file), kingside attack, c4\u2013c5 and e4\u2013e5 breaks."} {"text":"Themes for Black: ...b7\u2013b5 break, ...f7\u2013f5 break (especially with a fianchettoed king's bishop), ...d6\u2013d5 break (prepared with ...e7\u2013e6)."} {"text":"The Mar\u00f3czy bind, named after G\u00e9za Mar\u00f3czy, has a fearsome reputation. Chess masters once believed that allowing the bind was a mistake, as it always gave White a significant advantage. Indeed, if Black does not quickly make a pawn break, their position will become badly cramped, with minor pieces lacking any squares to move to and possibly becoming cornered or pressed into a weak defense. Conversely, the formation takes time to set up and limits the activity of White's light-squared bishop, which can buy Black some breathing room to accomplish such a break."} {"text":"Openings: Primary: Symmetrical English, Sicilian. Other: King's English (with colors reversed), King's Indian Defence (S\u00e4misch), Queen's Indian Defense, Nimzo-Indian Defence."} {"text":"The Hedgehog is a formation similar to the Mar\u00f3czy bind, and shares its strategic ideas. Typically, the Mar\u00f3czy bind would transpose into the Hedgehog formation."} {"text":"Openings: Primary: King's Indian, Old Indian (colors reversed), Ruy Lopez, Italian Game. Other: Ruy Lopez (colors reversed), Italian Game (colors reversed), Sicilian Kramnik. 
The notation in the rest of this section refers to the colors-reversed version."} {"text":"Themes for White: d6 weakness, c4\u2013c5 push, a3\u2013f8 diagonal, queenside pawn storm."} {"text":"Themes for Black: d4 weakness, a1\u2013h8 diagonal, f4-square, kingside attack, trading pieces for a superior endgame."} {"text":"The Rauzer formation is named after Vsevolod Rauzer, who introduced it in the Ruy Lopez. It can also, rarely, occur in the Ruy Lopez with colors reversed."} {"text":"It is considered to give Black excellent chances because d6 is much less of a hole than White's d4. If the black king's bishop is fianchettoed, it is common to see it \"undeveloped\" back to f8 to control the vital c5- and d6-squares, or to remove White's dark-squared bishop, the guardian of the hole."} {"text":"The Rauzer formation is often misjudged by beginners. In the position on the left, White appears to have a development lead while Black's position appears to be riddled with holes. In reality, it is Black who stands clearly better, because White has no real way to improve their position while Black can improve by exploiting the d4-square."} {"text":"Openings: Primary: King's Indian. Other: English, Pirc, Ruy Lopez, Philidor, Italian Game."} {"text":"Occurs after an exchange of pawns on d4. The name was given by Hans Kmoch."} {"text":"Themes for White: exploitation of the d6 weakness, e4\u2013e5 and c4\u2013c5 breaks, minority attack with b2\u2013b4\u2013b5."} {"text":"Themes for Black: attacking the e4- and c4-pawns, ...d6\u2013d5 and ...f7\u2013f5 breaks, queenside play with ...a7\u2013a5\u2013a4."} {"text":"The wall is yet another structure that leaves Black with a d-pawn weakness, but it prevents White from taking control of the center and gives Black active piece play and an opportunity to play on either side of the board."} {"text":"Openings: Primary: King's Indian, Pirc, Philidor. 
Other: Benoni, Ruy Lopez (Spanish), Trompowsky, English, Italian Game, Four Knights Game (Scotch variation)."} {"text":"Character: Closed game with opposite-side activity."} {"text":"Themes for White: Massive queenside space advantage, c2\u2013c4\u2013c5 break (optionally prepared with b2\u2013b4), prophylaxis with g2\u2013g4 (after f2\u2013f3), f2\u2013f4 break."} {"text":"Themes for Black: kingside attack, ...f7\u2013f5 break, ...g7\u2013g5\u2013g4 break (after f2\u2013f3), ...c7\u2013c6 break, prophylaxis with ...c6\u2013c5 or ...c7\u2013c5 transposing to a full Benoni formation."} {"text":"The chain arises from a variety of openings but most commonly in the heavily analyzed King's Indian Classical variation. The theme is a race for a breakthrough on opposite flanks \u2013 Black must try to whip up a kingside attack before White's rooks penetrate with devastating effect on the c-file. The position was thought to strongly favor White until a seminal game (Taimanov\u2013Najdorf 1953) where Black introduced the maneuver ...Rf8\u2013f7, ...Bg7\u2013f8, ...Rf7\u2013g7. When the chain arises in the Ruy Lopez, play is much slower, with tempo being of little value, and features piece maneuvering by both sides, Black focusing on the ...c7\u2013c6 break and White often trying to play on the kingside with the f2\u2013f4 break."} {"text":"Openings: Primary: French. Other: Nimzowitsch, Trompowsky, Caro\u2013Kann, Bogo-Indian, London System, Colle System, Sicilian (Rossolimo, Alapin, Closed, O'Kelly), Nimzo\u2013Larsen Attack (colors reversed)."} {"text":"Themes for White: kingside mating attack, f2\u2013f4\u2013f5 break."} {"text":"Themes for Black: Exchanging the hemmed-in queen's bishop, ...c7\u2013c5 and ...f7\u2013f6 breaks."} {"text":"Due to White's kingside space advantage and development advantage, Black must generate counterplay or be mated. Novices often lose to the sparkling Greek gift sacrifice. 
Attacking the head of the pawn chain with ...f7\u2013f6 is seen as frequently as attacking its base, because it is harder for White to defend the head of the chain than in the d5 chain. In response to exf6, Black accepts a backward e6-pawn in exchange for freeing their position (the b8\u2013h2 diagonal and the semi-open f-file) and the possibility of a further e6\u2013e5 break. If White exchanges with d4xc5 it is called the Wedge formation. White gets an outpost on d4 and the possibility of exploiting the dark squares, while Black gets an overextended e5 pawn to work on."} {"text":"Openings: Primary: Modern Benoni, Queen's Indian Defence, King's Indian Defence, Modern Defence, Ruy Lopez, Italian Game. Other: Trompowsky, Ruy Lopez (colors reversed), Italian Game (colors reversed), R\u00e9ti Opening (colors reversed), King's Indian Attack (colors reversed), Sicilian Defence (Moscow, Rossolimo)."} {"text":"Themes for White: Central pawn majority, e4\u2013e5 break."} {"text":"Openings: Primary: Giuoco Piano. Other: French (Steiner, Exchange), Ruy Lopez (Berlin), Petrov, King's English, French (colors reversed), Sicilian Alapin (colors reversed)."} {"text":"Themes for Black: Blockading the isolani, trading pieces for a favorable endgame."} {"text":"Openings: Primary: Queen's Gambit. Other: French, Sicilian Alapin, Symmetrical English, Caro\u2013Kann, Nimzo-Indian, Slav."} {"text":"Themes for White: d4\u2013d5 break, sacrifice of the isolani, outpost on e5, kingside attack."} {"text":"The isolani leads to lively play revolving around the d5-square. If Black can clamp down on the pawn, their positional strengths and the threat of exchanges give them the advantage. If not, the threat of the d4\u2013d5 break is ever-present, and the isolani can sometimes be sacrificed to unleash the potential of White's pieces, enabling White to whip up a whirlwind attack. Garry Kasparov is famous for the speculative d4\u2013d5 sacrifice."} {"text":"Openings: Primary: Queen's Gambit Declined. 
Other: Queen's Indian Defense, Symmetrical English, Sicilian (Alapin)."} {"text":"Themes for White: Line-opening advance in the center, kingside attack."} {"text":"Themes for Black: Forcing a pawn advance and blockading the pair, conversion to an isolani."} {"text":"Like the isolani, the hanging pawns are a structural weakness, but with them usually comes increased piece activity to compensate. The play revolves around Black trying to force one of the pawns to advance. If Black can establish a permanent blockade, the game is positionally won. On the other hand, White aims to keep the pawns hanging, trying to generate a kingside attack leveraging their superior center control. Other themes for White include tactical possibilities and line-opening breaks in the center."} {"text":"Openings: Primary: Queen's Gambit Declined. Other: Caro\u2013Kann (colors reversed), Colle System (colors reversed), London System (colors reversed)."} {"text":"Themes for White: Minority attack, e3\u2013e4 break."} {"text":"Themes for Black: e4 outpost, kingside attack."} {"text":"Openings: Primary: Queen's Gambit Declined, Caro\u2013Kann. Other: Alekhine Defense, QGD Tarrasch Defense (colors reversed), Symmetrical English, Symmetrical English (colors reversed)."} {"text":"Themes for White: Exploiting the dark squares, d6 outpost; queenside majority in the endgame, with an advanced pawn."} {"text":"Themes for Black: e4 outpost, kingside attack, White's overextended pawn, ...e6\u2013e5 and ...b7\u2013b5 breaks."} {"text":"Openings: Primary: Dutch Defense. Other: Colle System, Bird's Opening (with colors reversed)."} {"text":"Themes: Exchanging the bad bishop, e4\/e5 outposts, breaks on the c- and g-files."} {"text":"Players must carefully consider how to recapture on the e4\/e5-square, since it alters the symmetric pawn formation and creates strategic subtleties."} {"text":"Openings: Primary: English, Dutch, King's Indian Attack. 
Other: Sicilian (Closed, Moscow), Vienna Game, Bishop's Opening."} {"text":"Themes: Exchanging the bad bishop, d4\/d5 outposts, breaks on the b- and f-files."} {"text":"This structure appears in one of Botvinnik's treatments of the English. Players must carefully consider how to recapture on the d4\/d5-square, since it alters the symmetric pawn formation and creates strategic subtleties. Adding the typical White fianchetto of the king's bishop to this structure provides significant pressure along the long diagonal, and usually prepares the f2\u2013f4\u2013f5 break."} {"text":"Openings: Primary: Closed Sicilian, Closed English (colors reversed)."} {"text":"Themes for White: kingside pawn storm, c2\u2013c3 and d3\u2013d4 breaks."} {"text":"Themes for Black: queenside pawn storm, a1\u2013h8 diagonal."} {"text":"In chess, doubled pawns are two pawns of the same color residing on the same file. Pawns can become doubled only when one pawn captures onto a file on which another friendly pawn resides. In the diagram, the white pawns on the b-file and e-file are doubled. The pawns on the e-file are doubled and isolated."} {"text":"In most cases, doubled pawns are considered a weakness due to their inability to defend each other. This inability, in turn, makes it more difficult to achieve a breakthrough which could create a passed pawn (often a deciding factor in endgames). In the case of isolated doubled pawns, these problems are only further aggravated. Several chess strategies and openings are based on burdening the opponent with doubled pawns, a strategic weakness."} {"text":"There are, however, cases where accepting doubled pawns can be advantageous because doing so may open up a file for a rook, or because the doubled pawns perform a useful function, such as defending important squares. Also, if the opponent is unable to effectively attack the pawns, their inherent weakness may be of little or no consequence. 
There are also a number of openings that accept doubled pawns in exchange for some prevailing advantage, such as the Two Knights Variation of Alekhine's Defence."} {"text":"It is possible to have tripled pawns (or more). The diagram shows a position from Lubomir Kavalek\u2013Bobby Fischer, Sousse Interzonal 1967. The pawns remained tripled at the end of the game on move 28 (a draw)."} {"text":"Quadrupled pawns occurred in the game Alexander Alekhine\u2013Vladimir Nenarokov, 1907, in John van der Wiel\u2013Vlastimil Hort, 1981, and in other games. The longest-lasting case of quadrupled pawns was in the game Kovacs\u2013Barth, Balatonbereny 1994, lasting 23 moves. The final position was drawn, demonstrating the weakness of the extra pawns (see diagram)."} {"text":"There are different types of doubled pawns (see diagram). A doubled pawn is weak because of four considerations:"} {"text":"The doubled pawns on the b-file are in the best situation; the f-file pawns are next. The h-file pawns are in the worst situation because two pawns are held back by one opposing pawn, so the second pawn has little value. See Chess piece relative value for more discussion."} {"text":"In chess, a half-open file (or semi-open file) is a file with pawns of only one color. The half-open file can provide a line of attack for a player's rook or queen. A half-open file is generally exploited by the player with no pawns on it."} {"text":"Many openings, such as the Sicilian Defense, aim to complicate the position. In the main line Sicilian, 1.e4 c5 2.Nf3 d6 (or 2...e6, or 2...Nc6) 3.d4 cxd4 4.Nxd4, White obtains a half-open d-file, but Black can pressure White along the half-open c-file."} {"text":"In positions where White has no pawns on a file but Black has one pawn or more on that file, the file is considered half-open for White. 
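The doubled-pawn and file-status definitions above reduce to counting pawns of each color per file, which makes them easy to check mechanically. Below is a minimal Python sketch; representing pawns as lists of (file, rank) pairs is an assumption made for illustration, not a standard chess API.

```python
def classify_files(white_pawns, black_pawns):
    """Classify every file as open, closed, or half-open, and list files
    with doubled pawns. Pawns are given as (file_letter, rank) pairs --
    a representation invented for this sketch."""
    status = {}
    doubled = {"white": [], "black": []}
    for f in "abcdefgh":
        w = sum(1 for file, _ in white_pawns if file == f)
        b = sum(1 for file, _ in black_pawns if file == f)
        if w == 0 and b == 0:
            status[f] = "open"
        elif w == 0:
            status[f] = "half-open for White"  # only Black pawns occupy it
        elif b == 0:
            status[f] = "half-open for Black"  # only White pawns occupy it
        else:
            status[f] = "closed"
        if w >= 2:
            doubled["white"].append(f)  # two or more pawns share the file
        if b >= 2:
            doubled["black"].append(f)
    return status, doubled

# Main-line Sicilian pawns after 1.e4 c5 2.Nf3 d6 3.d4 cxd4 4.Nxd4:
white = [("a", 2), ("b", 2), ("c", 2), ("e", 4), ("f", 2), ("g", 2), ("h", 2)]
black = [("a", 7), ("b", 7), ("d", 6), ("e", 7), ("f", 7), ("g", 7), ("h", 7)]
status, doubled = classify_files(white, black)
# status["d"] -> "half-open for White"; status["c"] -> "half-open for Black"
```

On the example position this reproduces the observation in the text: White gets the half-open d-file and Black the half-open c-file, with no doubled pawns for either side.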
Conversely, where Black has no pawns on a file but White has one or more pawns on it, the file is considered half-open for Black."} {"text":"When a pawn capture or advance opens or half-opens one or more files, the move is called a pawn break."} {"text":"The demolition of the pawn structure is a common theme in positions with half-open files, since doubled pawns or isolated pawns may create half-open files."} {"text":"The game Loek van Wely\u2013Judit Polg\u00e1r, Hoogeveen, 1997 demonstrates the power of half-open files in attacks. Despite having one fewer pawn than White, Black's possession of two powerful half-open files (her rook on the f-file and queen on the g-file) gives her a winning advantage (see diagram)."} {"text":"and White resigned, anticipating 31.Rxf2 Qxg3+ 32.Kf1 Qxf2."} {"text":"The Brazilian Defense, also known as the Camara Defense or Gunderam Defense, is a chess defense that starts with the moves:"} {"text":"Followed by the moves ...g6, ...Bg7 and ...Nf6, creating the typical King's Indian formation."} {"text":"It was created by International Master H\u00e9lder C\u00e2mara, who played it for the first time in 1954, in the IV Centennial of the City of S\u00e3o Paulo Tournament and the XXII Brazilian Chess Championship. It became popular among Brazilian players, being employed in the top national competition every year, so much so that they began calling it \"Brazilian Defense\". 
H\u00e9lder C\u00e2mara also used it in other important chess events, such as the South American Zonal in 1972 (where he attained his International Master title), the Netanya-A International Chess Tournament (Israel) in 1973, and the XLII and XLIII Brazilian Chess Championships, in 1975 and 1976, respectively."} {"text":"In 1969, a work dedicated to its analysis called \"Notas Sobre a Defesa Brasileira\" (\"Annotations on the Brazilian Defense\") was published."} {"text":"According to its creator, this defense was envisioned as an attempt to use the King's Indian Defense against the King's Pawn opening."} {"text":"The first official use of the Brazilian Defense was in a game between Manoel Madeira de Ley (white) and H\u00e9lder C\u00e2mara (black) during the fourth round of the IV Centennial of the City of S\u00e3o Paulo Tournament, on October 19, 1954."} {"text":"In the game of chess, prophylaxis (Greek \u03c0\u03c1\u03bf\u03c6\u03cd\u03bb\u03b1\u03be\u03b9\u03c2, \"prophylaxis,\" \"guarding or preventing beforehand\") or a \"prophylactic move\" is a move that stops the opponent from taking action in a certain area for fear of some type of reprisal. Prophylactic moves are aimed at not just improving one's position, but preventing the opponent from improving their own. Perhaps the most common prophylactic idea is the advance of a pawn in front of a castled king to make luft, averting the possibility of a back-rank checkmate, or to prevent pins."} {"text":"In a more strategic sense, prophylaxis leads to a very restrained, closed style of play, one often frustrating for players with a strong tactical orientation. Players who play in the prophylactic style prevent the initiation of tactical play by threatening unpleasant consequences. One of the largest advantages of this approach is that it keeps risk to a minimum while causing an overaggressive opponent to lose patience and make a mistake. 
The disadvantage is that it frequently fails against an opponent who is content with a draw."} {"text":"Any move that prevents an opponent from threatening something can be called prophylactic, even if this word would not be used to describe the player's style. For example, Mikhail Tal and Garry Kasparov frequently played the move h3 in the Ruy Lopez\u2014a prophylactic move intended to prevent Black from playing ...Bg4 and creating an irritating pin on the knight at f3\u2014yet neither player would ever be described as playing in the prophylactic style. All grandmasters make use of prophylaxis in one way or another."} {"text":"Advanced prophylactic play cannot usually be employed by novice players. However, many standard and widespread opening moves can be considered prophylactic."} {"text":"The board above shows a common tactical position. White is threatening Nf7, forking the queen and rook and guaranteeing a material advantage, since the king cannot capture the knight, which is defended by the bishop. In response, Black plays ...h6, denying the knight the g5 square and thus anticipating the attack."} {"text":"Pawn moves such as h6 or a6 don't require such an immediate threat in order to be played. They can often be played as precautionary moves, stopping tactics by denying bishops or knights the g5\/b5 squares entirely. Bishops are attracted to these squares because they pin the knights against the queen and king respectively, forcing the pinned knight to stay stationary."} {"text":"The aim of the serve-and-volley strategy is to put immediate pressure on the opponent with the intent of ending points quickly. Good returns must be made, or else the server can gain the advantage. This tactic is especially useful on fast courts (e.g. grass courts) and less so on slow courts (e.g. clay courts). 
For it to be successful, the player must either have a good serve to expose an opponent's poor return or be exceptionally quick and confident in movement around the net to produce an effective returning volley. Ken Rosewall, for instance, had a feeble serve but was a very successful serve-and-volley player for two decades. Goran Ivani\u0161evi\u0107, on the other hand, had success employing the serve-and-volley strategy with great serves and average volleys."} {"text":"In the mid-1950s, when Pancho Gonzales was dominating professional tennis with his serve-and-volley game, occasional brief attempts were made to partially negate the power of his serve. This, it was felt, would lead to longer rallies and more spectator interest. At least three times the rules were modified:"} {"text":"Other male tennis players known for their serve-and-volley technique include Pancho Segura, Frank Sedgman, Ken Rosewall, Lew Hoad, Rod Laver, Roy Emerson, John McEnroe, Stefan Edberg, Pat Cash, Boris Becker, Patrick Rafter, Pete Sampras and Tim Henman. Sampras, despite being known for his great serve and volley game, did not always come to the net behind the serve on slower courts, particularly on the second serve. This was especially the case when he was younger."} {"text":"Although the strategy has become less common in both the men's and women's game, a few players still prefer to approach the net on their serves in the twenty-first century. Examples of players who employ serve-and-volley as the chief style of play include: Feliciano L\u00f3pez, Nicolas Mahut, Rajeev Ram, Ivo Karlovi\u0107, Dustin Brown, Pierre-Hugues Herbert, Sergiy Stakhovsky, \u0141ukasz Kubot, Leander Paes, and Mischa Zverev."} {"text":"Other players, despite not being pure serve-and-volleyers, do employ serve-and-volley as a surprise tactic. Examples include Roger Federer, Rafael Nadal, and Daniil Medvedev."} {"text":"On the women's side, serve-and-volley has become almost extinct at the very top level. 
Hsieh Su-wei is the only active notable (WTA elite) player who prefers to play with this style."} {"text":"Some of the most interesting matches of all time, according to Pat Cash, have pitted great baseliners such as Bj\u00f6rn Borg, Mats Wilander or Andre Agassi against great serve-and-volleyers such as John McEnroe, Pat Rafter or Pete Sampras. Since Tilden's time, head-to-head results on various surfaces, such as those played out in the famous rivalry between Borg and McEnroe, contradict his theory that great baseline players will tend to defeat great serve-and-volley ones."} {"text":"Despite the improvements in racquet technology made towards the end of the twentieth century, which made serve-and-volley a rarer tool in a tennis player's skill set, players familiar with the strategy still advocate it. Roger Federer has urged up-and-coming players not to ignore the tactic of coming to the net, especially on faster surfaces and as a surprise. Yet other players, such as Mischa Zverev, acknowledged the difficulty of mastering serve-and-volley, recalling his 36-month effort to adopt the style. He said: \"Every point, you have to be ready. You're either going to get passed, you're going to miss an easy volley or you're going to win the point,\" and likened it to the stochastic nature of flipping a coin."} {"text":"Players use different tennis strategies to enhance their own strengths and exploit their opponent's weaknesses in order to gain the advantage and win more points."} {"text":"Players typically specialize or naturally play in a certain way, based on what they can do best. Based on their style, players generally fit into one of three types: \"baseliners\", \"volleyers\", and \"all-court players\". 
Many players have attributes of all three categories but, at times, may also focus on just one style based on the surface, the conditions, or the opponent."} {"text":"A \"baseliner\" plays from the back of the tennis court, around\/behind\/within the baseline, preferring to hit groundstrokes, allowing themselves more time to react to their opponent's shots, rather than to come up to the net (except in certain situations)."} {"text":"A \"volleyer\" plays nearer towards the net, preferring to hit volleys, allowing less time for their opponent to react to their shots, rather than to stay\/play from further back on the tennis court (except in certain situations)."} {"text":"\"All-court players\" fall somewhere in between, employing both \"baseliner\" strategies and \"volleyer\" strategies depending on the situation."} {"text":"A player's strengths and weaknesses may also determine strategy. For example, most players have a stronger forehand; therefore they will favor the forehand even to the point of \"running around\" a backhand to hit a forehand."} {"text":"While they tend to make relatively few errors because they do not attempt the complicated and ambitious shots of the aggressive baseliner, the effective counterpuncher must be able to periodically execute an aggressive shot, either using the pace given by their opponent or using precision and angle. Speed and agility are key for the counterpuncher, as well as a willingness to patiently chase down every ball to frustrate opponents. By returning every aggressive shot the opponent produces, the counterpuncher often induces further errors, as the opponent strains to hit ever harder and more precise shots. 
However, for some faster players, including Ga\u00ebl Monfils, Gilles Simon, Lleyton Hewitt and Andy Murray, standing too deep behind the baseline can hinder their attacking abilities."} {"text":"At lower levels, the defensive counter-puncher often frustrates their opponent so much that they may try to change their style of play due to ineffective baseline results. At higher levels, the all-court player or aggressive baseliner is usually able to execute winners with higher velocity and better placement, taking the counterpuncher out of the point as early as possible."} {"text":"Most counter-punchers excel on slow courts, such as clay. The court gives them extra time to chase down shots, and it is harder for opponents to create winners. However, some counter-punchers who have the ability to mix up their game and turn defense into offense, like Lleyton Hewitt, Andy Murray and Agnieszka Radwanska, have excelled on faster courts like hard and grass as well as slower courts. Counter-punchers are often particularly strong at low-level play, where opponents cannot make winners with regularity."} {"text":"A serve and volleyer has a great net game, is quick around the net, and has fine touch for volleys. \"Serve and volleyers\" come up to the net at every opportunity when serving. They are almost always attackers and can hit many \"winners\" with varieties of volleys and drop volleys. When not serving, they often employ the \"chip-and-charge\", chipping back the serve without attempting to hit a winner and rushing the net. The serve-and-volleyer's strategy is to pressure the opponent into attempting difficult passing shots. This strategy is extremely effective against pushers."} {"text":"Bill Tilden, the dominant player of the 1920s, preferred to play from the back of the court, and liked nothing better than to face an opponent who rushed the net \u2013 one way or another Tilden would find a way to hit the ball past him. 
In his book Match Play and the Spin of the Ball, Tilden propounds the theory that \"by definition\" a great baseline player will always beat a great serve-and-volleyer. Some of the best matches of all time have pitted great baseliners such as Bj\u00f6rn Borg, Mats Wilander or Andre Agassi against great serve-and-volleyers such as John McEnroe, Boris Becker, Stefan Edberg, or Pete Sampras."} {"text":"Some players, such as Tommy Haas, Roger Federer and Andy Roddick, will only employ this strategy on grass courts or as a surprise tactic on any surface. Roger Federer uses this commonly against Rafael Nadal, to break up long rallies and physically taxing games."} {"text":"All-court players, or all-rounders, have aspects of every tennis style, whether that be offensive baseliner, defensive counter-puncher or serve-and-volleyer. All-court players combine the best elements of each style, creating a truly formidable style to play against. In game situations they are very versatile; when an all-court player's baseline game is not working, he\/she may switch to a net game, and vice versa. All-court players can adjust to opponents who play different styles more easily than pure baseliners or serve and volleyers can. All-court players typically have the speed, determination and fitness of a defensive counter-puncher; the confidence, skill and flair of an offensive baseliner; and the touch, agility around the net and tactical thinking of the serve-and-volleyer."} {"text":"However, just because the all-court player has a combination of skills used by all tennis styles does not necessarily mean that they can beat an offensive baseliner or a defensive counter-puncher or even a serve-and-volleyer. 
It just means it would be more difficult to read the game of an all-court player."} {"text":"Prime examples of all-rounders are Boris Becker and Pete Sampras in men's singles and Daniela Hantuchova and Martina Hingis in women's singles."} {"text":"Holding serve is crucial in tennis. To hold serve, serves must be accurately placed, and a high priority should be placed on first-serve percentage. In addition, the velocity of the serve is important. A weak serve can be easily attacked by an aggressive returner. The first ball after the serve is also key. Players should serve in order to get a weak return and keep the opponent on the defense with that first shot. For example, following a wide serve, it is ideal to hit the opponent's return to the open court."} {"text":"There are three different types of serves, and each one can be used in different situations. One type is the slice serve. The slice serve works best when the player tosses the ball to the right and immediately hits the outer-right part of the ball. It is most effective when hit wide, pulling the opponent off the court."} {"text":"Another type is the kick serve. To achieve a good execution, the player must toss the ball above the head and immediately brush the bottom-left part of the ball. Since the ball is tossed above the head, it is necessary for the player to arch correctly under the ball. This serve is best used as a second serve because the amount of spin added to the ball makes it very safe. The kick serve is also effective when a change of rhythm is needed or when the opponent struggles with the high bounce that results from the spin."} {"text":"A third type of serve is the flat one. To execute this serve, the player must toss the ball right in front and immediately hit the middle-top part of the ball. This is usually a very hard serve and therefore risky. 
However, if the flat serve is executed with enough power and precision, it can turn into a great weapon to win points faster."} {"text":"Though strategy is important in singles, it is even more important in doubles. The additional width of the alleys on the doubles court has a great effect on the angles possible in doubles play. Consequently, doubles is known as a game of angles."} {"text":"The ideal is the both-up strategy, often called \"Attacking Doubles\" because the net is the \"high ground\", and the both-up strategy puts both players close to it, in a position to score because of their excellent vantage points and angles. A team in the both-up formation, however, is vulnerable to a good lob from either opponent at any time. To be successful with Attacking Doubles, teams must have effective serves and penetrating volleys to prevent good lobs, and good overhead shots to put away poor returns."} {"text":"Teams that play attacking doubles try to get into the both-up formation on every point. When serving, their server follows most first serves to the net and some second serves. As a result, attacking doubles is also called \"serve-and-volley doubles\". When receiving, their receiver follows most second-service returns to the net."} {"text":"At the professional level, attacking doubles is the standard, though declining, strategy of choice."} {"text":"At lower levels of the game, not all players have penetrating volleys and strong overhead shots. So, many use up-and-back strategy. The weakness in this formation is the large angular gap it creates between partners, a gap through which an opposing net player can easily hit a clean winner by poaching a passing shot."} {"text":"Nonetheless, up-and-back strategy is versatile, with elements of both offense and defense. 
In fact, since the server must begin each point at the baseline and the receiver must be far enough back to return the serve, virtually every point in doubles begins with both teams in this formation."} {"text":"Teams without net games strong enough to play Attacking Doubles can still play both-up when they have their opponents on the defensive. To achieve this, a team would patiently play up-and-back for a chance to hit a forcing shot and bring their baseliner to the net."} {"text":"Australian Doubles and the I-Formation are variations of up-and-back strategy. In Australian doubles, the server's partner at net lines up on the same side of the court, fronting the opposing net player, who serves as a poaching block and blind. The receiver then must return serve down the line and is liable to have that return poached. In the I-Formation, the server's net partner lines up in the center, between the server and receiver so he or she can poach in either direction. Both Australian Doubles and the I-Formation are poaching formations that can also be used to start the point for serve-and-volley doubles."} {"text":"Both-back strategy is strictly defensive. It is normally seen only when the opposing team is both-up or when the returner is passing the net player on the return. This might be a good tactic when the opponent has a serve with a lot of pressure and an aggressive player at the net. From here the defenders can return the most forcing shots till they get a chance to hit a good lob or an offensive shot. If their opponents at net become impatient and try to angle the ball away when a baseliner can reach it, the defender can turn the tables and score outright. However this strategy leaves the volley court open to drop shots from the opposition."} {"text":"In baseball, a first-pitch strike is when the pitcher throws a strike to the batter during the first pitch of the at bat. 
Statistics indicate that throwing a strike on the first pitch allows the pitcher to gain an advantage in the at bat, limiting the hitter's chance of getting on base."} {"text":"With the continued interest and development of statistics in the game of baseball, first-pitch strikes have been under the microscope of many fans and sabermetricians (those who study the game based on evidence, mainly stats that measure game activity). Many studies have proven that the first pitch in the at bat is the most important one. And according to Craig Burley's 2004 study in The Hardball Times, throwing a strike on a 0-0 count could potentially save over 12,000 runs scored in a single Major League Baseball season."} {"text":"In Burley's study, he used stats from the 2003 MLB season. He found that when a pitcher throws a strike on the first pitch of the at bat, hitters collected a .261 batting average. But if the first pitch was a ball, their batting average jumped to .280, a substantial difference."} {"text":"From Burley, \"Let's imagine that we have two pitchers, both of whom are otherwise perfectly average but one of whom always throws a strike on the first pitch, while the other always throws a ball. The first pitcher, the \"strike one\" pitcher, has an expected ERA (earned run average) of about 3.60. The second one, the otherwise perfectly average one who always throws a ball on pitch one, has an expected ERA of about 5.50. He'll also pitch about 12% fewer innings (without taking into account the higher pitch counts that would result from starting 1-0).\""} {"text":"While there are some players in the game who are notorious for swinging at the first pitch, Burley's study proved that there is little risk in jumping ahead early in the count. Less than 8 percent of first-pitch strikes turn into base hits."} {"text":"After that it becomes even more difficult for the hitter. 
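Batting-average splits like the ones Burley reports can be recomputed from play-by-play data in a few lines. Below is a minimal Python sketch; the record format (a `first_pitch` field and a `result` field) is invented for illustration and does not match any real play-by-play feed.

```python
def ba_split_by_first_pitch(plate_appearances):
    """Batting average split by the result of the first pitch.
    Each plate appearance is a dict with two hypothetical fields:
      'first_pitch': 'strike' or 'ball'
      'result': 'single', 'double', 'triple', 'home_run', 'out', 'walk', ...
    Walks, hit-by-pitch, and sacrifices are not official at-bats,
    so they are excluded from the denominator."""
    HITS = {"single", "double", "triple", "home_run"}
    NOT_AT_BAT = {"walk", "hbp", "sacrifice"}
    splits = {}
    for fp in ("strike", "ball"):
        group = [pa for pa in plate_appearances
                 if pa["first_pitch"] == fp and pa["result"] not in NOT_AT_BAT]
        hits = sum(1 for pa in group if pa["result"] in HITS)
        splits[fp] = hits / len(group) if group else None
    return splits

sample = [
    {"first_pitch": "strike", "result": "single"},
    {"first_pitch": "strike", "result": "out"},
    {"first_pitch": "strike", "result": "out"},
    {"first_pitch": "ball", "result": "double"},
    {"first_pitch": "ball", "result": "out"},
    {"first_pitch": "ball", "result": "walk"},  # excluded: not an at-bat
]
splits = ba_split_by_first_pitch(sample)
# splits["strike"] is 1 hit in 3 at-bats; splits["ball"] is 1 hit in 2
```

Run over a full season of real data, the same computation would reproduce splits such as the .261 versus .280 figures Burley found for 2003.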
Once a pitcher gets to a 0-1 count, hitters hit just .239 against him from there on out."} {"text":"The Minnesota Twins franchise has taken the idea of command and first-pitch strikes to a new level. Considered a small-market team, the Twins needed to find any advantage they could to keep pace with the larger franchises."} {"text":"Twins pitchers are taught from the very beginning to get ahead in the count, throwing first-pitch strikes as often as possible. In training camp, pitchers who collect the most first-pitch strikes are given a free dinner or other rewards."} {"text":"The scouts and coaches throughout the organization are trained to look for pitchers with consistent arm slots and deliveries, allowing them to spot young players who will harness the command that the franchise looks for."} {"text":"As a team, the Twins haven\u2019t ranked outside the top five in fewest walks allowed since 1996, and they\u2019ve been first or second in that category in nine of the past 13 seasons."} {"text":"Former Minnesota pitcher Brad Radke became the poster boy for first-pitch strikes, and his rate of 1.63 walks per nine innings ranks 32nd in baseball history."} {"text":"Minnesota has become one of the most successful small-market teams in the game, and as the Twins opened their new stadium, Target Field, for the 2010 season, their payroll ($97.5 million) ranked 11th among 30 big league clubs, a sign of how far the franchise has come and a testament to the importance of throwing first-pitch strikes."} {"text":"\"It stems from a manifesto we put together way back in the day: As a small-market club, how are you going to get an edge? 
We believe that command and control and makeup are true separators in the pitching category.\"\u2014Twins scouting director Mike Radcliff told ESPN's Jerry Crasnick in May 2010."} {"text":"Despite this lip service, however, the Twins have been below average in the frequency with which they throw first-pitch strikes over the last three seasons."} {"text":"Following the 2009 season, a contributor to FederalBaseball.com (an unofficial Washington Nationals blog) collected data to compare first-pitch strike percentages to earned run averages. The results indicated a correlation between the two statistics: pitchers with a higher first-pitch strike percentage often carried a lower ERA."} {"text":"Of the starting pitchers with the 20 lowest ERAs in 2009, 16 had above-average first-pitch strike percentages. The contributor created a graph to plot the results."} {"text":"When viewing the graph, keep in mind:"} {"text":"The chart includes two dashed orange lines. The ERA line is at 4.20, which was the 2009 National League average. The first-pitch strike line is at the MLB average of 58.13 percent."} {"text":"In the upper-left corner are pitchers with higher-than-average first-pitch strike percentages and lower-than-average ERAs. In the bottom-left corner are pitchers with lower-than-average first-pitch strike percentages and lower-than-average ERAs."} {"text":"According to FanGraphs.com, as of Aug. 11, 2010, the three starting pitchers with the highest first-pitch strike percentages were Cliff Lee (70.8 percent), Carl Pavano (68 percent), and Roy Halladay (67.6 percent). 
Pavano (3.28) had the highest ERA of the three, with Halladay and Lee both carrying ERAs below 2.50."} {"text":"Starting pitchers throughout the league have acknowledged that throwing first-pitch strikes gives them a better chance for success."} {"text":"Daniel Hudson, a 23-year-old starting pitcher for the Arizona Diamondbacks told FoxSports.com on Aug. 6, 2010 that throwing first-pitch strikes has aided in his increased performance."} {"text":"After a winning start in which he threw first-pitch strikes to 20 of the 29 hitters he faced, he told FoxSports.com, \"When you get that first-pitch strike, it automatically puts [the hitters] in a hole and gives me an advantage. It's very important to get that first pitch over in every at-bat.\""} {"text":"Seattle Mariners\u2019 pitcher Jason Vargas was enjoying the best season of his career through Aug. 11, 2010, with an ERA close to 3.00. Following a 2009 season in which he won just three games in 14 starts and had an ERA of 4.91, Vargas took a new approach. After throwing just 51 percent strikes on the first pitch in 2009, that number jumped to 63 percent in 2010, above the MLB average. From SeattlePI.com, \"It puts him in the drivers' seat to execute pitch sequences to hitters on his own accord, rather than having to give in and offer hitters fastballs in fastball-counts.\""} {"text":"Phil Hughes of the New York Yankees has excelled in his first full season as a starting pitcher and was named to the American League All-Star team. Hughes has developed a knack for getting one over on the first pitch, increasing his first-pitch strike percentage in each of his four seasons in the majors. His percentage of 64.3 through Aug. 11, 2010 is the highest of his career, and the eighth best in the American League."} {"text":"On June 19, 2010, Hughes told NJ.com, \"There's a lot of good strike-throwers out there, but that's been my main goal, just get strike one and take it one pitch at a time. 
Get ahead, and go from there \u2026 When you\u2019re falling behind 1-0 as opposed to 0-1, it's a huge difference \u2026 That's all I try to do is just throw strikes and be aggressive. And know that if I put myself in those good situations, good counts, more or less good things are going to happen.\""} {"text":"Hughes backed up his comments with statistics. Through Aug. 11, 2010, Hughes allowed just a .221 batting average against after throwing a first-pitch strike, as opposed to a .273 batting average against after throwing a ball on the first pitch. His win total on the season is the highest of his career."} {"text":"In baseball, an ace is the best starting pitcher on a team and nearly always the first pitcher in the team's starting rotation. Barring injury or exceptional circumstances, an ace typically starts on Opening Day. In addition, aces are usually preferred to start crucial playoff games, sometimes on three days' rest."} {"text":"The term may be a derivation of the nickname of Asa Brainard (real first name: \"Asahel\"), a 19th-century star pitcher, who was sometimes referred to as \"Ace\"."} {"text":"In the early days of baseball, the term \"ace\" was used to refer to a run."} {"text":"Modern baseball analysts and fans have started using the term \"ace\" to refer to the elite pitchers in the game, not necessarily to the best starting pitcher on each team. For example, the April 27, 1981, \"Sports Illustrated\" cover was captioned \"The Amazing A's and Their Five Aces\" to describe the starting rotation of the 1981 Oakland Athletics."} {"text":"In baseball, the double switch is a type of player substitution, usually performed by a team while playing defense. The double switch is typically used to make a pitching substitution, while simultaneously placing the incoming pitcher in a more favorable spot in the batting order than was occupied by the outgoing pitcher. 
(On the assumption that the pitcher will be a poor hitter, the incoming pitcher will generally take the spot in the batting order of a position player who has recently batted, so as to avoid the pitcher making a plate appearance in the next couple of innings.) To perform a double switch (or any other substitution), the ball must be dead."} {"text":"Since the batting order can be changed only as a result of a player substitution, while the defensive arrangement may be changed freely (among players currently in the game), the double switch typically takes the following form:"} {"text":"In the short term, the lineup is strengthened because a poor-hitting pitcher will not make a plate appearance soon. The disadvantage is that a position player must be removed from play and replaced by another, often inferior, position player. The advantage of the double switch over pinch hitting is that it uses up fewer players. If a relief pitcher is brought in before the at-bat, then the manager can substitute a pinch-hitter for him. However, this would require a new pitcher for the next half-inning. By using a double switch, an incoming pitcher can be left in the game for a substantial period before his turn in the batting lineup arrives, no matter what the previous batting order was."} {"text":"When the team is up to bat, a manager can get the same effect as a double switch by leaving in the player who has pinch-hit for the pitcher and replacing another player in the lineup who has made the last out of the inning with a new pitcher. This will take the following form:"} {"text":"A double switch has infrequently resulted in a team batting out of turn because the lineup card was not updated to reflect the change, either because the umpires were not informed of the change, or because the change was not recorded. 
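The lineup bookkeeping described above is mechanical: the incoming pitcher inherits the replaced fielder's batting slot, and the incoming fielder inherits the old pitcher's slot. A minimal Python sketch with invented player names:

```python
def double_switch(lineup, outgoing_pitcher, incoming_pitcher,
                  outgoing_fielder, incoming_fielder):
    """Return a new 9-man batting order after a double switch.
    The incoming pitcher takes the batting slot of the replaced position
    player (ideally one who has just batted), and the incoming position
    player takes the outgoing pitcher's slot."""
    new_order = list(lineup)  # leave the original order untouched
    p_slot = new_order.index(outgoing_pitcher)
    f_slot = new_order.index(outgoing_fielder)
    new_order[f_slot] = incoming_pitcher   # pitcher now sits in a slot that just batted
    new_order[p_slot] = incoming_fielder   # new fielder bats in the pitcher's old slot
    return new_order

# Invented example: the pitcher bats 9th; the second baseman who just
# made the last out bats 5th.
before = ["A", "B", "C", "D", "SecondBaseman", "F", "G", "H", "OldPitcher"]
after = double_switch(before, "OldPitcher", "NewPitcher",
                      "SecondBaseman", "NewSecondBaseman")
# after[4] == "NewPitcher"; after[8] == "NewSecondBaseman"
```

Because the new pitcher occupies the slot that has just batted, roughly eight other hitters are due up before he must appear at the plate, which is exactly the deferral the double switch is designed to buy.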
In addition, because double switches are typically communicated verbally, they create opportunities for confusion and miscommunication that can be costly to the switching team."} {"text":"Power pitcher is a term in baseball for a pitcher who relies on pitch velocity at the expense of accuracy. Power pitchers usually record a high number of strikeouts, and statistics such as strikeouts per 9 innings pitched are common measures of power. An average pitcher strikes out about 5 batters per nine innings, while a power pitcher will often strike out one or more every inning. The prototypical power pitcher is National Baseball Hall of Fame member Nolan Ryan, who struck out a Major League Baseball record 5,714 batters in 5,386 innings. Ryan recorded seven no-hitters and appeared in eight Major League Baseball All-Star Games, but he also holds the record for most walks issued (2,795)."} {"text":"A famous fictional example of a power pitcher is Ricky \"Wild Thing\" Vaughn from the film \"Major League\", a character sports journalist Scott Lauber once called \"the power pitcher everyone on my high school baseball team wished they were\". Actor Charlie Sheen, who played the role, had himself pitched in organized baseball before his acting career. Prominent real-life power pitchers include Hall of Famers Walter Johnson, Bob Gibson, Sandy Koufax, Randy Johnson, and Bob Feller. Feller famously led his league in both strikeouts and walks several times."} {"text":"The traditional school of thought on power pitching was known as \"throw till you blow\". However, multimillion-dollar contracts have changed mentalities. The number of pitches thrown is now counted by a team's staff, with particular attention paid to young power arms. The care that some older power pitchers took with their arms allowed for long careers and further opportunities after they stopped playing. 
Roger Clemens, for example, has remained in the public eye for years."} {"text":"The infield shift in baseball is a defensive realignment from the standard positions to blanket one side of the field or the other. Used primarily against left-handed batters, it is designed to protect against base hits pulled hard into the gaps between the fielders on one side. Originally called the Williams shift, it has since also been referred to as the Boudreau shift or the Ortiz shift."} {"text":"The infield shift strategy is often associated with Ted Williams, but it was actually first employed against Cy Williams during the 1920s. Cy Williams, a left-handed outfielder with the Chicago Cubs (1912\u20131917) and Philadelphia Phillies (1918\u20131930), was second only to Babe Ruth in major league career home runs from 1923 to 1928. Opposing defenses would shift \"practically to the entire right side\" when he batted."} {"text":"The shift was devised against Ted Williams of the Boston Red Sox by Cleveland Indians manager Lou Boudreau between games of a doubleheader in July 1946 to halt Williams' hot hitting. It was later used during the 1946 World Series by St. Louis Cardinals manager Eddie Dyer as a gimmick to unsettle and, he hoped, contain the Boston slugger. 
In his book \"Player-Manager\", Boudreau wrote, \"I have always regarded the Boudreau Shift as a psychological, rather than a tactical, victory.\""} {"text":"The shift has subsequently been employed to thwart extreme pull hitters (mostly lefties), such as Barry Bonds, Ryan Howard, Jason Giambi, David Ortiz, Jim Thome, Adam Dunn, and Mark Teixeira."} {"text":"In the Ortiz shift, the shortstop and second baseman move to the outfield between first and second base, while the left and center fielders shift toward the right side of the field and the third baseman moves to the left side of the outfield."} {"text":"Baseball historian Bill James\u2014who worked for the Red Sox at the time\u2014criticized the Ortiz shift for defending only against ground balls and not against home runs, which he described as Ortiz's true danger. Though the shift was mostly used against Ortiz, it has been used elsewhere in baseball."} {"text":"As the infield shift leaves some areas less covered than others, the batter who hits toward those areas may obtain better results than against an un-shifted infield. A stark example occurred in a 1970 game between the Philadelphia Phillies and the San Francisco Giants: the Giants' Willie McCovey bunted hard down the third base line when the shift was on. With no one covering third, Willie Mays, on first at the time, came all the way around to score, while McCovey reached second for a double."} {"text":"Infield shifts can also provide base running opportunities to the batting team. A notable example occurred in Game 4 of the 2009 World Series: with switch hitter Mark Teixeira of the New York Yankees batting left-handed, and the Philadelphia Phillies implementing an infield shift, baserunner Johnny Damon stole second base and then continued on to third base in one continuous play, as there was no fielder on the left side of the infield. 
Damon would later score what proved to be the winning run of the game."} {"text":"The shift can be countered by the batter bunting towards third base, as the third baseman is positioned in the outfield. For example, Ortiz started to hit more balls toward the left side of the field, taking advantage of the lack of fielders on that side. A 2015 Major League Baseball proposal to ban defensive shifts was noted as one that would have benefited Ortiz."} {"text":"As early as 2015, the Commissioner of Baseball considered banning the shift, with some MLB managers expressing agreement, although there is no consensus on such an idea. In 2019, the independent Atlantic League of Professional Baseball, as part of an agreement with MLB to test experimental rules, banned (or significantly restricted) the shift by requiring two infielders to be positioned on either side of second base."} {"text":"In baseball, a ground ball pitcher (also ground-ball pitcher or groundball pitcher) is a pitcher who has a tendency to induce ground balls from opposing batters. The average ground ball pitcher has a ground ball rate of at least 50%, with extreme ground ball pitchers maintaining a ground ball rate of around 55%. Pitchers with a ground ball rate lower than 50% may be classified as flyball pitchers or as pitchers who exhibit the tendencies of both ground ball and fly ball pitchers. Ground ball pitchers rely on pitches that are low in the strike zone with substantial downward movement, such as splitters and sinker balls."} {"text":"Baseball analysts and sabermetricians Tom Tango, Mitchel Lichtman, and Andrew Dolphin agree that ground ball pitchers are generally better pitchers than those with fly ball tendencies. Meanwhile, baseball writer and analyst Bill James argues the opposite because of injury patterns among ground ball pitchers."} {"text":"Against a ground ball pitcher, batters tend to ground out rather than fly out. 
A ground ball pitcher\u2019s ability to keep balls in the infield also keeps them from becoming home runs, which, according to Hardball Times writer David Gassko, is the strongest benefit of a ground ball pitcher. When a ground ball pitcher does allow a ball to be hit into the air, it is likely to result in a line drive."} {"text":"Compared to fly ball pitchers, ground ball pitchers generally allow fewer extra-base hits yet more total hits. They also tend to give up fewer home runs than fly ball pitchers."} {"text":"Ground ball pitchers tend to perform better against ground ball hitters than against fly ball hitters."} {"text":"Compared to fly ball pitchers, ground ball pitchers are more likely to allow unearned runs. David Gassko notes that 2.23% of ground balls result in an error, and these errors account for 85% of all errors. Accordingly, as Gassko argues, the susceptibility of ground balls to errors results in more unearned runs."} {"text":"With runners on base, ground ball pitchers often force double plays because the weak contact batters make with a ground ball pitcher\u2019s pitches keeps the ball from getting through the infield defense."} {"text":"Ground ball rate, or ground ball percentage, is the percentage of batted balls that are hit as ground balls against a pitcher. A typical ground ball pitcher has a ground ball rate over 50%, while an extreme ground ball pitcher maintains a ground ball rate of 55% or higher. Pitchers with high ground ball rates sustain a lower batting average on balls in play (BABIP) on ground balls than those with low ground ball rates."} {"text":"Ground ball pitchers rely on pitches that are likely to induce weak contact from the batter, thus resulting in a ground ball. 
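The ground ball rate described here is a simple proportion of batted balls in play, and the 50% and 55% thresholds quoted in the text give a rough classification rule. A minimal sketch (the season counts below are hypothetical):

```python
def ground_ball_rate(ground_balls, balls_in_play):
    """Percentage of batted balls in play that were ground balls."""
    return 100.0 * ground_balls / balls_in_play

def classify(rate):
    # Thresholds quoted in the text: roughly 55% for an "extreme"
    # ground ball pitcher, at least 50% for a ground ball pitcher,
    # otherwise fly ball or mixed tendencies.
    if rate >= 55:
        return "extreme ground ball pitcher"
    if rate >= 50:
        return "ground ball pitcher"
    return "fly ball or mixed tendencies"

# Hypothetical season line: 230 ground balls on 400 balls in play.
rate = ground_ball_rate(230, 400)  # 57.5
label = classify(rate)             # "extreme ground ball pitcher"
```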
Pitches that are low in the strike zone with high negative horizontal or vertical movement and high velocity, such as splitters, sinkers, curveballs, and two-seam fastballs, result in the highest percentage of ground balls. According to data from the 2012 major league season, splitters and sinker balls result in the highest percentages of ground balls compared to other pitches, with 50.3% and 49.8%, respectively."} {"text":"The sinker ball has an ability to \u201cdive\u201d at the plate, often resulting in ground balls. Several ground ball pitchers, such as Tim Hudson, Greg Maddux, Derek Lowe, Chien-Ming Wang, Brandon Webb, and Jake Westbrook, rely heavily on their sinker pitches and may often be considered sinkerballers. Self-proclaimed ground ball pitcher Zach Day has indicated that his primary pitch is a sinker ball as well."} {"text":"Tim Hudson notes that he transformed from a strikeout pitcher to a ground ball pitcher because of the capabilities of his sinker ball. He also notes that he feels double plays are easy to force with a ground ball."} {"text":"As of 1998, 72% of balls put in play against Greg Maddux, who often relies on a sinker ball, resulted in ground balls."} {"text":"In June 2002, Lowe allowed eleven fly balls to 129 batters, relying on his sinker to induce ground balls."} {"text":"According to a scouting report by Lewis Shaw, Brandon Webb\u2019s sinker possesses heavy downward movement and high velocity, and one of his notable tendencies is to induce ground balls from right-handed hitters."} {"text":"In a World Series game on October 21, 1996, against the New York Yankees, then-Atlanta Braves pitcher Greg Maddux allowed one fly ball and eighteen ground balls, earning nineteen of his twenty-four outs on ground balls, with Wade Boggs grounding into a double play. 
Yankees\u2019 catcher Joe Girardi said of Maddux\u2019s performance, \u201c[H]e has a great sinker and he gets a lot of ground balls.\u201d Braves center fielder Marquis Grissom noted, \u201cHe [Maddux] works fast. His games are not boring, by no means. That\u2019s his style of pitching. He\u2019s a ground ball pitcher.\u201d"} {"text":"Baseball writer Murray Chass noted the similarities between this World Series game and a World Series game Maddux pitched against the Cleveland Indians a year prior, which the Indians lost while scoring two unearned runs. In that game, Maddux recorded nineteen ground outs and allowed two fly balls."} {"text":"In game three of the American League Championship Series between the Cleveland Indians and Boston Red Sox in 2007, Indians pitcher Jake Westbrook used his sinker ball to induce fifteen ground ball outs and also forced two 6-4-3 double plays."} {"text":"The 2014\u20132015 Kansas City Royals (who play in the American League) are a recent example of a team with a small-ball orientation."} {"text":"A team may incorporate a small-ball strategy for a variety of reasons, including:"} {"text":"Most commonly, managers will switch to small-ball tactics while a game is in progress, doing so when a variety of factors converge, including having appropriate hitters coming up next in the batting order and, often, fast runners already on base. A team could also start the game intending to play small ball but abandon the strategy during the game as circumstances change, such as when the opposing pitcher is struggling or has left the game, or when the team is ahead or behind by several runs."} {"text":"Small ball is a contrast to a style sometimes called the \"big inning\", where batters focus more on drawing walks or getting extra-base hits and home runs. This may produce many innings with little but strikeouts and flyouts, but occasionally innings with several runs. 
By playing small ball, the team trades the longer odds of a big inning for the increased chances of scoring a single run. Specifically, small ball often requires the trading of an out to advance a runner and therefore usually reduces the number of batting opportunities that a team will have in a given inning."} {"text":"Small ball was once the standard by which the game was played during the \"dead-ball era\" at the beginning of the 20th century, when both batting averages and home-run totals dropped to historic lows. Teams relied on bunting and stolen bases to score runs. The advent of new, cork-centered baseballs in 1910, as well as the outlawing of specialty pitches such as the spitball, saw a jump in batting averages and home runs."} {"text":"Small ball has become less common because of the general trend toward smaller parks and more home runs, especially in the American League, where the designated hitter rule further increases offensive power. However, big league managers must still be able to manage from a small-ball perspective, as it is sometimes necessary, especially in critical games. White Sox manager Ozzie Guill\u00e9n was widely quoted as saying his 2005 World Series champion team played not small ball or big-inning ball but \"smart ball\", which has come to mean a more adaptable strategy."} {"text":"The general idea of playing small ball is much more widely accepted and used in Japan; good hitters will frequently be asked to lay down a sacrifice bunt in an attempt to advance the runner if the leadoff batter reaches first or second base."} {"text":"The San Francisco Giants were widely credited with winning the second game of the 2012 World Series against the Detroit Tigers on small ball. 
In a 2-0 victory, the Giants scored their first run on a ball that was grounded into a double play and later added a run on a sacrifice fly."} {"text":"Sometimes, the term may be used (also correctly, since it is an informal term) to refer to any of the parts of the broader strategy defined above. This may include a bunt single, the hit-and-run play, a sacrifice fly, the contact play, etc."} {"text":"When aggregated, such individual efforts can amount to small-ball tactics even when not deliberately deployed by a team's manager. For example, if the lead-off batter reaches base, a series of individual moves can lead to run totals resembling those of the big-inning strategy but scored one at a time."} {"text":"In baseball, a left-handed specialist (also known as lefty specialist) is a relief pitcher who throws left-handed and specializes in pitching to left-handed batters, weak right-handed batters, and switch-hitters who bat poorly right-handed. Because baseball practices permanent substitution, these pitchers frequently pitch to a very small number of batters in any given game (often only one), and rarely pitch to strictly right-handed batters. Most Major League Baseball (MLB) teams have several left-handed pitchers on their rosters, at least one of whom is a left-handed specialist. A left-handed specialist is sometimes called a \"LOOGY\" (or Lefty One-Out GuY), a term coined by John Sickels that is sometimes used pejoratively."} {"text":"In the 1991 MLB season, there were 28 left-handed relievers who were not their team's closer and pitched in 45 or more games. Only four averaged fewer than an inning per appearance. From 2001 to 2004, over 75 percent of left-handed relievers meeting those criteria averaged less than one inning per appearance. Left-handed reliever John Candelaria was one of the early specialists: in 1991 he pitched in 59 games and averaged .571 innings per appearance. In 1992, he allowed no earned runs\u2014excluding inherited runners\u2014in 43 of his 50 games. 
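Per-appearance averages like these are plain ratios, but box-score innings-pitched totals use baseball notation, in which the digit after the point counts outs (thirds of an inning), not tenths; converting first is what yields outs per appearance. A small sketch of the conversion (the helper names are ours, and only figures quoted in this article are used):

```python
def ip_to_outs(ip):
    """Convert baseball innings-pitched notation to total outs.

    In this notation '.1' and '.2' mean one and two outs, so
    '39.2' is 39 full innings plus 2 outs, not 39.2 innings.
    """
    whole, _, frac = ip.partition(".")
    return int(whole) * 3 + (int(frac) if frac else 0)

def outs_per_appearance(ip, games):
    return ip_to_outs(ip) / games

outs = ip_to_outs("39.2")  # 39 * 3 + 2 = 119 outs
```

Applied to the 39.2-inning, 72-game season line quoted for Joe Thatcher later in this article, the conversion gives about 1.65 outs per appearance, consistent with the fewer-than-two-outs averages that define the specialist role.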
Jesse Orosco became a left-handed specialist later in his 24-season career and retired at the age of 46. From 1991 to 2003, he never averaged more than an inning pitched per appearance."} {"text":"During the 2013 MLB season, there were seven relief pitchers who averaged fewer than two outs recorded per appearance, all of whom were left-handed. Joe Thatcher, a left-handed specialist, appeared in 72 games with 39.2 innings pitched, and had the fewest outs recorded per appearance with 1.6."} {"text":"Starting with the 2020 season, all pitchers, whether starters or relievers, will be required to face at least three batters, or pitch to the end of the half-inning in which they enter the game. Exceptions will be allowed only for incapacitating injury or illness while pitching. According to \"MLB.com\" journalist Anthony Castrovince, \"This will effectively end the so-called \"LOOGY\" (left-handed one-out guy) and other specialist roles in which pitchers are brought in for one very specific matchup.\""} {"text":"The right-handed specialist (sometimes called a \"ROOGY\", for Righty One-Out GuY) is less common than the left-handed specialist, but is occasionally featured."} {"text":"A bunt is a batting technique in baseball or fastpitch softball. The Official Baseball Rules define a bunt as follows: \"A BUNT is a batted ball not swung at, but intentionally met with the bat and tapped slowly within the infield.\" To bunt, the batter loosely holds the bat in front of home plate and intentionally taps the ball into play. A properly executed bunt will create weak contact with the ball and\/or strategically direct it, forcing the infielders to make a difficult defensive play to record an out."} {"text":"The strategy in bunting is to ground the ball into fair territory, as far from the fielders as possible but within the infield. 
This requires not only physical dexterity and concentration, but also an awareness of the fielders' positions in relation to the baserunner or baserunners, their likely reactions to the bunt, and knowledge of the pitcher's most likely pitches."} {"text":"The bunt is typically executed by the batter turning his body toward the pitcher and sliding one hand up the barrel of the bat to help steady it. This is called squaring up. Depending on the situation, the batter might square up either before the pitcher winds up, or as the pitched ball approaches the plate. Sometimes, a batter may square up, then quickly retract the bat and take a full swing as the pitch is delivered."} {"text":"In a sacrifice bunt, the batter will put the ball into play with the intention of advancing a baserunner, in exchange for the batter being thrown out. The sacrifice bunt is most often used to advance a runner from first to second base, though the runner may also be advanced from second to third base, or from third to home. The sacrifice bunt is most often used in close, low-scoring games, and it is usually performed by weaker hitters, especially by pitchers in games played in National League parks. A sacrifice bunt is not counted as an at-bat. In general, when sacrifice bunting, a batter will square to bunt well before the pitcher releases the ball."} {"text":"The squeeze play occurs when the batter sacrifices with the purpose of scoring a runner from third base. In the suicide squeeze, in which the runner on third base starts running for home plate as soon as the pitcher starts to pitch the ball, it is essential that the batter bunt the ball successfully, or the runner will likely be tagged out easily. Due to the high-risk nature of this play, it is not often executed, but it can be an exciting moment within the game. Alternatively, in the lower-risk safety squeeze, the runner on third waits for the ball to be bunted before breaking for home. 
If a runner scores in a squeeze play, the batter may be credited with an RBI."} {"text":"Often when attempting to bunt for a base hit, the batter will begin running as he is bunting the ball. This is called a drag bunt. Left-handed batters perform this more often than right-handed hitters, because their stance in the batter's box is closer to first base, and they do not need to run across home plate, where the ball will be pitched, as they bunt."} {"text":"The action of squaring to bunt is compromised during a drag bunt, as the feet are not set. Players sometimes get one hand up the barrel, and other times bunt with both hands at the base of the bat. There have been instances of one-handed drag bunts as well; Rafael Furcal has been known to try such a bunt."} {"text":"A swinging bunt occurs when a poorly hit ball rolls a short distance into play, much like a bunt. A swinging bunt is often the result of a checked swing, and only has the appearance of a bunt. It is not a true bunt, and if the scorer judges that the batter intended to hit the ball, it cannot be counted as a sacrifice. There is also a \"slug\" bunt that is intended to surprise the opposing defense, as the desired effect is a hard-hit ball into the infield defense that is expecting a standard bunt."} {"text":"A foul bunt that is not caught in flight is always counted as a strike, even if it is a third strike and thus results in a strikeout of the batter. This is distinct from all other foul balls, which, if not caught in flight, are counted as a strike only if not a third strike. This special exception applies only to true bunts, not to any bunt-like contact that might occur during a full swing or check swing. If a batter's bat hits the ball again after the initial contact of a bunt, it is a dead ball, even if the second contact is accidental."} {"text":"Additionally, the infield fly rule is not applied to bunts popped up in the air. 
Instead, the intentional drop rule (Rule 6.05l) that also applies to line drives can be invoked."} {"text":"In baseball, a closing pitcher, more frequently referred to as a closer (abbreviated CL), is a relief pitcher who specializes in getting the final outs in a close game when his team is leading. The role is often assigned to a team's best reliever. Before the 1990s, pitchers in similar roles were referred to as firemen, short relievers, or stoppers. A small number of closers have won the Cy Young Award. Eight closers have been inducted into the Baseball Hall of Fame: Dennis Eckersley, Rollie Fingers, Goose Gossage, Trevor Hoffman, Mariano Rivera, Lee Smith, Bruce Sutter and Hoyt Wilhelm."} {"text":"A closer is generally a team's best reliever and designated to pitch the last few outs of games when his team is leading by a margin of three runs or fewer. Rarely does a closer enter with his team losing or in a tie game. A closer's effectiveness has traditionally been measured by the save, an official Major League Baseball (MLB) statistic since 1969. Over time, closers have become one-inning specialists typically brought in at the beginning of the ninth inning in save situations. The pressure of the last three outs of the game is often cited as the reason for the importance attributed to the ninth inning."} {"text":"Closers are often the highest-paid relievers on their teams, making money on par with starting pitchers. In the rare cases where a team does not have one primary pitcher dedicated to this role, the team is said to have a \"closer by committee\"."} {"text":"Eight pitchers who were primarily relievers have been inducted into the Baseball Hall of Fame. Hoyt Wilhelm was the first to be elected in 1985, followed by Rollie Fingers, Dennis Eckersley, Bruce Sutter, Goose Gossage, Trevor Hoffman, Lee Smith, and Mariano Rivera. Eckersley was the first closer in the one-inning save era to be inducted. 
He believed that he was inducted because he was both a starter and a reliever. \"If I came up today as a closer and played 20 years, would I have made it [into the Hall of Fame]? These pitchers did the job they were supposed to do for 20 years. What else are they supposed to do?\" said Eckersley."} {"text":"In baseball, middle relief pitchers (or \"middle relievers\") are relief pitchers who commonly pitch in the fifth, sixth, or seventh innings. In the National League, a middle reliever often comes in after the starting pitcher has been pulled for a pinch hitter. A middle reliever is usually replaced in the eighth or ninth inning by a left-handed specialist, setup pitcher, or closer; middle relief pitchers may work these innings as well, especially if the game is not close."} {"text":"A platoon system in basketball, baseball, or football is a method for substituting players in groups (platoons), to keep complementary players together during playing time."} {"text":"In baseball, a platoon is a method of sharing playing time, where two players are selected to play a single defensive position. Usually, one platoon player is right-handed and the other is left-handed. Typically, the right-handed half of the platoon plays on days when the opposing starting pitcher is left-handed, and the left-handed player plays otherwise. The theory behind this is that players generally hit better against opposite-handed pitchers, and that in some cases the difference is extreme enough to warrant complementing a player with one of the opposite handedness."} {"text":"Platooning can be viewed negatively. Players prefer to play every day, and managers, including Walter Alston, feared that sharing playing time could decrease confidence. 
Mookie Wilson of the New York Mets requested a trade in 1988 after serving in a platoon for three seasons with Lenny Dykstra."} {"text":"Terms for this strategy included \"double-batting shift\", \"switch-around players\", and \"reversible outfield\". Tris Speaker referred to his strategy as the \"triple shift\", because he employed it at three positions. The term \"platoon\" was coined in the late 1940s. Casey Stengel, as manager of the New York Yankees, became a well-known proponent of the platoon system and won five consecutive World Series championships from 1949 through 1953 using the strategy. Stengel platooned Bobby Brown, Billy Johnson, and Gil McDougald at third base, Joe Collins and Moose Skowron at first base, and Hank Bauer and Gene Woodling in left field. Harold Rosenthal, writing for the \"New York Herald Tribune\", referred to Stengel's strategy as a \"platoon\", after the American football concept, and it came to be known as \"two-platooning\"."} {"text":"Following Stengel's success, other teams began implementing their own platoons. In the late 1970s through early 1980s, Baltimore Orioles manager Earl Weaver successfully employed a platoon in left field with John Lowenstein, Benny Ayala, and Gary Roenicke, playing whichever of them was performing best at the time. Weaver also considered other factors, including the opposing pitcher's velocity and his batters' ability to hit a fastball. The Orioles continued to platoon at catcher and all three outfield positions in 1983 under Joe Altobelli, as the Orioles won the 1983 World Series, leading other teams to pursue the strategy."} {"text":"Platooning decreased in frequency from the late 1980s through the 1990s, as teams expanded their bullpens to nullify platoon advantages for hitters. However, the use of platoons has increased in recent years. As teams increase their analysis of data, they attempt to put batters and pitchers in situations where they are more likely to succeed. 
Generally, small-market teams, which cannot afford to sign the league's best players to market-value contracts, are the most likely to employ platoons. Under manager Bob Melvin, the Athletics have employed many platoons, with Josh Reddick calling Melvin the \"king of platoons\". Joe Maddon began to employ platoons as manager of the Tampa Bay Rays."} {"text":"The 2013 World Series champion Boston Red Sox platooned Jonny Gomes and Daniel Nava in left field. After the 2013 season, left-handed relief pitchers Boone Logan and Javier L\u00f3pez, both considered left-handed specialists because of their ability to limit the effectiveness of left-handed batters, signed multimillion-dollar contracts as free agents."} {"text":"When a football team uses two (or more) quarterbacks to run its offense, rather than the traditional one, it is known as \"platooning quarterbacks\". This tactic becomes less common at higher levels of football (high school teams are more likely to do it than National Football League teams, for example). Quarterbacks may be switched in and out of the game every play, every drive, every quarter, or depending on certain situations. If quarterbacks are switched from game to game, that is not platooning but a \"quarterback controversy\" or a simple benching."} {"text":"Using two different quarterbacks allows an offense to use players with different skill sets. One common reason teams platoon quarterbacks is that one player is a good passer and the other a good runner (see, for example, Stanley Jackson and Joe Germaine of the 1997 Ohio State Buckeyes). Thus defenses have to prepare for two types of quarterback, not just one. It also allows offenses to run a greater variety of plays."} {"text":"In baseball, a setup man (or set-up man, also sometimes referred to as a setup pitcher or setup reliever) is a relief pitcher who regularly pitches before the closer. 
They commonly pitch the eighth inning, with the closer pitching the ninth."} {"text":"As closers were reduced to one-inning specialists, pitching primarily in the ninth inning, setup men became more prominent and more highly valued. Setup pitchers often come into the game with the team losing or the game tied. They are usually the second-best relief pitcher on a team, behind the closer. A pitcher who succeeds in this role is often promoted to closer. Setup men are paid less than closers and mostly make less than the average Major League salary."} {"text":"The most common statistic used to evaluate relievers is the save. Due to the definition of the statistic, setup men are rarely in position to record a save even if they pitch well, but they can be charged with a blown save if they pitch poorly. The hold statistic was developed to help acknowledge a setup man's effectiveness, but it is not an official Major League Baseball (MLB) statistic."} {"text":"Historically, setup men were rarely selected to MLB All-Star Games, with the nod usually going to closers with large save totals. From 1971 through 2000, only six relievers with fewer than five saves at midseason were selected as All-Stars. There were 10 such players from 2001 through 2009. In 2015, the majority of the American League's All-Star relievers were not closers, with non-closers outnumbering closers 4\u20133. 
Setup men who have been named All-Stars multiple times include Justin Duchscherer, Tyler Clippard, Dellin Betances, and Andrew Miller."} {"text":"Francisco Rodriguez, who was a setup pitcher for the Anaheim Angels in 2002, tied starting pitcher Randy Johnson's Major League Baseball record for wins in a single postseason after recording his fifth victory in the 2002 World Series."} {"text":"Tim McCarver wrote that the New York Yankees in 1996 \"revolutionized baseball\" with Mariano Rivera, \"a middle reliever who should have been on the All-Star team and who was a legitimate MVP candidate.\" He finished third in the voting for the American League (AL) Cy Young Award, the highest a setup man has finished. That season, Rivera primarily served as a setup pitcher for closer John Wetteland, typically pitching in the seventh and eighth innings of games before Wetteland pitched in the ninth. Their effectiveness gave the Yankees a 70\u20133\u00a0win\u2013loss record that season when leading after six innings. McCarver said the Yankees played \"six-inning games\" that year, with Rivera dominating for two innings and Wetteland closing out the victory."} {"text":"Illustrating the general trend, both Rivera and Rodriguez were moved to closer soon after excelling as setup men. On January 22, 2019, Rivera became the first player unanimously elected to the Baseball Hall of Fame, earning induction in his first year of eligibility on the ballot."} {"text":"In baseball, an opening pitcher, more frequently referred to as an opener, is a pitcher who specializes in getting the first outs in a game, before being replaced by a long reliever or a pitcher who would typically be a starting pitcher. Pitchers employed in the role of opener have usually been relief pitchers by trade. 
The strategy was frequently employed in Major League Baseball (MLB) by the Tampa Bay Rays during the 2018 season, when it was adopted by other teams as well."} {"text":"By the 1980s, MLB teams had adopted starting rotations consisting of five starting pitchers, with all other pitchers on the active roster serving as relief pitchers. Traditionally, a starter was expected to throw the most innings of any pitcher in a game. Starters typically pitched until they got into trouble or reached a pitch count threshold."} {"text":"When Farhan Zaidi became general manager of the San Francisco Giants after the 2018 season, he spoke about using an opener to protect Dereck Rodriguez and Andrew Suarez from being overworked."} {"text":"The Tampa Bay Rays continued to use an opener in many of their games, with Ryne Stanek often filling the role. The New York Yankees coped with having three of their starting pitchers on the injured list by using reliever Chad Green as an opener. Green would pitch the first inning or two and then hand over the game to a long reliever. During the 2019 regular season, Green opened 15 games for the Yankees; the Yankees won 11 of the games that he started. The Los Angeles Angels pitched a no-hitter using an opener, with Taylor Cole working the first two innings and F\u00e9lix Pe\u00f1a the last seven in their 13\u20130 no-hitter against the Seattle Mariners on July 12."} {"text":"One advantage of the strategy is that the opener, who is often a hard-throwing specialist, can be called in to face the most dangerous hitters, who are usually near the top of the batting order, the first time they come to bat. If the opener is successful, the job of the next pitcher is easier since they will start with less-dangerous hitters. 
The strategy also throws off the timing of the top-of-the-order hitters, who are not used to seeing a different pitcher each time they come to bat, and allows the usual starting pitcher to face the top of the lineup two times rather than three."} {"text":"From a financial perspective, the strategy allows teams to make more use of relief pitchers who are still under low-paying contracts, potentially reducing the salaries paid to starting pitchers because the latter are used less."} {"text":"In baseball, a catch occurs when a fielder gains secure possession of a batted ball in flight, and maintains possession until he voluntarily and intentionally releases the ball. When a catch occurs, the batter is out, and runners, once they properly tag up (retouch their time-of-pitch base), may attempt to advance at risk of being tagged out."} {"text":"Unlike in American football and other sports, neither secure possession for a time nor for a number of steps is enough to demonstrate that a catch has occurred. A fielder may, for example, appear to catch and hold a batted ball securely, take a few more steps, collide with a wall or another player, and drop the ball. This is not a catch."} {"text":"Umpires signal a catch with the out signal: a fist raised into the air, often with a hammering motion; if there is doubt about it, the umpire will likely shout \"That's a catch!\" On a close no-catch, the umpire will signal with the safe signal, which is both arms swept to the side and extended, accompanied by the call \"No catch, no catch!\" with an emphasis on the word \"no\"."} {"text":"The fielder must catch the ball with his hand or glove. If the fielder uses his cap, protector, pocket or any other part of his uniform in getting possession, it is not a catch. 
Therefore, a foul ball which directly becomes lodged in the equipment of the catcher (other than his or her glove) is not considered a catch and hence not a foul tip."} {"text":"It is not a catch if the batted ball hits a fielder, then hits a member of the offensive team or an umpire, and then is caught by another defensive player."} {"text":"A catch is legal if the ball is finally held by any fielder before it touches the ground. Runners may leave their bases the instant the first fielder touches the ball. A fielder may reach over a fence, a railing, a rope, or a line of demarcation to make a catch. He may jump on top of a railing or a canvas that may be in foul ground. Interference should not be called when a spectator comes into contact with a fielder who is reaching over a fence, a railing, or a rope and the catch is not made; the fielder does so at his or her own risk."} {"text":"If a fielder, attempting a catch at the edge of the dugout, is \"held up\" and kept from an apparent fall by a player or players of either team and the catch is made, it shall be allowed."} {"text":"To avoid ambiguity with the common term \"catch\" meaning any action that gains possession of a ball, some may say that a fielder gloved a thrown ball or a batted, bouncing ball."} {"text":"In Major League history, the term knuckle curve or knuckle curveball has been used to describe three entirely different pitches."} {"text":"The third type of knuckle curve was thrown by Dave Stenhouse in the 1960s. Stenhouse's knuckle curve was thrown like a fastball but with a knuckleball grip. Stenhouse discovered that this pitch had excellent movement, and when he came to the majors, he utilized it as a breaking pitch. This pitch may have been the same as the knuckleball thrown by Jesse Haines and Freddie Fitzsimmons. 
The pitch would be perfected by Chicago White Sox legend Hoyt Wilhelm during the later stages of his career, after flirting with it for most of his time in the majors."} {"text":"In baseball, a force is a situation in which a baserunner is compelled (or \"forced\") to vacate his time-of-pitch base\u2014and thus try to advance to the next base\u2014because the batter became a runner. A runner at first base is always forced to attempt to advance to second base when the batter becomes a runner. Runners at second or third base are forced only when all bases preceding their time-of-pitch base are occupied by other baserunners and the batter becomes a runner."} {"text":"A forced runner's force base is the next base beyond his time-of-pitch base. Any attempt by fielders to put a forced runner out is called a force play."} {"text":"A force on a runner is \"removed\" when the batter or a following runner is put out. This most often happens on fly outs\u2014on such plays, the batter-runner is out, and the other runner(s) must return to their time-of-pitch base, known as tagging up. It also occasionally happens when a sharply hit ground ball is fielded by the first baseman, who then quickly steps on first base to put out the batter-runner. This removes the requirement that the runner already on first must advance to second base; he cannot be forced out by a defensive player holding the ball while touching second base, and the runner can try to escape from a rundown by returning to first base."} {"text":"For force outs resulting from neighborhood plays, see neighborhood play."} {"text":"An appeal play may also be a force play; for example, with runners on first and third bases and two out, the batter gets a hit but the runner from first misses second base on the way to third. After a proper appeal, this runner will be called out. 
This is a force out because the runner was out for failing to touch a base to which he was forced; this force out is the third out and thus the run does not score. However, most appeals are not force plays, because appeals usually do not involve a forced runner."} {"text":"It is not a force out when a runner is put out while trying to tag up after a caught fly ball. Because this out is similar to a true force out, in that the runner can be put out by a fielder possessing the ball at the base that the runner needs to reach, there is a widespread misconception that this out is a force out. But it is not, which means the run would count if it scored before the third out is made on a runner trying to tag up."} {"text":"A rundown, informally known as a pickle or the hotbox, is a situation in the game of baseball that occurs when the baserunner is stranded between two bases, also known as no-man's land, and is in jeopardy of being tagged out. When the base runner attempts to advance to the next base, he is cut off by a defensive player holding the live ball, and he then tries to return to his previous base before being tagged out. As he does so, the fielder throws the ball past him to a fielder at the previous base, forcing him to reverse direction again. 
This is repeated until the runner is put out or reaches a base safely."} {"text":"A rundown can be escaped if a fielder makes an error, the runner gets around the fielder with the ball without running out of the baseline, a fielder throws the ball elsewhere (e.g., toward home plate if another runner is trying to score), or the runner manages to get by the fielder without the ball while there is no other fielder to cover the runner's destination base."} {"text":"In baseball, an appeal play occurs when a member of the defensive team calls the attention of an umpire to an infraction which he would otherwise ignore."} {"text":"A runner shall be called out, after a successful live ball appeal, if he:"} {"text":"A fielder has the right to appeal any runner at any base that runner has reached or passed, at any time while the ball is alive, subject to the following restrictions:"} {"text":"An appeal is \"legal\" if the fielder"} {"text":"Umpires will only rule on legal appeals. A potential appeal is \"viable\" if the appeal is legal and the umpire knows that the runner has indeed committed an infraction and will be called out if the appeal is executed by a fielder."} {"text":"Suppose that runners are on first and third base, and the batter hits a fly ball. The runner on third tags up, leaving third base immediately after the outfielder touches the ball. The runner seems to score, beating the throw home, but fails to touch home plate. He proceeds into his dugout without again attempting to touch home base. The runner on first base stays at first base, and action becomes relaxed while the ball is in the infield."} {"text":"The fielders now suspect that the runner left third base too early and also missed the plate. 
Suppose that a fielder, with the live ball, touches third base and tells the nearest umpire, \"I think he left too early.\" This is a proper legal appeal, and the umpire should rule with a safe signal, perhaps saying, \"No, he was fine.\" Now no legal appeal may again occur on that runner at third base. Suppose then that a fielder, with the live ball, touches home base and says to the nearest umpire, \"I think he never touched home.\" This is a legal and viable appeal, and so the umpire should call the runner out and direct that his run shall not count."} {"text":"Since the ball was live (and indeed must be for appeals to be legal), the runner from first could have attempted to advance at any time during the appeals. If the defense attempts to play on that runner, their opportunity to appeal the runner from third base is lost, and the run would count regardless of any subsequent attempt to appeal."} {"text":"A member of the defensive team may appeal to the umpire when a batter bats out of turn. The umpire then enforces the penalty for batting out of turn, if any. The ball must be live for this as for any appeal. After the appeal is made, the umpire will usually signal \"Time\" and determine whether the appeal is successful."} {"text":"In U.S. high school games or other games governed by NFHS rules, the defense may execute any of the live ball appeals above during a dead ball by simply communicating the infraction to the umpire, so it is never necessary to attempt a live ball appeal; it is always safer for the defense to ask for time to make the ball dead, and then make any requests to the umpire."} {"text":"Tie goes to the runner is a popular interpretation of baseball rules. The claim is that a batter-runner who arrives at first base at the same time as the ball is safe. 
However, umpires generally reject the concept that baseball provides for a tie in this way, and instead rule on the basis that either the player or the ball has reached the base first."} {"text":"The wording of rule 5.09(a)(10), formerly 6.05(j), of the \"Official Baseball Rules\" is that a batter is out when \"After a third strike or after he hits a fair ball, he or first base is tagged before he touches first base\". Therefore, if the runner or first base is not tagged before he touches first base, he is safe."} {"text":"In response to a question from a Little League umpire, Major League Baseball umpire Tim McClelland has written that the concept of a tie at a base does not exist, and that a runner either beats the ball or does not. In 2009, umpire Mark Dewdeny, a contributor for Bleacher Report, citing McClelland, also rejected the idea of a tie, and further commented that even if a \"physicist couldn't make an argument one way or the other\" from watching an instant replay, the runner would still be out."} {"text":"One of the most notorious MLB players with a reputation for wall climbing is Minnesota Twins outfielder Torii Hunter. He has won nine Gold Gloves in his sixteen-year major league career. He once robbed Barry Bonds of a home run in right-center field in the first inning of the 2002 MLB All-Star Game."} {"text":"In baseball, an unassisted triple play occurs when a defensive player makes all three outs by himself in one continuous play, without his teammates making any assists. Neal Ball was the first to achieve this in Major League Baseball (MLB) under modern rules, doing so on July 19, 1909. For this rare play to be possible there must be no outs in the inning and at least two runners on base, normally with the runners going on the pitch (e.g., double steal or hit-and-run). 
An unassisted triple play usually consists of a hard line drive hit directly at an infielder for the first out, with that same fielder then able to double off one of the base runners and tag a second for the second and third outs."} {"text":"Most unassisted triple plays in MLB have taken this form: an infielder catches a line drive (one out), steps on a base to double off a runner (two outs), and then tags another runner on the runner's way to the next base (three outs). Usually, the \"next base\" is the same base that the infielder stepped on to record the second out, and the last runner is tagged before he can return to the previous base. Infrequently, the order of the last two putouts is reversed."} {"text":"It is nearly impossible for an unassisted triple play to occur unless the fielder is positioned between the two runners. For this reason, all but two of these plays have been accomplished by middle infielders (second basemen and shortstops). The other two were completed by first basemen, who were able to reach second base before the returning baserunner. For example, after collecting the first two outs, Tigers first baseman Johnny Neun ignored his shortstop's shouts to throw the ball, and instead ran to second base to get the final out himself. The only unassisted triple play that did not take one of these forms occurred in the 19th century, under rules that are no longer in effect (see below)."} {"text":"It is plausible that a third baseman could complete an unassisted triple play with runners at second and third or with bases loaded, but this has never happened in MLB. 
Players in other positions (pitcher, catcher, outfielders) completing an unassisted triple play would require unusual confusion or mistakes by the baserunners, or an atypical defensive alignment (for example, repositioning an outfielder as a fifth infielder)."} {"text":"The unassisted triple play, the perfect game, hitting four home runs in one game, and five extra-base hits in a game are thus comparable in terms of rarity, but the perfect game and the home run and extra-base hit records require an extraordinary effort along with a fair amount of luck. By contrast, the unassisted triple play is essentially always a matter of luck: a combination of the right circumstances with the relatively simple effort of catching the ball and running in the right direction with it. Troy Tulowitzki said of his feat, \"It fell right in my lap\", and as WGN-TV sports anchor Dan Roan commented, \"That's the way these plays always happen.\""} {"text":"In baseball and softball, the curveball is a type of pitch thrown with a characteristic grip and hand movement that imparts forward spin to the ball, causing it to dive as it approaches the plate. Varieties of curveball include the 12\u20136 curveball, power curveball, and the knuckle curve. Its close relatives are the slider and the slurve. The \"curve\" of the ball varies from pitcher to pitcher."} {"text":"The expression \"to throw a curveball\" essentially means to introduce a significant deviation from what was expected."} {"text":"The delivery of a curveball is entirely different from that of most other pitches. The pitcher at the top of the throwing arc will snap the arm and wrist in a downward motion. The ball first leaves contact with the thumb and tumbles over the index finger, thus imparting the forward or \"top-spin\" characteristic of a curveball. 
The result is spin that is the exact opposite of a four-seam fastball's backspin: all four seams rotate in the direction of the flight path with forward spin, and the axis of rotation is perpendicular to the intended flight path, much like a reel mower or a bowling ball."} {"text":"From a hitter's perspective, the curveball will start in one location (usually high or at the top of the strike zone) and then dive rapidly as it approaches the plate. The most effective curveballs will start breaking at the apex of the arc of the ball flight, and continue to break more and more rapidly as they approach and cross through the strike zone. A curveball that a pitcher fails to put enough spin on will not break much and is colloquially called a \"hanging curve\". Hanging curves are usually disastrous for a pitcher because the low-velocity, non-breaking pitch arrives high in the zone, where hitters can wait on it and drive it for power."} {"text":"The curveball is a popular and effective pitch in professional baseball, but it is not particularly widespread in leagues with players younger than college level. This is out of regard for the safety of the pitcher \u2013 not because of its difficulty \u2013 though the pitch is widely considered difficult to learn, as it requires some degree of mastery and the ability to pinpoint the thrown ball's location. There is generally a greater chance of throwing wild pitches when throwing the curveball."} {"text":"When thrown correctly, it can have a break of anywhere from seven to as much as 20\u00a0inches in comparison to the same pitcher's fastball."} {"text":"Due to the unnatural motion required to throw it, the curveball is considered a more advanced pitch and poses inherent risk of injury to a pitcher's elbow and shoulder. 
There has been controversy, as reported in \"The New York Times\" of March 12, 2012, about whether curveballs alone are responsible for injuries in young pitchers or whether it is the number of pitches thrown that is the predisposing factor. In theory, allowing time for the cartilage and tendons of the arm to fully develop would protect against injuries. While acquisition of proper form might be protective, Dr. James Andrews is quoted in the article as stating that in many children, insufficient neuromuscular control, lack of proper mechanics, and fatigue make maintenance of proper form unlikely."} {"text":"The parts of the arm most commonly injured by the curveball are the ligaments in the elbow, the biceps, and the forearm muscles. Major elbow injury requires repair through elbow ligament reconstruction, or Tommy John surgery."} {"text":"The \"12\u20136 curveball\" vs. the \"Roundhouse Curveball\" vs. the \"Slurve\"."} {"text":"Curveballs have a variety of trajectories and breaks among pitchers. This chiefly has to do with the arm slot and release point of a given pitcher, which is in turn governed by how comfortable the pitcher is throwing the overhand curveball."} {"text":"Pitchers who can throw a curveball completely overhanded, with the arm slot more or less vertical, will have a curveball that breaks straight downwards. This is called a 12\u20136 curveball, as the break of the pitch is on a straight path downwards like the hands of a clock at 12 and 6. The axis of rotation of a 12\u20136 curve is parallel with the level ground and perpendicular to its flight path."} {"text":"Generally, the Magnus effect describes the laws of physics that make a curveball curve. A fastball travels through the air with backspin, which creates a higher pressure zone in the air ahead of and under the baseball. The baseball's raised seams augment the ball's ability to develop a boundary layer and therefore a greater differential of pressure between the upper and lower zones. 
The effect of gravity is partially counteracted as the ball rides on and into increased pressure. Thus the fastball falls less than a ball thrown without spin (neglecting knuckleball effects) during the 60\u00a0feet 6\u00a0inches it travels to home plate."} {"text":"On the other hand, a curveball, thrown with topspin, creates a higher pressure zone on top of the ball, which deflects the ball downward in flight. Instead of counteracting gravity, the curveball adds additional downward force, thereby giving the ball an exaggerated drop in flight."} {"text":"There was once a debate on whether a curveball actually curves or is an optical illusion. In 1949, Ralph B. Lightfoot, an aeronautical engineer at Sikorsky Aircraft, used wind tunnel tests to prove that a curveball curves. On whether a curveball is caused by an illusion, Baseball Hall of Fame pitcher Dizzy Dean has been quoted in a number of variations on this basic premise: \"Stand behind a tree 60\u00a0feet away, and I will whomp you with an optical illusion!\""} {"text":"However, an optical illusion caused by the ball's spin may play an important part in what makes curveballs difficult to hit. The curveball's trajectory is smooth; however, the batter perceives a sudden, dramatic change in the ball's direction. When an object that is spinning and moving through space is viewed directly, the overall motion is interpreted correctly by the brain. However, as it enters the peripheral vision, the internal spinning motion distorts how the overall motion is perceived. A curveball's trajectory begins in the center of the batter's vision, but overlaps with peripheral vision as it approaches the plate, which may explain the suddenness of the break perceived by the batter. A peer-reviewed article on this hypothesis was published in 2010."} {"text":"Popular nicknames for the curveball include \"the bender\" and \"the hook\" (both describing the trajectory of the pitch), as well as \"the yakker\" and \"Uncle Charlie\". 
New York Mets pitcher Dwight Gooden threw a curve so deadly that it was nicknamed \"Lord Charles\", and the great hitter Bill Madlock called it \"the yellow hammer\", apparently because it came down like a hammer and was too yellow to get hit by a bat. Because catchers frequently use two fingers to signal for a curve, the pitch is also referred to as \"the deuce\" or \"number two\"."} {"text":"A report in the \"New York Clipper\" on the September 26, 1863 game in which the Nassaus of Princeton University (then the College of New Jersey) faced the Athletics says of F. P. Henry, Princeton Class of 1866, that his \"slow pitching with a great twist to the ball achieved a victory over fast pitching.\" By 1866, many Princeton players were pitching and hitting \"curved balls\"."} {"text":"In the past, major league pitchers Tommy Bridges, Bob Feller, Virgil Trucks, Herb Score, Camilo Pascual and Sandy Koufax were regarded as having outstanding curveballs."} {"text":"In baseball, a sacrifice bunt (also called a sacrifice hit) is a batter's act of deliberately bunting the ball, before there are two outs, in a manner that allows a baserunner to advance to another base. The batter is almost always put out, and hence sacrificed (to a certain degree, that is the intent of the batter), but sometimes reaches base due to an error or fielder's choice. In that situation, if the runners still advance, the play is still scored a sacrifice bunt rather than an error or fielder's choice. Sometimes the batter may safely reach base by simply outrunning the throw to first; this is not scored as a sacrifice bunt but rather a single."} {"text":"A successful sacrifice bunt does not count as an at bat, does not impact a player's batting average, and counts as a plate appearance. Unlike a sacrifice fly, a sacrifice bunt is not included in the calculation of the player's on-base percentage. 
If the official scorer believes that the batter was attempting to bunt for a base hit and not solely to advance the runners, the batter is charged with an at bat and is not credited with a sacrifice bunt."} {"text":"In leagues without a designated hitter, sacrifice bunts are most commonly attempted by pitchers, who are typically not productive hitters. Managers consider that if a pitcher's at bat will probably result in an out, they might as well go out in a way most likely to advance the runners. The play also obviates the need for the pitcher to run the base paths, and hence avoids the risk of injury. Some leadoff hitters also bunt frequently in similar situations and may be credited with a sacrifice, but as they are often highly skilled bunters and faster runners, they are often trying to get on base as well as advance runners."} {"text":"A sacrifice bunt attempted while a runner is on third is called a squeeze play. A sacrifice bunt attempted while a runner on third is attempting to steal home is called a suicide squeeze."} {"text":"Although a sacrifice bunt is not the same as a sacrifice fly, both fell under the same statistical category until 1954."} {"text":"In scoring, a sacrifice bunt may be denoted by SH, S, or occasionally, SAC."} {"text":"Notable players with 300 or more sacrifice bunts."} {"text":"The following players have accumulated 300 or more sacrifice bunts in their playing careers:"} {"text":"Since the beginning of the live-ball era (1920), the career leader in sacrifice bunts is Joe Sewell with 275. He was first called up by the Cleveland Indians late in the 1920 season, shortly after Indians star shortstop Ray Chapman died after being hit in the head by a pitch, the event generally regarded as the start of the live-ball era."} {"text":"Though touted as good strategy by traditionalists, the sacrifice bunt has received significant criticism from modern sabermetricians. 
Simply, sabermetricians argue that the value of moving a runner to another base is offset by the team's sacrificing one of its limited and valuable 27 outs. An out conceded is an out wasted, in other words."} {"text":"The following statistics illustrate the argument. From 1993 through 2010, if a team had a runner on first base with no outs, on average it would score .941 runs from that point until the end of the inning. If a team had a runner on second base with one out, however, the average was .721 runs from that point forward. Thus, if a batter walks to lead off an inning, his team will, on average, score almost one run in that inning. On the other hand, that team decreases its run expectancy by 23 percent if it successfully bunts and moves the runner to second with one out."} {"text":"Complicating affairs are the many difficulties and risks associated with bunting. The runner or runners on base must have speed, or the defense may get an easy force out. A manager could feasibly pinch run, but then his bench becomes smaller (that is, there are fewer substitute players available). The player at the plate must also lay down a quality bunt: one that does not pop up, go foul, or go straight to a fielder. Even if the sacrifice bunt is successful, the team must still get a hit to score the runner, and it now has two outs remaining instead of three."} {"text":"In baseball, a sacrifice fly (sometimes abbreviated to sac fly) is defined by Rule 9.08(d):"} {"text":"\"Score a sacrifice fly when, before two are out, the batter hits a ball in flight handled by an outfielder or an infielder running in the outfield in fair or foul territory that"} {"text":"It is called a \"sacrifice\" fly because the batter allows a teammate to score a run, while sacrificing his own ability to do so. 
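"} {"text":"The run-expectancy arithmetic behind the sacrifice-bunt critique above can be checked in a few lines. This is a minimal sketch: the 0.941 and 0.721 figures are the 1993\u20132010 league averages quoted earlier in this section, and the helper function name is illustrative rather than taken from any cited source."}

```python
# Sketch of the sabermetric argument against the sacrifice bunt.
# The two run-expectancy values are the 1993-2010 league averages
# cited in the text; everything else here is illustrative.

RE_FIRST_NO_OUTS = 0.941    # expected runs: runner on 1st, nobody out
RE_SECOND_ONE_OUT = 0.721   # expected runs: runner on 2nd, one out

def expectancy_drop(before: float, after: float) -> float:
    """Fractional decrease in expected runs after a 'successful' sacrifice."""
    return (before - after) / before

drop = expectancy_drop(RE_FIRST_NO_OUTS, RE_SECOND_ONE_OUT)
print(f"Run expectancy falls by {drop:.0%}")  # about 23%, matching the text
```

{"text":"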
Sacrifice flies are traditionally recorded in box scores with the designation \"SF\"."} {"text":"As addressed within Rule 9.02(a)(1) of the Official Baseball Rules a sacrifice fly is not counted as a time at bat for the batter, though the batter is credited with a run batted in."} {"text":"The purpose of not counting a sacrifice fly as an at-bat is to avoid penalizing hitters for a successful action. The sacrifice fly is one of two instances in baseball where a batter is not charged with a time at bat after putting a ball in play; the other is the sacrifice hit (also known as a sacrifice bunt). But, while a sacrifice fly does not affect a player's batting average, it counts as a plate appearance and lowers his on-base percentage. A player on a hitting streak will have the hit streak end if he has no official at-bats but has a sacrifice fly."} {"text":"The sacrifice fly is credited even if another runner is put out so long as the run scores. The sacrifice fly is credited on a dropped ball even if another runner is forced out by reason of the batter becoming a runner."} {"text":"On any fly ball, a runner can initiate an attempt to advance bases as soon as a fielder touches the ball by tagging up, even before the fielder has full control of the ball."} {"text":"The most sacrifice flies by a team in one game is five; the record was established by the Seattle Mariners in 1988, tied by the Colorado Rockies in 2006, and tied again by the Mariners in 2008."} {"text":"Five teams have collected three sacrifice flies in an inning: the Chicago White Sox (fifth inning, July 1, 1962 against the Cleveland Indians); the New York Yankees twice (fourth inning, June 29, 2000 against the Detroit Tigers and third inning, August 19, 2000 against the Anaheim Angels); the New York Mets (second inning, June 24, 2005 against the Yankees); and the Houston Astros (seventh inning, June 26, 2005 against the Texas Rangers). 
In these cases one or more of the flies did not result in a putout due to an error."} {"text":"Since the rule was reinstated in its present form in 1954, Gil Hodges of the Dodgers holds the record for most sacrifice flies in one season with 19, in 1954; Eddie Murray holds the record for most sacrifice flies in a career with 128."} {"text":"As of the end of the 2018 season, players who had hit 115 or more career sacrifice flies:"} {"text":"Only once has the World Series been won on a sac fly. In 1912, Larry Gardner of the Boston Red Sox hit a fly ball off a pitch from the New York Giants' Christy Mathewson. Steve Yerkes tagged up and scored from third base to win game 8 in the tenth inning and take the series for the Red Sox."} {"text":"The New York Yankees' former closer Mariano Rivera, one of the foremost practitioners of the cutter, made the pitch famous after the mid-1990s, though the pitch itself has been around since at least the 1950s."} {"text":"When the cut fastball is pitched skillfully at speed, particularly against an opposite-hand batter (that is, a right-handed pitcher facing a left-handed hitter), the pitch can crack and split a hitter's bat, hence the pitch's occasional nickname of \"the buzzsaw\". Batter Ryan Klesko, then of the Atlanta Braves, broke three bats in a single plate appearance during the 1999 World Series while facing Rivera. To deal with this problem, a few switch hitters batted right-handed against the right-handed Rivera\u2014that is, on the \"wrong\" side, as switch hitters generally bat from the same side of the plate as the pitcher's glove hand."} {"text":"In one season, Dan Haren led all major league starting pitchers with nearly 48% of his pitches classified by PITCHf\/x as cutters. Roy Halladay was close behind at 45%. Other pitchers who rely (or relied) heavily on a cut fastball include Jon Lester, James Shields, Josh Tomlin, Will Harris, Mark Melancon, Jaime Garcia, Wade Miley, David Robertson, Jerry Reuss, and Andy Pettitte. 
Over the course of Kenley Jansen's career, from 2010 to the present, he has thrown his cutter 85.1% of the time, second only to Rivera at 87.2% among pitchers with at least 30 innings during that time period."} {"text":"The cutter grew in popularity as certain pitchers, including Dan Haren, looked to compensate for loss of speed in their four-seam fastball. Braves third baseman Chipper Jones attributed the increased dominance of pitchers from 2010\u20132011 to a more prolific use of the cutter, as did Cleveland Indians pitcher Chris Perez. By 2011, it was commonly being called the \"pitch du jour\" in the baseball press."} {"text":"Some pushback has developed against overuse of the pitch, due to concerns that a pitcher who throws the cutter too often could develop arm fatigue. Baltimore Orioles General Manager Dan Duquette instructed prized prospect Dylan Bundy not to throw the pitch in the minor leagues, believing its use could make Bundy's fastball and curve less effective."} {"text":"The wheel play is a defensive strategy in baseball designed to defend against a sacrifice bunt. The play's name derives from the wheel-like rotation of the infielders."} {"text":"The wheel play is typically only employed when all of the following conditions exist:"} {"text":"In such a scenario, the batting team may attempt a sacrifice bunt in order to move the runner at second base to third base, accepting that the batter will be put out at first base. If that happens, the offense would then have a runner at third base with one out, and that runner could subsequently score on a sacrifice fly."} {"text":"To defend against this scenario, the wheel play is used by the defense in an attempt to prevent the offense from advancing the runner at second base to third base via a sacrifice bunt."} {"text":"The wheel play is a unique bunt defense in that the play is designed to put out the lead runner at third base.
Most bunt defense strategies give priority to making sure the defense gets an out at first base."} {"text":"The wheel play begins with the shortstop running to cover (defend) third base. As the pitch is thrown by the pitcher, the third baseman and first baseman rush toward home plate, to be in position to field the bunted ball as quickly as possible, while the second baseman runs to cover (defend) first base. Additionally, the pitcher moves into a defensive position, backing up one of the inrushing fielders (which one usually depends on the direction the pitcher's delivery carries him)."} {"text":"The defense seeks to have defenders in position such that once the ball is bunted, it can be picked up quickly by one of the charging fielders, who will be much closer to the batter than they would be in their normal fielding positions. If that occurs, the fielder who picks up the ball can throw it to the shortstop (who is covering third base) to retire the runner advancing from second base, either via a force play (when applicable) or tag out. Recording an out at third base represents a successfully executed wheel play. Additionally, if the batter is not a fast runner, the shortstop (at third base) may be able to throw to the second baseman (at first base) to successfully complete a double play."} {"text":"Alternatively, if a fielder is slow in picking up the ball, and\/or he sees that the runner advancing from second base is unlikely to be retired at third base, the fielder can throw the ball to the second baseman (who is covering first base) to retire the batter. While this is not a successfully executed wheel play, it provides no worse an outcome than would have occurred on a normally executed sacrifice bunt."} {"text":"The offense can attempt to defeat the wheel play:"} {"text":"One of the earliest recorded instances of the wheel play being used in Major League Baseball (MLB) was when it was executed by the Pittsburgh Pirates against the St.
Louis Cardinals on August 14, 1960, resulting, as reported by \"The Pittsburgh Press\", in \"an electrifying double play [...] that had the 36,775 fans screaming.\" Several Pirate players and coaches said they had never seen the play before, but the Pirate players who executed the play attributed the original idea to former Chicago Cubs manager Charlie Grimm, who they thought had used it in 1950."} {"text":"The Cardinals successfully used the wheel play against the Texas Rangers in the second inning of Game 6 of the 2011 World Series. With runners on first base and second base and no outs, Texas pitcher Colby Lewis attempted a sacrifice bunt, resulting in a double play when third baseman David Freese fielded the bunt, threw to shortstop Rafael Furcal at third base for the first out, and Furcal threw to second baseman Nick Punto at first base for the second out."} {"text":"A hit and run is a high-risk, high-reward offensive strategy used in baseball. It uses a stolen base attempt to try to place the defending infielders out of position for an attempted base hit."} {"text":"The hit and run was introduced to baseball by Ned Hanlon, who was often referred to as \"The Father of Modern Baseball\", at the beginning of the 1894 season of the National League, as part of what came to be called \"inside baseball\". Hanlon was manager of the Baltimore Orioles at the time. His team developed the hit and run along with other tactics during spring training at Macon, Georgia. After its implementation in the season's series opener against the New York Giants, the opposing manager objected to its use; however, it was deemed acceptable."} {"text":"The hit and run relies on the positioning of the defensive players in the infield. The first and third basemen normally stand near the foul lines, generally near the inside of their bases, set slightly back to allow more time to react to sharply hit balls.
However, if the runner is on first, the first baseman stands closer to the base to prevent steals by means of pick-off attempts by the pitcher; consequently, such positioning produces a bigger gap between the first and second basemen. The second baseman and shortstop stand on opposite sides of second base, covering the areas between first and second, and second and third, respectively. Second base itself is not directly covered, as the pitcher can field batted balls in this direction."} {"text":"In normal play, if the ball is hit into the infield, one of the infielders will run toward the ball while another runs toward the base that is no longer covered. For instance, if the ball is hit toward the second baseman, he will run toward the ball while the shortstop runs to cover second base. This allows the fielding player to throw the ball to the player covering the base to attempt a put out."} {"text":"However, during a stolen base attempt, the normal gameplay and positioning are altered. In the typical case, a baserunner on first base will start running toward second, causing the middle infielders to move toward that base in order to tag the runner when the ball is thrown to them from the pitcher or catcher.
This reaction places the infielders out of position for a hit ball, with gaps opening at midway points between first and second and second and third."} {"text":"The hit and run takes advantage of this difference by having the baserunner attempt to steal as soon as the pitch is thrown; the batter then attempts to hit the ball into one of the resulting gaps in the infield defense."} {"text":"The name \"hit and run\" is therefore a potential misnomer: chronologically, the play is \"run and hit,\" with the runner beginning the steal attempt before the batter makes contact. In a logical sense, though, the name is accurate, since the batter's swing occurs while the steal attempt is ongoing, so that any contact (\"hit\") occurs simultaneously with (\"and\") the steal attempt (\"run\")."} {"text":"Ideally, the ball will be hit into a gap and travel into the outfield, allowing the runners plenty of time to reach the bases. Even if the ball is hit toward a fielder's initial position before the fielder has had time to move away from it, however, the fielder may have turned to run toward the base in order to cover it. In normal play the fielders would face the batter, allowing them to react in any direction, but after they have turned toward the base this becomes much more difficult. Their momentum in this direction adds to this problem."} {"text":"The risk in the hit and run is that, if the batter fails to make contact with the ball, the runner is vulnerable to being thrown out at second base, which the official scorer will record as a caught stealing.
The defensive team can improve its odds in this case by using a pitchout, having the pitcher throw the ball far outside the strike zone so the catcher can easily catch it and attempt to throw out the runner at second."} {"text":"The batter may choose to take a swing at a bad pitch to make it harder for the catcher to handle the incoming pitch, or so the ball goes foul (in which case the runner is allowed to return to first, so the attempt protects the runner from being caught stealing). Either way, this can cause the batter to fall behind in the count, making it harder for him to get a hit. And if he does hit a pitch he cannot handle, the poor contact may lead directly to an out, leaving him with a wasted at-bat and no advantage to the offense."} {"text":"The hit and run has the best chance to be successful when the batter is someone who does not frequently swing and miss, at a time when the count won't disadvantage a hitter if he takes a bad swing, with a runner fast enough to take second base even if the batter does swing and miss."} {"text":"Often the precise circumstance to call for a hit and run occurs with a two-ball, one-strike count on a hitter, as this situation may meet all of the above criteria, depending on who is at bat and who is on base, but it can occur at other times. An alert defense understands the probability that the offense will call the play at a specific moment, and thus it may choose to call for a pitchout at that moment to defend it.
An alert offense, in turn, understands the probability of a forthcoming pitchout, and may use the hit and run opportunity as a decoy, causing the pitchout to become another ball in the count in the hitter's favor, increasing his chances of reaching base by walk or hit."} {"text":"The hit and run is a very old baseball strategy, dating back to the 19th-century game."} {"text":"In baseball, a triple play (denoted as TP in baseball statistics) is the rare act of making three outs during the same continuous play."} {"text":"Triple plays happen infrequently \u2013 there have been 723 triple plays in Major League Baseball (MLB) since 1876, an average of approximately five per season \u2013 because they depend on a combination of two elements, which are themselves uncommon:"} {"text":"In baseball scorekeeping, the abbreviation GITP can be used if the batter grounded into a triple play."} {"text":"The most likely scenario for a triple play is no outs with runners on first base and second base, which has been the case for the majority of MLB triple plays. In that context, two examples of triple plays are:"} {"text":"The most recent triple play in MLB was turned by the Cincinnati Reds on April 17, 2021, against the Cleveland Indians in the top of the eighth inning at Great American Ball Park in Cincinnati, Ohio\u2014with runners at first base and third base, Indians batter Josh Naylor hit a line drive caught by Reds first baseman Joey Votto (first out) who tagged baserunner Franmil Reyes (second out); meanwhile, baserunner Eddie Rosario thought the ball hit the ground and ran home without returning to third base, so Votto threw the ball to Max Schrock at third base (third out)."} {"text":"The rarest type of triple play, and one of the rarest events of any kind in baseball, is for a single fielder to complete all three outs.
There have only been 15 unassisted triple plays in MLB history, making this feat rarer than a perfect game."} {"text":"Typically, an unassisted triple play is achieved when a middle infielder catches a line drive near second base (first out), steps on the base before the runner who started there can tag up (second out), and then tags the runner advancing from first before he can return there (third out). Of the 15 unassisted triple plays in MLB history, 12 have been completed in this manner by a middle infielder."} {"text":"The most recent MLB unassisted triple play is consistent with the above \u2013 it occurred on August 23, 2009, by second baseman Eric Bruntlett of the Philadelphia Phillies, in a game against the New York Mets. In the bottom of the ninth inning with men on first and second, the base runners were both running when Jeff Francoeur hit a line drive very close to second base, which Bruntlett was covering. Bruntlett caught the ball (first out), stepped on second before Luis Castillo could tag up (second out), and then tagged Daniel Murphy who was approaching from first (third out). This was only the second game-ending unassisted triple play in MLB history, the first one having occurred in 1927."} {"text":"Political columnist and baseball enthusiast George Will posed one hypothetical way that a triple play could occur with no fielder touching the ball. With runners on first and second and no outs, the batter hits an infield fly, and is automatically out: one out. The runner from first passes the runner from second and is called out for that infraction: two outs. Just after that, the falling ball hits the runner from second, who is called out for interference: three outs."} {"text":"Whenever a batter or runner is out without a fielder touching the ball, MLB rule book section 10.09 provides for automatic putouts to be assigned by the official scorer. 
In this case, the first out would be credited to whoever the official scorer believes would have had the best chance of catching the infield fly. The second and third outs would be credited to the fielder(s) closest to where the runners were when their respective outs occurred. Under the scenario described above, the same fielder (the shortstop, for example) could be credited with all three putouts, thus attaining an unassisted triple play without having touched the ball."} {"text":"Texas League Hall of Famer Keith Bodie tells \"Sporting News\" that this event occurred in a 1986 spring training game."} {"text":"The statistics below reflect historical totals through April 17, 2021."} {"text":"Position of baserunners when the triple play started."} {"text":"June 11, 1885, by the New York Giants against the Providence Grays, scored as 4*-4*-3*, with a newspaper account the next day naming the fielders, batter, and runners at first and second; however, it is unknown if there was a runner at third base."} {"text":"Asterisks (*) denote which players recorded outs, per standard baseball positions. Combinations that have occurred at least 10 times are listed."} {"text":"On June 27, 1967, the New York Mets and Pittsburgh Pirates staged a triple play before their game at Shea Stadium for the film \"The Odd Couple\". The scene depicts Bill Mazeroski of the Pirates grounding into a game-ending 5-4-3 triple play. Mazeroski, who played 17 major league seasons, was only involved in one actual MLB triple play; he was the runner on second base when the Chicago Cubs turned a 3-3-6 triple play on October 3, 1965."} {"text":"Other names include change-of-pace, change, or off-speed pitch, although that term can also be used simply to mean any pitch that is slower than a fastball.
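As an aside, the numeric scoring notation used in these accounts (a "5-4-3" triple play, a "3-3-6" triple play, "4*-4*-3*") follows the standard fielder numbering used by official scorers. A small illustrative sketch (the function and dictionary names are my own) decodes it:

```python
# Standard baseball position numbers used in official scoring notation.
POSITIONS = {
    1: "pitcher", 2: "catcher", 3: "first baseman", 4: "second baseman",
    5: "third baseman", 6: "shortstop", 7: "left fielder",
    8: "center fielder", 9: "right fielder",
}

def decode_play(notation):
    """Turn a scoring string like '5-4-3' into the sequence of fielders
    who handled the ball. An asterisk, as in '4*-4*-3*', marks the
    fielder credited with a putout and is ignored for decoding."""
    return [POSITIONS[int(token.rstrip("*"))] for token in notation.split("-")]

# A 5-4-3 double or triple play goes third baseman -> second baseman
# -> first baseman, matching the play described in the text above.
print(decode_play("5-4-3"))
```

Reading "5-4-3" as third-to-second-to-first this way is exactly how the around-the-horn plays in the text are scored.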
In addition, until at least the second half of the twentieth century, the term \"slow ball\" was used to denote pitches that were not a fastball or breaking ball, which almost always meant a type of changeup. Therefore, the terms \"slow ball\" and \"changeup\" could be used interchangeably."} {"text":"The changeup is analogous to the slower ball in cricket."} {"text":"Since the rise of Pedro Mart\u00ednez, a Dominican pitcher whose changeup was one of the tools that led to his three Cy Young Awards, the changeup has become increasingly popular in the Dominican Republic. Dominican pitchers including Edinson V\u00f3lquez, Michael Ynoa, and Ervin Santana are all known to have developed effective changeups in the Dominican Republic after Mart\u00ednez's success with the pitch."} {"text":"Probably the most famous changeup thrower of the last 30 years, Atlanta Braves southpaw Tom Glavine utilized a two-seam changeup as his number one pitch on the way to winning two Cy Young Awards, a World Series MVP, and 305 wins in a celebrated Hall of Fame career."} {"text":"Hall of Fame reliever Trevor Hoffman had one of the best changeups in his prime and used it to record 601 saves."} {"text":"In recent years, some of the game's best pitchers have relied heavily on the changeup. A 2013 article published by \"Sports Illustrated\" noted that Justin Verlander, F\u00e9lix Hern\u00e1ndez, Stephen Strasburg, David Price, and Max Scherzer have revolutionized the pitch and used it abundantly in their arsenal."} {"text":"There are several variations of changeups, which are generated by using different grips on the ball during the pitch."} {"text":"The circle changeup is one well-known grip. The pitcher forms a circle with the index finger and thumb and lays the middle and ring fingers across the seams of the ball. By pronating the wrist upon release, the pitcher can make the pitch break in the same direction as a screwball. The amount of break depends on the pitcher's arm slot.
Pedro Mart\u00ednez used this pitch throughout his career to great effect, and many considered it to be his best pitch."} {"text":"The most common type is the straight changeup. The ball is held with three fingers (instead of the usual two) and closer to the palm, to kill some of the speed generated by the wrist and fingers. This pitch generally breaks downward slightly, though its motion does not differ greatly from a two-seam fastball."} {"text":"Other variations include the palmball, vulcan changeup and fosh. The split-finger fastball is used by many pitchers as a type of changeup."} {"text":"In baseball, a tag out, sometimes just called a tag, is a play in which a baserunner is out because a fielder touches him with the ball or with the hand or glove holding the ball, while the ball is live and the runner is in jeopardy of being put out \u2013 usually when he is not touching a base."} {"text":"A baserunner is in jeopardy when any of the following are true:"} {"text":"A tag is therefore the most common way to retire baserunners who are not in danger of being forced out, though a forced runner may be tagged out in lieu of stepping on the forced base. Additionally, a tag out can be used on an appeal play."} {"text":"Runners attempting to advance are sometimes thrown out, which means that a fielder throws the ball to someone covering the base, who then tags the runner before the runner touches the base. A runner who leads off a base too far might be picked off; that is, the pitcher throws to a fielder covering the base, who then tags the runner out."} {"text":"When a runner is tagged out, a farther advanced runner who had been forced to advance no longer has to do so. For example, when a sharply hit ball is caught on one hop by the first baseman, he might immediately tag out the runner at first who is forced to advance to second; but when this is done a runner already at second is no longer forced to advance to third base. 
The result of such a tag is called \"removing the force\"."} {"text":"The fastball is the most common type of pitch thrown by pitchers in baseball and softball. \"Power pitchers,\" such as former American major leaguers Nolan Ryan and Roger Clemens, rely on speed to prevent the ball from being hit, and have thrown fastballs at some of the highest speeds ever recorded, both officially and unofficially. Pitchers who throw more slowly can put movement on the ball, or throw it on the outside of home plate where batters can't easily reach it."} {"text":"Fastballs are usually thrown with backspin, so that the Magnus effect creates an upward force on the ball. This causes it to fall less rapidly than expected, and sometimes causes an optical illusion often called a rising fastball. Although it is impossible for a human to throw a baseball fast enough and with enough backspin for the ball to actually rise, to the batter the pitch seems to rise due to the unexpected lack of natural drop on the pitch."} {"text":"A straight pitch is achieved by gripping the ball with the fingers across the wide part of the seam (called a \"four-seam fastball\") so that both the index and middle fingers are touching two seams perpendicularly. A sinking fastball is thrown by gripping it across the narrow part (a \"two-seam fastball\") so that both the index and middle fingers are along a seam. Lateral motion is achieved by holding a four-seam fastball off-center (a \"cut fastball\"), and sinking action with a lateral break is thrown by splitting the fingers along the seams (a \"split-finger fastball\")."} {"text":"Colloquially, a fastball pitcher 'throws heat' or 'puts steam on it', among many other variants."} {"text":"The four-seam fastball is the most common variant of the fastball. The pitch is used often by the pitcher to get ahead in the count or when he needs to throw a strike. This type of fastball is intended to have minimal lateral movement, relying more on its velocity.
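The Magnus-effect claim above can be made slightly more precise. In a common simplified model (the form below is a standard textbook approximation, not a statement from the source), the force is proportional to the cross product of the ball's spin and velocity:

```latex
% Simplified Magnus force model: S is a constant depending on air
% density and ball geometry, \omega the spin vector, v the velocity.
\[
  \vec{F}_M = S\,(\vec{\omega} \times \vec{v}), \qquad
  a_{\text{down}} = g - \frac{S\,\omega v}{m}
\]
```

For pure backspin, the cross product points upward, so the net downward acceleration is less than gravity and the ball drops less than a spinless ball would. Because a human arm cannot make the Magnus term exceed the ball's weight, the downward acceleration stays positive, which is why the "rising" fastball is an illusion rather than a real rise.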
It is often perceived as the fastest pitch a pitcher throws, with recorded top speeds above 100\u00a0mph. The fastest pitch recognized by MLB was on September 25, 2010, at Petco Park in San Diego by then Cincinnati Reds left-handed relief pitcher Aroldis Chapman. It was clocked at 105.1 miles per hour."} {"text":"On April 19, 2011, Chapman lit up the stadium radar gun at 106 mph (the TV reading had his pitch at 105 mph, and the PITCHf\/x reading was actually 102.4 mph)."} {"text":"Two general methods are used to throw a four-seam fastball. The first and most traditional way is to find the horseshoe seam area, or the area where the seams are the farthest apart. Keeping those seams parallel to the body, the pitcher places his index and middle fingers perpendicular to them with the pads on the farthest seam from him. The thumb then rests underneath the ball about in the middle of the two fingers. With this grip, the thumb will generally have no seam on which to rest."} {"text":"Pitch velocity has increased so much largely because of better training and a growing consensus within the baseball community that velocity is highly valued. People like Tom House, Eric Cressey, Kyle Boddy, and Ron Wolforth have all pushed the edge and dedicated their careers to researching what makes the ultimate pitcher. Pitchers are getting bigger, faster, and stronger, pushing their bodies in the weight room as well as with weighted-ball throwing. All of this has created a faster, more powerful game for pitchers on the mound today."} {"text":"Higher pitch velocities have resulted in fewer hits and other imbalances. A more distant pitcher's mound and other changes have been proposed to restore balance."} {"text":"A two-seam fastball, sometimes called a two-seamer, tailing fastball, running fastball, or sinker, is another variant of the straight fastball.
It is designed to have more movement than a four-seam fastball, so the batter cannot hit it hard, but it can be more difficult to master and control. Because of the deviation from the straight trajectory, the two-seam fastball is sometimes called a moving fastball."} {"text":"The pitcher grabs a baseball and finds the area on it where the seams are the closest together, and puts his index and middle fingers on each of those seams. A sinker is a similar pitch that drops 3 to 6\u00a0inches more than a typical two-seam fastball; this causes batters to hit ground balls more often, mostly due to the tilted sidespin on the ball."} {"text":"Each finger should be touching the seam from the pads or tips to almost the ball of each finger. The thumb should rest underneath the ball in the middle of those two fingers, finding the apex of the horseshoe part of the seam. The thumb needs to rest on that seam from the side to the middle of its pad. If the middle finger is used, more whipping action occurs, making the pitch go around 10\u00a0mph faster. This ball tends to move for the pitcher a little bit depending on velocity, arm slot angle, and pressure points of the fingers. Retired pitchers Greg Maddux and Pedro Mart\u00ednez were known for their effective two-seamers."} {"text":"The rising fastball is an effect perceived by some batters, but is a baseball myth. Some batters are under the impression that they have seen a \"rising\" fastball, which starts with the trajectory of a normal fastball, but which as it approaches the plate rises several inches and gains a burst of speed. Tom Seaver, Jim Palmer, Sandy Koufax, Dwight Gooden, Nolan Ryan and Chan Ho Park have been described as prominent examples of pitchers with this kind of ball action."} {"text":"Such a pitch is known to be beyond the physical capabilities of pitchers, due to the very high backspin required to overcome gravity with the Magnus effect.
While not physically impossible (conservation of momentum is maintained through imparting the required opposing momentum to air, as an airplane does at takeoff), the amount of spin required is beyond the capabilities of a human arm. It has been explained as an optical illusion."} {"text":"What is likely happening is that the pitcher first throws a fastball at one speed, and then, using an identical arm motion, throws another fastball at a higher speed. The higher-speed fastball arrives sooner and sinks less. The added backspin from the higher speed further decreases the amount of sink. When the pitch is thrown, the batter expects a fastball at the same speed, yet it arrives more quickly and at a higher level. The batter perceives it as a fastball which has risen and increased in speed. A switch from a two-seam fastball to a four-seam fastball can enhance this effect."} {"text":"It is possible for a rising fastball to be thrown by a submarine pitcher because of the technique with which they throw the ball. Because they throw almost underhanded with their knuckles near the field surface, the batter perceives the sensation of the ball going upward because of its low starting point and flight trajectory. This is not the traditional rising fastball batters believe they see. This type of movement is similar to a rising fastball in fast-pitch softball. Left-hander Sid Fernandez was known for throwing a rising fastball from a slightly \"submarine\" motion."} {"text":"A cut fastball, or \"cutter\", is similar to a slider, but the pitcher tends to use a four-seam grip. The pitcher shifts the grip on a four-seamer (often by slightly rotating the thumb inwards and the two top fingers to the outside) to create more spin. This usually causes the pitch to shift inwards or outwards by a few inches, less than a typical slider, and often late. A cutter is effective for pitchers with a strong four-seamer since the grip and delivery look virtually identical.
The unexpected motion will often fool batters into hitting the ball off-center, or missing it altogether."} {"text":"It helps to have larger hands to throw this pitch. Because the fingers are spread wider than normal on the baseball, this pitch produces more stress from the hand up through the arm. While the mechanics are the same as a normal fastball, the stress it places on the hand and arm is different. Over time it is possible to damage the arm. It is therefore not recommended for younger pitchers to learn this pitch. Older pitchers should feel comfortable deploying this pitch, but should use it in moderation. The splitter is an effective pitch because the hitter generally picks up the movement later and either swings over the ball or produces a weakly hit ground ball."} {"text":"The split-finger is used currently by pitchers such as Jonathan Papelbon and Masahiro Tanaka. Former players noted for use of the split-finger fastball include Bruce Sutter, Mike Scott, John Smoltz, Jack Morris, Kazuhiro Sasaki, Bryan Harvey, Roger Clemens, Dan Haren, and Fred Breining."} {"text":"The incurve was a term used until about 1930 to describe a simple fastball. As a curveball was often called an \"outcurve\", one might assume that an incurve is the opposite of a curveball, in other words, the modern screwball. However, this does not appear to be so, as noted by John McGraw."} {"text":"A side-arm fastball is thrown from a lower arm angle than normal, out to the pitcher's side, hence the name \"side-arm\". It will have a sinking motion to the right if the pitcher is right-handed, or to the left if the pitcher is left-handed. It is usually slower than a normal four-seam fastball."} {"text":"In baseball and softball, a double play (denoted as DP in baseball statistics) is the act of making two outs during the same continuous play.
Double plays can occur any time there is at least one baserunner and fewer than two outs."} {"text":"In Major League Baseball (MLB), the double play is defined in the Official Rules in the Definitions of Terms, and for the official scorer in Rule 9.11. During the 2016 Major League Baseball season, teams completed an average of 145 double plays per 162 games played during the regular season."} {"text":"The simplest scenario for a double play is a runner on first base with fewer than two outs. In that context, five example double plays are:"} {"text":"Double plays can occur in many ways in addition to these examples, and can involve many combinations of fielders. A double play can include an out resulting from a rare event, such as interference or an appeal play."} {"text":"Per standard baseball positions, the examples given above are recorded, respectively, as:"} {"text":"Double plays that are initiated by a batter hitting a ground ball are recorded in baseball statistics as GIDP (grounded into double play). This statistic has been tracked since 1933 in the National League and since 1939 in the American League. It does not include lining into a double play, for which there is no official batter statistic."} {"text":"The double play is a coup for the fielding team and debilitating to the batting team. The fielding team can select pitches to induce a double play \u2014 such as a sinker, which is more likely to be hit as a ground ball \u2014 and can position fielders to make a ground ball more likely to be turned into a double play. The batting team may take action \u2014 such as a hit and run play \u2014 to reduce the chance of grounding into a force double play."} {"text":"In baseball slang, making a double play is referred to as \"turning two\" or a \"twin killing\" (a play on \"twin billing\", a moviehouse offering two features on the same ticket).
Double plays are also known as \"the pitcher's best friend\" because they disrupt offense more than any other play, except for the rare triple play. A force double play made on a ground ball hit to the third baseman, who throws to the second baseman, who then throws to the first baseman, is referred to as an \"around the horn\" double play."} {"text":"The ability to \"make the pivot\" on a force double play \u2013 receiving a throw from the third base side, then quickly turning and throwing to first base \u2013 is a key skill for a second baseman."} {"text":"The most famous double play trio\u2014although they never set any records\u2014were Joe Tinker, Johnny Evers and Frank Chance, who were the shortstop, second baseman and first baseman, respectively, for the Chicago Cubs between 1902 and 1912. Their double play against the New York Giants in a 1910 game inspired Giants fan Franklin Pierce Adams to write the short poem \"Baseball's Sad Lexicon\", otherwise known as \"Tinker to Evers to Chance\", which immortalized the trio. All three players were part of the Cubs team that won the National League pennant in 1906, 1907, 1908, and 1910, and the World Series in 1907 and 1908, turning 491 double plays on the way. They were elected to the National Baseball Hall of Fame in 1946."} {"text":"Single-season GIDP record \u2013 Jim Rice: 36 (Boston Red Sox, 1984)"} {"text":"Career GIDP record \u2013 Albert Pujols: 399 (through October 9, 2020)"} {"text":"The team record for a single game is seven GIDPs. It was set by the San Francisco Giants, who grounded into seven double plays on May 4, 1969, in a 3\u20131 loss to the Houston Astros. The Pittsburgh Pirates suffered seven double plays (only six GIDPs) on August 17, 2018, in a 1\u20130 loss to the Chicago Cubs.
The 1990 Boston Red Sox grounded into 174 double plays to set the single season team record."} {"text":"In baseball, a pickoff is an act by a pitcher or catcher in which he throws a live ball to a fielder so that the fielder can tag out a baserunner who is either leading off or about to begin stealing the next base."} {"text":"A pickoff attempt occurs when this throw is made in an attempt to make such an out or, more commonly, to \"keep the runner close\" by making it clear that the pitcher is aware of and concerned with the runner's actions. A catcher may also attempt to throw out runners who likewise \"stray too far\" from their bases after a pitch; this can also be called a pickoff attempt. A runner who is picked off is said to have been \"caught napping\", especially if he made no attempt to return to his base."} {"text":"A pickoff move is the motion the pitcher goes through in making this attempt; some pitchers have better pickoff moves than others. Pitchers in professional baseball use the pickoff move often, perhaps several times per game or even per inning if speedy baserunners reach base. Pitchers with more confidence in their ability to eliminate batters directly via strikeouts or flyouts use fewer pickoff attempts. In lower-skilled amateur games, the pickoff move is less common due to the potential for an error if the pitcher makes a wild throw or the fielder fails to make the catch. In youth leagues that do not allow leading off, such as Little League and Cal Ripken League, the need for a pickoff move is eliminated."} {"text":"A pitcher uses many tactics to attempt to disguise whether he is going to begin a pitch or a pickoff attempt. However, some deceptive actions are illegal and may be called a balk."} {"text":"There are a few reasons to use this tactic:"} {"text":"A baserunner with a reputation for stealing bases can also take advantage of the pitcher's desire to hold him close to his base, using it as a means to throw off the pitcher's concentration. 
By taking a large lead, the runner hopes to force the pitcher into mistakes, thereby helping the batter by improving his chances at the plate. Prolific base stealers can accomplish this without any real intention of stealing a base. Pitchers must be aware of this tactic and take care not to make so many pickoff attempts that they tire or lose focus on the batter."} {"text":"On August 24, 1983, Tippy Martinez of the Baltimore Orioles picked off three consecutive Toronto Blue Jays base runners in the top half of the 10th inning. The catcher for the Orioles, utility infielder Lenn Sakata, had replaced the backup catcher at the start of the inning. Sakata had not played catcher since Little League, and the Blue Jays thought it would be easy to steal off him. In the bottom half of the same inning, Sakata hit a walk-off home run."} {"text":"Game 4 of the 2013 World Series ended with a pickoff, as Koji Uehara of the Boston Red Sox threw to first base and St. Louis Cardinals runner Kolten Wong was tagged out."} {"text":"Pickoff records are imprecise, as the pickoff is not an official MLB statistic, and historical box scores typically did not distinguish between a pickoff and a caught stealing."} {"text":"Note that each of the pitchers listed in this section is left-handed."} {"text":"Slap bunting is an offensive baseball and softball technique wherein the batter attempts \"to hit the ball to a place on the infield that's farthest from the place where the out needs to be made\"."} {"text":"The technique is quite common in softball because of the difficulty of getting a hit with the pitcher throwing from such a short distance. 
By starting at the front of the batter's box with the body already turned halfway toward first base, the batter has some momentum toward first and may be in a better position to get a base hit."} {"text":"The technique is often successful in sacrifice circumstances, where the placement of the ball could help advance a runner already on base. It is also often used when batters are struggling against a difficult pitcher, or when they have a better chance of reaching base with a slap bunt than with a conventional hit, perhaps because of the player's running speed."} {"text":"Some advanced players might perform a slap hit, which is the same technique except that the player swings to place the ball in an infield hole or over the infielders' heads."} {"text":"In baseball, the squeeze play (a.k.a. squeeze bunt) is a maneuver consisting of a sacrifice bunt with a runner on third base. The batter bunts the ball, expecting to be thrown out at first base, but providing the runner on third base an opportunity to score. Such a bunt is most common with one out. According to Baseball Almanac, the squeeze play was invented in 1894 by George Case and Dutch Carter during a college game at Yale University."} {"text":"In a safety squeeze, the runner at third takes a lead, but does not run towards home plate until the batter makes contact with the bunt."} {"text":"In a suicide squeeze, the runner takes off as soon as the pitcher begins the windup, before the ball is released. If the play is properly executed and the batter bunts the ball on the ground nearly anywhere in fair territory, a play at home plate is extremely unlikely. However, if the batter misses the ball, the runner will likely be tagged out, and if the batter pops the ball up, a double play is likely."} {"text":"These plays are often used in the late innings of a close game in order to score a tying, winning, or insurance run. 
A pitcher's typical defense against a squeeze play, if he sees the batter getting into position to attempt a bunt, is to throw a high pitch that is difficult to bunt on the ground."} {"text":"Although shagging (chasing down and catching fly balls) is not considered to be dangerous, several freak injuries have occurred as a result of engaging in it. In 1943, just one season after collecting his 3,000th hit, Paul Waner accidentally gashed his foot while shagging a fly ball in a game against the Pittsburgh Pirates, his former team. This was probably due to Waner being nearsighted and his refusal to wear glasses; thus, he \"played the outfield by ear.\" Nearly four decades later, Jerry Reuss was handed the honor of pitching on Opening Day in 1981, but suffered an injury to his calf while shagging for his teammates. He was replaced by unheralded rookie Fernando Valenzuela, who went on to win his next eight decisions."} {"text":"Other players who have suffered serious injuries due to shagging include Mark Fidrych and Brendan Donnelly. Fidrych suffered a left knee injury after tearing cartilage in 1977 spring training, starting a downward spiral in his career. Donnelly broke his nose while shagging, resulting in the loss of half his blood and necessitating three operations."} {"text":"Mariano Rivera, the all-time leader in saves, suffered arguably the best-known injury from shagging on May 3, 2012. While helping out in pre-game batting practice, Rivera attempted to catch a fly ball from Jayson Nix when he twisted his knee on the warning track of Kauffman Stadium and fell to the ground. An MRI scan revealed he had torn his anterior cruciate ligament (ACL) and part of his meniscus. This prematurely ended his season and led to fears that this could potentially be a career-ending injury. 
Rivera was able to come back and pitch for the 2013 season, his final major league season before retiring."} {"text":"Despite the seriousness of Rivera's injuries, pitchers from across Major League Baseball (MLB) who engaged in shagging flies during batting practice said they would not drop the activity or modify their training routine. These included James Shields and J. J. Putz, along with 2012 Cy Young Award winners R.A. Dickey and David Price. Furthermore, several MLB managers at the time\u2014namely Dale Sveum, Joe Maddon, Jim Leyland and Terry Collins\u2014confirmed they would not order their pitchers to stop shagging."} {"text":"A hidden ball trick is a play in which a player deceives the opposing team about the location of the ball. Hidden ball tricks are most commonly observed in baseball, where the defense deceives the runner about the location of the ball in order to tag him out. In goal-based sports (e.g., American football and lacrosse), the offense deceives the defense about the location of the ball, in an attempt to get the defense running the wrong way, such as in a fumblerooski."} {"text":"In the sports of baseball and softball, the hidden ball trick usually involves a fielder using sleight of hand or misdirection to confuse a baserunner as to the location of the ball, allowing the fielder to tag out the runner unawares. Though several variations of the play exist, they usually involve a fielder keeping the ball without the runner's knowledge, waiting for the runner to step off a base, and then quickly tagging the runner out. For the trick to work, the fielder (generally an infielder) must get the ball while the ball is still in play, and the runner must either not know that the fielder has the ball or think that the play is over."} {"text":"Fielders usually try to fool the runner by miming a throw to the pitcher or another defender while keeping the baseball out of sight, often in his glove. 
If the runner is not paying attention and assumes that the closest fielder no longer has the ball, he may stray off the base and be tagged out. A related tactic is to quickly re-tag the runner after an unsuccessful tag in the hope that his hand or foot has lost contact with the base after a slide but before time has been called."} {"text":"While variations exist, the use of the play in major league baseball is somewhat rare. By some estimates, the hidden-ball trick has been pulled fewer than 300 times in over 100 years of major league baseball."} {"text":"A first baseman may attempt the play after a pitcher, in an attempt to pick off a runner, throws to first. The first baseman then fakes the throwback to the pitcher while keeping the ball in his glove and, if and when the runner leaves the base, tags him. Dave Bergman is a former first baseman who pulled this off on multiple occasions. A second baseman could attempt a similar play after a successful steal of second base, having received a throw from the catcher."} {"text":"Former second baseman Marty Barrett also successfully performed the trick more than once. After a runner reached second base on a ball hit to the outfield, Barrett received the throw-in and faked a throw to the pitcher while retaining the ball. To aid the deception, he took the throw with his back to the runner, then placed the ball between the back of his glove and one of his fingers: this way, he exposed his glove to the runner without the ball in the pocket, suggesting that he did not have the ball. Other players have hidden the ball in their armpit."} {"text":"Former third baseman Matt Williams used a different technique: he would ask the runner to step off the base so that he could sweep the dirt off it, then tag the runner out when the runner complied. This worked twice."} {"text":"Former third baseman Mike Lowell also made the trick work twice, each time after a throw-in from the outfield. 
The key to Lowell's success was acting, placement, and waiting: acting as if nothing was on; standing away from the bag, but not too far from it; and waiting, at least 10 seconds, until the runner on third took a few steps."} {"text":"Bill Coughlin caught George Stone with the trick in the first inning of a game on September 3. In Game 2 of the 1907 World Series, Coughlin caught Jimmy Slagle with a hidden ball trick, the only one in World Series history. The play went from Germany Schaefer to Coughlin."} {"text":"Willie Kamm was considered another master of the trick. On April 30, in a game against the Cleveland Indians, Kamm was involved in a rare triple play that included a hidden-ball trick. The Indians had baserunners on first and second when Carl Lind grounded out to the shortstop. Johnny Hodapp, who had been on second base, tried to score but got caught in a rundown between third and home. Charlie Jamieson advanced to third. Kamm retrieved the ball and tagged both runners, whereupon the umpire ruled Hodapp out. Kamm then hid the ball under his arm and waited for Jamieson to step off the base. When he did so, Kamm tagged him out to complete the triple play."} {"text":"In the minor leagues, on August 31, 1987, catcher Dave Bresnahan of the Williamsport Bills pulled an unusual hidden ball trick against the Reading Phillies in the Eastern League. With a runner on third base, Bresnahan switched catcher's mitts and put on a glove in which he had secreted a peeled potato. When the pitch came in, Bresnahan fired the white potato down the third-base line, enticing the runner to sprint home. Bresnahan then tagged the runner with the baseball, which he had kept in his mitt. The umpire awarded the runner home plate for Bresnahan's deception. 
Bresnahan was subsequently released by the Bills for the incident, but the team's fans loved the play, and the club eventually retired his number."} {"text":"In goal-based sports (e.g., American football and lacrosse), the offense deceives the defense about the location of the ball, in an attempt to get the defense running the wrong way."} {"text":"A hidden ball trick is considered a trick play in American football."} {"text":"There are various executions of hidden ball plays in American football, including the Statue of Liberty play and the fumblerooski."} {"text":"On November 9, 1895, John Heisman executed a hidden ball trick utilizing quarterback Reynolds Tichenor to get Auburn's only touchdown in a 9\u20136 loss to Vanderbilt. During the play, the ball was snapped to a halfback, who slipped it under the back of Tichenor's jersey; the quarterback then trotted in for the touchdown. This was also the first game in the South decided by a field goal. Heisman later used the trick against Pop Warner's Georgia team. Warner picked up the trick and later used it at Cornell against Penn State in 1897. He then used it in 1903 at Carlisle against Harvard and garnered national attention."} {"text":"The hidden ball trick was famously parodied in the 1930s by the Marx Brothers in the film \"Horse Feathers\" and by the Three Stooges in the comedy short \"Three Little Pigskins\"."} {"text":"Hidden ball tricks can be used in rugby and lacrosse."} {"text":"In baseball, the field manager (commonly referred to as the manager) is the equivalent of a head coach who is responsible for overseeing and making final decisions on all aspects of on-field team strategy, lineup selection, training and instruction. Managers are typically assisted by a staff of assistant coaches whose responsibilities are specialized. 
Field managers are typically not involved in off-field personnel decisions or long-term club planning, responsibilities that are instead held by a team's general manager."} {"text":"The manager chooses the batting order and starting pitcher before each game, and makes substitutions throughout the game \u2013 among the most significant being those decisions regarding when to bring in a relief pitcher. How much control a manager takes in a game's strategy varies from manager to manager and from game to game. Some managers control pitch selection, defensive positioning, decisions to bunt, steal, pitch out, etc., while others designate an assistant coach or a player (often the catcher) to make some or all of these decisions."} {"text":"Some managers choose to act as their team's first base or third base coach while their team is batting in order to more closely communicate with baserunners, but most managers delegate this responsibility to an assistant. Managers are typically assisted by two or more coaches."} {"text":"In many cases, a manager is a former professional, semi-professional or college player. A high proportion of current and former managers played the central position of catcher during their playing days, including Yogi Berra, Bruce Bochy, Wilbert Robinson, Joe Girardi, Mike Scioscia, Joe Torre, Connie Mack, Ralph Houk, and Ned Yost."} {"text":"The manager's responsibilities normally are limited to in-game decisions, with off-field roster management and personnel decisions falling to the team's general manager. The term \"manager\" used without qualification almost always refers to the field manager (essentially equivalent to the head coach in other North American professional sports leagues), while the general manager is often called the GM. 
This usage dates back to the early days of professional baseball, when it was common practice for teams to have just one \"manager\" on their staff and GM duties were performed either by the field manager or (more commonly) by the owner of the team. Some owners (most famously, Connie Mack of the Philadelphia Athletics) carried out both GM and field managerial duties themselves."} {"text":"Major League Baseball managers differ from the head coaches of most other professional sports in that they dress in the same uniform as the players and are assigned a jersey number. The wearing of a matching uniform is frequently practiced at other levels of play, as well. The manager may be called \"skipper\" or \"skip\" informally by his players."} {"text":"Control pitchers, who succeed by rarely surrendering walks, are different from power pitchers, who succeed by striking out batters and keeping the ball out of play."} {"text":"Three of the most famous examples of control pitchers in the history of baseball are Christy Mathewson, Ferguson Jenkins, and Greg Maddux, though Maddux and Jenkins also had significant strikeout totals (they are members of the 3,000 strikeout club) because of their ability to change speeds and the deceptive nature of their pitches."} {"text":"In an interview for \"ESPN The Magazine\" before the season, an NBA general manager who chose to remain anonymous (though speculated to be either Rob Hennigan of the Orlando Magic or Ryan McDonough of the Phoenix Suns) stated that because \"the last place you want to be is in the middle\", his team would try to tank that season to have the best chance at a top pick in the 2014 NBA draft, which was anticipated to be one of the deepest in recent league history. 
The GM explained how he got the team's owners and the coach to agree to it while trying to keep it a secret from the players."} {"text":"One of the first teams to \"tank\" was the 1983\u201384 Houston Rockets, who considered the season lost after starting 20\u201326 and decided to play more bench players in order to fall in the standings and get higher in the draft order for the following season. In the 1983\u201384 NHL season, the Pittsburgh Penguins and New Jersey Devils admitted they wanted to lose in order to get the number one pick in the draft and select Mario Lemieux. But tanking did not become prevalent until the 2010s, when teams in all four major American leagues (the MLB, NFL, NBA, and NHL) were engaged in various forms of the practice."} {"text":"The Chicago Cubs and Houston Astros pioneered the practice in the MLB in the 2010s, finishing last in their respective leagues for several years. Both teams used subsequent draft picks to select star players who led them to championships, as the Cubs won the 2016 World Series and the Astros won in 2017. Other teams like the Miami Marlins, Baltimore Orioles, Kansas City Royals, and Detroit Tigers have sought to emulate the strategy by trading away top players with the goal of drafting and developing young talent and cutting costs in order to become competitive again several years later."} {"text":"In 2014, the Australian Football League's Melbourne Football Club were fined $500,000 for their involvement in a 2009 tanking scandal."} {"text":"When Jon Gruden retook control of the Oakland Raiders prior to the 2018 NFL season, he liquidated most of the Raiders' talent, most notably trading five-time Pro Bowler Khalil Mack to the Chicago Bears for two first-round draft picks, leading to accusations that he was intentionally tanking the team in hopes of fielding a competitive team when the Raiders moved to Las Vegas in 2020. 
The Raiders, who had finished 12\u20134 and qualified for the playoffs two seasons prior, finished their 2018 season with only four wins, but saw significant improvement the next season thanks to strong play from the team's rookies."} {"text":"Philadelphia Eagles head coach Doug Pederson faced allegations of deliberately losing the final game of the 2020 season after he replaced starting quarterback Jalen Hurts with backup Nate Sudfeld. The Eagles were losing by only three points against the Washington Football Team early into the fourth quarter, but an ineffective Sudfeld committed two turnovers on consecutive drives that allowed Washington to win 20\u201314. As a result, the 4-11-1 Eagles moved up from ninth overall to sixth overall in the 2021 NFL Draft, while Washington clinched the NFC East, which would have been clinched by the New York Giants had Philadelphia won. Pederson denied the allegations, stating he intended to give Sudfeld the opportunity to play, though he was fired a week after the game."} {"text":"Fans of the Philadelphia 76ers adopted the mantra \"Trust the Process\" when the team was tanking from 2013 to 2016."} {"text":"While tanking can be a successful strategy in eventually building a winning team, it alienates fans during the rebuilding process, as they grow frustrated with losing teams. During the Astros' rebuilding years of 2011\u20132013, when they lost an average of 108 games per season, attendance was cut in half and one game had a television rating of 0.0. The NHL's Buffalo Sabres have also seen dips in attendance since their alleged rebuilding years in the 2010s and have also been described as a \"toxic environment\"."} {"text":"Tanking can lead to strife with players' unions as tanking teams choose rookies on inexpensive contracts over free agents seeking multimillion-dollar deals."} {"text":"Leagues also see tanking as a threat to their existing revenue streams. The NBA, for example, sees this as a potentially major issue. 
One of a professional league's largest sources of revenue is gate receipts from attendance. Tanking has been shown to drastically reduce attendance and thus hurts the NBA's bottom line."} {"text":"The NBA and NHL have responded to the phenomenon in recent years by changing their drafts from reverse-order to a lottery formula that is only loosely tied to the previous season's standings. Some observers have called for leagues to adopt a European-style relegation system where the worst teams are demoted to a minor league to make tanking less attractive. The NBA has even fined executives and owners for referencing the merits of losing."} {"text":"The NBA changed the way teams are given draft picks. In 2018, the league leveled the odds at the top of the draft so that the worst team does not have the highest chance of getting the number one overall pick. This change serves to dissuade teams from intentionally losing."} {"text":"Sabermetrics or SABRmetrics is the empirical analysis of baseball, especially baseball statistics that measure in-game activity."} {"text":"Sabermetricians collect and summarize the relevant data from this in-game activity to answer specific questions. The term is derived from the acronym SABR, which stands for the Society for American Baseball Research, founded in 1971. The term \"sabermetrics\" was coined by Bill James, who is one of its pioneers and is often considered its most prominent advocate and public face."} {"text":"Henry Chadwick, a sportswriter in New York, developed the box score in 1858. This was the first way statisticians were able to describe the sport of baseball by numerically tracking various aspects of game play. The creation of the box score gave baseball statisticians a summary of the individual and team performances for a given game."} {"text":"Sabermetrics research began in the middle of the 20th century with the writings of Earnshaw Cook, one of the earliest sabermetricians. 
Cook's 1964 book \"Percentage Baseball\" was one of the first of its kind. At first, most organized baseball teams and professionals dismissed Cook's work as meaningless. The idea of a science of baseball statistics began to achieve legitimacy in 1977 when Bill James began releasing \"Baseball Abstracts\", his annual compendium of baseball data. However, James's ideas were slow to find widespread acceptance."} {"text":"David Smith founded Retrosheet in 1989, with the objective of computerizing the box score of every major league baseball game ever played, in order to more accurately collect and compare the statistics of the game."} {"text":"Sabermetrics was created so that baseball fans could learn about the sport through objective evidence. This is done by evaluating players in every aspect of the game, specifically batting, pitching, and fielding. These evaluation measures are usually phrased in terms of either runs or team wins, as older statistics were deemed less informative."} {"text":"The traditional measure of batting performance, the batting average, is hits divided by the total number of at-bats. Bill James, along with other fathers of sabermetrics, found this measure to be flawed, as it ignores any other way a batter can reach base besides a hit. This led to the creation of on-base percentage, which takes walks and hit-by-pitches into consideration. To calculate on-base percentage, hits + bases on balls + hit-by-pitches are divided by at-bats + bases on balls + hit-by-pitches + sacrifice flies."} {"text":"Another issue with the traditional batting average is that it does not distinguish among the types of hits (singles, doubles, triples, and home runs) and gives each hit equal value. Thus, a measure that differentiates among these four hit outcomes, the slugging percentage, was created. To calculate the slugging percentage, the total number of bases of all hits is divided by the total number of times at bat. 
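The two formulas just described translate directly into code. The following Python sketch is illustrative only (the function names and sample numbers are not from the original text):

```python
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """OBP: times reaching base (hits, walks, hit-by-pitches) divided by
    at-bats + walks + hit-by-pitches + sacrifice flies."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """SLG: total bases on all hits divided by at-bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Hypothetical season: 150 hits (100 singles, 30 doubles, 5 triples,
# 15 home runs), 60 walks, 5 hit-by-pitches, 5 sacrifice flies, 500 at-bats.
obp = on_base_percentage(hits=150, walks=60, hbp=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=100, doubles=30, triples=5, home_runs=15,
                          at_bats=500)
print(round(obp, 3), round(slg, 3))  # 0.377 0.47
```

Summing these two numbers gives the OPS statistic discussed below.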
Stephen Jay Gould proposed that the disappearance of the .400 batting average is actually a sign of general improvement in batting. This is because, in the modern era, players are becoming more focused on hitting for power than for average. Therefore, it has become more valuable to compare players using slugging percentage and on-base percentage rather than batting average."} {"text":"These two improved sabermetric measures capture important batting skills and have been combined to create the modern statistic OPS. On-base plus slugging is the sum of the on-base percentage and the slugging percentage. This modern statistic has become useful in comparing players and is a powerful method of predicting a player's run production."} {"text":"Some of the other statistics that sabermetricians use to evaluate batting performance are weighted on-base average, secondary average, runs created, and equivalent average."} {"text":"The traditional measure of pitching performance is earned run average. It is calculated as earned runs allowed per 9 innings. Earned run average does not separate the ability of the pitcher from the abilities of the fielders that he plays with. Another classic measure for pitching is a pitcher's winning percentage. Winning percentage is calculated by dividing wins by the number of decisions (wins and losses). Winning percentage is also heavily dependent on the pitcher's team, particularly on the number of runs it scores."} {"text":"\"Baseball Prospectus\" created another statistic called peripheral ERA. This measure of a pitcher's performance takes into account hits, walks, home runs allowed, and strikeouts, while adjusting for ballpark factors. Each ballpark has different outfield dimensions, so pitchers should not be measured identically across parks."} {"text":"Batting average on balls in play (BABIP) is another useful measurement for determining a pitcher's performance. 
A pitcher with a high BABIP will often show improvement in the following season, while a pitcher with a low BABIP will often show a decline in the following season. This is based on the statistical concept of regression to the mean. Others have created various means of attempting to quantify individual pitches based on characteristics of the pitch, as opposed to runs earned or balls hit."} {"text":"Value over replacement player (VORP) is a popular sabermetric statistic. This statistic demonstrates how much a player contributes to his team in comparison to a hypothetical player who performs at the minimum level needed to hold a roster position on a major league team. This measurement was invented by Keith Woolner, a former writer for the sabermetric group\/website \"Baseball Prospectus\"."} {"text":"Wins above replacement (WAR) is another popular sabermetric statistic for evaluating a player's contributions to his team. Similar to VORP, WAR compares a given player to a replacement-level player in order to determine the number of additional wins the player has provided to his team. WAR values vary with hitting positions and are largely determined by a player's successful performance and amount of playing time."} {"text":"Many traditional and modern statistics, such as ERA and Win Shares, do not give a full understanding of what is taking place on the field. Simple ratios are not sufficient to understand the statistical data of baseball. Structured quantitative analysis is capable of explaining many aspects of the game, for example, how often a team should attempt to steal."} {"text":"Sabermetrics can be used for multiple purposes, but the most common are evaluating past performance and predicting future performance to determine a player's contributions to his team. 
These may be useful when determining who should win end-of-season awards such as the MVP and when assessing the value of a potential trade."} {"text":"Most baseball players tend to play a few years in the minor leagues before they are called up to the major leagues. Differences in competition level, coupled with ballpark effects, make direct comparison of a player's statistics a problem. Sabermetricians address this problem by adjusting a player's minor league statistics, a technique known as minor league equivalency. Through these adjustments, teams are able to look at a player's performance in both AA and AAA to determine if he is fit to be called up to the majors."} {"text":"Sabermetrics methods are generally used for three purposes:"} {"text":"A machine learning model can be built using data sets available at sources such as Baseball-Reference. This model will give probability estimates for the outcome of specific games or the performance of particular players. These estimates are increasingly accurate when applied to a large number of events over a long term. The game outcome (win\/lose) is treated as having a binomial distribution."} {"text":"Predictions can be made using a logistic regression model with explanatory variables including: opponents' runs scored, runs scored, shutouts, time at bat, winning rate, and pitcher WHIP (walks plus hits per inning pitched)."} {"text":"Many sabermetricians are still working hard to contribute to the field by creating new measures and asking new questions. Bill James' two \"Historical Baseball Abstract\" editions and \"Win Shares\" book have continued to advance the field of sabermetrics, 25 years after he helped start the movement. His former assistant Rob Neyer, who is now a senior writer at ESPN.com and national baseball editor of SBNation, has also worked to popularize sabermetrics since the mid-1980s."} {"text":"Nate Silver, a former writer and managing partner of \"Baseball Prospectus\", invented PECOTA. 
This acronym stands for \"Player Empirical Comparison and Optimization Test Algorithm\", and is a sabermetric system for forecasting Major League Baseball player performance. Simply put, it assumes that the careers of similar players will follow a similar trajectory. This system has been owned by \"Baseball Prospectus\" since 2003 and helps the website's authors invent or improve widely relied-upon sabermetric measures and techniques."} {"text":"Beginning in the 2007 baseball season, MLB began using technology to record detailed information about each pitch thrown in a game. This became known as the PITCHf\/x system, which uses video cameras to record the speed of the pitch, both at its release point and as it crosses the plate, as well as the location and angle of the break of certain pitches. FanGraphs is a website that favors this system as well as the analysis of play-by-play data. The website also specializes in publishing advanced baseball statistics as well as graphics that evaluate and track the performance of players and teams."} {"text":"In baseball, the lefty-righty switch is a maneuver by which a player who struggles against left- or right-handed players is replaced by a player who excels in the situation, usually only for the duration of the situation in question. For instance, a right-handed pitcher who is weak against left-handed hitting and is facing a left-handed hitter would be replaced with a pitcher, usually left-handed, who does a superior job of getting a left-handed hitter out. Similarly, a batter who has difficulty hitting against a left-handed pitcher will sometimes be pinch hit for by a batter who handles left-handers well, even if the original player is superior in other respects."} {"text":"Conventional baseball wisdom suggests that, when a pitcher and a hitter pitch or bat with the same hand, the pitcher typically has the advantage.
This especially holds true for left-handed pitchers, as lefties are less common in a major-league lineup than righties. As a result, the most common use of the lefty-righty switch is when a right-handed pitcher is facing a left-handed batter. The manager of the defensive team will sometimes go to the bullpen, especially in close games where a reliever has already entered the game, and pull out a left-handed specialist to face the left-handed batter. The new pitcher will then attempt to get the batter out. Whether he succeeds or fails, the pitcher will often be replaced after the at-bat."} {"text":"The lefty-righty switch can also be used against switch hitters who are noticeably poorer from one side of the plate than the other, or in the somewhat rarer instance of a batter who does poorly against \"opposite\"-handed pitchers. The basic principle in these cases remains the same."} {"text":"It is less common, although still frequent, for a batter to be replaced to gain a handedness advantage over a pitcher. For instance, with a left-handed pitcher in and a left-handed batter due up, a right-handed bat may be called in from the bench. The righty may not be as strong an all-round player as the player he replaced (thus, his absence from the everyday lineup), but he is a superior tactical choice for the purpose of getting on base in one at bat with a favorable matchup. Such a batter can be pinch run for if he gets on, replaced with a better defensive player for the next half-inning, or simply left in for the duration of the game."} {"text":"Similarly, position players must accept facing both left-handed and right-handed pitching as part of their job. Managers will usually juggle batters who are exceptionally weak against one sort of pitcher so that they only face starting pitchers who offer favorable matchups, but it is impossible to shield a batter from every instance in which he will face a pitcher who has him at a disadvantage. 
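The matchup logic above amounts to a simple decision rule. A minimal sketch, with invented platoon-split statistics (real decisions also weigh bullpen availability and game state):

```python
# Toy platoon-matchup rule: call on the available pitcher whose handedness
# yields the best expected matchup. All split statistics here are invented.
ops_allowed = {
    # (pitcher_throws, batter_bats) -> OPS allowed by that pitcher in the split
    ("R", "L"): 0.780,   # righty vs. lefty batter: batter has the advantage
    ("R", "R"): 0.690,
    ("L", "L"): 0.610,   # lefty specialist vs. lefty batter
    ("L", "R"): 0.750,
}

def best_pitcher(batter_bats, available=("R", "L")):
    """Pick the available pitcher handedness allowing the lowest OPS."""
    return min(available, key=lambda throws: ops_allowed[(throws, batter_bats)])

print(best_pitcher("L"))   # a left-handed batter is due: prints "L"
```

Under these invented splits the rule reproduces the conventional wisdom: same-handed matchups favor the pitcher, so the manager brings in the lefty to face the lefty.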
As a result, a position player must be prepared at all times to face a lefty-righty switch in a situation where his team cannot afford to pinch hit for him."} {"text":"Inside baseball is a strategy in baseball developed by the 19th-century Baltimore Orioles team and promoted by John McGraw. In his book, \"My Thirty Years of Baseball\", McGraw credits the development of \"inside baseball\" to manager Ned Hanlon. In the 1890s, this kind of play was referred to as \"Oriole baseball\" or \"Baltimore baseball\"."} {"text":"Inside baseball is an offensive strategy that focuses on teamwork and good execution. It usually centers on tactics that keep the ball in the infield: walks, base hits, bunts, and stolen bases. One such play, in which the batter deliberately strikes the pitched ball downward onto the infield surface with enough force that it rebounds skyward, allowing the batter to reach first base safely before the opposing team can field the ball, remains known as a Baltimore Chop."} {"text":"Another term in use in the 1890s for this style was \"scientific baseball\", referring to calculated one-run game strategies based on intelligent, cooperative actions of the players. An article in \"The New York Times\" published in 1911 described \"scientific baseball\": Scientific baseball of to-day \u2013 \"inside ball\" they call it \u2013 consists in making the opposing team think you are going to make a play one way, then shift suddenly and do it in another."} {"text":"McGraw in his book writes: \"So-called inside baseball is mostly bunk. It is merely working out of definite plans that the public does not observe\"."} {"text":"This strategy did not rely on big hits and home runs and became the primary offensive strategy during the dead-ball era."} {"text":"The equivalent modern term is \"small ball\"."} {"text":"Critics also note that the reputation of the Orioles for \"inside baseball\" grew only in retrospect.
At the time, the Orioles were more famous for deliberately playing dirty."} {"text":"Win probability is a statistical tool which suggests a sports team's chances of winning at any given point in a game, based on the performance of historical teams in the same situation. The art of estimating win probability involves choosing which pieces of context matter. Baseball win probability estimates often include whether a team is home or away, inning, number of outs, which bases are occupied, and the score difference. Because baseball proceeds batter by batter, each new batter introduces a discrete state. There are a limited number of possible states, and so baseball win probability tools usually have enough data to make an informed estimate."} {"text":"American football win probability estimates often include whether a team is home or away, the down and distance, score difference, time remaining, and field position. American football has many more possible states than baseball with far fewer games, so football estimates have a greater margin of error. The first win probability analysis was done in 1971 by Robert E. Machol and former NFL quarterback Virgil Carter."} {"text":"As a brief example, guessing that each team playing at home will win is based on home advantage. This guess uses a single contextual factor and involves a very large number of games. But with only one factor, the accuracy of this guess is limited to home advantage itself (about 55\u201370% across sports) and does not change within the game based on in-game factors."} {"text":"Win probability added is the change in win probability, often how a play or team member affected the probable outcome of the game."} {"text":"Current research work involves measuring the accuracy of win probability estimates, as well as quantifying the uncertainty in individual estimates. 
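The frequency-based baseball estimate described above can be sketched with a toy table of historical game states; the states and counts below are invented for illustration, and real tools condition on more context (home/away, bases occupied) over decades of play-by-play data:

```python
from collections import defaultdict

# Toy historical records: (inning, outs, score_diff) -> [wins, games].
history = defaultdict(lambda: [0, 0])

games = [
    # (inning, outs, score_diff at that state, eventual win?)
    (7, 1, +2, True), (7, 1, +2, True), (7, 1, +2, False),
    (7, 1, -1, False), (7, 1, -1, True), (7, 1, -1, False), (7, 1, -1, False),
]
for inning, outs, diff, won in games:
    record = history[(inning, outs, diff)]
    record[0] += int(won)
    record[1] += 1

def win_probability(inning, outs, diff):
    """Share of historical teams in this exact state that went on to win."""
    wins, total = history[(inning, outs, diff)]
    return wins / total if total else None

print(win_probability(7, 1, +2))   # 2 of 3 such teams won
```

Each new batter introduces a discrete state, so a large enough history table covers most states a game can reach; states never observed (the `None` case) are where estimates run out of data.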
That is, if a tool estimates a 24% win probability because 24% of previous teams in that situation won their games, do future teams win at the same 24% rate? Such questions are answered by testing estimates on held-out data, using tools like cross-validation."} {"text":"While many models involve frequency analysis of past events, other models use Bayesian processes."} {"text":"Some models include a measure of teams' strength coming into the game, while others assume every team is average. Including strength estimates increases the number of possible states, and therefore decreases an estimate's power while possibly increasing its accuracy."} {"text":"Whiteyball is a style of playing baseball that was developed by former Major League Baseball manager Whitey Herzog. The term was coined by the press during the 1982 World Series to describe the style of Herzog's St. Louis Cardinals. The team won the Series without a typical power hitter, instead using speed on the base paths, solid pitching, excellent defense, and line drive base hits. Whiteyball was well-suited to the fast, hard AstroTurf surface that Busch Memorial Stadium had at the time, which created large, unpredictable bounces when the ball hit it at sharp angles. In his book \"White Rat\", Herzog says the approach was a response to the spacious, artificial surface stadiums of the time. He said of the media's dismay at his teams' success:"} {"text":"Herzog used this strategy for his team during the 1980s until he left the Cardinals in 1990."} {"text":"A 2012 sports article described Whiteyball as follows:"} {"text":"Herzog used many switch-hitters such as Ozzie Smith, Willie McGee, Tom Herr, Terry Pendleton, Vince Coleman, Jos\u00e9 Oquendo, Garry Templeton, Ted Simmons, Luis Alicea, Mike Ramsey, Tony Scott, and F\u00e9lix Jos\u00e9 in St. Louis, along with Willie Wilson and U L Washington when he managed in Kansas City.
Kansas City Royals manager Ned Yost used his own version of Whiteyball to get to the 2014 World Series, and win the 2015 series."} {"text":"According to \"The Dickson Baseball Dictionary\", a team has \"batted around\" when each of the nine batters in the team's lineup has made a plate appearance, and the first batter is coming up again during a single inning. Dictionary.com, however, defines \"bat around\" as \"to have every player in the lineup take a turn at bat during a single inning.\" It is not an official statistic. Opinions differ as to whether nine batters must get an at-bat, or if the opening batter must bat again for \"batting around\" to have occurred."} {"text":"In modern American baseball, some batting positions have nicknames: \"leadoff\" for first, \"cleanup\" for fourth, and \"last\" for ninth. Others are known by the ordinal numbers or the term #-hole (3rd place hitter would be 3-hole). In similar fashion, the third, fourth, and fifth batters are often collectively referred to as the \"heart\" or \"meat\" of the batting order, while the seventh, eighth, and ninth batters are called the \"bottom of the lineup,\" a designation generally referring both to their hitting position and to their typical lack of offensive prowess."} {"text":"For example, Rule 36 (\"The Batsman's Position--Order of Batting\") in \"The Playing Rules of Professional Base Ball Clubs\" of 1896 stated the following: \"The Batsmen must take their positions within the batsmen's lines ... in the order in which they are named in the batting order, which batting order must be submitted by the Captains of the opposing teams to the Umpire before the game, and this batting order must be followed except in the case of a substitute player, in which case the substitute must take the place of the original player in the batting order. After the first inning the first striker in each inning shall be the batsman whose name follows that of the last man who completed his turn ... 
in the preceding inning.\""} {"text":"In cricket, the batting order is generally fixed so that players are sure of their role within the team, but there is no obligation to submit a definitive batting order and stick to it. A \"batsman\" can be \"promoted\" to a higher spot (or conversely, demoted to a lower one) in the batting order according to the team's wishes."} {"text":"The idea of a \"revolving\" batting order is unique to baseball: the on-deck batter at the time the final out is made in one inning becomes the lead-off batter in the next inning (unless his spot is taken by a pinch-hitter). If the third out is made while a batter is still mid-at-bat, for example on a runner caught stealing, that batter instead leads off the next inning with his count reset to 0-0."} {"text":"In the shorter form of cricket, there is only one innings per side, while in the longer form each side bats a maximum of two times. In a typical innings of this latter form, all eleven players on the team will have a chance to bat, and the innings finishes when 10 players are out. In the team's second innings, the batting order is usually maintained, but the team can make any changes it desires."} {"text":"As in baseball, many batting order configurations are possible, but a standard order might be:"} {"text":"The concept of a batting order in baseball is \"profoundly democratic; no matter how good a hitter you are, you have to wait your turn.\" In that respect, although baseball, like cricket, \"may have begun as a gentlemen's game,\" Americans gravitated toward baseball as a better embodiment of the country's egalitarian ideal, and as a symbol of cultural as well as political independence from the British colonial legacy."} {"text":"However, it should also be remembered that in cricket a single innings lasts hours or even days, and there are periods in which batting can be markedly easier or more difficult.
A related factor is that a single ball is used in an innings for around 80 overs (approximately 5 hours of play). At the beginning of an innings, therefore, when bowlers are fresh and the ball is hard, it would be appreciably more challenging for the non-specialist batsmen to make an impact. Conversely, if such a player bats when the ball is old and the bowlers are tired, he can thrive, and this can often be a great source of pleasure to spectators, as insult is added to injury for the other side."} {"text":"Finally, in cricket, there is no such thing as a designated hitter, so even if a bowler has no batting ability, he will still be required to bat, usually as the last man in the order."} {"text":"The first player in the batting order is known as the leadoff hitter. The leadoff batter is traditionally an individual with a high on-base percentage, plate discipline, bat control, good speed, and the ability to steal bases. His goal is to ensure the team has baserunners when the later, more powerful hitters come to bat. Once on base, his main goal is to get into scoring position (that is, 2nd or 3rd base) as quickly as possible, either through steals, hit-and-run plays, or intelligent baserunning decisions, and then on to score."} {"text":"His need for a high on-base percentage (OBP) exceeds that of the other lineup spots. Because leadoff hitters are selected primarily for their speed and ability to reach base, they are typically not power hitters, but contact hitters. Leadoff hitters typically hit mostly singles and doubles and draw walks to get on base. Speed, however, is not essential, as Wade Boggs showed, though it is highly desired in a leadoff hitter."} {"text":"However, today's model for a leadoff hitter developed only gradually.
An early \"job description\" for a leadoff hitter by baseball pioneer Henry Chadwick in 1867 advised only, \"Let your first striker always be the coolest hand of the nine.\" By 1898, though, a \"Sporting Life\" article noted, \"It is customary to have a small, active fellow who can hit, run and steal bases, and also worry a pitcher into a preliminary base on balls, as a leader in the list.\""} {"text":"Examples of classic leadoff hitters are Phil Rizzuto, Richie Ashburn, Maury Wills, Lou Brock, Pete Rose, Rod Carew, Tim Raines, and Ichiro Suzuki, with some having somewhat more power (Dick McAuliffe, Lou Whitaker, Rickey Henderson, Paul Molitor, Derek Jeter, Carlos G\u00f3mez, Gerardo Parra, Johnny Damon)."} {"text":"The term \u201cleadoff hitter\u201d can refer not only to the first batter on the lineup card, but also to the first batter up in any particular inning. For example, if, in the second inning, the fifth batter on the lineup card is the first batter up, it will be said that he is leading off or that he is the leadoff batter for that particular inning."} {"text":"The third batter, in the \"three-hole\", is generally the best all-around hitter on the team, often hitting for a high batting average, though not necessarily possessing great speed. Part of his job is to reach base for the cleanup hitter, and part of it is to help drive in baserunners himself. Third-place hitters are best known for \"keeping the inning alive\".
However, in recent years, some managers have tended to put their best slugger in this position."} {"text":"Typically the greatest hitters for a combination of power and OBP on their teams bat third, as is shown by the use of such hitters as Rogers Hornsby, Babe Ruth, Stan Musial, Mel Ott, Ted Williams, Tony Gwynn, Willie Mays, Chipper Jones, Barry Bonds, Mickey Mantle, Carl Yastrzemski, Albert Pujols, Joey Votto, Andrew McCutchen, Miguel Cabrera, Ken Griffey Jr., Ryan Braun, Josh Hamilton, Evan Longoria, Jos\u00e9 Bautista, Edwin Encarnaci\u00f3n, Mike Trout, and Hank Aaron in this position in the lineup. Even without the combination of extreme power (Yogi Berra, Al Kaline, George Brett) or high batting average (Ernie Banks, Harmon Killebrew, Johnny Bench, Mike Schmidt, Reggie Jackson) this batting position contains an inordinate number of hitters who eventually become members of the Baseball Hall of Fame."} {"text":"The theory behind the cleanup hitter is that, at the beginning of the game, if at least one of the first three batters reaches base with a single-base hit or walk, a home run will result in two or more runs rather than just one (a \"solo\" home run). If all three players reach base, thereby loading the bases, the cleanup hitter has the chance to hit a grand slam, scoring four runs. But even without the grand slam, this batter can extend an inning with a high batting average and frequent walks."} {"text":"However, since home runs were a rarity before 1920, the concept of slotting a home run hitter fourth was slow to develop. However, the need for a good run producer in that position was recognized from the early days in baseball history, as demonstrated by player-manager Cap Anson generally penciling his name there. As power came to play a larger role in the game, the tendency to bat home run hitters fourth developed accordingly. 
In 1904, sportswriter Tim Murnane stated unequivocally that \"The heavy hitter of the team is located at the fourth place.\""} {"text":"The #3 and #4 hitters can often be switched in roles. For example, the 2011 Detroit Tigers had Miguel Cabrera as their #4 hitter but moved him to the #3 spot after acquiring Prince Fielder as a free agent before the 2012 season."} {"text":"In the presence of the designated hitter, the ninth batter often serves as a second leadoff hitter. Ninth-hitters tend to be fast and to have a decent on-base percentage, like the leadoff hitter."} {"text":"On August 18, 1956, major league manager Bobby Bragan placed his best hitter in the leadoff position and the remainder of his lineup in descending batting average order. Earnshaw Cook in his 1966 book, \"Percentage Baseball\", claimed that, using a computer, Bragan's lineup would result in 1 to 2 more wins per season. A recent computer simulation demonstrates the superiority of Bragan's lineup."} {"text":"Power hitter is a term used in baseball for a player with above-average hitting power, whose combination of dexterity and strength typically produces a high number of home runs as well as doubles and triples."} {"text":"Analyzing a player's ability as a power hitter often involves statistics such as 'slugging percentage' (total bases divided by at-bats).
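These power-hitting statistics can be computed directly. A minimal sketch on an invented single-season stat line (all numbers made up for illustration):

```python
# Invented single-season stat line, for illustration only.
at_bats = 500
singles, doubles, triples, home_runs = 100, 30, 5, 35
hits = singles + doubles + triples + home_runs

batting_avg = hits / at_bats
total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
slugging = total_bases / at_bats

# Isolated Power (ISO): extra bases per at-bat, i.e. SLG minus batting average.
iso = slugging - batting_avg

print(f"AVG {batting_avg:.3f}  SLG {slugging:.3f}  ISO {iso:.3f}")
```

Because ISO subtracts out singles, it separates genuine power from a high batting average: two hitters with the same average can have very different ISO figures.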
'Isolated Power' (ISO), a measure of extra bases earned per at-bat that is calculated by subtracting a player's batting average from his slugging percentage, is another statistic used."} {"text":"The concept generally is analogous to that of a power pitcher, a player who relies on the velocity of his pitches (perhaps at the expense of accuracy) and the high strikeout rate associated with them (statistics such as strikeouts per nine innings pitched are common measures)."} {"text":"Barry Bonds, who set the record for the most home runs in a season in Major League Baseball history, is often cited as a power hitter. His career was later bogged down by issues regarding performance-enhancing drugs. However, he managed a total of 762 home runs while also posting a high ISO compared to his rivals, with the publication \"Business Insider\" labeling him #3 in a list of the greatest power hitters of all time."} {"text":"Other baseball figures so cited include the famous hitters Babe Ruth, Lou Gehrig, and Ted Williams. Popular newspaper writer Victor O. Jones wrote about Williams in particular, \"Ted is lucky to come along in a baseball age that worships on the shrine of power, pure, unadulterated power.\""} {"text":"However, in a situation where a runner on 3rd base scoring would end the game immediately, a team may elect to use a fifth infielder, so as to decrease the chances of a ground ball getting through. The drawback is a reduced chance of an outfielder reaching a potential fly ball that may result in a play at the plate; however, any fair fly ball hit deep enough ends the game anyway."} {"text":"For scorekeeping purposes, the position at which a player is listed in the box score determines the number he is assigned.
For example, if a left fielder moves in to play third base, fields a ball hit to him, and throws to the catcher to get the out, the play is recorded as 7\u20132."} {"text":"In 2012, in an Opening Day game between the Toronto Blue Jays and Cleveland Indians at Progressive Field in Cleveland, Ohio, in the bottom of the 12th, with the score tied 4\u20134, the Indians loaded the bases with one out. Manager John Farrell of the Blue Jays decided to take out left fielder Eric Thames and bring in veteran infielder Omar Vizquel (who won 11 Gold Gloves in his career) to play at second base, the pivot of a potential double play. Sure enough, pitcher Luis P\u00e9rez got Asdr\u00fabal Cabrera to ground into the inning-ending double play, 6\u20134\u20133, although Vizquel was not involved in it. The Jays ended up winning 7\u20134 in 16 innings, the longest Opening Day game in MLB history."} {"text":"Later that year, on September 13, 2012, in a game between the Tampa Bay Rays and Baltimore Orioles at Camden Yards in Baltimore, Maryland, with the game tied 2\u20132 in the bottom of the 13th, the Orioles loaded the bases with no one out. Rays manager Joe Maddon took out left fielder Sam Fuld to bring in infielder Reid Brignac to play in the middle of the infield. Rays pitcher Chris Archer ended up getting Robert Andino to ground into a force at home plate before striking out Matt Wieters and Nate McLouth to get out of the jam. The Orioles ended up winning 3\u20132 in 14 innings."} {"text":"A fly ball pitcher is a type of baseball pitcher who produces an above-average number of fly balls, typically by keeping his fastball high in the strike zone and relying on late movement to keep the batter from making solid contact.
This designation is constructed around the ground ball-to-fly ball ratio, which measures how frequently a pitcher records outs on ground balls versus fly balls."} {"text":"The downside of a fly ball pitcher is that, in a ballpark whose design tends to favor hitters over pitchers (an example being Yankee Stadium), he will tend to give up more home runs, resulting in a higher earned run average."} {"text":"Examples include pitchers Sid Fernandez, Ted Lilly, Chris Young, and Marco Estrada."} {"text":"Sistema Peralta (\"Peralta system\") is a baseball strategy in which the pitching workload in a 9-inning game is divided approximately evenly amongst three pitchers throwing three innings each (3-3-3). Simply put, \"one pitcher every three innings\". This system contrasts with the more traditional strategy of having a starting pitcher, who handles the bulk of the pitching workload (typically over 5 innings), and reliever(s) who finish up the game (i.e. 5-2-2, 5-2-1-1, among various other combinations). It bears the name of entrepreneur and Tigres del Mexico founder Alejo Peralta, who established and implemented its use with Tigres starting in the 1970s."} {"text":"The strategy is an adaptation of early research theories proposed by Earnshaw Cook, a mechanical engineering professor at Princeton University and baseball statistician. For games not enforcing the designated hitter rule, Cook realized that he could start the game with a series of relievers and use a pinch hitter when the pitcher's spot came up to bat. In the following half-inning, when his team returned to the field, he would bring in the next reliever.
This is done until the end of the fourth inning, at which point he would bring in the starting pitcher and proceed as normal until the end of the game."} {"text":"2010 National League Championship Series (Game 6)."} {"text":"Mexican baseball analyst Tomas Morales pointed out that San Francisco Giants manager Bruce Bochy had used, albeit unwittingly, a similar version of the system in game 6 of the 2010 National League Championship Series."} {"text":"In baseball, a pull hitter is a batter who usually hits the ball to the side of the field from which he bats. For example, a right-handed pull hitter, who stands on the third-base side of the plate, will usually hit the ball to the left side of the field, termed \"left field\", from the batter's perspective. The opposite of pull hitting is known as \"hitting to the opposite field.\" Hitters who rarely hit to the opposite field or \"up the middle\" are often described as dead pull hitters."} {"text":"A long reliever is a relief pitcher in baseball who enters the game if the starting pitcher leaves the game early."} {"text":"Long relievers often enter in the first three innings of a game when the starting pitcher cannot continue, whether due to ineffective pitching, lack of endurance, rain delays, injury, or ejection. The hope is that the long reliever will be able to get the game under control and that his team's offense will help get the team back into the game. The hope is also that the long reliever will pitch long enough to save other relievers in the bullpen from having to pitch."} {"text":"Long relievers are usually players who used to be starters either in the major leagues or in the minors (and can still serve as a temporary starter if one of the normal starters is injured or otherwise unavailable), but whose teams believe they have better starters available. Sometimes a team's long reliever is a former starter who has lost his effectiveness, either through a decline in skills or a series of injuries.
Occasionally, long relievers are inexperienced pitchers who may have the potential to become starters or setup pitchers after gaining major league experience."} {"text":"The quality of long relievers can vary, but when the long reliever is known to be an ineffective former starter, he is often called the \"mop up man\" or \"mop.\""} {"text":"A secondary use of a long reliever is in the late extra innings of a tied game, once the team's other, generally more effective, relievers have already been used. While a long reliever is often a team's least effective pitcher, he is still often a far better choice in an extended game than resorting to one of the team's starting pitchers (which can spread chaos throughout a pitching rotation, as everyone's future schedule gets adjusted), or even worse, resorting to a position player on the mound. A long man generally enters the game somewhere between the 11th and 16th innings in this role, and can be expected to pitch 5 or more innings before a team will be forced to resort to other options."} {"text":"Occasionally during the season, a team may find itself with enough rest days to allow it to use a four-man rotation rather than the now-standard five. In these situations, a team may choose to keep its \"fifth\" starter on the roster in the long reliever role. This happens particularly in the post-season, when the fifth starter is a better pitcher than the \"regular\" long reliever, allowing the team to carry either an additional short reliever or position player in lieu of the regular long man."} {"text":"In recent years, teams have begun experimenting with an \"opener\", a relief pitcher who starts a game but pitches only the first inning or two.
In this strategy, the opener usually pitches against the opponent's best batters at the start of a game in hopes of throwing them off guard, before giving way to a long reliever who would normally be a starter in this situation."} {"text":"Generally, long or extended kickers accelerate in the penultimate lap or shortly after the bell indicating that the last lap has begun. A speed kicker behaves more like an anchor runner in a 4x400 relay, positioning themselves on the shoulder of their opponent and using their burst of speed as late as the final straightaway."} {"text":"Mo Farah developed a reputation as a strategic runner. His finishing kick was not so much a burst of speed as an extended ability to repeatedly accelerate just enough to discourage anybody from passing him during an intense final lap or so of his races. His Nike Oregon Project teammate Matthew Centrowitz Jr. employed a similar form of holding the lead to win his 2016 gold medal in the 1500."} {"text":"Because a kick is such an advantage in a competitor's arsenal, techniques for training the kick are a common topic of discussion among runners and coaches."} {"text":"In tennis, a grip is a way of holding the racquet in order to hit shots during a match. The three most commonly used conventional grips are the Continental (or \"Chopper\"), the Eastern, and the Semi-Western. Most players change grips during a match depending on what shot they are hitting."} {"text":"In order to understand the grips, it is important to know that the handle of a racquet always consists of 8 sides or, in other words, has an octagonal shape. A square shape would hurt the hand, while a round shape would not give enough friction to gain a firm grip. The eight sides of the handle are called bevels. They can be numbered from 1 to 8 as follows: if the blade of the racquet is perpendicular to the ground, the bevel facing up is bevel #1.
Rotating the racquet clockwise (for a right-handed player; counter-clockwise for a left-handed player), the next bevel facing up is bevel #2, and so on through all 8 bevels."} {"text":"Popularized by Fred Perry back in the thirties, the Continental Grip requires no change of grip position, and was therefore considered to make for a faster playing game."} {"text":"The Eastern forehand grip is primarily used for flatter groundstrokes. To execute a proper Eastern forehand grip, players rest both the index knuckle and the heel pad on bevel #3. An easy way to find it is to place the palm flat against the strings, slide the hand down to the handle, and grab. Its advantages are that it is one of the easiest grips for learning the forehand and that it is quick to change to a Continental for volleys, topspin, or slice. Notable players with this grip include Juan Martin Del Potro, Roger Federer, and Steffi Graf."} {"text":"The Semi-Western grip is an \"advanced\" form that most players either change to on purpose or naturally find through practice. This grip closes the racket face more upon contact, allowing for more topspin while still generating pace. It is the most popular grip on tour and is used by several greats, such as Rafael Nadal and Andy Murray."} {"text":"The Western grip is one of the more extreme forehand grips used to generate topspin. This grip closes the racket face more than the Semi-Western and was originally used by Rafael Nadal growing up. It is great for maximizing margin and hitting deep, loopy balls. Notable players using this grip are Karen Khachanov and Kei Nishikori. Another variation, popularized by Novak Djokovic, is the 3\/4 Western grip. For this grip, the knuckle is slightly on the Semi-Western bevel (4) and the heel pad more on the Western side."} {"text":"The Hawaiian grip is the most extreme forehand grip used to generate heavy topspin.
Because of the extreme wrist position, its use is not recommended, as it may cause wrist pain and other joint problems. The grip generates topspin because of the closed racket face, which makes it harder, though still possible, to drive through the ball. The most notable player to use this grip is Alberto Berasategui."} {"text":"The Two-Handed Forehand Grip (F: Bevel #2 + B: Bevel #6)."} {"text":"The basic Two-Handed Forehand grip is obtained by holding the racquet in a regular Continental grip, then placing the left hand above it in a left-handed Semi-Western Forehand grip. This places the reference bevels of the two hands exactly opposite each other. Holding the racquet with two hands for the forehand is highly unusual, but some well-known top WTA players (e.g. Monica Seles, Hsieh Su-wei) have used it successfully. While it shortens the forehand reach and reduces maximum power, it offers unrivalled accuracy, which may more than compensate for those drawbacks. Also, combined with a two-handed backhand, it is almost impossible for the opponent to tell which side (backhand or forehand) will hit the ball. The two sides are often equally accurate, and no grip change is required."} {"text":"The Eastern Backhand grip is obtained by placing the hand such that the base knuckle of the index finger and the heel of the hand are right on bevel #1. This grip allows for significant spin and control. The opposite face of the racket is used compared to the Eastern forehand. For someone who uses a Western forehand grip, on the other hand, the same face of the racket as in the forehand is used to strike the ball; no grip change is needed if the forehand is played with a Western grip."} {"text":"The Semi-Western backhand grip is used by placing the hand such that the base knuckle of the index finger is right on bevel #8. Compared to the Continental grip, the blade has rotated 90 degrees clockwise. 
This forces the wrist into an uncomfortable twist but allows for the greatest possible spin."} {"text":"This is basically equivalent to the Semi-Western forehand grip. The same face of the racquet as in the forehand is used to strike the ball. No grip change is needed if the forehand is played with a Semi-Western grip."} {"text":"The Two-Handed Backhand Grip (F: Bevel #2 + B: Bevel #6)."} {"text":"The basic Two-Handed Backhand grip is obtained by holding the racquet in a regular Continental grip, then placing the left hand above it in a left-handed Semi-Western Forehand grip. This places the reference bevels of the two hands exactly opposite each other. Holding the racquet with two hands for the backhand is very common, but there are many variations in the precise positioning of the two hands. This also varies between right- and left-handed players."} {"text":"A different face of the racquet than in the forehand is used to strike the ball."} {"text":"The backhand can be executed with either one or both hands. Three of the top 100 ranked women used a one-handed grip. Twenty-four of the top 100 ranked men used a one-handed grip, down from almost 50 a decade earlier."} {"text":"For most of the 20th century the backhand was performed with one hand, using either an Eastern backhand or Continental grip. In modern tennis, a few professional players use a Western one-handed backhand. This shot is held in a similar manner to the Eastern forehand and has much more topspin potential than the traditional Eastern one-hander. The Western one-handed backhand grip makes it easier for a one-handed player to hit balls at shoulder height but harder to hit low balls, and vice versa for the Eastern one-handed backhand. 
The Eastern one-handed backhand and its variants are used by most pros with strong single-handed backhand drives, such as Gustavo Kuerten (now retired) and especially Richard Gasquet among the men, and Justine Henin (now retired) among the women."} {"text":"The two-handed backhand is most commonly hit with the dominant hand holding the racquet in a Continental grip and the non-dominant hand holding it in a Semi-Western forehand grip. While this is by far the most common way to hit a two-handed backhand, some players hold the racquet differently for the shot."} {"text":"The player long considered to have had the best backhand of all time, Don Budge, had a very powerful one-handed stroke in the 1930s and 1940s that imparted topspin onto the ball. Ken Rosewall, another one-handed player, used a tremendously accurate slice backhand with underspin through the 1950s and 1960s. The one-handed backhand slice is often used in rallies as it is a comfortable shot. Andre Agassi in particular increased his use of the one-handed backhand slice and often hit an unreturnable dropshot with it."} {"text":"The grip for the serve depends on the type of serve. At professional levels, the Continental grip is used to hit all serves. Some players turn the grip more, towards the Eastern backhand grip (bevel #1), to maximize spin during a kick serve."} {"text":"To impart slice onto a serve, the server tosses the ball a little to the right of their body (if they are right-handed) and cuts the ball diagonally to create side- and topspin. For a right-hander, the slice serve curves to the left and down in the court. This pulls players out wide or jams them into their body to set up a high, put-away ball."} {"text":"There is also the kick serve, widely used for the second serve because of its great margin, its ability to drop into the court, and its spin, which unsettles opponents. 
For most, the topspin serve is hit by using a Continental forehand grip (bevel #2) and some use an Eastern backhand grip (bevel #1) to generate more spin."} {"text":"The statistic used to track penalties was traditionally called \"Penalty Infraction Minutes\" (PIM), although the alternate term \"penalty minutes\" has become common in recent years. It represents the total assessed length of penalties each player or team has accrued."} {"text":"The first codified rules of hockey, known as the Halifax Rules, were brought to Montreal by James Creighton, who organized the first indoor hockey game in 1875. Two years later, the \"Montreal Gazette\" documented the first set of \"Montreal Rules\", which noted that \"charging from behind, tripping, collaring, kicking or shinning the ball shall not be allowed\". The only penalty outlined by these rules was that play would be stopped, and a \"bully\" (faceoff) would take place. Revised rules in 1886 mandated that any player in violation of these rules would be given two warnings, but on a third offence would be removed from the game."} {"text":"It was not until 1904 that players were ruled off the ice for infractions. At that time, a referee could assess a two-, three- or five-minute penalty, depending on the severity of the foul. By 1914, all penalties were five minutes in length, reduced to three minutes two years later, and the offending player was given an additional fine. When the National Hockey League (NHL) was founded in 1917, it mandated that a team could not substitute for any player who was assessed a penalty, thus requiring them to play shorthanded for the duration. 
The penalty was shortened to two minutes for the 1921\u201322 season, while five- and ten-minute penalties were added two years later."} {"text":"Both the NHL and the International Ice Hockey Federation (IIHF) recognize the common penalty degrees of minor and major penalties, as well as the more severe misconduct, game misconduct, and match penalties."} {"text":"A team with a numerical advantage in players will go on a power play. If they score a goal during this time, the penalty will end and the offending player may return to the ice. In hockey's formative years, teams were shorthanded for the entire length of a minor penalty. The NHL changed this rule following the 1955\u201356 season, during which the Montreal Canadiens frequently scored multiple goals on one power play. Most famous was a game on November 5, 1955, when Jean B\u00e9liveau scored three goals in 44 seconds, all on the same power play, in a 4\u20132 victory over the Boston Bruins."} {"text":"In some cases, a referee can impose a double or triple minor. The infraction is counted as two or three separate minor penalties. If a team scores a power play goal during such a penalty, only the current block of two minutes being counted down is cancelled; the penalty clock is then reset to the next lowest interval of two minutes (e.g., a goal with a double-minor penalty clock at 3:45 resets it to 2:00). 
Expiration rules of double- or triple-minor penalties due to goals being scored are identical to those of regular minor penalties being served back-to-back."} {"text":"Starting with the 2019\u201320 season, NHL referees are required to use on-ice video review for all major (non-fighting) penalties in order to either confirm the call or reduce it to a minor penalty."} {"text":"Under IIHF rules, every major penalty carries an automatic game misconduct penalty; in other competitions, earning three major penalties in a game results in a game misconduct penalty, though a number of infractions that result in a major penalty automatically impose a game misconduct as well."} {"text":"Infractions that often call for a major penalty include spearing, fighting, butt-ending, charging, and boarding."} {"text":"Misconduct penalties are usually called to temporarily take a player off the ice and allow tempers to cool. They are sometimes also assessed in conjunction with fighting majors, giving the offending player(s) the opportunity to calm down as they sit out their ten minutes."} {"text":"IIHF rules state that if the player gets another misconduct penalty, (s)he risks a game misconduct penalty and is ejected."} {"text":"In most leagues, the referee has the discretion to call a game misconduct on a player charged with boarding due to the likelihood of injury to the boarded player. However, in the NHL, if a boarded player suffers a head or facial injury (a concussion risk), the offending player receives an automatic game misconduct."} {"text":"Any player who is dismissed twice for stick infractions, boarding or checking from behind, or dismissed three times for any reason, in a single NHL regular season incurs an automatic one-match ban, and further discipline is possible for subsequent ejections. For each subsequent game misconduct penalty, the automatic suspension is increased by one game. 
Salary lost as a result of a ban is usually donated to a league-supported charity or to a program to assist retired players."} {"text":"Examples of a game misconduct penalty include leaving the penalty box before the penalty time has been served, attempting to join or break up a fight (the \"third man in\" rule), or earning a second misconduct penalty in the same game."} {"text":"A player who receives a match penalty is ejected. A match penalty is imposed for deliberately injuring, or attempting to injure, another player. Many other penalties automatically become match penalties if injuries actually occur: under NHL rules, \"butt-ending, goalies using blocking glove to the face of another player, head-butting, kicking, punching an unsuspecting player, spearing,\" and \"tape on hands during altercation\" must be called as a match penalty if injuries occur; under IIHF rules, \"kneeing\" and \"checking to the head or neck area\" must be called as a match penalty if injuries occur."} {"text":"NHL referees are required to use on-ice video review for all match penalties in order to either confirm the call or reduce the call to a minor penalty."} {"text":"In NCAA hockey, a similar penalty called a game disqualification results in automatic suspension for the number of games equal to the number of game disqualification penalties the player has been assessed in that season."} {"text":"For statistical purposes, a match penalty is counted as ten minutes in the NHL and as twenty-five minutes under IIHF rules."} {"text":"Apart from their use as a penalty, penalty shots also form the shootout that is used to resolve ties in many leagues and tournaments."} {"text":"The gross misconduct penalty, similar to a game misconduct in severity, has been eliminated from the NHL rulebook. It was imposed for an action of extreme unsportsmanlike conduct, such as abuse of officials or spectators, and could be assessed to any team official in addition to a player. 
Infractions which garnered a gross misconduct now earn a game misconduct. The penalty had last been assessed in 2006 on Atlanta Thrashers coach Bob Hartley due to post-game comments made regarding referee Mick McGeough's blown call during a game versus Edmonton. The Phoenix Coyotes' Shane Doan was the last player to be given a gross misconduct penalty in 2005 for alleged ethnic slurs directed at French-Canadian referees (later investigated and subsequently cleared by the NHL)."} {"text":"However, this penalty is still in effect in Canadian hockey. \"A Gross Misconduct penalty shall be assessed [to] any player or team official who conducts herself in such a manner as to make a travesty of the game.\""} {"text":"In leagues which play with a shorthanded overtime (with only three or four attackers on the ice), should a team be penalized with only three players on the ice, an additional skater is added to the other team instead, until a five-on-three is produced. If a penalty in this situation expires without a goal being scored, the penalized player will be allowed back on the ice and will play normally until there is a stoppage; both teams will then be reduced back to the correct numbers. Ending coincidental penalties produce a similar situation, with both teams playing with additional players until play is stopped, allowing teams to be reduced again."} {"text":"While goaltenders can be assessed penalties, a goaltender cannot go to the penalty box and the penalty must be instead served by another player from their team who was on the ice at the time of the infraction (the PIM will be charged to the goaltender). 
If the goaltender receives either (a) three major penalties (NHL Rule 28.2), (b) one \"game misconduct\" penalty (NHL Rule 28.4), or (c) one \"match\" penalty (NHL Rule 28.5) however, he or she is ejected for the remainder of the game and must be substituted."} {"text":"While a team is short-handed, they are permitted to ice the puck as they wish, without having the icing infraction called against them. This allows short-handed teams to relieve pressure more easily when defending with fewer skaters than their opponents. This exemption does not apply to teams whose opponents have pulled their goaltender for an extra attacker (unless the defending team is killing a penalty at the same time)."} {"text":"In a situation where there are fewer than five minutes remaining in play (the final five minutes of regulation time or the five minutes of regular season overtime), should unequal simultaneous penalties be assessed (a minor or double-minor penalty against one team and a major or match penalty against the other), then instead of both sides serving their full times (which is impossible in the case of the major\/match penalty, as fewer than five minutes remain), the minor penalty is cancelled and its time subtracted from the major penalty, which is then assessed against that team."} {"text":"In addition, under most leagues' \"fight instigator\" rules, a player penalized as a fight instigator in the final five minutes (or during overtime) is charged with a game misconduct penalty and further disciplinary action. This is intended to discourage \"revenge\" fights started by badly-losing teams."} {"text":"In the NHL, infractions that result in penalties include:"} {"text":"Other leagues typically assess penalties for additional infractions. For example, most adult social leagues and women's hockey leagues ban all body checking (a penalty for roughing or illegal check is called), and in most amateur leagues, any head contact whatsoever results in a penalty. 
In women's hockey, a player who pulls down an opponent's ponytail is charged with a game misconduct penalty. The foul of moving the goalposts is handled differently from league to league; it has historically been a penalty shot, but after David Leggio began deliberately committing the foul to disrupt scoring opportunities, the American Hockey League declared such an act to be a game misconduct and the Deutsche Eishockey Liga automatically awarded the goal."} {"text":"Coaches or players may occasionally opt to commit an infraction on purpose. In some cases, it is hoped that the infraction can be concealed from the officials, avoiding a penalty. Gordie Howe was one player renowned for his ability to commit infractions without being called."} {"text":"Hockey players who opt to commit an infraction despite the punishment do so in order to degrade the opposing team's morale or momentum, or boost their own. This is most common with fighting, because the likely coincidental penalties do not hinder their team. Players also sometimes commit infractions in the hope of drawing an opponent into a retaliatory infraction that is penalized, while not being caught themselves. Players known as \"pests\" specialize in trying to draw opponents into taking penalties. An example is Sean Avery, who was renowned for his ability to goad opponents into taking penalties as well as into making other fundamental mistakes. 
Some players, coaches, and fans find this technique unsportsmanlike."} {"text":"It is also not uncommon to see players \"dive\" or make a borderline hit appear to be a penalty by embellishing or exaggerating their reaction to it; this, however, is a penalty in itself, although it is inconsistently enforced."} {"text":"Another common reason to commit an infraction is as last resort when an opposing player has a scoring opportunity, when a penalty kill is the preferable alternative to the scoring opportunity. These are referred to on most broadcasts as \"good penalties\"."} {"text":"The NHL keeps individual statistics on the penalties each player accrues through the penalties in minutes statistic (abbreviated \"PIM\"). Players renowned for their fighting or for being dirty players will usually lead their team in PIM and have such statistics highlighted by the media."} {"text":"The record for the most penalty minutes in one season is held by Dave Schultz of the Philadelphia Flyers, with 472 in the 1974\u201375 NHL season. The record for most penalty minutes in a career is held by Tiger Williams, who had 3,966 over 14 years. The active penalty minute leader is Zdeno Chara from the Washington Capitals, who has accumulated 1,964 PIM. Chara is now playing in his 24th NHL season."} {"text":"The most penalties in a single game occurred in a fight-filled match between the Ottawa Senators and Philadelphia Flyers on March 5, 2004, when 419 penalty minutes were handed out. Statistically, a game misconduct counts as 10 penalty minutes, in addition to other penalties handed out. In rare cases (as a result of multiple infractions, for instance the player participating in multiple fights), multiple game misconducts may be handed to a player\u00a0\u2014 that is merely statistical, not (automatically) a multi-game suspension, although the league will often suspend the player in a subsequent decision."} {"text":"Centers are required to cover much of the ice in all three zones. 
Where the center tends to play in the offensive zone is usually a matter of coaching and personal preference. Centers are responsible for keeping the flow of the game moving, and generally handle and pass the puck more than players at any other position. Because of this, most good centers tend to record significantly more assists than goals, because the play goes through them as they try to find open teammates. Their responsibilities in the zone are analogous to those of the classic number 10 playmaker in soccer."} {"text":"Because of the range of offensive styles teams like to use, exactly how centers are used in the offensive zone is as varied as the players themselves. Generally the center's role on offense is to move the offense through himself, set up other players, and provide support for puck battles. Centers roam around most areas of the ice in the zone and have a lot of freedom in decision making. They are also expected to be in constant motion, making them hard for defenders to track."} {"text":"When a centre's winger is being attacked along the boards, the centre can take position behind the net to receive the pressured winger's pass. Behind the net is a natural place for some centres to play. It is a very difficult position to defend because it forces the opposing defensemen to leave the front of the net. It also gives the centre a clear view of the ice and, most importantly, the slot area. From here the centre has clear passing lanes and minimizes the distance and difficulty of passes to nearly any part of the slot."} {"text":"Many centres use their mobility and freedom to take advantage of the slot area, the area in between the faceoff dots, about 5 to 15 feet from the goal. The slot is notorious goal-scoring territory because of its proximity to the net and the difficulty the opposing team has in defending it. Many centres like this area because of its openness. 
Possessing the puck here gives the centre many different options, as well as a central position in the offensive play. From here he can choose to shoot the puck on net, attempt to draw defenders away from the net by skating, or find open players closer to the goal cage."} {"text":"Additionally, without the puck, the centre can choose to occupy this space looking for deflections of long shots or rebounds. Aside from some larger centres who focus on scoring off rebounds, centres rarely set up directly in front of the net itself because, in case of a turnover, it is much harder to get back into position defensively."} {"text":"Some centres will play the halfboards. This position is especially important to a centre on some powerplay sets. Again it gives the centre a clear view of the ice surface and many different options. From here he may choose to pass back to a defenceman on the point, go down the boards to a winger behind the net, or drive the net itself hoping to draw defenders to him. The disadvantage of this position is that it is easily defended, and the centre generally does not have much time to survey the ice looking for an open teammate."} {"text":"Powerplay sets are also quite varied, so the centre's role can vary widely. Often, though, the centre will choose to operate in the slot area or on the halfboards. The halfboard position here is easier to play because the centre has more time to look over the ice surface and is not pressured by the defenders as much. Again the centre's role is to move the offence through himself\/herself, looking for passing lanes to open players or roving the slot area looking for deflections and rebounds."} {"text":"In the neutral zone, the attacking centre's role when in possession of the puck is to bring it into the offensive zone by carrying it in or dumping it in. 
Although any player may carry the puck into the zone, centres are most often counted on because of their speed, quickness, and ability to stickhandle. If another player possesses the puck attacking into the zone, the centre's job is to provide support if the puck carrier needs to pass to another player across the blue line. Once the zone has been gained the offence may proceed to set up as they see fit."} {"text":"On dump ins, the centre's role is to provide support to the wingers as they battle for possession in the corners, and hunt for loose pucks."} {"text":"Many different strategies have been devised to defend the neutral zone. Often successfully defending the neutral zone leads to fewer opportunities for the opposing team to have offensive possessions."} {"text":"Here the centre will mainly focus on skating and shadowing opposing puck carriers to try to force turnovers. They are responsible for the middle of the ice, and try to cut off long passing lanes to attacking players. If the defending team successfully does force a turnover, the centre is most often responsible for turning the direction of play around or receiving the first pass from a winger who has successfully forced a turnover."} {"text":"The neutral zone trap, pentagon trap, 1-2-2 trap, or zero-forecheck."} {"text":"When playing the trap, the centre typically spearheads the defence by placing himself\/herself in the middle of the ice between the red line and blue line in defensive position. This forces the puck carrier to either side board where the centre and puck side winger close him in, \"trapping\" him\/her between the two defending players and the boards. Here the attacking player has very few options, and generally must retreat to a defenceman, whereupon the defending team can reset the trap. 
This tactic was pioneered by the New Jersey Devils in the late 1990s and has been used extensively in the NHL and at all levels of hockey since."} {"text":"When employing the left wing lock strategy, the centre's role is typically to shadow the puck carrier or provide token pressure in the opposing team's zone to force them to try to pass the puck up ice into the lock. This is a much older strategy and is less commonly employed at elite levels; however, it was most recently used extensively by the 2006 Carolina Hurricanes on their way to their first Stanley Cup."} {"text":"Unlike their offensive responsibilities, the centre's defensive responsibilities are relatively straightforward. Again the centre must be able to use their skating ability to cover vast portions of the ice, and is responsible for a greater percentage of the ice in their own zone than any other position."} {"text":"The centre's first and foremost responsibility is defending the slot area from opposing forwards. This is the most difficult area of the ice to defend because of its proximity to the net and its position in the middle of the ice. The centre is responsible not only for the opposing centre, but for other forwards who venture into the slot as well. Like defencemen, centres are often relied upon to block long-distance shots while patrolling the slot. Because there are no boards in the slot area, it is difficult to play physically on opposing forwards, so centres must be adept at using their sticks to defend via poke checks, sweep checks, stick lifts, and other stickwork."} {"text":"The perimeter is an advantageous position for the defence: the boards act as an extra defender, and the defending team will often try to enclose a puck carrier between the boards and two or more defenders to force turnovers. 
The centre's general responsibility is to support teammates who engage opposing puck carriers in puck battles on the boards, giving the primary defender (normally a defenceman in the defensive zone) an outlet to move the puck to if he\/she is able to win it from the offensive player. The centre does, on occasion, participate in these puck battles when necessary."} {"text":"A quick break is sometimes used to take advantage of the opponent's sloppy transition game. In this set, the defenceman passes directly to the centre curling at the faceoff dot. The centre can then carry the puck out himself\/herself or try to pass to the streaking weakside winger up the ice."} {"text":"The penalty killing unit normally consists of two forwards and two defencemen. The centre's role does not differ appreciably from that of any other forward, though centres are almost always included on the penalty killing unit for the purpose of taking the faceoff. Depending on what formation the penalty kill uses, the centre, along with the other forward on the ice, will play high side defence, trying to cut off passing lanes in the slot. Secondarily, they pressure offensive players on the boards if those players do not have clear possession."} {"text":"The centre should always be prepared for a quick break-out pass by the opposing team. The centre is expected to play the deepest in the offensive zone but also to be the first of the forwards to backcheck. On the backcheck, the centre should take the first opposing player not covered (usually \"the third man back\")."} {"text":"It is generally the centre's job to handle faceoffs for their team. Centres employ many different tactics to win faceoffs that take advantage of their strength or swiftness."} {"text":"Faceoff techniques and preferences vary widely from player to player depending on that player's skill at taking faceoffs, speed, strength, and agility. 
Although faceoff techniques differ greatly, it is almost universal now that the centre reverses his lower hand and takes the faceoff on his backhand in order to gain more strength when pulling the puck."} {"text":"Bigger, heavier, and stronger centres may prefer to use strength tactics such as tying up the opposing centre and winning the puck with his feet or overpowering the opponent by ripping the puck away using sheer strength. Smaller, quicker centres may employ swiftness tactics such as trying to contact the puck before his opponent has a chance to get his stick in the dot, or the slide technique where he allows his opponent access to the dot easily so he can slide his stick underneath and pull the puck back out."} {"text":"Faceoffs are critical to a team's success on offence or defence. To this end, centres that may be deficient in other areas, especially offensively, can still have value to a team if they are excellent faceoff takers. Journeyman NHL centre Yanic Perreault was offensively limited for much of his career, yet was able to survive in the NHL due to his excellence in the faceoff circle. Perreault is considered one of the best faceoff men in history. Faceoffs are often used as a measure of defensive effectiveness, and good faceoff takers play many minutes on the penalty kill and in late game lead situations where quickly gaining possession of the puck is of vital importance."} {"text":"Defence or defense (in American English) in ice hockey is a player position whose primary responsibility is to prevent the opposing team from scoring. They are often referred to as defencemen, D, D-men or blueliners (the latter a reference to the blue line in ice hockey which represents the boundary of the offensive zone; defencemen generally position themselves along the line to keep the puck in the zone). They were once called cover-point."} {"text":"In regular play, two defencemen complement three forwards and a goaltender on the ice. 
Exceptions include overtime during the regular season and when a team is shorthanded (i.e. has been assessed a penalty), in which cases two defencemen are typically joined by only two forwards and a goaltender. In National Hockey League regular-season overtime play, effective with the 2015\u201316 season, teams (usually) have only three position players and a goaltender on the ice, and may use either two forwards and one defenceman or, rarely, two defencemen and one forward."} {"text":"Organized play of ice hockey originates from the first indoor game in Montreal in 1875. In subsequent years, the number of players was reduced to seven per side. Positions were standardized, and two of them correspond to the two defencemen of current six-man rules. These were designated as cover point and point, although they lined up behind the center and the rover, unlike today. Decades later, defencemen were standardized into playing the left and right sides of the ice."} {"text":"In one of the earliest books on ice hockey, Farrell's \"Hockey: Canada's Royal Winter Game\" (1899), Mike Grant of the Montreal Victorias describes the point as \"essentially defensive. He should not stray too far from his place, because oftentimes he is practically a second goal-minder ... although he should remain close to his goal-keeper, he should never obstruct that man's view of the puck. He should, as a rule, avoid rushing up the ice, but if he has a good opening for such a play he should give the puck to one of the forwards on the first opportunity and then hasten back to his position, which has been occupied, in the interim, by the cover-point.\""} {"text":"Each year the NHL, the premier ice hockey league in the world, presents the James Norris Memorial Trophy to the best defenceman in the league. Bobby Orr of the Boston Bruins \u2013 an eight-time Norris Trophy recipient \u2013 is often considered to be the greatest defenceman in NHL and ice hockey history. 
In addition to his Norris Trophy honours, he is the only defenceman in NHL history to capture the Art Ross Trophy as the league's leading scorer. In 1998, Orr was selected as the best defenceman of all time (and second-best player overall, behind Wayne Gretzky) in \"The Hockey News\"' Top 100 NHL Players of all time."} {"text":"Meanwhile, according to the IIHF Centennial All-Star Team (also chosen by \"The Hockey News\"), the greatest defencemen to play in IIHF-sanctioned international competition are Vyacheslav Fetisov and B\u00f6rje Salming."} {"text":"Defence players are often described by how much they participate in the offence. The extreme of non-participation in offence is a \"stay-at-home\" defender, who takes few risks and does not score much, instead focusing on defending against the opposing team. A good example is Rod Langway, who won the Norris Trophy while scoring only three goals that season, even though the award winners preceding him were primarily offensive defencemen such as Bobby Orr, Denis Potvin, and Larry Robinson."} {"text":"The extreme of participation is an \"offensive defenceman\", who gets aggressively involved in the team's offence. To accomplish this, the offensive defence player often pitches in to keep the play from going offside and moves towards the halfboards and high-slot area for scoring opportunities. This makes it difficult for the opposing team to protect their net from being scored upon if the team can maintain control of the puck. However, this can lead to more odd-man rushes and breakaway opportunities for the opposing team if the defender does not succeed. Bobby Orr's end-to-end rushing allowed him to defend effectively as well as attack. By contrast, Paul Coffey enjoyed high offensive production but his defensive play was considered mediocre for most of his career."} {"text":"In the neutral zone, the defence hangs back towards his or her own blue line, usually playing the puck up to other teammates. 
According to Jay Leach, who writes for NHL.com's \"learn to play hockey\" section, the defence must \"Move the puck hard and quick to the open man. Join the rush, [but] do not lead it.\" Because of this responsibility, defencemen must read the other team's defensive strategy in order to make an effective first pass that furthers the offensive momentum without leaving the defenceman out of position should his team lose control of the puck. In certain situations the best option could be to skate the puck into the zone to maintain offensive speed as well as to prevent an offside."} {"text":"In the offensive zone, the defence skaters usually \"play the blue line.\" It is their duty to keep the puck in the offensive zone by stopping it from crossing the blue line that demarcates where the offensive zone begins. Should the puck cross this line, the offence cannot touch the puck in their opponent's zone without stopping play (see offside). Defencemen must be quick to pass the puck around, helping their forwards to open up shooting lanes, or taking open shots themselves when they become available. The defence must also be able to skate quickly to cut off any breakaways, moving themselves back into the defensive zone ahead of the onrushing opponent."} {"text":"Essentially in all three zones of the rink, the defence is the backstop for the puck. It should never get behind the defence unless the player deliberately allows it. The defence keeps the momentum of play squarely directed towards the opposing goal, or at least away from his own."} {"text":"Because defencemen are often expected to shoot on the opposing net from long range, these players often develop the hardest and most accurate slapshots. This is because taking a more stationary position on the blue line rewards pure accuracy and patience, rather than the adept hand\u2013eye coordination attributed to forwards. 
Al MacInnis, a seven-time winner of the \"Hardest Shot\" title in NHL skills competitions, was able to score frequently from the blue line because his slapshot was simply too fast to block effectively."} {"text":"When a team is on a power play, a defence player can set up plays in the offensive zone, and distribute the puck to the teammate that he or she feels is in the best position to score, similar to a point guard in basketball, a playmaker in soccer, and a quarterback in American football and Canadian football. For this reason, a defenceman will often be described as the power play \"quarterback\". This is also referred to as \"playing the point\" (this term derives not from the basketball position, but from an older name for the defence position in hockey itself)."} {"text":"During faceoffs in the defensive zone, most teams have their defence players pair up with opposing wingers to tie them up while leaving their team's forwards open to move the puck, though this is at the discretion of the individual coach. In the offensive zone, the defence player acts in his or her usual role, keeping control of the puck as the forwards fight for position."} {"text":"In the first organized ice hockey (see Amateur Hockey Association of Canada), defencemen lined up in an \"I\" formation behind the now-defunct rover as \"point\" and \"cover point\". Defence is still referred to as \"playing the point\", though this term now refers mostly to the role of defencemen on the power-play."} {"text":"The forecheck is an ice hockey defensive play made in the offensive zone with the objective of applying pressure to the opposing team to regain control of the puck. It is a type of checking. Forechecking is generally executed in one of three situations: recovering the puck after a dump-in, after a rebound on a scoring attempt, or immediately after a turnover to regain possession. 
Forechecking can be aggressive or conservative depending on the coaching style and on the skating skills of the players. Aggressive forechecking strategies are more suited for players with good skating mobility, while more conservative plays such as the neutral zone trap are better suited for players with less agility."} {"text":"In ice hockey, butterfly style is a technique of goaltending distinguished by the goaltender guarding the lower part of the net by dropping to the knees to block attempts to score. The butterfly style derives its name from the resemblance of the spread goal pads and hands to a butterfly's wings. The \"butterfly style\" is contrasted with stand-up style, where most shots on goal are stopped with the goaltender on his feet."} {"text":"Many factors helped make it a \"de facto\" standard style of play today, including the popularization of the goalie mask by Jacques Plante, Vladislav Tretiak's outstanding use of the style at the 1972 Canada\u2013USSR Summit Series, the emergence in the National Hockey League (NHL) of Tony Esposito in the 1970s and Dominik Hasek in the 1990s, the development of lightweight materials for pads and the influence of professional goaltending coaches such as Warren Strelow, and Benoit and Fran\u00e7ois Allaire."} {"text":"There are few goaltenders in the NHL who exclusively employ a stand-up style. Although it is effective and popular among goaltenders, the butterfly style can leave the upper portion of the net more vulnerable to scoring attempts."} {"text":"The modern profly derivative was made most popular by Patrick Roy and is the style most commonly used and taught. The profly style is a specialized progression of the butterfly style. The name derives from a goaltending leg pad model designed specifically for the use of the butterfly. 
The term eventually evolved into a style for goaltenders who tend to use the butterfly save technique as a base for the majority of their save selections."} {"text":"The term \"hybrid\" is commonly used to measure how far a goaltender strays from using the butterfly technique as a base save. Some goaltending circles use \"hybrid\" as a middle term between a pure butterfly goaltender and a pure stand-up goaltender."} {"text":"As in many arts, there is no universal agreement on style classifications with modern goaltending techniques. Modern hybrid coaches such as the late Warren Strelow worked with goaltenders associated with the profly style such as Miikka Kiprusoff. The butterfly is not a style but a save selection used by most goaltenders."} {"text":"The butterfly style is contrasted with \"stand-up\" style goaltenders. The \"profly\" and the \"hybrid\" are more specialized progressions of collections of technical moves enveloped within the modern \"butterfly\" style. The butterfly term is often used to describe the newer \"profly\" style of goaltending, refined by players including Ed Belfour and made popular in the early 2000s by goaltenders such as Rick DiPietro, Martin Biron, Roberto Luongo, Marc-Andr\u00e9 Fleury, Marc Denis, Henrik Lundqvist and Jean-S\u00e9bastien Gigu\u00e8re, the last being very profly-oriented."} {"text":"The original \"stand-up\" style is considered obsolete by modern goaltending circles. However, there are still a few remaining goaltenders who are commonly said to sit at the far end of the hybrid spectrum, opposite a pure profly goaltender. These few are often considered to play a \"modern stand-up\" style of goaltending. A modern stand-up goaltender almost never completely commits to a full butterfly and stays on their feet as much as possible. Modern stand-up goaltenders commonly have excellent mobility on their skates and show above-average proficiency in puck-handling and making saves with their stick. 
Martin Brodeur was arguably the last stand-up goaltender remaining in the NHL."} {"text":"There are a number of other recent technical innovations in response to the puck and shooter position on the ice."} {"text":"A hallmark of profly is that the puck-side leg stays down when recovering to the skates fully upright to reposition for a rebound or second shot. Rather than picking up the leg closest to the puck, the goaltender raises the leg furthest from the puck, then pushes the puck-side leg toward the puck. At this point, the goaltender may roll back onto the puck-side skate blade, facing the shooter in the familiar ready stance."} {"text":"Profly goaltenders tend to have an easier time \"skating\" on their knees, also known as the \"backside push\" or the \"butterfly slide\". In this move one leg is down and one is up; the goaltender pushes off the heel of the raised leg, laterally toward the down leg. This allows for a slide from the up leg to the down leg without getting off the ice completely. If a goaltender rests on the inside corners or faces of the pads, as in non-progressed \"butterfly\" styles, the push tends to roll the goaltender over onto his chest and belly."} {"text":"The V-H move (also called the Split Butterfly or loading the post) is a move closely identified with profly-style goaltenders. It is a relatively recent tactical response to a shooter who is advancing from behind the net towards the front of the net and has the option to pass. The goaltender places the knee farthest from the shooter down horizontally along the goal line. The knee closest to the puck remains vertical next to the goal post. The advantage is coverage against quick shots to the near side of the net, while still being able to track passes to the front of the goal mouth."} {"text":"Enforcer is an unofficial role in ice hockey. The term is sometimes used synonymously with \"fighter\", \"tough guy\", or \"goon\". 
An enforcer's job is to deter and respond to dirty or violent play by the opposition. When such play occurs, the enforcer is expected to respond aggressively, by fighting or checking the offender. Enforcers are expected to react particularly harshly to violence against star players or goalies."} {"text":"Enforcers are different from pests, players who seek to agitate opponents and distract them from the game, without necessarily fighting them. The pest's primary role is to draw penalties from opposing players, thus \"getting them off their game\", while not actually intending to fight the opposition player (although exceptions to this do occur). Pests and enforcers often play together on the same line, usually the fourth line."} {"text":"At present in the National Hockey League (NHL), teams generally do not carry more than one player whose primary role is that of an enforcer. Enforcers can play either forward or defense, although they are most frequently used as wingers on the fourth forward checking line. Prized for their aggression, size, checking ability, and fists, enforcers are typically less gifted at skill areas of the game than their teammates. Enforcers are typically among the lowest scoring players on the team and receive a smaller share of ice time. They are also not highly paid compared to other players, and tend to move from team to team."} {"text":"Enforcers sometimes take boxing lessons to improve their fighting. Some players combine aspects of the enforcer role with strong play in other areas of the game. Tiger Williams, Bob Probert, and Chris Simon are examples of enforcers who showed an occasional scoring flair, with Williams and Probert playing in the midseason All-Star Game. 
Terry O'Reilly once scored 90 points in a season and was the first player to finish among the top ten regular-season scorers while amassing at least 200 penalty minutes; he later became captain of the Boston Bruins."} {"text":"Sometimes enforcers can do their job by virtue of their reputation. Clark Gillies was among the best fighters in the NHL during his prime, but over time he rarely had to fight because opponents respected and feared him enough that they would not go after his teammates. Some skilled players, such as legends Gordie Howe and NHL all-star Jarome Iginla, are also capable fighters and can function effectively as their own enforcer. A \"Gordie Howe hat trick\" is a player scoring a goal, assisting on a goal, and being involved in a fight during a single game."} {"text":"In the 1970s, the Boston Bruins and Philadelphia Flyers were known respectively as the \"Big Bad Bruins\" and \"Broad Street Bullies\", for stocking up on grinders and enforcers."} {"text":"Retired enforcer Georges Laraque has suggested the National Hockey League Players' Association provide counselling to enforcers, and sports journalist and writer Roy MacGregor opines that in light of recent tragic events more should be done about it, including eliminating the role altogether. \"New York Times\" sportswriter John Branch covered the death of enforcer Derek Boogaard and the epidemic of chronic traumatic encephalopathy that has come as a result of frequent head trauma sustained by hockey enforcers."} {"text":"The invention of the saucer pass is commonly credited to the Finnish ice hockey legend Raimo Helminen. 
According to the book \"Raipe - vaatimattomuuden lyhyt oppim\u00e4\u00e4r\u00e4\", he invented the pass as a young child in Koivistonkyl\u00e4, Tampere, Finland, while playing against grown men from his neighborhood."} {"text":"A breakaway is a situation in ice hockey in which a player with the puck has no defending players, except for the goaltender, between himself and the opposing goal, leaving him free to skate in and shoot at will (before the out-of-position defenders can catch him). A breakaway is considered a lapse on the part of the defending team. If a player's progress is illegally impeded by an opposing player or if the goalie throws his stick at the oncoming player, the breakaway player is awarded a penalty shot. If a player faces an empty net (i.e. the opposing team has pulled their goalie) and is illegally impeded by an opposing player, he is automatically awarded a goal for his team instead of taking a penalty shot."} {"text":"The 2-1-2 forecheck, or pinch on a wide rim, is an ice hockey forechecking strategy which uses two forwards deep in the offensive zone, with the remaining forward positioned high in the offensive zone, and the two defencemen positioned at the highest part of the zone near the blue line. This forecheck is used to apply both mental and physical pressure on the opposing team as they try to move the puck out of their defensive zone, with the objective of forcing a turnover. The positioning of the players removes options for moving the puck along the boards, forcing the play to the middle."} {"text":"Each of the five skaters has a specific role in the execution of the 2-1-2 forecheck."} {"text":"This system of forechecking requires good skaters in order to be successful. 
The Edmonton Oilers during their dynasty years were such a club and made use of the 2-1-2 forecheck."} {"text":"In ice hockey, power forward (PWF) is a loosely applied characterization of a forward who is big and strong, equally capable of playing physically or scoring goals, and who would most likely have high totals in both points and penalties. It is usually used in reference to a forward who is physically large, has the toughness to dig the puck out of the corners, possesses offensive instincts, mobility, and puck-handling skills, may be difficult to knock off the puck or to push away from the front of the goal, and willingly engages in fights when he feels it is required. Possessing both physical size and offensive ability, power forwards are also often referred to as 'complete' hockey players."} {"text":"\"Power forward\" was not originally a hockey term; it found its comparatively recent origins in basketball. Harry Sinden, former president of the Boston Bruins, claims \"power forward\" first became part of hockey terminology because of the style of play of Cam Neely, an NHL player from 1983 to 1996, who could play ruggedly and also score goals."} {"text":"Charlie Conacher pioneered the style of the power forward in the 1930s, while Gordie Howe is likewise considered a quintessential example of a power forward in the decades before the term entered hockey vernacular."} {"text":"In February 2001, \"Hockey Digest\" published a list of the NHL's best pests. They were: Bob Kelly, Matt Cooke, Esa Tikkanen, Tomas Holmstr\u00f6m, Darius Kasparaitis, Ian Laperri\u00e8re, Tyson Nash, Todd Harvey, Matthew Barnaby, Kris Draper, Bill Lindsay, Jamal Mayers and Steve Staios."} {"text":"In 2009, \"Sports Illustrated\" also compiled their own list of \"Notable Pests of the NHL\". 
Their list was: Sean Avery, Claude Lemieux, Steve Ott, Jordin Tootoo, Jarkko Ruutu, Matt Cooke, Alexandre Burrows, Chris Neil, Ian Laperri\u00e8re, Darcy Tucker, Chris Simon, Matthew Barnaby, Theo Fleury, Pat Verbeek, Esa Tikkanen, Ken Linseman and Tiger Williams."} {"text":"This position is commonly referred to by the side of the rink that the winger normally takes, i.e. \"left wing\" or \"right wing.\" The side of the rink a player takes traditionally related to the side he shoots from (i.e. a left-shooting player playing left wing), but in recent decades more wingers have played the \"off wing\", the side opposite the direction they shoot, which enables a faster-release shot when receiving a pass while standing stationary in the offensive zone."} {"text":"The wingers' responsibilities in the offensive zone include the following:"} {"text":"Wingers should play high in the zone (close to the blue line), typically covering the opposing defencemen, meaning they block passes to the defencemen and block the defencemen's shots. Wingers should always be vigilant for a breakout pass or a chance to chip the puck past the defenceman of the opposing team across the blue line. When wingers receive a pass along the boards, they can exercise a number of options:"} {"text":"Wingers are usually the last players to backcheck out of the offensive zone. On the backcheck, it is essential that they cover the last free opposing player rushing in. Once the puck is controlled by the opposing team in the defensive zone, however, wingers are responsible for covering the defenceman on their side of the ice."} {"text":"Prior to the puck being dropped for a face-off, players other than those taking the face-off must not make any physical contact with players on the opposite team, nor enter the face-off circle (where marked). 
After the puck is dropped, it is essential for wingers to engage the opposing players to prevent them from obtaining possession of the puck."} {"text":"Once a team has established control of the puck, wingers can set themselves up into an appropriate position."} {"text":"Some wingers are also employed to handle faceoffs."} {"text":"In ice hockey, cycling is an offensive strategy that moves the puck along the boards in the offensive zone to create a scoring chance by making defenders tired or moving them out of position."} {"text":"In ice hockey, a screen is obstruction by a player of the goaltender's view of the puck. The word can also be used as a verb, commonly \"don't \"screen\" the goaltender\", or \"the goalie was \"screened\"\". Screens can be either planned, as when an attacking forward positions himself in front of the net, or accidental, as when a defenceman inadvertently blocks the goaltender's view. Attacking players may attempt to take advantage of a screen by taking a shot, which is more difficult for the opposing goaltender to save if he is being screened."} {"text":"The most recognizable implementation of the trap sees the defense stationing four of their players in the neutral zone and one forechecker in the offensive zone. As the offensive team starts to move up the ice, the forechecker (generally the center) will cut off passing lanes to other offensive players by staying in the middle of the ice, forcing the puck carrier to either sideboard. The defensive wingers\u2014typically placed on or near the red line\u2014will be positioned by the boards to challenge the puck carrier, prevent passing, or even keep opponents from moving through. 
The two defencemen who are positioned on or near the blue lines are the last line of defence, and must stall the opposition long enough for the wingers to reset themselves and continue the trap."} {"text":"Checking in ice hockey is any of a number of defensive techniques aimed at disrupting an opponent with possession of the puck or separating them from the puck entirely. Most types are not subject to penalty."} {"text":"New NHL standard of rule enforcement, 2005\u201306."} {"text":"For the 2005\u201306 season, the NHL instituted stricter enforcement of many checking violations that in previous seasons would not have been penalized. The intent of the new standard of enforcement was to fundamentally alter the way ice hockey is played, rewarding speed and agility over brute strength, as well as increasing opportunities for scoring and minimizing stoppage of play. However, it is unclear how expanding the definition of a penalty would minimize the stoppage of play, as penalty calls entail play stoppage. One explanation may be that more clearly defined rules give players more distinct boundaries on penalties, resulting in fewer penalties. The intended result is a faster-paced game with generally higher scores than in previous years."} {"text":"New USA Hockey rules on checking, 2011\u201312."} {"text":"Beginning in the 2011\u201312 season, USA Hockey moved the age of legal body checking from 12U to 14U. The discussion of this rule change began with a look into the Peewee (12U) and Squirt (10U) levels of hockey. Through observation, it was clear that Squirts skate more aggressively and try to play in the correct manner. Peewees in similar situations would either let the opponent get the puck first so they can check them or hold back so they don't get hit themselves. Injury was not an initial concern, but with research it was brought into the discussion. Research shows that the 11-year-old brain has not yet developed the skills needed to anticipate contact. 
As a result, Peewees suffer four times as many injuries in checking hockey as in non-checking hockey."} {"text":"An extra attacker in ice hockey is a forward or, less commonly, a defenceman who has been substituted in place of the goaltender. The purpose of this substitution is to gain an offensive advantage to score a goal. The removal of the goaltender for an extra attacker is colloquially called \"pulling the goalie\", resulting in an empty net."} {"text":"The extra attacker is typically utilized in two situations:"} {"text":"The term sixth attacker is also used when both teams are at even strength; teams may also pull the goalie when shorthanded by a player, in which case the extra attacker would be a fifth attacker. It is exceptionally rare for a penalized team to do so during five on three situations."} {"text":"Also, in overtime, an extra attacker is added automatically when a team already down one player because of a penalty is penalised for a second minor penalty; the team on the power play will play five on three for the rest of the two-man advantage, and until the next whistle. In leagues with a three on three overtime, each minor penalty results in an extra attacker for the team on the power play."} {"text":"Russian and Soviet coaches are known for refusing to pull their goalies when behind late in games, as was the case in the 1980 Winter Olympics medal game between the Soviet Union and the USA."} {"text":"The extra attacker concept was first utilized in the NHL by Art Ross, coach and general manager of the Boston Bruins, who picked up the idea from experimental incidents in amateur and minor-league hockey. 
In a playoff game against the Montreal Canadiens on March 26, 1931, Ross had goaltender Tiny Thompson go to the bench for a sixth skater in the final minute of play; the Bruins failed to score and lost the game 1\u20130."} {"text":"A 2018 model by Aaron Brown and Cliff Asness based on the 2015\u201316 NHL season suggested that, for a team down one goal where losing 2\u20130 is no worse than losing 1\u20130, the ideal time to pull the goalie is somewhere between 5 and 6 minutes from the end of the match."} {"text":"Each team has three forwards on each line:"} {"text":"The left wing lock is a defensive ice hockey strategy similar to the neutral zone trap."} {"text":"In the most basic form, once puck possession changes, the left wing moves back in line with the defencemen. Each defender (including the left winger) plays a zone defence and is responsible for a third of the ice. Since there are normally only two defencemen, this tactic helps to avoid odd-man rushes."} {"text":"With the reinforced defensive line, the centre and right wing forecheck aggressively. Often the forecheckers will try to drive the puck over to the opponent's right wing."} {"text":"Under coach Scotty Bowman, the Detroit Red Wings began using \"the lock\" heavily during the 1994-95 NHL season, earning the Presidents' Trophy for the league's best record during the regular season. The following season Detroit was even more dominant, finishing one point short of the NHL record for most points in a season by a team. However, the system broke down during the playoffs each year, especially as they were frustrated by the neutral zone trap strategy employed by Jacques Lemaire's New Jersey Devils in the 1995 Stanley Cup Finals. 
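The goalie-pulling trade-off behind the Brown and Asness result mentioned earlier (pulling raises the trailing team's scoring rate, but raises the opponent's empty-net rate far more, so it only pays once little enough time remains) can be sketched as a toy backward-induction program. Everything below is an illustrative assumption, not the authors' model: the per-minute scoring rates, the horizon, and the simplifications that reaching a tie counts as success and that a three-goal deficit is a loss.

```python
# Toy dynamic program for the pull-the-goalie timing question.
# All scoring rates are made-up assumptions for illustration; the point is
# only to show why an optimal pull threshold emerges well before the buzzer.

def pull_threshold(
    lam_score=0.05,      # goals/min for the trailing team, goalie in net (assumed)
    lam_concede=0.05,    # goals/min against, goalie in net (assumed)
    lam_score_6v5=0.10,  # goals/min for the trailing team, net empty (assumed)
    lam_empty_net=0.30,  # goals/min against into the empty net (assumed)
    horizon=20.0,        # minutes of regulation considered
    dt=0.01,             # time step in minutes
):
    """Return (threshold, policy): threshold is the largest time-to-go (minutes)
    at which pulling is optimal for a team down one goal; policy[k] is True if
    pulling is optimal with k*dt minutes left."""
    steps = int(horizon / dt)
    # V[d] = P(at least tying) with a deficit of d goals at the current
    # time-to-go; d=0 is success (tie reached), a deficit above 3 is a loss.
    V = [1.0, 0.0, 0.0, 0.0]
    policy = [False] * (steps + 1)
    threshold = 0.0
    for k in range(1, steps + 1):
        new = V[:]
        pulled = False
        for d in (1, 2, 3):
            up = V[d - 1]                      # value after scoring
            down = V[d + 1] if d < 3 else 0.0  # value after conceding
            # One small-dt step of the scoring/conceding race, with the
            # goalie either kept in or pulled; take the better choice.
            keep = V[d] + dt * (lam_score * (up - V[d]) + lam_concede * (down - V[d]))
            pull = V[d] + dt * (lam_score_6v5 * (up - V[d]) + lam_empty_net * (down - V[d]))
            new[d] = max(keep, pull)
            if d == 1:
                pulled = pull > keep
        V = new
        policy[k] = pulled
        if pulled:
            threshold = k * dt
    return threshold, policy
```

With these invented rates the program finds a pull threshold of a few minutes: with lots of time left the risk of conceding into the empty net outweighs the doubled scoring rate, while late in the game a second goal against no longer matters. The qualitative shape, not the exact number, is what the sketch is meant to show.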
It was not until 1997 that Detroit broke through and finally matched their regular-season success with a Stanley Cup championship."} {"text":"Although \"the lock\" was made famous by the Red Wings and has been used to great success in their Stanley Cup runs in the past decade, they are not credited with inventing it. The \"lock\" was invented in Czechoslovakia to work against the dominant Soviet teams of the 1970s. A former assistant coach under Scotty Bowman, Barry Smith, was credited with seeing the left wing lock in Europe and bringing it back to the Red Wings."} {"text":"The simplicity of \"the lock\" has made it popular at all levels of hockey and it is not uncommon to see it implemented in youth hockey."} {"text":"While grinder often refers to a player of lesser offensive skills, this is not always the case. Hockey Hall of Fame inductee Bobby Clarke of the 1970s and 80s Philadelphia Flyers was considered a grinder, but was also a highly productive offensive player. While a \"grinder\" plays a physical style of hockey, they are distinguished from an \"enforcer\". While most \"grinders\" will fight, some do not; \"grinder\" refers specifically to a style of defensive hockey which is within the rules of the game. Sometimes grinder is used in combination with \"mucker\" to describe a player as a \"mucker and a grinder\", although this is mainly for emphasis. In this context, mucker is largely synonymous with grinder."} {"text":"Indicative of the importance of the grinder is that Bobby Clarke and Mike Eruzione, both grinder-style players, played major roles in their respective countries' victories over the offensively-skilled Soviet Union national team. Clarke was a significant factor in Team Canada's victory in the 1972 Summit Series, as was Eruzione as captain for the United States' Olympic team in the 1980 \"Miracle on Ice\" victory. 
Clarke received the Selke Trophy as best defensive forward late in his playing career."} {"text":"In 2012, \"The Hockey News\" named Dave Bolland of the NHL's Chicago Blackhawks as \"Best Grinder\"."} {"text":"In ice hockey, a two-way forward is a forward who handles the defensive aspects of the game as well as the offensive aspects. Typically, a player's frame is not an issue in whether he can be a two-way forward. Perseverance is key to being a two-way forward, as it is an attribute that gives rise to battling in the corners or preventing odd-man rushes by the opposing team. A two-way forward can contribute to the team both offensively and defensively, scoring important game-winning goals or making big plays from which his team receives a significant advantage over the opposing team. As such, good two-way forwards are often capable playmakers."} {"text":"Two-way forwards who do not have top offensive numbers are sometimes left in the shadows of high-scoring forwards and so are rarely named to all-star games or all-star teams, but commentators often reiterate their importance to a team. The National Hockey League (NHL) presents its best two-way forward with the Frank J. Selke Trophy, awarded to the forward \"who demonstrates the most skill in the defensive component of the game\"."} {"text":"The torpedo system is used in international hockey by the Swedish team, due to the large ice surface and the lack of a two-line pass offside rule. It contrasted with the neutral zone trap, which was popular in the 1990s and which stifled fast skating and playmaking by crowding the neutral zone with players. The system was originated by the Boston Bruins of the late 1950s; it was later adopted by the Chicago Blackhawks during the 1960s. 
The torpedo system could not be completely implemented in the National Hockey League until 2005, when the red line was eliminated for offside purposes, allowing two-line passes to spring the torpedoes."} {"text":"The term was used to describe the Swedish national men's hockey team's approach during the 2002 Winter Games, which was punctuated by a preliminary 5-2 win over the eventual gold-medal-winning Canadian team."} {"text":"Loafing, floating, or cherry picking in ice hockey is a manoeuvre in which a player, the floater (usually a forward, but occasionally a defenceman who used to play forward but can no longer skate the complete length of the ice at pace), literally loafs \u2014 spends time in idleness \u2014 or casually skates behind the opposing team's unsuspecting defencemen while they are in their attacking zone. It is very similar to the cherry picking tactic sometimes used in basketball. The controversy surrounding it is also very similar to that surrounding cherry picking in basketball."} {"text":"The tactic is used sparingly: although it sometimes creates a breakaway opportunity for the defending team should they manage to take control of the puck and pass it to the floater, it also creates a five-on-four situation (during even-strength play) for the attacking team. Also, a good defenceman usually keeps an eye open for these situations as they develop and will immediately backcheck once a floater is spotted."} {"text":"A deke, a feint or fake, is an ice hockey technique whereby a player draws an opposing player out of position or skates by an opponent while maintaining possession and control of the puck. 
The term is a Canadianism formed by abbreviating the word \"decoy\"."} {"text":"One type is the \"head fake\", using a movement of the head to mislead an opposing player about the puck carrier's movements or intentions."} {"text":"A more complex deke is the \"toe drag\", a deke in which the puck carrier brings the puck forward on their forehand, and subsequently turns their stick and pulls the puck towards themselves with the toe of the blade, while moving past the defender, who has presumably attempted to poke check the puck in its previous position."} {"text":"On defense in American football, rushing is charging across the line of scrimmage towards the quarterback or kicker in the effort to stop or \"sack\" them. The purpose is tackling, hurrying, or flushing the quarterback, or blocking or disrupting a kick. In both college and professional football, getting a strong pass rush is an important skill, as even an average quarterback can be productive if he has enough time to find an open receiver, even against a good secondary. To increase pressure, teams will sometimes use a pass-rushing specialist, who is usually a quick defensive end or outside linebacker tasked with aggressively rushing the quarterback in obvious passing situations."} {"text":"One of the most effective methods of rushing the passer is the stunt or twist, in which defensive players quickly change positions at the snap of the ball and engage a different blocker than the offense expected. Defenses typically task three or four defensive linemen with rushing the passer on most plays, but most will occasionally increase pressure by blitzing one or more non-linemen at the quarterback when a pass play is anticipated."} {"text":"A pass rush can be effective even if it does not sack the quarterback if it forces the passer to get rid of the ball before he wanted to, resulting in an incomplete pass or interception. 
To attack a strong pass rush, offenses can throw quicker short passes or run draw plays or screen passes, which are designed to lure defenders into the offensive backfield and then quickly get a ball carrier behind them."} {"text":"The run and shoot offense (also known as Run N' Shoot) is an offensive system for American football which emphasizes receiver motion and on-the-fly adjustments of receivers' routes in response to different defenses. It was conceived by former high school coach Glenn \"Tiger\" Ellison and refined and popularized by former Portland State offensive coordinator Mouse Davis."} {"text":"The run and shoot system uses a formation consisting of one running back and usually four wide receivers. This system makes extensive use of receiver motion (having a receiver suddenly change position by running left or right, parallel to the line of scrimmage, just before the ball is snapped), both to create advantageous mismatches with the opposing defensive players and to help reveal what coverage the defense is using. If a defender stays with the motioning receiver, it would imply man-to-man coverage."} {"text":"The basic idea behind the run and shoot is a flexible offense that adjusts \"on the fly,\" with the receivers changing their routes based on the defensive coverage and the play of the defenders covering them. The quarterback then not only reads the defensive coverage to determine where to throw the ball, but must also read the defenders to determine the probable route his receivers may run. As a result, the offense is considered complex and difficult to implement due to the intelligence and communication required between quarterback and receivers. 
The offense also typically relies heavily on the pass, sometimes throwing the ball on upwards of 65 to 75% of plays in a game or over the course of a season."} {"text":"In the purest form of the offense, the proper complement would consist of two wide receivers lined up on the outside edges of the formation and two \"slotbacks\" (wide receivers who line up one step back from the line of scrimmage, so as not to be considered \"covered\" and thus ineligible) lined up just outside and behind the two offensive tackles. The formation would look very similar to the Flexbone Offense formation."} {"text":"The original inventor of the run and shoot, Glenn \"Tiger\" Ellison, started out with a formation that overloaded the left side of the offensive line for his scrambling quarterback. He called it \"The Lonesome Polecat\"."} {"text":"Many of the National Football League teams that used the run and shoot in the early 1990s fielded true wide receivers in all four receiving positions."} {"text":"Originally, the run and shoot was set up so the quarterback would be under center with the running back lined up a few yards behind him. Later, during his tenure with the University of Hawaii, June Jones used quarterback Colt Brennan out of the shotgun. In this case, the running back is usually offset to the right or left of the quarterback."} {"text":"Also at Hawaii, Nick Rolovich tweaked the formation to run out of the pistol, thus creating an opportunity for a mobile quarterback to become a second running back. This led to increased success in the running game."} {"text":"Another formation that can often be seen with the run and shoot is the trips formation, where three wide receivers are situated to the right or left side of the line of scrimmage. 
Most of the time, this formation will be created out of motion, when the W or Y receiver moves to the opposite side of the formation, helping force defenses to declare whether they are in man-to-man coverage or zone defense."} {"text":"The Portland State Vikings under head coach Mouse Davis went 42\u201324 in his tenure, installing the offense and putting the system on the map. Quarterback Neil Lomax set many records, including the NCAA mark for career passing yards."} {"text":"At the University of Hawaii, June Jones went 76\u201341, a tenure that saw quarterback Timmy Chang set the record for most NCAA completions and passing yards in 2004 and quarterback Colt Brennan set the record for touchdown passes in 2006 with 58. In 2018, Hawaii brought back the run and shoot offense under former Hawaii QB and head coach Nick Rolovich."} {"text":"A hard count by a quarterback at the beginning of a gridiron football play is an audible snap count that uses an irregular, accented (thus, the term \"hard\") cadence. When used, the center will hike the ball to the quarterback on an accented syllable (for example, \"hut one ... hut two ... hut three ... hut hut HUT\")."} {"text":"Quarterbacks can use a snap count with two or more accented syllables in the hope of drawing an opposing player offside before the last accented syllable (for example, \"hut one ... hut two ... hut three hut HUT ... hut HUT\"). A loud home crowd can deprive a visiting quarterback of the ability to use this strategy."} {"text":"This play is often used in a fourth down situation, when fewer than 5 yards are needed for a first down. 
If the defense jumps offside, they are penalized 5 yards, resulting in a first down for the offense."} {"text":"When the hard count is used on fourth down and the defense does not go offside, the offense can either call a time out or take a five-yard penalty for delay of game and punt the ball away; the latter may also be done purposefully to burn time near the end of the game before kicking a field goal to tie or win the game."} {"text":"If the defense jumps offside but the offense begins their play, it is called a \"free play\": if the offense gains yardage or scores a touchdown, they can decline the penalty and benefit from the gain or score, while if they throw a risky pass that is intercepted or are taken down for a safety in their own end zone, the turnover or defensive score is nullified by the offside penalty."} {"text":"The offense may choose to use the hard count throughout the game in an attempt to confuse the defense and get them to play more conservatively."} {"text":"The offense's own offensive line is sometimes fooled by the hard count, resulting in a false start infraction."} {"text":"The multiple offense is an American football offensive scheme used by several teams in the National Football League and college football. It is a hybrid offense consisting of formations and plays from various other schemes, including the pro-style offense, spread offense, and pistol offense, among others."} {"text":"The multiple offense allows for a wide variety of play calls and formations, from spreading the field with 4 or 5 wide receivers to utilizing fullbacks and tight ends to establish a power running game. As such, it can be adjusted to fit the skills of available offensive personnel and can be difficult for an opposing defense to scout and prepare for. 
On the other hand, it can result in an offense which is \"mediocre at everything\", especially in college football, where practice time is limited."} {"text":"In American football, the dime defense is a defensive alignment that uses six defensive backs. It is usually employed in obvious passing situations. The formation typically consists of six defensive backs, usually two safeties and four cornerbacks, and has either four down linemen and one linebacker, or three down linemen and two linebackers. This formation is used to prevent the offense from completing a medium- to long-range pass play. This may be because the offense's running game is inefficient, time is an issue, or they need a long pass for a first down. It is also used against teams whose pass-to-run ratio predominantly favors the pass. The formation, however, is vulnerable to running plays, as it is missing either two linebackers or a linebacker and a down lineman."} {"text":"A dime defense differs from the nickel defense \u2013 from which it derives its name \u2013 in that it adds a sixth defensive back to the secondary. This sixth defensive back is called a \"dimeback\" (D). The defense gets its name because a dime, worth ten cents, is the next step up in United States coin currency from a nickel, which is worth 5 cents."} {"text":"There are also \"quarter\" and \"half-dollar\" formations, each protecting against progressively deeper and more likely pass attempts. In 2010, the New York Giants consistently added an extra safety instead of an extra cornerback, resulting in three safeties and three cornerbacks. This has been called a \"giant dime\"."} {"text":"A hot route is a short passing route in American football used to escape a potential sack from a blitzing defense."} {"text":"A hot route is a variation on the regular running route for a running back. It usually results from an audible called by a quarterback, based on a read of a blitzing defense. 
If the defense does not blitz, the running back runs the regular route. If the defense does blitz, the running back will, instead of blocking the blitzing defensive player, run a short route, such as a bubble screen, and catch the ball as the quarterback dumps it off quickly."} {"text":"The Tampa 2 is an American football defensive scheme popularized by (and thus named after) the Tampa Bay Buccaneers National Football League (NFL) team in the mid-1990s\u2013early 2000s. The Tampa 2 is typically employed out of a 4\u20133 defensive alignment, which consists of four linemen, three linebackers, two cornerbacks, and two safeties. The defense is similar to a Cover 2 defense, except that the middle linebacker drops into deep middle coverage for a Cover 3 when he reads a pass play."} {"text":"The term rose to popularity due to the installation and effective execution of this defensive scheme by then-head coach Tony Dungy and defensive coordinator Monte Kiffin, and the style helped the Buccaneers win Super Bowl XXXVII."} {"text":"The roots of the Tampa 2 system actually come from the Pittsburgh Steelers and their Steel Curtain defense of the 1970s. \"My philosophy is really out of the 1975 Pittsburgh Steelers playbook,\" said Dungy (who played for the Steelers early in his career) during media interviews while at Super Bowl XLI. \"That is why I have to laugh when I hear 'Tampa 2'. Chuck Noll and Bud Carson\u2014that is where it came from, I changed very little.\" Lovie Smith mentions having played the system in junior high school during the 1970s, though Carson introduced the idea of moving the middle linebacker into coverage. 
Carson's system became especially effective with the Steelers' addition of aggressive and athletic middle linebacker Jack Lambert."} {"text":"After Dungy became head coach of the Indianapolis Colts and Lovie Smith (linebackers coach in Tampa from 1996\u20132000) became head coach of the Chicago Bears, they installed the Tampa 2 with their respective teams. During the 2005 NFL season, the Buccaneers, still under defensive coordinator Kiffin, ranked first in the league in fewest total yards allowed, Smith's Bears ranked second, and Dungy's Colts ranked eleventh. By 2006, the Buffalo Bills, Minnesota Vikings, Kansas City Chiefs, and Detroit Lions had also adopted the defense. In college football, Gene Chizik is among the coaches who have successfully implemented the Tampa 2."} {"text":"The scheme is known for its simple format, speed, and the aggressive mentality of its players. Tampa 2 teams are known as gang tacklers with tremendous team speed, and practice always running to the ball. The scheme also requires a hard-hitting secondary to cause turnovers."} {"text":"The personnel used in the Tampa 2 are specific in position and required abilities. All positions in this defense place a premium on speed, and the result is often that the players are all undersized by league standards. The defensive linemen in this scheme have to be quick and agile enough to create pressure on the quarterback without the aid of a blitz from either the linebackers or the secondary, with the defensive tackle in the nose position having above-average tackling skills to help stop runs. Warren Sapp is often cited as the primary example of a defensive lineman who flourished in this scheme; indeed, he is now reckoned as the prototype three-technique defensive tackle."} {"text":"The Tampa 2 is particularly effective against teams who are playing from behind, because it limits big plays. It forces offenses to be patient and to settle for short gains and time-consuming drives. 
This may be due to the nature of the \"bend-but-don't-break\" 2-deep zone coverage scheme and the responsibilities the safeties carry in the Tampa 2."} {"text":"When executed properly, the Tampa 2 defense is difficult to beat, a reason for its longevity; it has seen no fundamental changes since it was first introduced in 1996. Teams that have been successful against this defense have managed to run the ball up the middle past the defensive tackles, or throw passes in the seams between the outside linebackers and the cornerbacks (often the most effective receiver against a Tampa 2 defense is a tight end, since he often lines up against this seam)."} {"text":"To defend running plays, the Tampa 2 is a single-gap defense in which each player is responsible for covering his own gap. The assigned gap changes with game conditions and personnel."} {"text":"Typically this style of defense utilizes smaller but faster linemen and linebackers with above-average speed. Also, the defensive backs must be above-average hitters."} {"text":"The key theme in stopping the run is directing traffic to the weak-side linebacker. It is therefore necessary to have a skilled tackler at the WLB position (e.g., Derrick Brooks, Lance Briggs, Sean Lee)."} {"text":"Wildcat formation describes an offensive formation in football in which the ball is snapped not to the quarterback but directly to a player of another position lined up at the quarterback position. (In most systems, this is a running back, but some playbooks have the wide receiver, fullback, or tight end taking the snap.) The Wildcat features an unbalanced offensive line and looks to the defense like a sweep behind zone blocking. A player moves across the formation prior to the snap. However, once this player crosses the position of the running back who will receive the snap, the play develops unlike the sweep."} {"text":"The Wildcat is a gambit rather than an overall offensive philosophy. It can be a part of many offenses. 
For example, a spread-option offense might use the Wildcat formation to keep the defense guessing, or a West Coast offense may use the power-I formation to threaten a powerful run attack."} {"text":"One possible precursor to the wildcat formation was the \"wing-T\", widely credited as first implemented by Coach Tubby Raymond and the Delaware Fightin' Blue Hens football team. Tubby Raymond later wrote a book on the innovative formation. The wildcat's similarity to the wing-T is the focus on series football, where the initial movements of every play look similar. For example, the wing-T also makes use of motion across the formation in order to draw a reaction from the defense, but runs several different plays from the same look."} {"text":"Another possible precursor to the wildcat is the offense of six-man football, a form of high school football, played mostly in rural West Texas and Montana, that was developed in 1934. In six-man, the person who receives the snap may not run the ball past the line of scrimmage. To bypass this limitation, teams often snap the ball to a receiver, who then tosses the ball to the potential passer. The passer may then throw the ball to a receiver or run with the ball himself."} {"text":"The virtue of having a running back take the snap in the wildcat formation is that the rushing play is 11-on-11, although different variations have the running back hand off or throw the football. In a standard football formation, when the quarterback stands watching, the offense operates on a 10-on-11 basis. The motion also presents the defense with an immediate threat to the outside that it must respect no matter what the offense decides to do with the football."} {"text":"\"The Wall Street Journal\" credited Hugh Wyatt, a longtime coach in the Pacific Northwest, with naming the offense. 
Wyatt, coaching the La Center High School Wildcats, published an article in \"Scholastic Coach and Athletic Director\" magazine in 1998 in which he explained his version of the offense, which relied on two wingbacks as the two backfield players directly behind the center, alternating to receive the snap. Other high school football programs across the United States adopted Wyatt's Wildcat offense."} {"text":"Alabama's David Palmer was one of the first \"wildcat\" quarterbacks on the national scene, running the formation in 1993."} {"text":"The wildcat was popularized on the college level by Bill Snyder, head coach of the Kansas State University Wildcats, with Michael Bishop as quarterback in 1997 and 1998, when they made a run at the top of the national rankings. Bishop rushed for 1304 career yards in two seasons, including 748 yards on 177 carries during the '98 season. Snyder's success inspired Urban Meyer at the start of his career. Meyer's subsequent success with quarterback Josh Harris at Bowling Green helped the formation come to the fore."} {"text":"Other college teams have used the wildcat formation regularly, including the Wildcats of Kansas State, Kentucky, and Villanova, as well as the Pitt Panthers. Pitt had great success with the formation, having star running back LeSean McCoy or running back LaRod Stephens-Howling take the snap. The Panthers scored numerous times from this formation during those years. Villanova won the 2009 FCS championship with a multiple offense that included the wildcat, with wide receiver Matt Szczur taking the snap. Szczur scored a key touchdown in the Wildcats' semifinal against William & Mary out of the formation, and made a number of big plays out of the wildcat against Montana in the final."} {"text":"UCF uses a wildcat formation they call the \"Wild Knight\". 
It was originally intended to be run by Rob Calabrese, who continued to run it even after losing the starting job to Jeff Godfrey in 2010, until he tore his ACL using the play to score a rushing touchdown against Marshall on October 13, 2010. At the time, most agreed that Calabrese was effective at running the Wild Knight formation."} {"text":"The wildcat formation made an appearance in 1998, when Minnesota Vikings offensive coordinator Brian Billick began employing formations in which QB Randall Cunningham lined up as a wide receiver and third-down specialist David Palmer took the direct snap from the center with the option to pass or run."} {"text":"In the 1998 NFC Championship, with 7:58 to go in the third quarter, on a second and 5 play, the Atlanta Falcons deployed quarterback Chris Chandler wide left as a receiver while receiver Tim Dwight took a direct snap and ran 20 yards for a first down."} {"text":"As the popularity of the wildcat spread during the 2008 NFL season, several teams began instituting it as a part of their playbook."} {"text":"Defending plays from the wildcat requires linemen and linebackers to know and execute their own assignments without over-pursuing what may turn into a fake or a reverse. The formation's initial success in 2008 can be attributed in part to surprise\u2014defenses had not practiced their countermeasures against such an unusual offensive strategy. Since then, most teams have been well prepared to stop the wildcat; an example came in November 2008, when the Patriots traveled to Miami nine weeks after the Dolphins' win in Foxborough. Bill Belichick's defense limited the wildcat to just 27 yards and forced the Dolphins to try a conventional passing attack; the lead changed six times, but the Patriots wore out the Dolphins in a 48\u201328 win."} {"text":"Though defenses now understand how to stop the wildcat, that does not mean the formation is no longer useful. A defense's practice time is finite. 
Opponents who prepare to stop the wildcat have less time available to prepare for other offensive approaches. Many teams admit to spending an inordinate amount of time preparing for this scheme. The Philly Special, an iconic play during Super Bowl LII, was run out of the wildcat."} {"text":"Other teams that use the wildcat formation in the NFL have used different names for their versions. At one time, the Carolina Panthers called their version the 'Mountaineer formation', named after the Appalachian State Mountaineers, the alma mater of their wildcat quarterback Armanti Edwards. The Denver Broncos utilize 'Wild Horses', developed in 2009. The New York Jets referred to their version as the Tigercat formation, in reference to Brad Smith having attended the University of Missouri; Smith played for New York from 2009 to 2010. The 2011 Minnesota Vikings referred to their formation as the \"Blazer package\", which employed former UAB Blazers quarterback Joe Webb."} {"text":"Until the 2009 season, a technicality in the league rules made the wildcat offense illegal; essentially, the rule stated that a designated quarterback must be in position to take all snaps. This has since been changed."} {"text":"In American football, the pro set or split backs formation is a formation that has been commonly used as a \"base\" set by professional and amateur teams. The \"pro set\" formation features an offensive backfield that deploys two running backs aligned side by side instead of one in front of the other as in traditional I-formation sets. It was an outgrowth of the three-running-back T formation, with the third running back (one of the halfbacks) in the T becoming a permanent flanker, now referred to as a wide receiver."} {"text":"This formation has been particularly popular because teams can both run and pass the football out of it with an equal amount of success. 
It keeps defenses guessing what type of play the offense will run. Because the backs are opposite each other, it takes the defense longer to read the gap through which the offense will run the ball."} {"text":"The set can be run with a single tight end and two receivers, or no tight ends and three receivers."} {"text":"A standard pro set places the backs about 5 yards behind the line of scrimmage, spaced evenly behind the guards or tackles. In this look, teams may utilize two halfbacks, or one halfback and one fullback."} {"text":"A variation of the pro set places the backs offset toward either side. This look is almost universally used with one fullback and one halfback. The backs line up closer to the line of scrimmage than in a standard pro set, about 3 yards deep. The fullback lines up directly behind the quarterback, in the same spot as in the I-formation. The halfback then lines up behind either the left or right tackle."} {"text":"Once the run has been established, the pro set can be a dangerous formation. Because of the real threat of a team running out of the pro set, defenses must respect the play fake and play the run. This pulls the safety to the line and opens up the middle of the field deep. Also, with both backs in position to \"pick up\" an outside blitz, the pro set gives a quarterback an abundance of time to find an open receiver."} {"text":"The formation has recently lost its popularity at the college and professional level with the rise of shotgun split back formations. It remains common at the high school level."} {"text":"In the National Football League, in the mid-to-late 2000s, the formation was used almost exclusively by West Coast offense-based teams in occasional third down passing situations and goal-line situations. 
In the early 2010s, the pro set almost completely disappeared from the NFL; in the late 2010s, however, it was used once again as an occasional goal-line and passing-down formation by West Coast offense-based teams."} {"text":"The following is a list of common and historically significant formations in American football. In football, the formation describes how the players in a team are positioned on the field. Many variations are possible on both sides of the ball, depending on the strategy being employed. On offense, the formation must include at least seven players on the line of scrimmage, including a center to start the play by snapping the ball."} {"text":"There are no restrictions on the arrangement of defensive players, and, as such, the number of defensive players on the line of scrimmage varies by formation."} {"text":"This list is not exhaustive; there are hundreds of different ways to organize a team's players while still remaining within the \"7 on the line, 4 in the backfield\" convention. Still, this list of formations covers enough of the basics that almost every formation can be considered a variant of the ones listed below."} {"text":"The T formation is the precursor to most modern formations in that it places the quarterback directly under center (in contrast to its main competitor of the day, the single wing, which had the quarterback receiving the ball on the fly)."} {"text":"It consists of three running backs lined up abreast about five yards behind the quarterback, forming the shape of a T. It may feature two tight ends (known as the Power T) or one tight end and a wide receiver (in this case known as a split end). 
When legendary coach George Halas' Chicago Bears used the T formation to defeat the Washington Redskins by a score of 73\u20130 in the 1940 NFL championship game, it marked the end of the single wing at nearly all levels of play, as teams, over the course of the 1940s, moved to formations with the quarterback \"under center\" like the T. Halas is credited with perfecting the T formation."} {"text":"Two other I formation variations are the Maryland I and the Power I. These formations lack a flanker, and use the maximum of 3 running backs rather than the standard 2. They are used primarily as running formations, often in goal line situations, and may employ either tight ends or split ends (wide receivers) or one of each. The Maryland I was developed by Maryland head coach Tom Nugent. More recently, Utah has utilized this formation with quarterback Brian Johnson."} {"text":"A variation of the ace is known as the spread formation. It utilizes four wide receivers and no tight ends. In the NFL, this formation was the basis of the run and shoot offense that was popular in the 1980s with teams such as the Detroit Lions and the Houston Oilers but has since fallen out of favor as a primary offensive philosophy."} {"text":"It is often used as a pass formation, because of the extra wide receivers. It also makes an effective run formation, because it \"spreads the field\" and forces the defense to respect the pass, thus taking players out of the box. Certain college programs, such as the University of Hawaii and Texas Tech, still use it as their primary formation. Brigham Young University also uses the spread offense, although it tends to employ its tight ends more frequently than Hawaii and Texas Tech. Minnesota and TCU are also starting to employ the spread offense."} {"text":"Joe Gibbs, twice head coach of the Washington Redskins, devised an ace variation that used a setback, or \"flexed\", tight end known as an H-back. 
In this formation, the normal tight end is almost exclusively a blocker, while the H-back is primarily a pass receiver. This formation is often referred to as a \"two tight end\" set. Some teams (like the Indianapolis Colts under Tony Dungy) use this formation with both tight ends on the line and use two flankers. Many other teams in the NFL, even those that do not use this as a primary formation, still run some plays using a variant of this formation."} {"text":"Also called the \"split backs\" or \"three-end formation\", this is similar to the I-formation and has the same variations. The difference is that the two backs are split behind the quarterback instead of being lined up behind him."} {"text":"Clark Shaughnessy designed the formation from the T formation in 1949 after acquiring halfback Elroy \"Crazy Legs\" Hirsch. Shaughnessy thought Hirsch would make a great receiver but already had two great receivers in Tom Fears and Bob Shaw, so he moved Hirsch to the flanker position behind the right end. Thus started what was known as the three-end formation."} {"text":"This formation is most often associated with Bill Walsh's San Francisco 49ers teams of the 1980s and his West Coast offense. It was also the favored formation of the pass-happy BYU Cougars under the tenure of legendary coach LaVell Edwards. A modern example of the \"pro set\" can be seen in the Florida State University offense, which favors a split backs formation. The Seattle Seahawks under Mike Holmgren also favored this type of formation, with the tight end usually being replaced by a third wide receiver."} {"text":"Another variation of the single wing was the A formation."} {"text":"The single wing has recently had a renaissance of sorts with high schools; since it is so rare, its sheer novelty can make it successful."} {"text":"Though the wildcat concept was successful for a time, its effectiveness decreased as defensive coordinators prepared their teams for the change-of-pace play. 
The player receiving the snap is usually not a good passer, so defenses can bring linebackers and defensive backs closer to the line of scrimmage to clog potential running lanes. As such, its use has declined since 2009, particularly in the NFL."} {"text":"The double wing, as a formation, is widely acknowledged to have been invented by Glenn \"Pop\" Warner in 1912. It remained an important formation up to the T formation era. For example, Dutch Meyer at TCU, with quarterback Sammy Baugh, won a college national championship in 1935 with a largely double wing offense."} {"text":"With Markham's success came many converts to his offense and many variations of the offense over the years. Perhaps the best known of Markham's converts is Hugh Wyatt, who brought more Wing-T to the offense and a greater ability to market it. Jerry Valloton also marketed the offense well when he wrote the first book on it. Since that time, Tim Murphy, Steve Calande, Jack Greggory, Robert McAdams, and several other coaches have further developed the offense and its coaching materials, which may be seen on their respective websites."} {"text":"The Double Wing is widely used at the youth level, is becoming more popular at the high school level, and has been used at the college level by"} {"text":"The short punt is an older formation, popular when scoring was harder and a good punt was an offensive weapon. In times when punting on second and third down was fairly common, teams would line up in the short punt formation and offer the dual threat of punt or pass. \"Harper's Weekly\" in 1915 called it \"the most valuable formation known to football.\""} {"text":"The formation differs in two significant ways from the single wing. It is generally a balanced formation, and there are backs on both sides of the tailback, offering better pass protection. 
As a result, it was considered a much better passing formation than running formation, as the premier running formation was the single wing. That said, it was also regarded as a good formation for trap plays.

The formation was used extensively by Fielding Yost's Michigan Wolverines in their early history, and was the base formation for the Benny Friedman-led New York Giants in 1931. In the 1956 NFL Championship, the Chicago Bears shifted into a short punt formation in the third quarter after falling far behind.

The shotgun's invention is credited to Red Hickey, coach of the San Francisco 49ers, in 1960. Historically, it was used to great success as a primary formation in the NFL by the Tom Landry-led Dallas Cowboys teams of the 1970s and the 1990s Buffalo Bills teams under Marv Levy, who used a variation known as the K-gun that relied on quarterback Jim Kelly. The shotgun offense became a staple of many college football offenses beginning in the 1990s.

This offense originated with Chris Ault at the University of Nevada, Reno. It is essentially a shotgun variation, with the quarterback lined up closer than in a standard shotgun (normally 3 to 4 yards behind center) and a running back lined up behind, rather than next to, the quarterback (normally 3 to 4 yards behind him).

The pistol formation adds the dimension of a running game, with the halfback in a singleback position. This has disrupted the timing of some defenses with the way the quarterback hands the ball off to the halfback. It also allows smaller halfbacks to hide behind the offensive line, causing opposing linebackers and pass-rushing defensive linemen to play more conservatively. The pistol can also feature the option play. With this offense, the quarterback has the ability to get a better look past the offensive line and at the defense.
Pistol formations have gained some popularity in NCAA football; in fact, variants of this offense were used by the 2007 and 2009 BCS National Champions, LSU and Alabama, respectively.

In 2008, Kansas City Chiefs offensive coordinator Chan Gailey began using the pistol prominently in the team's offense, making the Chiefs the first NFL team to do so. He brought the philosophy with him to the Buffalo Bills in 2010. The San Francisco 49ers added the pistol to their offense after former Nevada quarterback Colin Kaepernick became the team's starter. By the late 2010s, the pistol had become a favored formation of teams running the run-pass option (RPO) offense, such as the 2019 Baltimore Ravens with quarterback Lamar Jackson.

This formation is typically used for trick plays, though it is somewhat counterintuitively effective in short-yardage situations: a screen pass thrown to the strong side of the formation will have enough blockers to generate a push forward, and the mismatch can create enough of an advantage that the center and quarterback can provide enough blocking power to clear a path for the running back. A recent use of this formation came in 2019, when the Miami Dolphins, facing the Philadelphia Eagles on fourth and goal in the second quarter, had Matt Haack take the snap and flick the ball to Jason Sanders for a touchdown.

The wishbone is a 1960s variation of the T formation. It consists of three running backs: a fullback lined up directly behind the quarterback, and two halfbacks split behind the fullback. It can be run with two tight ends, one tight end and one wide receiver, or two wide receivers. Most offensive systems that employ the wishbone use it as their primary formation, and most run the ball much more often than they pass.
The wishbone is a common formation for the triple option offense, in which the quarterback decides after the snap whether to hand the ball to the fullback for a run up the middle, pitch the ball to a running back on the outside, or keep the ball and run it himself.

The wishbone was developed in the 1960s by Emory Bellard, offensive coordinator at the University of Texas under head coach Darrell Royal. The offense was an immediate success, and Texas won the national championship in 1969 running a wishbone/option system. It was subsequently adopted by many other college programs in the 1970s, including Alabama and Oklahoma, who also won national titles with variations of the offense. However, as with any hugely successful formation or philosophy, it became much less effective as teams learned how to defend against it.

Today, the wishbone/option offense is still used by some high school and smaller college teams, but it is much less common in major college football, where teams tend to employ more pass-oriented attacks. The United States Air Force Academy (Air Force), the United States Naval Academy (Navy), and Georgia Tech are among the few NCAA FBS teams that commonly use the wishbone and its variations.

The wishbone has very rarely been used in professional football, as it was developed after passing quarterbacks became the norm. NFL quarterbacks are not necessarily good runners, and are in any case too valuable to the offense to risk injury by regularly running with the football. During the strike season of 1987, the San Francisco 49ers used the wishbone successfully against the New York Giants to win 41–21. Coach Bill Walsh used the wishbone because of his replacement quarterback's familiarity with a similar formation in college.

The flexbone formation is a variation of the wishbone formation. In this formation, one back (the fullback) lines up behind the quarterback.
Both ends are often split wide as wide receivers, though some variations include one or two tight ends. The two remaining backs, called wingbacks or slotbacks, line up behind the line of scrimmage just outside the tackles. Usually, one of the wingbacks will go in motion behind the quarterback before the snap, potentially giving him another option to pitch to.

Like the wishbone, the flexbone formation is commonly used to run the triple option. However, the flexbone is considered more "flex"-ible than the wishbone because, with the wingbacks lined up closer to the line of scrimmage, more run/pass options and variations are possible.

The Wing T has its roots in what Otto D. Unruh called the "T-Wing" formation, which he is known to have used as early as 1938 with the Bethel Threshers.

Both the Giants and Eagles developed similar formations of this design. The Eagles named their version the "Herman Edwards" play after the cornerback who scored the winning touchdown on that fateful play.

The tackle spread or "Emory and Henry" formation is an unusual American football formation that dates to the early 1950s, when the Wasps of Emory & Henry College under head coach Conley Snidow used it as part of their base offense. Instead of the conventional grouping of all five ineligible offensive linemen in the middle of the formation, the Emory and Henry spreads the tackles out to the edge of the field along with two receivers or slotbacks, creating two groupings of three players near each sideline.
Meanwhile, the center and the guards remain in the middle of the field along with the quarterback and a running back.

The formation has also been used as a basis for trick plays, such as a backwards pass to a player near the sideline followed by a forward pass down the field.

The Emory & Henry formation was revived in the 1990s by Florida and South Carolina coach Steve Spurrier, who coined its commonly used name when he explained that he'd seen Emory and Henry College run it in the 1950s. The New England Patriots used a variation of the formation by placing a (legally declared) eligible-numbered receiver in the ineligible tackle position; the confusion this caused prompted the league to impose a rule change prohibiting that twist beginning in 2015.

A tackle-spread formation was included in the video game "Madden NFL 18" under the name "Gun Monster"; it proved to be a problem for the game's artificial intelligence, which could not discern eligible receivers from ineligible ones.

The Cincinnati Bengals under Marvin Lewis occasionally used a variant of the Emory and Henry formation, which they called the "Star Wars" formation; in their version, both offensive tackles line up on the same side of the quarterback, creating a hybrid between the Emory & Henry and the swinging gate.

The A-11 offense combines the Emory and Henry with the wildcat, in that either of the two backs in the backfield can receive the snap and act as quarterback. In its earliest incarnation, it also used a loophole in the high school rulebook that allowed players wearing any uniform number to play at either an ineligible or eligible position, further increasing defensive confusion and allowing for more flexibility among players changing positions between plays.
However, this facet of the offense was never legal at the college or professional level, and the high school loophole was closed in 2009.

There are no rules governing the alignment of defensive players or their movement before the snap; because the offense chooses when to snap the ball, the defense cannot be required to take a set position. The deployment and tactics of defensive players are therefore bound only by the imagination of the play designer and the line of scrimmage. Below are some of the most popular defensive formations in the history of football.

The original 6–1 was invented by Steve Owen in 1950 as a counter to the powerful passing attack of Paul Brown's Cleveland Browns. It was called the "Umbrella" defense because of the four defensive backs, whose crescent alignment resembled an opened umbrella, and because of the tactic of allowing the defensive ends to fall back into pass coverage, converting the defense, in Owen's language, from a 6–1–4 into a 4–1–6. If offenses grew wise to the drop back, the ends could pass rush instead. Using this new defense, the Giants defeated the Browns twice during the 1950 regular season.

It saw use during the 1950s in Owen's hands but never became a significant base defense, and it was functionally replaced by the more versatile 4–3.

In this variation of the 3–4, known also as the "3–4 eagle", the nose guard is removed from play and in his place is an extra linebacker, who lines up on the line where the nose guard would be, or sometimes slightly behind that spot. It allows defenses more flexibility in man-to-man coverage and zone blitzes. It was created by Los Angeles Rams defensive coordinator Fritz Shurmur and evolved from Buddy Ryan's 46 defense.
Shurmur created the defense in part to take advantage of the pass-rush abilities of Kevin Greene, a linebacker with the size of a defensive end. The "eagle" in the formation's name comes from the Philadelphia Eagles of the late 1940s and early 1950s, coached by Greasy Neale.

The original Eagle defense was a 5–2 arrangement, with five defensive linemen and two linebackers. In Neale's defense, as in Shurmur's variation, the nose tackle could also drop into pass coverage, hence Shurmur's use of the Eagle defense name.

The 4–4 defense consists of four defensive linemen, four linebackers, and three defensive backs (one safety, two corners). It puts "eight men in the box" to stop the run, but it sacrifices deep coverage against the pass, especially if the opponent's receivers are better athletes than the cornerbacks. The formation is popular in high school football and among smaller collegiate teams. Against a good passing team, the outside linebackers are usually called on to defend the slotbacks.

The 5–3 defense consists of five defensive linemen, three linebackers, and three defensive backs (one safety, two corners). It appeared in the early 1930s as a response to the improving passing offenses of the time, particularly the T formation. It grew in importance as the 1940s progressed, as it was more effective against the T than the other standard defense of the time, the 6–2. By 1950, five-man lines were standard in the NFL, either the 5–3 or the 5–2 Eagle. As late as the early 1950s, the Cleveland Browns were using a 5–3 as their base defense.

The 6–2 defense consists of six defensive linemen, two linebackers, and three defensive backs (one safety, two corners). This was the primary defense in football, at all levels, during the single wing era (the 1930s), combining enough pass defense to handle the passing attacks of the day with the ability to handle the power running games of the time.
As the T formation grew popular in the 1940s, this formation was replaced in the NFL by the 5–3 and the 5–2 defenses.

In college football, this defensive front remained viable for a much longer period, because college teams have historically run the ball far more than NFL teams. Three common six-man fronts seen in the more modern era are the tight six (linebackers over the offensive ends, four linemen between the linebackers), the wide tackle six (linebackers over the offensive tackles, two linemen between the linebackers), and the split six (linebackers over the guard-center gaps, all linemen outside the linebackers).

The 5–2 defense consists of five defensive linemen, two linebackers, and four defensive backs (two corners, two safeties). Historically, this was the first major defense with four defensive backs, and it was used to combat the passing attacks of the time. A later evolution of the original 5–2 is the Oklahoma 5–2, which ultimately became the professional 3–4 when the defensive ends of the original 5–2 were substituted over time for the outside linebackers of the 3–4. The differences between the Oklahoma 5–2 and the 3–4 are largely semantic.

Seven-man line defenses use seven down linemen on the line of scrimmage. The most common were the 7–2–2 and the 7–1–2–1. They were most common before the forward pass became prevalent, but remained in use prior to the inception of the platoon system. They are still sometimes used in goal-line situations.

There are a couple of paths to the 4–2–5. One is to remove a linebacker from the standard 4–3 and add an extra defensive back. The other is to convert the ends of a wide tackle six to safeties (the defensive ends of a wide tackle six already have pass defense responsibilities). A variation is the 2–4–5, which is primarily run by teams that use the 3–4 defense.
They replace a defensive tackle with a corner.

The 3–3–5 removes a lineman in favor of a fifth defensive back, the nickelback.

The 33 stack uses an extra strong safety and "stacks" linebackers and safeties directly behind the defensive linemen.

The 3–5–3 refers to a defense that has three down linemen (the "3" level), three linebackers and two corners (the "5" level), and one free safety and two strong safeties (the "3" level). It is similar to a 33 stack, but with the players more spread out. It is also called the "umbrella" defense or "3-deep". In this set, the third safety is referred to as a "weak safety" (WS), which allows two positional safeties at the mid-level with a third safety deep. It is because of this that the secondary safety in a football defense is called a free safety rather than a weak safety.

A dime defense is any defense consisting of six defensive backs. The sixth defensive back is known as the dimeback, and this defense is used in passing situations (particularly when the offense is using four wide receivers). As the extra defensive back in the nickel formation is called the nickel, two nickels make a dime, hence the name of the formation.

A quarter (seven defensive backs) or half dollar (eight defensive backs) defense is used in extreme passing situations (such as defending against a Hail Mary pass). The seventh defensive back is often an extra safety. It is occasionally referred to as the prevent defense because of its use in preventing desperation plays. The cornerbacks and safeties in a prevent defense usually make a point of defending the goal line at the expense of receivers in the middle of the field.

The quarter formations are run from a 3–1–7 or a 4–0–7 in most instances; the New England Patriots have used a 0–4–7 in some instances with no down linemen. Half dollar defenses are almost always run from a 3–0–8 formation.
The eighth defensive back in this case is usually a wide receiver from the offense, who can capitalize on interception opportunities in the expected high-risk offensive play.

Unlike in other formations, the extra safety is not referred to as a quarterback or halfback (except in Canadian football), to avoid confusion with the offensive positions of the same names, but simply as a defensive back or a safety.

Formations with many defensive backs positioned far from the line of scrimmage are susceptible to running plays and short passes. However, since the defense is typically used only in the last few seconds of a game, when the defensive team need only keep the offense from scoring a touchdown, giving up a few yards in the middle of the field is inconsequential.

More extreme defensive formations have been used when a coach feels that his team is at a particular disadvantage due to the opponent's offensive tactics or poor personnel match-ups.

For example, in 2007, New York Jets head coach Eric Mangini employed a scheme against Tom Brady and the New England Patriots that utilized only one defensive lineman and six linebackers. Prior to the snap, only the lone lineman assumed a three-point stance near the offensive center, while the six linebackers "roved" up and down the line of scrimmage, attempting to confuse the quarterback as to whether they would rush the passer, drop into coverage, or play the run.
This defense (combined with poor weather conditions) did slow the Patriots' passing game, but it proved ineffective against the run, and the Patriots won the game.

Punting formations use a five-man offensive line; three "upbacks" (sometimes also referred to as "personal protectors") approximately 3 yards behind the line to act as an additional line of defense; two wide receivers known as "gunners", used either to stop the punt returner or to down the ball; and the punter, 15 yards behind the line of scrimmage to receive the long snap. (If the punting team is deep in its own territory, the 15-yard distance must be shortened by up to 5 yards to keep the punter in front of the end line.) The number of upbacks and gunners can vary, and either position can be replaced by a tight end in a "max protect" situation.

Most field goal units feature nine offensive linemen (seven on the line, with both ends in the tight end position and two extra blockers slightly off the line of scrimmage), a holder who kneels 7 yards behind the line of scrimmage, and a kicker.

In 2018, the NFL further amended the rules on the kickoff formation. All players other than the kicker may now line up no more than 1 yard behind the restraining line. The rule also states that there must be five players on each side of the ball; on each side, two players must line up outside the numbers and two players between the numbers and the hashmarks. The NFL also made a rule regarding the receiving team's formation in 2018.
Eight players on the receiving team must be lined up in the 15-yard "set-up zone" measured from the receiving team's restraining line, which is 10 yards from the ball.

Kick return formations vary; in most situations, an association-football-like formation is used, with eleven players staggered throughout the field: two (rarely, one) kick returners back to field deep kicks, two more twenty yards ahead of them to field squib kicks, two more at about midfield mainly to assist in blocking, and five players located the minimum ten yards from the kicking line. In obvious onside kick situations, more players are moved to the front of the formation, usually top wide receivers and other players who are good at recovering and catching loose balls; this unit is known as the "hands team". A kick returner will usually remain deep in the event of an unexpected deep kick in this situation.

To defend punts, the defense usually uses a man-on-man system with seven defensive linemen, two cornerbacks, a linebacker, and a kick returner. They may choose to attempt to block the punt or drop back to block for the returner.

The wishbone formation, also known simply as the bone, is an offensive formation in American football. The style of attack to which it gives rise is known as the wishbone offense. Like the spread offense in the 2000s, the wishbone was considered the most productive and innovative offensive scheme in college football during the 1970s and 1980s.

While the record books commonly credit Emory Bellard with developing the wishbone formation in 1968 as offensive coordinator at Texas, the wishbone's roots can be traced back to the 1950s.
According to Barry Switzer, it was Charles "Spud" Cason, football coach at William Monnig Junior High School in Fort Worth, Texas, who first modified the classic T formation in order "to get a slow fullback into the play quicker." Cason called the formation the "Monnig T". Bellard learned about Cason's tactics while coaching at Breckenridge High School in Breckenridge, a small community west of Fort Worth.

Earlier in his career, Bellard saw a similar approach implemented by former Detroit Lions guard Ox Emerson, then head coach at Alice High School near Corpus Christi, Texas. Trying to spare his offensive line its frequent pounding, Emerson moved one of the starting guards into the backfield, enabling him to get a running start at the opposing defensive line. Bellard served as Emerson's assistant at that time. During his high school coaching career in the late 1950s and early 1960s, Bellard adopted the basic approaches of both Cason and Emerson, winning two 3A Texas state championships at Breckenridge in 1958 and 1959 and a 4A state title at San Angelo Central High School in 1966 using a wishbone-like option offense.

In 1967, Bellard was hired by Darrell Royal and became offensive coordinator a year later. The Texas Longhorns had scored only 18.6 points per game in a 6–4 season in 1967. After watching Texas A&M, running the option offense of offensive coordinator Bud Moore and head coach Gene Stallings, beat Bear Bryant's Alabama team in the 1968 Cotton Bowl Classic, Royal instructed Bellard to design a new triple option offense with a three-man backfield. Bellard tried to merge his old high school tactics with Stallings' triple option out of the Slot-I formation and Homer Rice's variations of the veer, an offensive formation created by Bill Yeoman.

Bellard later left Texas and, using the wishbone, guided Texas A&M and Mississippi State to bowl game appearances in the late 1970s.
At Mississippi State, Bellard "broke the bone" and introduced the "wing-bone", moving one of the halfbacks up to a wing position and frequently sending him in motion. Another variation of the wishbone formation is called the flexbone.

Ironically, the longest-running wishbone offense belonged not to Texas but to its arch-rival, the University of Oklahoma, which ran variations of the wishbone well into the mid-1990s. Oklahoma coach Barry Switzer has been credited by some with having "perfected" the use of the wishbone offense, and former OU quarterback Jack Mildren is often referred to as "the Godfather of the wishbone" by University of Oklahoma football fans. In 1971, the Oklahoma Sooners' wishbone offense set the all-time NCAA single-season rushing record at 472.4 yards per game, a record that still stands to this day.

Phil Jack Dawson, then head coach of Westbrook High School in Westbrook, Maine, developed an effective defense against the wishbone offense then in use by Texas, called the "backbone defense". Dawson contacted Ara Parseghian, then head coach of the University of Notre Dame, and convinced him to use it against Texas in the 1971 Cotton Bowl Classic. Notre Dame beat Texas 24–11.

In the National Football League, during the strike season of 1987, the San Francisco 49ers used the wishbone successfully against the New York Giants to win 41–21. Coach Bill Walsh used the wishbone because of his replacement quarterback's familiarity with a similar formation in college.
The Cleveland Browns also utilized the wishbone at the pro level in a 28–16 win over the Atlanta Falcons in 2018.

The Oklahoma playbook describes the quarterback, the architect of the wishbone, as "a running back who can throw." He must also have an aptitude for the option and the decision-making that lies within the play design, as well as durability (he cannot miss a practice).

The fullback is required to be able to handle a physical pounding, because he is frequently hit without having the ball; he must also be quick, have excellent stamina, and be a good blocker.

This makes the wishbone a "complete" offense. The offense expects to get a one-on-none in the running game and a one-on-one in open space in the passing game. The safety, who must support the run and also defend against the pass, is under tremendous pressure in this attack. The basic wishbone triple option play accounts for every defender on the field, and every defender is threatened before the basic play begins. There is an invitation to overplay or compensate on the basic play, but overplaying or making a misstep leaves the defense open to counters, with no one left to make up for the mistake.

The wishbone has the quarterback taking the snap from under center, with a fullback close behind him and two halfbacks (sometimes called "tailbacks") further back, one slightly to the left and the other slightly to the right. The alignment of the four backs makes an inverted Y, or "wishbone", shape. There is typically one wide receiver and one tight end, but sometimes two wide receivers or two tight ends.

The split-T is an offensive formation in American football that was popular in the 1940s and 1950s.
Developed by Missouri Tigers head coach Don Faurot as a variation on the T formation, the split-T was first used in the 1941 season and allowed the Tigers to win every game except the season opener against the Ohio State Buckeyes and the 1942 Sugar Bowl against Fordham. Jim Tatum and Bud Wilkinson, who coached under Faurot with the Iowa Pre-Flight Seahawks during World War II, brought the split-T to the Oklahoma Sooners in 1946. After Tatum left for Maryland in 1947, Wilkinson became the head coach and went on to win a record-setting 47 straight games and two national titles between 1953 and 1957.

In the basic or tight-T formation, three running backs line up about five yards behind the quarterback, and the offensive linemen form a fairly tight group in front of the backs. In the split-T, the offensive line was spread out over almost twice as much ground. This prompted the defensive front to widen as well, creating gaps for the offense to exploit.

The original split-T used a full house backfield. Later, Faurot would set up a flanker on one sideline, after experience against nine-man lines showed that a flanker created problems for the defense. The use of a split end to aid the passing game was optional, and was not an integral feature of either the split-T or the tight-T.

Faurot used the new formation to create what may have been the first option offense in football, a precursor of the wishbone, the veer, and some modern run-first spread offenses. With the defense spread out, the offense would generally leave one defensive player on the play side unblocked. The blocking schemes were simple, with very little of the pulling or trapping of the more traditional power-running offenses.

The three base plays of the offense were the handoff (a dive play), the keep, and the pitch.
The handoff was a fast play, with a halfback driving directly into the line and the quarterback handing off within one yard of the line of scrimmage. Faurot judged this play to be the most dangerous in his offensive system, as the handoff occurred close to the line of scrimmage, within reach of potential interference by the defensive team.

If the dive play had not been called, the quarterback kept the ball and ran toward a spot just inside the unblocked defensive player. If that player closed on him, he would pitch the ball back to the trailing halfback, aiming for a spot outside that defensive player. When executed correctly, this resembled the two-on-one fast break in basketball, from which Faurot originally derived the concept (Faurot also lettered in basketball as a student, and coached the Northeast Missouri State University basketball team to a conference championship prior to his tenure as the head football coach at Missouri).

Don Faurot, the head coach of the Missouri Tigers, developed the split-T and unleashed it on the college football world in 1941. He combined the new formation with the athletes he had at running back and quarterback to create an offensive juggernaut. The Tigers finished the season 8–1, the sole loss coming in the season-opening non-conference game at #10 Ohio State. They were the Big Six Conference champions, were ranked #7 in the AP poll, and accepted an invitation to play #6 Fordham in the 1942 Sugar Bowl.

In 1946, Jim Tatum became the Oklahoma head coach. He installed the split-T offense that he had learned as an assistant under Don Faurot on the U.S. Navy's Iowa Pre-Flight school football team during World War II. In his first year, he turned around Oklahoma's losing record and delivered a Big Six Conference championship.
In 1947, Tatum left Oklahoma for Maryland, where he saw even more success with the split-T, including a consensus national championship in 1953.

Bud Wilkinson, also a Faurot assistant at Iowa Pre-Flight, was the next Sooners head coach. In 1953, after losing to Notre Dame and tying Pittsburgh, Oklahoma beat arch-rival Texas, 19–14, and went on to win their next 46 games in a row, setting an NCAA record that stands to this day. Notre Dame book-ended the streak when they again beat Oklahoma, 7–0, in Norman on November 16, 1957.

Tatum and Wilkinson later faced off in the 1954 Orange Bowl, when #1/#1 Maryland and #4/#5 Oklahoma met on the field for the first time. Both teams used the split-T as their base offense. Other top football programs used the split-T during this period as well, including Alabama, Houston, Notre Dame, Texas, Michigan, Penn State, and Ohio State.

The shotgun formation is used by the offensive team in gridiron football mainly for passing plays, although some teams use it as their base formation. Instead of the quarterback receiving the snap from center at the line of scrimmage, in the shotgun he stands farther back, often five to seven yards off the line. Sometimes the quarterback will have a back on one or both sides before the snap, while at other times he will be the lone player in the backfield, with everyone else spread out as receivers.

The shotgun formation can offer certain advantages.
The offensive linemen have more room to maneuver behind the scrimmage line and form a tighter, more cohesive oval \u201cpocket\u201d in which the quarterback is protected from \u201cblitzing\u201d by the defense. If the quarterback has speed, mobility, or both, he can use this formation to scramble before his pass, or to run to an open position in the defensive secondary or toward the sideline, usually gaining first-down yardage."} {"text":"Although some running plays can be run effectively from the shotgun, the formation also has weaknesses. The defense knows a pass is more than likely coming, particularly from an empty set lacking any running backs, and there is a higher risk of a botched snap than in a simple center\/quarterback exchange. If the defense is planning a pass rush, this formation gives fast defensive players more open and exposed targets in the offensive backfield, with less cluttered \u201cblitzing\u201d routes to the quarterback and any other halfback in the offensive backfield."} {"text":"Shotgun combines elements of the short punt and spread formations \u2014 \"spread\" in that it has receivers spread widely instead of close to or behind the interior line players. The term is thought to come from the formation \"spraying\" receivers around the field the way a shotgun sprays shot. (The alignment of the players also suggests the shape of an actual shotgun.) Formations similar or identical to the shotgun used decades previously would be called names such as \"spread double wing\". 
Short punt formations (so called because the distance between the snapper and the ostensible punter is shorter than in long punt formation) do not usually have as much emphasis on wide receivers."} {"text":"The shotgun evolved from the single wing and the similar double-wing spread; famed triple-threat man Sammy Baugh claimed that the shotgun was effectively the same as the version of the double-wing he ran at Texas Christian University in the 1930s."} {"text":"In the latter part of the 1940s, the Philadelphia Eagles, under Hall of Fame coach Earl \"Greasy\" Neale, implemented the shotgun formation in their offensive attack with quarterback Tommy Thompson."} {"text":"The formation was named by the man who actually devised it, San Francisco 49ers coach Red Hickey, in 1960. John Brodie was the first National Football League shotgun quarterback, beating out former starter Y. A. Tittle largely because he was mobile enough to effectively run the formation."} {"text":"Since no other NFL team used the formation in the intervening years, some believed it had been invented by Dallas Cowboys coach Tom Landry. Instead, Landry simply dusted off the old innovation to address a pressing problem: keeping quarterback Roger Staubach protected while an unusually young and inexperienced squad (12 rookies made the 1975 Cowboys roster) jelled. The Cowboys ended up in the Super Bowl that season, in no small part due to their new use of the old formation. The shotgun became a \"signature\" formation for the Cowboys, especially in third-down situations."} {"text":"The shotgun was adopted by more teams throughout the pass-happy late 1980s, and was part of almost every team's offense in the 1990s, eventually becoming a base formation for some teams in the late 2000s."} {"text":"In recent years, the shotgun has become increasingly prevalent. 
Many college quarterbacks\u2014such as Tim Tebow, who almost exclusively used the shotgun at Florida\u2014have difficulty adapting to NFL offenses, where about a third of snaps are taken under center. However, with the spread offense increasingly used in the NFL, the shotgun is more popular, since the spread allows for more effective running."} {"text":"Though the shotgun is a pass-dominated formation, a cleverly designed halfback draw play can catch defenses off guard, and a fast halfback can get good yardage before the defense recovers from its mistake. A further development of the play is a halfback option pass, with the quarterback being one of the eligible receivers. Roger Staubach's backup and successor, Danny White, twice caught such a pass for a touchdown. It was noted at the time that he was only eligible because of the shotgun formation (an NFL quarterback who takes a snap from underneath the center was and still is an ineligible receiver, a rule not found in any amateur level of American football)."} {"text":"The shotgun is also used in college, but running is used more often than in the NFL. Most college offenses that run from the shotgun have a fast quarterback. They often use a play in which the quarterback can hand the ball off to a running back who runs toward the side opposite the one he lined up on. The quarterback can also run the opposite way, depending on how the defense reacts. Urban Meyer and the Florida Gators used this effectively from 2006 to 2009 with Tim Tebow."} {"text":"The Nevada Wolf Pack currently employs a formation called the \"pistol\", in which the running back, instead of lining up next to the quarterback, lines up behind the quarterback, who in turn has lined up two to three yards behind the center."} {"text":"Coach Urban Meyer has added elements of the option offense to the shotgun offense he employed as coach at Bowling Green State University, the University of Utah, and the University of Florida. 
This \"spread option\" offense is also used by the Missouri Tigers, Ohio State Buckeyes and other college teams with quarterbacks who can run as well as throw effectively."} {"text":"At times the formation has been more common in Canadian football, which allows only three downs to move ten yards downfield instead of the American game's four. Canadian teams are therefore more likely to find themselves with long yardage to make on the penultimate down, and therefore more likely to line up in the shotgun to increase their opportunities for a large gain. Canadian teams also have the advantage that backs positioned behind the line of scrimmage can run forward and cross the line running as the ball is snapped."} {"text":"The 46 defense is an American football defensive formation, an eight men in the box defense, with six players along the line of scrimmage. There are two players at linebacker depth playing linebacker technique, and then three defensive backs. The 46 defense was originally developed and popularized with the Chicago Bears by their defensive coordinator Buddy Ryan, who later became head coach of the Philadelphia Eagles and Arizona Cardinals."} {"text":"Unlike most defensive formations that take their names from the number of defensive linemen and linebackers on the field (i.e. the 4\u20133 defense has 4 linemen and 3 linebackers), the name \"46\" originally came from the jersey number of Doug Plank, who was a starting strong safety for the Bears when Ryan developed the defense, a role typically played in the formation as a surrogate linebacker."} {"text":"The formation was very effective in the 1980s NFL because it often negated a team's running game and forced them to throw the ball. 
This was difficult for many teams at the time because most offensive passing games centered on the play-action pass; with the quarterback lined up to receive the snap from directly behind the center, the situation often favored the defense even further."} {"text":"Currently, the 46 is rarely used in professional and college football. This is largely because of multiple-receiver and spread formations. The eight-man front that the 46 presented was most effective against the two-back, two-wide-receiver sets common in the 1980s."} {"text":"A weakness of the 46 defense is that with eight defensive players lining up near the line of scrimmage and only three in the secondary, it leaves areas open for receivers to catch passes. Also, timed passes can be thrown before the blitzing players have a chance to reach the quarterback. When the Miami Dolphins gave the Bears their only loss of the 1985 season, Miami exploited these weaknesses with quarterback Dan Marino's quick release of the ball and their receivers' ability to beat the one-on-one coverage of Chicago's cornerbacks."} {"text":"Another problem with the 46 defense is that most teams do not have enough impact players to run the 46 as effectively as the Bears and Ryan's other two major successes, the late-1980s Philadelphia Eagles, for whom he was head coach, and the 1993 Houston Oilers, for whom he was defensive coordinator, did. Those teams fielded some of the best front-seven defenses ever, and included such players as Jerome Brown, Mike Singletary, Steve McMichael, Richard Dent, Dan Hampton, Clyde Simmons, Reggie White, Otis Wilson, Seth Joyner, William Fuller, and Wilber Marshall."} {"text":"In today's game, the 46 defense is often simplified to its main component of walking the strong safety up to the line of scrimmage as an eighth man in the box to help contain the run. Defenses today may also run safety blitzes and corner blitzes at crucial moments without committing wholly to the \"46\" defense. 
Up front, teams still use the concept of the \"T-N-T\" alignment, where two defensive ends are covering (lined up directly across from) the guards, and a nose tackle is covering the center. In the case of a zone-blocking scheme, this makes it difficult for the offensive linemen to reach any of the linebackers on the second level."} {"text":"Typical alignment of the defensive players against a normal pro set offense."} {"text":"When three or more receivers are used by the offense, the defense makes what is called a jayhawk adjustment. The charlie linebacker will step back to where the middle linebacker was in the normal alignment, the middle linebacker will move to where the strong safety was aligned, and the strong safety will move out to cover the third receiver. If the offense uses a fourth receiver, the middle linebacker lines up in front of the center and the charlie linebacker covers the fourth receiver."} {"text":"Note that there is nothing particularly innovative about this set of assignments. For example, the strong safety could assume either the charlie or the jack linebacker role. The displaced linebacker would line up over the weak-side offensive tackle, where the strong safety is normally found."} {"text":"Arguably, the two most difficult positions on offense to develop quickly are wide receiver and quarterback. A style of play was needed for teams that could not field strong-throwing quarterbacks. In the flexbone formation, intelligent and athletic personnel can adapt to playing quarterback without having to throw the ball very well."} {"text":"Flexbone teams are often playing against more talented opponents, so they must use the time management and trickery the flexbone affords to level the playing field. 
By running the ball almost exclusively, a flexbone offense also runs down the game clock and keeps the opposing team's possibly faster and stronger offense off the field, limiting its chances to score."} {"text":"Another key consideration is that the flexbone formation gives the offense four potential vertical receiving threats at the snap: the two wide receivers and the two slotbacks. This is something that alternative formations such as the I-formation or the traditional wishbone cannot achieve without pre-snap motion that tips the offense's hand. This advantage allows the four-verticals play, a deadly weapon against Cover 3, a common defensive coverage used by the eight-man fronts that a strong running team is likely to face."} {"text":"Since this offense is primarily used by service academies (Air Force, Army, Navy, and The Citadel), it helps alleviate an inherent recruiting imbalance: these schools are unable to recruit the type of talent that a larger school like Oklahoma or Alabama can. The Flexbone allows QBs who may be shorter or smaller than ideal (5'10 - 6'2 and weighing around 185-205 pounds) to start, because they often have a speed advantage despite not being able to throw the ball as well."} {"text":"As a result of the misdirection, and with the outside WRs (usually 6'2 or taller and over 200 pounds) helping to block, the slotbacks can be similarly sized to the QB. 
This allows the Flexbone to field three fast runners (with the SBs also serving as receivers or lead blockers), while the FB can be a more traditional HB size of 220-235 pounds."} {"text":"Schools at the FCS (formerly I-AA) level that currently run the Flexbone include Wofford, The Citadel, and, more recently, the upstart program of the Kennesaw State Owls, coached by Brian Bohannon, who in 2017 led the Owls to a 12\u20132 record and an appearance in the FCS playoffs in only the third year of the program's existence."} {"text":"The flexbone offense is also popular at the high school football level."} {"text":"The short punt formation is an older formation on both offense and defense in American football, popular when scoring was harder and a good punt was itself an offensive weapon. In times when punting on third down was fairly common, teams would line up in the short punt formation and offer the triple threat of punt, run, or pass. \"Harper's Weekly\" in 1915 called it \"the most valuable formation known to football.\""} {"text":"The formation is similar to the single wing and the modern shotgun in that it includes the possibility of a long snap from center. However, it is generally a balanced formation, and there are backs on both sides of the tailback, offering better pass protection. As a result, it was considered a much better formation for passing than for running, as the premier running formation was the single wing. That said, it was regarded as a good formation for trap plays."} {"text":"The formation was invented by Amos Alonzo Stagg in 1896. Andy Smith, coach of California's \"Wonder Teams\", summed up the short-punt philosophy with his motto of \"Kick and wait for the breaks.\" In the early days of the sport the ball was often moved up the field, not through offensive plays, but rather through punting. Once the opposing team got the ball, the defense was relied upon to make the other team's offense lose yards or fumble. 
To confuse the opponent and attain longer punts, teams often punted on first or second down, and it was not uncommon for a team to kick more than 40 times in a game."} {"text":"The formation was used extensively by Fielding Yost's \"point-a-minute\" hurry-up Michigan Wolverines in their early history, as well as by his disciple Dan McGugin's Vanderbilt Commodores. Bill Roper used the short punt at Princeton."} {"text":"The short punt was the base formation for the Benny Friedman-led New York Giants in 1931. In the 1956 NFL Championship, the Chicago Bears shifted into a short punt formation in the third quarter, after falling far behind."} {"text":"In American football, the A formation was a variation of the single-wing formation used with great success by the New York Giants of the 1930s and early 1940s. This formation was masterminded by Giants coach Steve Owen and relied heavily upon Hall of Fame center Mel Hein for its success."} {"text":"The A formation differed from the traditional single-wing in that the quarterback played further back from the line and closer to the center. It also placed the backfield opposite the \"strong\" side of the unbalanced line, providing more flexibility in the running game (though less power). The wingback was on the opposite side compared to the single-wing, and the quarterback, rather than the tailback, was the primary passer. The name of the formation was arbitrary, not from its slight resemblance to the letter \"A\", unlike formations named \"I\", \"T\", \"V\", and \"Y\" for the shapes formed by the backs' positioning; Owen labeled the standard single wing his team's \"B\" formation."} {"text":"One major advantage of the A was that the center could snap the ball to any of three players: typically the fullback or blocking back for runs and the quarterback for passes. 
The fourth back, the wingback, became a crucial part of the system when Owen introduced a half-spin sweep series in 1938, which featured a wide sweep play to the motioning wingback, a dive inside by the deep fullback, and a bootleg threat away from sweep action by the quarterback. This triple-threat, highly deceptive series anticipated the Wing-T Buck Sweep series by well over a decade."} {"text":"A great center like Hein was a major asset, albeit not essential, in running the A formation; however, only the Giants used this set-up with any frequency. This gave the Giants an advantage in that teams had to prepare specifically to defend the A whenever they played New York."} {"text":"The Veer is an option running play often associated with option offenses in American football, made famous at the collegiate level by Bill Yeoman's Houston Cougars. It is currently run primarily on the high school level, with some usage at the collegiate and the professional level, where the Veer's blocking scheme has been modified as part of the zone blocking system. The Veer is an effective ball control offense that can help a team minimize mismatches in a game. However, it can lead to turnovers with pitches and handoff option reads."} {"text":"The Veer can be run out of a variety of formations, although it was primarily designed to be run out of the split-back, aptly named veer formation. It has been used out of the I-formation (and its variants, including the Power-I and Maryland I) and the wishbone formation. Some variants of the triple option have now made the jump to the shotgun formation, which has become a popular option formation since Eric Crouch and the University of Nebraska Cornhuskers used the shotgun option during his 2001 Heisman campaign."} {"text":"The pitch man attempts to maintain proper pitch relation to the quarterback, typically staying a few yards outside the quarterback and moving laterally so that the quarterback may pitch the ball as he goes down the field. 
This entire action takes no longer than a few seconds."} {"text":"The fourth player in the split-veer would be a wide receiver or tight end. His job, depending on the formation, would be to block the force player who is responsible for the flat on the side being attacked. The offense relies on the quarterback making the proper reads, turning up the field (if he decides to keep the ball), and gaining yardage. The dive back must remember not to take the football from the quarterback; rather, the quarterback must give it to him. The pitch man must maintain proper spacing from the quarterback to ensure that the quarterback can make an effective pitch that gains more yardage."} {"text":"The College Football Hall of Fame credits Bill Yeoman with the invention of the veer formation. Yeoman ran that offense with the Houston Cougars beginning in the mid-1960s and continuing through his career at Houston, which concluded in 1986."} {"text":"When an offensive system is devised for a team, the coach must take into account his players, so the veer can be applied to several situations. It can be used for undersized players so that double teams and angles can be used to block defenders. It can be used to isolate defenders and create predictable responses to the offense's actions. If a team is very disciplined, it can take advantage of an undisciplined defense that cannot execute its responsibilities precisely on each snap of the game."} {"text":"The veer offense was adopted by Jack Lengyel, the new head coach of the Marshall University Thundering Herd, prior to the start of the 1971 season, after the 1970 team was killed in a plane crash. Lengyel believed that the veer option offense would be a better offense than the Power I offense he had used at the College of Wooster. Bobby Bowden, then the head coach of West Virginia, offered to tutor Lengyel and his coaches on the intricacies and nuances of the veer option offense. Lengyel installed Reggie Oliver at quarterback. 
The Young Thundering Herd of Marshall would win two games in 1971: a last-second win against Xavier in their first home game after the crash and the homecoming game against ranked Bowling Green."} {"text":"In the Florida State-Houston game in the Gator Bowl in 1968, the 'Noles brought the safeties up; they ignored the QB, running right past him at times, and crashed into the trailing back, usually Paul Gipson. This took the pitch option away. The Veer wasn't stopped, but it was slowed. Florida State won the game 40\u201320."} {"text":"Highly athletic defensive lines can also \"bring the house\" and penetrate the backfield of a veer option offense, disrupting the option read progression and forcing the quarterback to scramble and throw downfield, something the offense is ill-equipped to do. Persistent backfield penetration can result in a preponderance of fullback dive plays, which typically result in low gains, putting the offense in a cycle of low-yardage FB dives and incomplete passes under pressure, effectively neutering the \"option\" portion of the offense."} {"text":"Other successful teams known to use the veer are the Kemmerer Rangers (Wyoming), who have won two state titles in the last three years, and the Baker County Wildcats (Florida), who went 10\u20132 and were ranked 30th in the country."} {"text":"Mount Carmel High School in Chicago, Illinois, has used the veer option under head coach Frank Lenti since 1984. In that time Mount Carmel has won nine state championships, and was crowned team of the decade in Illinois after winning five state titles in the 1990s. Notable players who have gone on to the NFL include Donovan McNabb and Simeon Rice."} {"text":"The Pistol-Flex, or Pistol Double-Slot, is a hybrid of two well-known American football formations: the pistol and the flexbone. It was pioneered in 2009 by Paul Markowski, who is currently an offensive consultant for Chestnut Hill College. 
Combining the strengths of each offensive set results in a formation that is very effective for both passing and running. The triple option can be used from this set very effectively. Markowski has developed a true quadruple option play run out of the Pistol-Flex formation."} {"text":"The base formation of the Pistol-Flex has the QB in a shotgun set four yards behind the center. The B-back is in a three-point stance with his down hand two yards behind the QB's feet. The two slotbacks are set one yard directly behind the offensive tackles to their side. The offensive line splits are all three feet. There are multiple formations that the Pistol-Flex can be run from (Open, Tight, Bone, Box, Twins)."} {"text":"At any given time, there are at least four eligible receivers within one yard of the line of scrimmage, which bodes well for the passing attack."} {"text":"The I formation is one of the most common offensive formations in American football. The I formation draws its name from the vertical (as viewed from the opposing end zone) alignment of quarterback, fullback, and running back, particularly when contrasted with the same players' alignments in the \"T formation\"."} {"text":"The formation begins with the usual 5 offensive linemen (2 offensive tackles, 2 guards, and a center), the quarterback under center, and two backs in line behind the quarterback. The base variant adds a tight end to one side of the line and two wide receivers, one at each end of the line."} {"text":"The exact origin of the I formation is unclear. Some sources credit Charles M. Hollister of Northwestern in 1900; others credit Bob Zuppke in 1914."} {"text":"Tom Osborne, head coach at Nebraska for a quarter century, further popularized the formation in the early 1970s as offensive coordinator (under head coach Bob Devaney) with consecutive national titles in 1970 and 1971. 
He incorporated the option into his I formation scheme beginning in 1980, forming the base of the Nebraska offense for over twenty years, and won three national championships in the 1990s. NFL teams followed the success of the I at the college level and adopted it as well."} {"text":"The I formation is typically employed in running situations. In the I formation, the tailback starts six to eight yards behind the line of scrimmage in an upright position, where he can survey the defense. The formation gives the tailback more opportunities for finding weak points in the defense to run into."} {"text":"The fullback typically fills a blocking, rather than rushing or receiving, role in the modern game. With the fullback in the backfield as a blocker, runs can be made to either side of the line with his additional blocking support. This is contrasted with the use of tight ends as blockers who, being set up at the end of the line, are able to support runs to one side of the line only. The fullback can also be used as a feint\u2014since the defense can spot him more easily than the running back, they may be drawn in his direction while the running back takes the ball the opposite way."} {"text":"Despite the emphasis on the running game, the I formation remains an effective base for a passing attack. The formation supports up to three wide receivers, and many running backs serve as an additional receiving threat. While the fullback is rarely a pass receiver, he serves as a capable additional pass blocker protecting the quarterback before the pass. The running threat posed by the formation also lends itself to the play-action pass. The flexible nature of the formation also helps prevent defenses from focusing their attention on either the run or the pass."} {"text":"Many subtypes of the I formation exist, generally emphasizing the running or passing strengths of the base version."} {"text":"The I formation, in any variant, can also be modified as Strong or Weak. 
This formation is commonly called an Offset I. In either case, the fullback lines up roughly a yard to the side of his usual position. \"Strong\" refers to a shift towards the TE side of the formation (the primary TE's side, or the flanker's side when in a \"big\" 2TE set), while \"weak\" refers to a shift in the opposite direction. These modifications have little effect on the expected play call. However, the Offset I allows a fullback to more easily avoid blockers and get out of the backfield to become a receiver."} {"text":"In the NFL, the I formation is less frequently used than in college, as the use of the fullback as a blocker has given way to formations with additional tight ends and wide receivers, who may be called on to block during running plays. The increasingly common ace formation replaces the fullback with an additional receiver, who lines up along the line of scrimmage. The I will typically be used in short-yardage and goal-line situations. Much of college football has also moved to spread systems, which do away with the fullback entirely; the spread formation, as the name suggests, forces the defense to spread out and cover the entire field."} {"text":"A formation in football refers to the position players line up in before the start of a down. There are both offensive and defensive formations, and there are many formations in both categories. Sometimes, formations are referred to as packages."} {"text":"At the highest level of play in the NFL and NCAA, the one constant in all formations is the offensive line, consisting of the left and right tackle, left and right guard, and a center. These five positions are often referred to collectively as the \"line\", and have the primary role of blocking. By rule there must be two additional players on the line of scrimmage, called ends. These players are eligible receivers and may play near the linemen (tight ends) or farther away (split end or wide receiver). 
Most teams play additional players near (but still off) the line of scrimmage to act as extra pass receivers."} {"text":"Up to four players can be behind the offensive line, but one is always designated the quarterback (defined as the player who receives the ball from the center). Upon the snap of the ball, the quarterback becomes the \"ball carrier\". The ball carrier has five options:"} {"text":"The three other backs can be halfbacks (who primarily carry the ball), fullbacks (who primarily block), or they can play near (but not on) the line of scrimmage to act as extra tight ends or wide receivers. A tight end who fills the role of the fourth back is often called an \"H-back\", and a wide receiver who fills that role is sometimes known as a \"flanker\" or a \"slot\" receiver (depending on where he lines up). Most formations have a \"strong\" side (the side with the tight end, or the side with more players) and a \"weak\" side (the side opposite the tight end, or the side with fewer players)."} {"text":"The ends, which may be either wide receivers or tight ends, may catch a passed ball or receive a handoff."} {"text":"Descriptions and diagrams of offensive formations typically use the following symbols:"} {"text":"The offense is required to set up a formation before a play, subject to several rules:"} {"text":"Two terms often heard in referring to defensive formations are \"box\" and \"secondary\". The \"box\" is defined as an area on the defensive side of the ball, within 5 yards of the line of scrimmage and framed by the offensive tackles. This area is most commonly occupied by defensive linemen and linebackers. The \"secondary\" can refer to the defensive backs as a group, or to the area behind the linebackers usually occupied by defensive backs. The two standard NFL defenses, the 4-3 and the 3-4, have 7 players in the box. 
The phrase \"8 in the box\" is used to indicate that 1 of the 2 safeties has moved into the box to defend against the run."} {"text":"This formation assumes the offense is lined up strong side right (from the offense's point of view). This diagram could be matched up to an offensive formation diagram to make a complete 22 player football field."} {"text":"A trips formation is an offensive football formation, initially used by Joe Gibbs and the Washington Redskins, in which three receivers line up on the same side of the field. The side is usually specified by the quarterback calling \"Trips right\" or \"Trips left\" when he calls the play in the huddle."} {"text":"There are multiple variables of the trips formation, and it may be combined with other types of formations. For example, the call \"Shotgun, trips right, slot left\" formation would indicate that the tight end and two wide receivers would line up on the right side of the field, while two receivers would line up on the left side of the field (one \"wide\", the other slightly off the line of scrimmage in the \"slot\"). The quarterback would line up at least five yards behind the center."} {"text":"The objective of a trips formation is to flood the defense on one side of the field in order to create and exploit holes in zone pass coverage."} {"text":"The Notre Dame Box is a variation of the single-wing formation used in American football, with great success by Notre Dame in college football and the Green Bay Packers of the 1920s and 1930s in the NFL. Green Bay's coach, Curly Lambeau, learned the Notre Dame Box while playing for Knute Rockne in the late 1910s. Rockne learned it from Jesse Harper, who learned it from coach Amos Alonzo Stagg. It contained two ends, and four backs. 
The formation often featured an \"unbalanced line\" where the center (that is, the player who snapped the ball) was not strictly in the \"center\" of the line, but closer to the weak side."} {"text":"Although the formation is essentially extinct among college and professional teams, several high school football teams across the United States have revived the Notre Dame Box offense and have been highly efficient and successful. Three notable high schools that successfully implemented the Notre Dame Box offense extensively are Western Harnett High School in Lillington, North Carolina; Nauset Regional High School in Eastham, Massachusetts; and Isabella High School in Maplesville, Alabama."} {"text":"The success of the Chicago Bears and of Clark Shaughnessy's Stanford Indians with a modernized version of the T-formation in the 1940s eventually led to the demise of the Notre Dame Box, as well as all single-wing variants. The Packers finally switched to the T-formation, after Don Hutson had retired, in 1947. No major NCAA or NFL team has used this formation since, and much of the knowledge (i.e. playbooks and, if it ever existed, film) associated with this formation is no longer available."} {"text":"Modern use of the Notre Dame Box."} {"text":"Use of the Notre Dame Box in modern times has been limited in part due to changes in football rulebooks regarding motion. The frequent shifts in the backfield that are employed by the system are still legal, but teams must now set themselves in a formation for at least one second before snapping the ball or sending a player into \"motion\". This motion player must be moving backward or laterally. 
Canadian football never adopted these changes, and (even though it is not used in that variant of the sport) the original version of the system is still legal."} {"text":"In the late 1990s, Western Harnett High School of Lillington, North Carolina was featured on ESPN after their program experienced a major turnaround credited to their employment of the Notre Dame Box. The head coach of that team, Travis Conner, later moved on to Jacksonville High School in nearby Jacksonville, North Carolina and installed the Notre Dame Box there, as well. He then moved on to Bunker Hill High School and transformed its football program; with the new system, Bunker Hill produced three top-10 rushers, led by Reggie Davis, who rushed for nearly 2,000 yards in a single season."} {"text":"The formation is very prevalent in the north of England, and is used by many teams in BUAFL due to the lack of talented passers as well as the unpredictable weather conditions."} {"text":"In American football, a T formation (frequently called the full house formation in modern usage, sometimes the Robust T) is a formation used by the offensive team in which three running backs line up in a row about five yards behind the quarterback, forming the shape of a \"T\"."} {"text":"Numerous variations of the T formation have been developed, including the Power-T, where two tight ends are used, the Pro T, which uses one tight end and one wide receiver, and the Wing T, where one of the running backs (or wingback) lines up one step behind and to the side of the tight end."} {"text":"Any of these can be run using the original spacing, which produced a front of about seven yards, or the Split-T spacing, where the linemen were farther apart and the total length of the line was from 10 to 16 yards."} {"text":"The T formation is often said to be the oldest offensive formation in American football and is claimed to have been invented by Walter Camp in 1882.
However, as the forward pass was legalized, the original T became obsolete in favor of formations such as the single wing. Innovations, such as a smaller, more throwing-friendly ball, along with the invention of the hand-to-hand snap in the 1930s, led to the T's revival."} {"text":"The T-formation was viewed as a complicated \"gadget\" offense by early football coaches. But NFL owner-coach George Halas and Ralph Jones of the Chicago Bears, along with University of Chicago coach Clark Shaughnessy, University of Texas coach Dana X. Bible, and Notre Dame coach Frank Leahy, were advocates. Shaughnessy was an advisor to Halas in the 1930s while the head coach at the University of Chicago."} {"text":"The T is referenced in the Chicago Bears fight song, \"Bear Down, Chicago Bears\", written after the 1940 championship over Washington. \"We'll never forget the way you thrilled the nation, with your T formation...\""} {"text":"The T formation is still used in a few instances at the high school level. In Utah, the Duchesne High School team set the state record of 48 consecutive wins using the Wing T. Some smaller colleges and high schools, particularly in the Midwest, still use the T. It is also still used on some levels as a goal line formation (often called a \"full house\" backfield today). Its simplicity and emphasis on running make it particularly popular as a youth football formation."} {"text":"In American football, a nickel defense (also known as a 4\u20132\u20135 or 3\u20133\u20135) is any defensive alignment that uses five defensive backs, of whom the fifth is known as a nickelback. The original and most common form of the nickel defense features four down linemen and two linebackers.
Because the traditional 4\u20132 form preserves the defense's ability to stop an opponent's running game, it has remained more popular than its variants, to the extent that even when another formation technically falls within the \"nickel\" definition, coaches and analysts will refer to it by a more specific designation (e.g., \"3\u20133\u20135\" for a lineup of three down linemen and three linebackers) that conveys more information with equal or greater conciseness."} {"text":"In college football, TCU is known to use a nickel defense as its base set, typically playing three safeties and two linebackers. Current Horned Frogs coach Gary Patterson installed the nickel partly out of necessity upon finding that larger and more prominent programs, most notably those of the large public universities in Texas, were able to \"recruit away\" most of the large athletes who would otherwise be available to the TCU program. As it turned out, the nickel proved to be a very good set against the spread offenses proliferating throughout college football in the early 21st century."} {"text":"A common defensive front adjustment for 3\u20134 teams to accommodate the nickel backfield involves putting the two outside linebackers into a three-point stance shading the offensive tackles (i.e., a 5 technique). To complete the adjustment, the 3\u20134 defensive ends are moved to face or shade the offensive guards. The nose tackle is removed for a defensive back. The purpose of this is to leave the four best pass rushers on the field in a long yardage situation. This is not the only adjustment that can be made. 
Bill Arnsparger would often remove linebackers from a 3\u20134 to create nickel and dime sets, replacing them with defensive backs."} {"text":"Zone coverage (also referred to as a zone defense) is a defense scheme in gridiron football used to protect against the pass."} {"text":"Zone coverage schemes require the linebackers and defensive backs to work together to cover certain areas of the field, making it difficult for the opposing quarterback to complete passes. Zone defenses will generally require linebackers to cover the short and midrange area in the middle of the field, in front of the safeties. In the case where one or two linebackers blitz, the remaining linebacker(s) expands his zone to cover the zone responsibilities of the vacating linebacker(s). Often, blitzing will leave larger holes in the pass defense, but it is a gamble the defensive coordinator wants to make to pressure the quarterback into a poor decision and hopefully an interception or at least an incompletion."} {"text":"In the following, \"cover\" refers to the \"shell\" that the defense rolls into after the snap of the ball, more specifically the number of defenders guarding the deep portion of the field."} {"text":"In passing situations, the defense will assign players to guard portions of the field, forming a defensive \"shell\" that the defense hopes will either prevent the offense from completing a pass or ensure a defensive player is able to tackle the receiver after a completed pass. The general terminology used to describe this alignment is \"Cover #,\" with \"#\" being the number of defensive players forming the coverage shell."} {"text":"Cover One is a man-to-man coverage for all the defensive backs except for one player (usually a safety) who is not assigned a man to cover but rather plays deep and reacts to the development of the play. 
Often the safety will remain in a pass coverage position and play a zone defense by guarding the middle of the secondary, reacting to runs or completed passes and double-teaming a receiver if needed."} {"text":"In a traditional Cover 1, the free safety plays deep and all of the other defenders lock in man coverage to an assigned player for the duration of the play. Essentially, during the pre-snap read, each defender identifies the coverage responsibilities and does not change the assignment. Some teams play a variant of the Cover 1 called Cover 7. In Cover 7, the free safety still plays deep, but the underneath coverage is much more flexible and the defenders switch assignments as the play develops in an attempt to improve defensive positions to make a play on the ball. Examples of these switches include double covering a certain receiver and using defensive help to undercut a route to block a throwing lane."} {"text":"Cover 1 schemes are usually very aggressive, preferring to proactively disrupt the offense by giving the quarterback little time to make a decision while collapsing the pocket quickly. This is the main advantage of Cover 1 schemes\u2014the ability to blitz from various pre-snap formations while engaging in complex man-to-man coverage schemes post-snap. For example, a safety may blitz while a cornerback is locked in man coverage with a receiver. Or the cornerback may blitz with the safety rotating into man coverage on the receiver post-snap."} {"text":"The main weakness of the Cover 1 scheme is that there is only one deep defender that must cover a large amount of field and provide help on any deep threats. Offenses can attack Cover 1 schemes by sending two receivers on deep routes, provided that the quarterback has enough time for his receivers to get open. 
The deep defender must decide which receiver to help out on, leaving the other in man coverage, which may be a mismatch."} {"text":"A secondary weakness is inherent in its design: the use of man coverage opens up yards-after-catch lanes. Man coverage is attacked by offenses in various ways that try to isolate their best athletes on defenders, either by passing them the ball quickly before the defender can react or by designing plays that clear defenders from certain areas, thus opening yards-after-catch lanes."} {"text":"Teams that play Cover 2 shells usually subscribe to the \"bend-but-don't-break\" philosophy, preferring to keep offensive players in front of them for short gains while limiting long passes. This is in stark contrast to a more aggressive Cover 1 type scheme, which leaves the offensive team's wide receivers in single man-to-man coverage with only one deep helper. By splitting the deep field between two defenders, the defense can drastically reduce the number of long gains."} {"text":"In Cover 2 the cornerbacks are considered to be \"hard\" corners, meaning that they have increased run-stopping responsibilities and generally defend against shorter passes, although if two receivers run a deep route on a certain side of the field, that side's corner has deep coverage responsibility as well. The \"hard\" corners also generally bear the responsibility of \"pressing\" or \"jamming\" the offensive receivers, disrupting the receivers' intended paths downfield. The scheme also relies heavily on the \"Mike\" (middle) linebacker's ability to quickly drop deep downfield into pass coverage when he reads a pass."} {"text":"A variant of Cover 2 is the Inverted Cover 2, in which either right before or after the snap the corners \"bail\" out while the safeties come up\u2014in effect switching responsibilities. This strategy may be employed to trick a quarterback who has not correctly interpreted the shift.
The main drawback here is that the middle of the field is left open."} {"text":"The advantage of Cover 2 is that it provides great versatility to the defense, as the corners can play run, short pass, and deep pass with the confidence that they have support from two deep safeties."} {"text":"Another disadvantage of Cover 2 is that it leaves only seven men in the \"box\" (the area near the ball at the snap) to defend against the run. In contrast, Cover 1 and Cover 3 usually have eight men in the box."} {"text":"A potential problem with the Cover 2 is that defensive pressure on the quarterback must be provided nearly exclusively by the front linemen, as all other defenders are involved in pass coverage. If the defensive linemen do not provide adequate pressure on the quarterback, the offense is afforded plenty of time to create and exploit passing opportunities. Blitzing in the Cover 2 often creates greater areas of weakness in the defense than in other coverages. Thus, unsuccessful blitzes can prove to be more productive for the offense than in other schemes."} {"text":"In Cover 3, the two corners and free safety each have responsibility for a deep third of the field, while the strong safety plays like a linebacker. This coverage is generally considered to be a run-stopping defense, as it focuses on preventing big pass plays and stopping the run while giving up short passes."} {"text":"On the snap, the CBs work for depth, backpedaling into their assigned zones. One safety moves toward the center of the field. The other safety is free to rotate into the flat area (about 2\u20134 yards beyond the line of scrimmage), provide pass coverage help, or blitz."} {"text":"One of the biggest benefits of the Cover 3 coverage scheme is the ability to walk the strong safety up into the box with minimal to no changes in the coverage, due to the pre-snap center field position of the free safety.
This enables the defense to play strong against the run but still prevent explosion plays such as a long pass or breakaway run. This advantage is most pronounced versus two-tight-end sets, which naturally create 8 holes for running backs; in Cover 2 schemes there are only 7 defenders in the box, leaving 1 hole uncovered or requiring a defender to cover 2 holes."} {"text":"Cover 3 schemes are susceptible to short, timed passes to the outside due to the hard drop of both cornerbacks. This puts pressure on the outside linebackers to react to pass plays and get into their drops quickly if they need to cover a receiver."} {"text":"Another disadvantage of Cover 3 schemes is that they are relatively easy for opposing quarterbacks to diagnose. Because of this, teams will often employ slight wrinkles in their coverage to confuse offenses, such as employing man coverage on one side and zone on the other, or swapping coverage zones between defenders. Cover 3 also leaves the seams open: against four vertical routes, the free safety must choose which seam to defend, leaving the other open."} {"text":"Cover 4 refers to 4 deep defenders each guarding one-fourth of the deep zone. Cover 4 schemes are almost always used to defend against deep passes (see also Prevent defense)."} {"text":"The most basic Cover 4 scheme involves 3 CBs and 2 safeties. Upon the snap, the CBs work for depth, backpedaling into their assigned zones. Both safeties backpedal towards their assigned zones."} {"text":"As with other coverage shells, Cover 4 is paired with underneath man or zone coverage in its most basic form."} {"text":"The main advantage of a Cover 4 defense is that it is extremely difficult for even the best quarterbacks to complete long passes against it.
Therefore, this coverage is generally used as a prevent defense near the end of a game or half, meaning that the defense sacrifices the run and short pass to avoid giving up the big play, with the confidence that the clock will soon expire."} {"text":"Cover 4 also has the advantage of using safeties in run support as opposed to cornerbacks, as would be the case in a Cover 2 scheme. This gives the defense nine in the box and the ability to stop the run with an extra defender on either side. The play-side safety would come up in support on a running play, while the back-side safety would be responsible for the middle third of the field and the cornerbacks would have the deep outside thirds."} {"text":"The main weakness of Cover 4 shells is the large amount of space left open by the retreating defensive backs. Since the defensive backs are working for depth, short pass routes underneath can enable the quarterback to make short- and medium-length passes, as well as isolate a defensive back on a wide receiver near the sideline with little help."} {"text":"Cover 6 schemes call defensive strength to the field instead of to the offensive formation or front, and organize personnel by field-side and boundary-side players. The position of the ball on the field therefore dictates the strength of the defense. In Cover 6 the field safety and field corner cover fourths of the field, and depend on a field-side outside linebacker to support underneath them. The free safety covers the boundary-side deep half and the boundary corner plays the flat. Thus the field side of the coverage is quarters, and the boundary side is Cover 2."} {"text":"The Cover 6 gets its name from the fact that it combines elements of the Cover 2 (the strong safety covering half the field) and the Cover 4 on the opposite side. The Pittsburgh Steelers are a Cover 6 team. The quarters play of the strong-side safety, like the Steelers' Troy Polamalu, allows him to support on runs quickly.
The Tennessee Titans have also been known to use it."} {"text":"On the strong side, the corner and safety play \"Cover 4 rules\", meaning that, as above, the corner and safety each take a quarter of the field, working for depth in their zones. The \"Sam\" linebacker will drop outside to cover the flats. In a 3\u20134, the middle linebacker will cover that side's hook-to-curl zone if not blitzing."} {"text":"On the weak side, the corner and safety play \"Cover 2 rules\", meaning that, as above, the corner stays home in the flats and the safety covers the deep half. The \"Will\" linebacker will play hook-to-curl or blitz depending on the call. In a 3\u20134, usually the \"Will\" or the middle linebacker will blitz from that side."} {"text":"The Cover 6 is also good for calling a corner blitz from the weak side and having the linebacker cover the flats instead."} {"text":"Cover 6 has the disadvantages of both Cover 2 and Cover 4. The field side is generally soft on flat coverage. The field-side corner can be left in single coverage deep as well. On runs, the field side may be spread by a formation with a tight end and two receivers, offering the offense an advantage on the edge. On the boundary side, the area behind the corner toward the sideline is vulnerable, as is the seam between the corner and linebacker."} {"text":"Cover 0 refers to pure man coverage with no deep defender. Similar to Cover 1, Cover 0 has the same strengths and weaknesses but employs an extra rusher at the expense of deep coverage help, leaving each pass defender man-to-man. Cover 0 is an aggressive scheme that allows for numerous blitz packages, as it's easier for players to drop off their coverage and rush the quarterback.
However, there is no \"help over the top\"\u2014if a wide receiver \"beats\" (achieves separation from) his defender, there is no one left in the secondary who can make up the coverage on the receiver, which could result in an easy pass completion and possible touchdown."} {"text":"In American football, an eight-in-the-box defense is a defensive alignment in which 8 of the 11 defensive players are close to the line of scrimmage."} {"text":"The area occupied by defensive linemen and linebackers is often referred to as \"the box\". The box is usually about 3-5 yards in depth and spans the offensive line in width. Normally five to seven defensive players occupy this area, but frequently another player is brought into the box for run support against smashmouth-oriented offensive teams or in short-yardage situations."} {"text":"The obvious advantage of the eight-in-the-box strategy is the extra defender available to stop the opponent's running game, which is the main reason for employing it. The eight-in-the-box scheme is also often used by teams throughout the NFL as a disguise for which players will be coming after the quarterback. This creates difficulty for the offensive linemen because they will not know pre-snap whom they will need to block. Quick decisions will need to be made after the snap of the ball."} {"text":"Buck-lateral is an American football play or a series of plays used in the Single-wing formation. Since the Single-Wing formation lost prominence by 1950, the football play referred to as the Buck-lateral is almost gone from football's vocabulary. However, prior to this time, the buck-lateral play gave fullbacks the option to run, lateral, or hand off the ball to another player.
Running the buck-lateral required a fullback who possessed many specialized skills, as opposed to today's fullback, who mainly blocks and carries the ball infrequently."} {"text":"Before the invention of the Single-Wing offense by Pop Warner, offenses used simple plays designed for runners to attack the defensive front behind massed line blocking. This battering-ram approach usually involved the biggest runner, the fullback, as his main role was to \"buck\" or smash the middle of the defensive front."} {"text":"The term lateral describes a short toss from one back to another that does not advance the ball (see lateral pass). A ball that goes forward to another player is called a forward pass. The pass and the lateral are both allowed to advance the ball when the offense is operating behind their line of scrimmage. Once beyond the line of scrimmage, the lateral is the only means of transferring the ball to another player."} {"text":"The Buck-lateral was a play designed for single-wing fullbacks to receive the toss from the center and start toward the central part of the line to make the play look like a typical smash or buck. However, at some point the fullback might pause to execute one of several deceptive options, usually handing off to passing backs or even keeping the ball and plowing ahead. If the fullback delivered the ball to another back, the new carrier might have several additional options, including handing or lateralling the ball to still another back."} {"text":"Warner's Carlisle formation, or Single-Wing, added additional misdirection and trickery to allow for runners to gain yards by deceiving the defense.
The Single-wing also allowed the offense to put more blockers at the point of attack than the defense could muster."} {"text":"The buck-lateral play was actually a series of plays that started out the same way, with the fullback taking the direct snap from center and then directing his forward movement toward the middle of the line of scrimmage. The play had several scripted or \"read\" options to confuse the defense. The player who was given permission to read the play could determine for himself whether to keep the ball or deliver it to another player. The fullback could basically either keep the ball to pound the middle of the line, or he could give the ball to one of the three other single-wing backs, usually the quarterback. Once in possession, the quarterback could then initiate further permutations of the play."} {"text":"To understand the mechanics of the play, one has to understand the basic terminology of the single-wing formation."} {"text":"The tailback was stationed four and one-half yards behind the short-side guard. In a typical formation, the fullback would line up three and one-half yards behind the long-side guard. The quarterback, or blocking back, lined up one and one-half yards behind the tackle or guard. Finally, the wingback aligned himself outside the opposing defensive tackle, only one yard off the line."} {"text":"In most offenses the tailback was the main ball handler and generator of offense; however, the fullback could also take the direct snap due to his proximity to the tailback. In fact, whenever the ball was snapped, one of the two backs would take the snap while the other feigned taking the snap to confuse the defense."} {"text":"A popular scenario for the buck-lateral saw the fullback with the option to hand off to the quarterback. The quarterback, on taking the ball, could try to sweep the end or toss the ball to the tailback, who had been paralleling the play more deeply in the backfield.
If the tailback took the lateral from the quarterback, he was in position to sweep the end, or even throw the ball to a receiver downfield."} {"text":"Coaches created different versions of the buck-lateral depending on the versatility of the backfield. In one version, the fullback might fake a hand-off to the quarterback, who stood with his back to the defense to hide the lack of an exchange. In another version, the fullback could give the ball to the quarterback, who then might initiate a reverse by giving the ball to the wingback coming back against the flow of the play. In another twist, the quarterback could take the fullback's hand-off and complete a jump-pass."} {"text":"The buck-lateral was especially deceptive and effective, but hard to execute. The single-wing fullback had to have the skills of a modern-day quarterback in handling the ball. Plus, he had to be able to take the punishment associated with bucking the middle of the defense, where the bigger, stronger defensive players were stationed."} {"text":"When the fullback took the snap, defensive players expected the play to hit the center of the line because the traditional role of the fullback was to grind out yardage between the tackles. Defensive players who rushed to stop the fullback at the guard-center gap might be totally surprised if the fullback slipped the ball to the nearby quarterback, who was heading in another direction."} {"text":"Consequently, single-wing teams that could master the buck-lateral series of plays could be successful by always making the defense guess where the ball was going. Of course, if the defense lost sight of the ball during the fakes or laterals, it was at an extreme disadvantage."} {"text":"Today's coaches would call the buck-lateral a gadget play, because it was designed to thoroughly confuse the defense by making its members lose sight of the ball with fakes, counter action, and laterals.
Trick plays are harder to execute and demand considerably more practice time than less complicated plays."} {"text":"The Kansas City Chiefs used the fullback inside run variant of the Buck Lateral series in a fourth-and-short situation to set up their first touchdown."} {"text":"The A-11 offense is an offensive scheme that has been used in some levels of amateur American football. In this offense, a loophole in the rules governing kicking formations is used to disguise which offensive players would be eligible to receive a pass on any given play. It was designed by Kurt Bryan and Steve Humphries of Piedmont High School in California."} {"text":"The scheme was used at the high school level for two seasons before the national governing body of high school football, the National Federation of State High School Associations, closed the scrimmage kick loophole in February 2009, effectively banning important facets of the offense. Due to rules regarding player numbering and eligible receivers, the scheme as originally designed is not usable at most levels of football, including the National Football League and college football."} {"text":"The A-11 offense was to be the basis of the A-11 Football League (A11FL), a professional football league which was scheduled to play its first season in 2015. However, after announcing franchise names and scheduling \"showcase games\" in early 2014, the A11FL folded before taking the field."} {"text":"The A-11 offense was developed in 2007 by head coach Kurt Bryan and offensive coordinator Steve Humphries at Piedmont High School in Piedmont, California. Coming off a 5\u20136 record in 2006, the coaches were looking for an edge to compete against other teams that fielded more top athletes. Bryan and Humphries found a loophole in the rules concerning allowable punt formations, which they used to design an every-down offense in which all 11 (hence the name \"A-11\") players were potentially eligible to receive a forward pass.
Using the A-11, Piedmont's record improved to 7\u20134 in 2007 and 8\u20133 in 2008, with the offense often confusing defenses and scoring more points."} {"text":"While some high school coaches noticed Piedmont's success with the A-11 and began incorporating aspects of the offense into their own playbooks, others called the system \"an unsporting act\" and \"outside of the spirit of the rule code\". Bryan and Humphries began heavily promoting coaching clinics, instructional DVDs, and other materials soon after completing their first season running the offense, which also drew criticism from other coaches."} {"text":"High school athletic associations in North Carolina, West Virginia, Louisiana, and the District of Columbia banned the use of the A-11 for the 2008 season. In February 2009, the National Federation of State High School Associations rules committee voted 46\u20132 to close the loophole allowing the linemen-free formations featured in the A-11. The system's creators petitioned the California Interscholastic Federation to allow use of the offense over the next three seasons on an experimental basis, but the appeal was denied."} {"text":"The scheme's creators modified the system to comply with the rule changes in 2009. Though the offensive personnel is spread out more than in conventional formations, this version of the A-11 abides by the numbering requirements, making it easier for the defense to determine which players could legally go out for a pass. As such, it is similar to spread schemes from the early days of football, such as the Emory & Henry formation. Unlike the original A-11, however, the modified version is legal at most levels of football."} {"text":"The most striking characteristic of the A-11 is its use of a formation in which most offensive players except the center are spread out across the line of scrimmage standing upright.
In conventional football formations, five or more offensive players are offensive linemen, who set up before each play in a three-point stance and who serve exclusively as blockers. Offensive linemen almost never carry the football and are almost always ineligible to catch a forward pass or even advance beyond the line of scrimmage before a pass is thrown. At most levels of football (including the National Football League (NFL), college football, and American high school football), offensive linemen must wear jersey numbers from 50 to 79, marking them as ineligible receivers in all but very limited situations."} {"text":"To use the scrimmage kick formation exemption, the player who receives the snap (presumably the kicker or holder) must stand at least seven yards behind the line of scrimmage. The A-11 places the quarterback in that position, which becomes a deep shotgun formation. This has the effect of reducing the need for offensive line protection, since defensive players have more ground to cover before reaching the passer. The offense also places an additional passing back (similar to the wildcat offense) in the backfield next to the quarterback, creating the potential for either back to receive the snap, pitch to the other back, run or pass the ball, block, or go out for a pass."} {"text":"As mentioned, a loophole in the rules regarding punt formations allowed the A-11 to be used at the high school level until 2009, when the National Federation of State High School Associations rules committee closed the loophole.
A modified version that complies with uniform numbering regulations can still be used."} {"text":"Under NCAA rules, the scrimmage kick formation is allowed on fourth down, on conversion attempts, and in a few other situations that define a scrimmage kick formation, with the additional requirement that \"it is obvious that a kick may be attempted.\" It is otherwise not allowed for most normal plays, making the original A-11 impossible to use on an every-down basis."} {"text":"The A-11 was also not legal under Canadian football rules. There is no scrimmage kick exemption in the Canadian Football League (CFL). Players who wish to change position from an eligible to an ineligible receiver (or vice versa) must physically change their uniform to a number that reflects their eligibility, and must seek permission from the official to do so."} {"text":"Furthermore, until the end of the 2008 season the CFL rule book dictated that a designated quarterback must take all snaps, which made the two-quarterback system used by the A-11 (as well as offenses such as the Wildcat) illegal in the CFL. This rule was later removed by the CFL, mainly so that the Wildcat formation could be used. However, the A-11 is still unusable."} {"text":"Single set back (also known as the \"Lone Setback\", \"Singleback\", \"Ace\", \"Oneback\", or \"Solo\" formation) is an offensive base formation in American football that requires only one running back (usually a halfback) lined up about five yards behind the quarterback. There are many variations on single-back formations, including two tight ends and two wide receivers, one tight end\/three wide receivers, etc.
The running back can line up directly behind the quarterback or offset to either the weak side (away from the tight end) or the strong side (towards the tight end)."} {"text":"This formation has gained popularity in the NFL as teams have started trading out a fullback, or blocking back, in favor of another wide receiver or tight end who is usually faster and better able to receive the ball, while still helping the run game with down-field blocks. The formation is even more versatile and effective if the team has athletic tight ends with good pass-catching ability. It is, moreover, good for bootlegs and reverses."} {"text":"The prevent defense is a defensive alignment in American football that seeks to stop the offense from completing a long pass or scoring a touchdown in a single play, while running out the clock. It is typically used by a defense that is winning by more than a touchdown late in the fourth quarter, or in specific situations such as third-and-very-long, when it seems clear that the offense must pass the football to gain long yardage."} {"text":"The alignment uses five or more defensive backs (or players in that role), preferring fast players over large players. They back up so far that they concede short-yardage plays but try to ensure that no receiver is uncovered downfield or can get behind them."} {"text":"The prevent defense concedes short gains, such as four to eight yards per play, as long as the clock keeps running, but aims to prevent plays resulting in longer gains."} {"text":"Safeties and cornerbacks pull back to a "safe zone" five to ten yards off the line of scrimmage, and the free safety often plays as far as twenty yards back. The defense does not jam receivers on the line. The prevent defense employs zone defense, in which each defensive back is responsible for an area of the field rather than a specific player.
The backs watch the quarterback's eyes to determine where he intends to pass the ball."} {"text":"When used late in the fourth quarter to run out the clock, the sidelines become an important area to defend, as a player who receives a pass near the sideline can run out of bounds and stop the clock. The defender's priority is less to prevent a reception than to keep the receiver in bounds following one. This keeps the clock running and reduces the amount of time the offense has to score."} {"text":"The prevent defense uses five or more defensive backs."} {"text":"The nickel defense has five backs, so named because the nickel is the five-cent coin."} {"text":"The quarter defense has three down linemen, one linebacker, and seven defensive backs."} {"text":"The half-dollar defense has eight defensive backs, no linebackers and three defensive linemen. The rare package is used when the offense needs to score a touchdown on the very next play, such as with a desperation Hail Mary pass. In theory, \"dollar defense\" (nine backs) and \"twoonie defenses\" (ten backs) are also possible but, for practical reasons, are almost never used; similar scenarios may involve linebackers replacing defensive linemen."} {"text":"Professional teams may not have enough defensive backs on the roster to play the quarter or half-dollar defenses, so wide receivers sometimes fill the extra positions, particularly in late-game situations when the receivers' offensive skills can be put to defensive use."} {"text":"When the defense concedes short plays, an offense that can practice clock management effectively can score without executing the long pass the defense seeks to prevent. Some coaches avoid using the prevent defense and choose instead to continue playing the same defensive schemes that seemed to be working well to that point. 
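The trade-off described above, conceding short gains while the clock runs, can be sketched with a toy drive simulation. The six-yards-per-play figure sits inside the four-to-eight-yard range mentioned in the text, but the 35-second runoff per snap is an assumption for illustration:

```python
# Toy model of a prevent defense's clock trade-off (assumed numbers).
# Each in-bounds play gains a conceded short amount while time runs off;
# the drive ends when the offense reaches the goal line or time expires.

def drive_outcome(yards_to_goal, seconds_left,
                  yards_per_play=6.0, seconds_per_play=35.0):
    """Return ('touchdown', plays) or ('clock_expired', plays)."""
    plays = 0
    while yards_to_goal > 0:
        if seconds_left < seconds_per_play:
            return ("clock_expired", plays)   # no time for another snap
        seconds_left -= seconds_per_play
        yards_to_goal -= yards_per_play
        plays += 1
    return ("touchdown", plays)

# Offense at its own 20 (80 yards to go) with two minutes left:
print(drive_outcome(80, 120))  # ('clock_expired', 3)
```

Under these assumptions the offense runs out of time long before covering 80 yards, which is the defense's goal; it also shows why stopping the clock, for instance by getting out of bounds, is exactly what the sideline coverage described above tries to deny.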
John Madden once said, \"All a prevent defense does is prevent you from winning.\""} {"text":"By conceding to the offense many easy gains for short yardage but no big play, the prevent defense can make the end of the game uninteresting for fans."} {"text":"The attempt to prevent a long-yardage play can be a victim of individual effort, as happened to the Denver Broncos in the 2012 AFC Divisional Round playoff game. With less than 40 seconds to play, the Baltimore Ravens needed a touchdown to tie the game and faced a third down from their own 30-yard line. Broncos safety Rahim Moore allowed Baltimore receiver Jacoby Jones to get behind him and catch a 70-yard touchdown pass from Joe Flacco. The Ravens went on to win the game in double overtime."} {"text":"The spread offense is an offensive scheme in gridiron football that typically places the quarterback in the shotgun formation, and \"spreads\" the defense horizontally using three-, four-, and even five-receiver sets. Used at every level of the game including professional (NFL, CFL), college (NCAA, NAIA, CIS), and high school programs across the US and Canada, spread offenses often employ a no-huddle approach. Some implementations of the spread also feature wide splits between the offensive linemen."} {"text":"Spread offenses can emphasize the pass or the run, with the common attribute that they force the defense to cover the entire field from sideline to sideline. Many spread teams use the read option running play to put pressure on both sides of the defense. Similar to the run and shoot offense, passing-oriented spread offenses often leverage vertical (down field) passing routes to spread the defense vertically, which opens up multiple vertical seams for both the running and passing game."} {"text":"The grandfather of the spread offense is Rusty Russell, a graduate of Howard Payne University, in Brownwood, Texas, and coach of Fort Worth's Masonic Home and School for orphaned boys. 
Russell began coaching Masonic Home in 1927, and because his teams were often physically over-matched by other schools, they were called the "Mighty Mites". While there, he deployed the earliest form of a spread offense to great success. Russell's team is the subject of a book by author Jim Dent entitled "Twelve Mighty Orphans: The Inspiring True Story of the Mighty Mites Who Ruled Texas Football"."} {"text":"But, as Bart Wright notes in his 2013 book "Football Revolution: The Rise of the Spread Offense and How It Transformed College Football", Meyer's spread was aimed at "…the defensive rush to the ball." While some later football historians and coaches have confused the Meyer Spread, which relied on great quarterbacks like Baugh and O'Brien to pass around 17 times a game on average, with more contemporary spread offenses, Wright concludes that it is "preposterous that Meyer's offense was any sort of antecedent" to the modern spread offense invented by Jack Neumeier around 1970 (see below)."} {"text":"The spread's first evolution came about in 1956, when former NIU Huskies head coach Howard Fletcher adapted Meyer's spread to the shotgun formation to create what he termed the "Shotgun Spread", a more pass-oriented version. Under Fletcher's newly created offense, quarterback George Bork led the nation in total offense and passing in 1962 and 1963.
Bork became the first man in college football history to pass for 3,000 yards in a season in 1963 while guiding the Huskies to a victory in the Mineral Water Bowl and the NCAA College Division National Championship."} {"text":"The modern spread: Jack Neumeier and "basketball on grass"."} {"text":"While there is no evidence to suggest Neumeier had heard of Rusty Russell or Howard Fletcher in 1970, Jack Neumeier evidently built his offensive theories upon a foundation established by other coaches, including Glenn "Tiger" Ellison, a high school coach from Ohio and a college teammate and friend of legendary Ohio State coach Woody Hayes. Ellison published a book, "Run and Shoot Football: Offense of the Future", in 1965 that found its way into Neumeier's library. In it, Ellison describes his desperate experiments with the "Lonesome Polecat", a sandlot-style formation he called a "departure into insanity", in a successful attempt to avoid a losing season in 1958."} {"text":"Neumeier then took Ellison's ideas and synthesized something even more innovative than the "Run and Shoot." Combining motion, four wide receivers, an occasional no-huddle series and a power running game, along with blocking innovations designed for an undersized line added to the mix by his offensive line coach Jack Mathias, Neumeier's great experiment in 1970 and his tinkering during subsequent seasons took football offenses in a new direction."} {"text":"Another piece of the puzzle Neumeier assembled preparing for the 1970 season came from Red Hickey during Hickey's stint coaching the San Francisco 49ers. Hickey first utilized the shotgun formation in a 1960 NFL game against the Baltimore Colts. The shotgun was based on an old short punt formation dating back to the World War I era, which Pop Warner updated as a double wing formation at Stanford in the 1930s; it featured the quarterback setting up seven yards behind the center to take a long snap.
Hickey thought it might help to slow the Colt pass rush and give the 49ers quarterback another second or two to spot his receivers."} {"text":"A brief sensation for the 49ers, Hickey's shotgun formation only lasted for the final few games of the 1960 season and a few games into 1961. Opponents soon neutralized the formation when they realized that their defenses could take advantage of the need for the center to focus on the long snap before making his block. Linebackers blitzing up the middle collapsed the pocket protecting 49er quarterbacks. By the end of the 1961 NFL season, football coaches universally agreed that the shotgun formation was dead and buried, until Jack Neumeier resurrected it as part of the new spread passing offense he synthesized."} {"text":"Sid Gillman, after a long career, coached the San Diego Chargers throughout the 1960s. Before his lengthy stint with the Chargers, he coached the Los Angeles Rams. An innovator with the use of motion and passing in football offenses, Gillman also revolutionized the use of game films to study opposing teams. As a trademark of his offenses, Gillman utilized the forward passing of his talented quarterback John Hadl to Hall of Fame split end Lance Alworth and flanker Gary Garrison to open up defenses for the Chargers\u2019 rushing game and to move the ball down the field."} {"text":"Gillman continued coaching off and on into the 1980s. During a stint working with the Los Angeles Express of the short-lived United States Football League during the early \u201880s, Gillman became a leading advocate for what some sportswriters referred to as the \u201cace\u201d formation, a variation of the one-back spread offense that evolved after Jack Neumeier's retirement from coaching."} {"text":"In an article written by Bob Oates of the Los Angeles Times in 1984, Gillman talked about evolving trends and the future of football. 
\u201c\u2019The thing that makes the ace formation so effective\u2019 Gillman said, \u2018is that it enables you to do so many more things. Its offensive potential \u2013 with four guys up there in receiving positions \u2013 is mathematically almost limitless. It causes the defense more trouble than any two-back formation.\u2019\u201d"} {"text":"Sports historians have called Gillman the \u201cfather of the passing game,\u201d and his focus on studying game films certainly influenced most football coaches by the early 1960s, including Jack Neumeier. While Gillman's innovations with the passing game inspired many followers, neither Gillman nor his prot\u00e9g\u00e9s had utilized the ace formation or developed any other offense resembling the spread as the 1960s came to a close."} {"text":"The head coach of the San Diego State Aztecs during the mid-\u201860s, Don Coryell, found inspiration in Sid Gillman\u2019s passing game. Coryell had developed a national reputation as one of the most prominent innovators of the I formation during the 1950s. Coryell brought the I formation with him when he joined John McKay\u2019s coaching staff at USC for a short stay in 1961, and it became the signature power running formation at what came to be known as Tailback U under McKay and his successor John Robinson."} {"text":"Football aficionados can trace Coryell\u2019s focus on spacing and downfield movement and separation between receivers back to Dutch Meyer\u2019s 1952 book, \"Spread Formation Football.\" Coryell's emphasis on precisely timed and executed pass routes now seems like the norm at all levels of football. 
But Don Coryell had just begun experimenting with all of these elements in 1970."} {"text":"Nobody dreamed in 1970 that Coach Neumeier's new and innovative one-back spread offense would gradually percolate throughout the football world and eventually become football's dominant offense."} {"text":"In a 2013 article, sports commentator Matt Opper wrote that Neumeier's "offense could be considered ground zero for all that we have come to think of as modern in the game of Football. Spreading the defense horizontally with formations, and vertically with passing concepts. Isolating defenders in match ups where your guy has the best chance to win. It all seems so simple now, but in 1970 when everyone and their mother was running the Veer it truly was revolutionary." Nationally respected sportswriter Bart Wright's 2013 book on the history of the modern spread offense, "Football Revolution," gives clear credit to Coach Neumeier and his 1970 Granada Hills Highlanders team for originating what football coaches across the nation have come to know as "basketball on grass.""} {"text":"In a chapter in Tim Layden's "Blood, Sweat and Chalk" entitled "The One-Back Spread: An L.A. high school coach took a chance and launched an offense – and John Elway and Drew Brees with it," Layden talks about the "radical change" introduced by Neumeier with his 1970 Highlanders and his "wide-open spread game." But it took some amazing luck for Coach Neumeier's football ideas to achieve national attention and ultimately dominance."} {"text":"As frequently happens, following the remarkable success of his 1970 team, other coaches talked about Neumeier's offense and began to incorporate elements of it into their own offensive schemes. Other local high school coaches – mostly competitors – saw it, liked it, copied it and began to utilize it. Today, there are books written about Neumeier's offense.
Coaching workshops introduce coaches to the one-back spread and teach them how to implement it. They also teach coaches how to defend against it."} {"text":"But the story of how the one-back spread offense "went viral," to use today's internet-driven jargon, isn't quite that simple. In the 1970s, there were no coaching clinics, YouTube videos or internet blogs to make the case for the one-back spread offense to high school coaches, much less college or NFL coaches. Most coaches in 1970 looked at innovative passing offenses with disdain. New football concepts spread slowly through the instinctively conservative ranks of football coaches. Today, it's not even clear who coined the phrase "one-back spread offense.""} {"text":"Jack Elway began to utilize the one-back spread in his offense at Northridge during the 1978 season. He took it with him when he became head coach at San Jose State a year later. During his tenure at San Jose State and later at Stanford, Jack Elway became an even more successful proselytizer for the one-back spread offense. Elway worked with Jack Neumeier to teach the offense to a number of prominent members of the coaching profession, most significantly Dennis Erickson. Erickson served as Jack Elway's offensive coordinator at San Jose State."} {"text":"Dennis Erickson initially heard about the spread offense while serving as the offensive coordinator at Fresno State in the late 1970s. Moving on to San Jose State in 1979, he combined his ideas about the offense with Jack Elway's. As a result of the Elway connection, Erickson spent time that year learning about the offense with Jack Neumeier. In fact, according to Matt Opper's 2013 article, by the late 1970s Granada Hills had become "a must-stop destination for college coaches across the nation.""} {"text":"Joe Tiller went on to become an outstanding college head coach at Purdue.
At Purdue, Tiller utilized the one-back spread offense again with tremendous success. His quarterbacks at Purdue playing out of the one-back spread included Kyle Orton and Drew Brees, among others. In 2000, Brees led the Boilermakers to the Rose Bowl with Neumeier\u2019s offense. Tiller\u2019s teams forced the Big Ten to adapt to the challenges posed by the wide-open one-back spread."} {"text":"Brown confirms this lineage for Urban Meyer's offensive theories and success, also connecting the Ohio State coach to Joe Tiller and Rich Rodriguez, among other coaches who have built successful careers coaching variations on the one-back spread offense. So at least some of Urban Meyer's theories about football offenses, leading to Ohio State's most recent national championship, trace directly back to Jack Neumeier."} {"text":"Today, virtually every NFL, college, high school and youth league football offense shows clear signs of Coach Neumeier's influence. Fans can watch elements of Neumeier's offense at every level of play, from peewee league scrimmages to NFL Super Bowls. In the 2016 College Football Playoff National Championship bowl game and the 2016 Super Bowl, all of the offenses were direct descendants of the turbocharged \u201cbasketball on grass\u201d offense that Jack Neumeier created out of desperation for his undersized 1970 Granada Hills High School football team. His offense continues to live on and thrive years after Jack Neumeier's death in 2004."} {"text":"Reflecting on the enduring impact of Neumeier's spread offense, sportswriter Mary Crouse wrote that \u201cIt amuses Neumeier's first guinea pig, [Dana] Potter, to see a college or pro team throw the ball out of the shotgun on first down or attempt 40 passes a game. The same things stirred critics when Neumeier introduced them to the [Los Angeles] City football scene. \u2018I had a lot of coaches tell me Coach Neumeier's offense would never work in college or the pros,' Potter said. 
'So it's hilarious for me to see how many teams are using it now. It's neat to see how his offense has evolved.'""} {"text":"While it took decades for Cactus Jack's aerial attack – the up-tempo one-back spread offense – to percolate throughout the football world, there is no doubt that Coach Neumeier's theories and the success of his 1970 team changed that world forever."} {"text":"The spread offense is designed to open up seams and holes for the offense rather than to focus specifically on either the passing or the running game. However, like all types of offenses, it has sub-types that do emphasize the pass or the run, or even option plays, fakes, and trick plays."} {"text":"The basic pre-snap appearance of the spread offense is constant: multiple receivers on the field. Most contemporary versions of the spread utilize a shotgun snap, although many teams also run the spread with the quarterback under center. Jack Neumeier's 1970 iteration of the spread offense utilized both formations. In addition, the actual execution from those formations varies, depending on the preferences of the coaching staff. While most of these are balanced offenses, such as the one utilized by Larry Fedora's North Carolina Tar Heels, several sub-forms also exist."} {"text":"The spread option is a shotgun-based variant of the classic triple option attack that was prevalent in football well into the 1990s. Notable users of this offense include Ryan Day's Ohio State Buckeyes, Mario Cristobal's Oregon Ducks, Chip Kelly's UCLA Bruins, Scott Frost's Nebraska Cornhuskers, Gus Malzahn's Auburn Tigers, Jim Harbaugh's Michigan Wolverines and Dan Mullen's Florida Gators."} {"text":"The spread option is a run-first scheme that requires a quarterback who is comfortable carrying the ball, a mobile offensive line that can effectively pull and trap, and receivers who can hold their blocks. Its essence is misdirection.
Because it operates from the shotgun, its triple option usually consists of a slot receiver, a tailback, and a dual-threat quarterback."} {"text":"One of the primary plays in the spread option is the zone read, invented and made popular by Rich Rodriguez. The quarterback must read the defensive end and determine whether he is collapsing down the line or playing up-field containment in order to choose the proper play to make with the ball."} {"text":"A key component of the spread option is that the running threat posed by the quarterback forces a defensive lineman or linebacker to "freeze" in order to plug the running lane; this has the effect of blocking the target player without needing to put a body on him."} {"text":"Recently, use of the spread has led to new defenses, most notably the 3-3-5. Traditional defenses use sets with four or five down linemen to stop an offense, but with the growing number of spread offenses, teams are looking to smaller, faster defensive players to cover more of the field. The strategy and philosophy behind this thinking have been widely debated, and many coaches have found success using either a 30 front or a 40 front against the spread."} {"text":"The 2008 Miami Dolphins also implemented some form of the spread offense in their offensive schemes. Lining up in the "wildcat" formation and borrowing from Gus Malzahn's college spread offense, the Dolphins would "direct snap" the ball to their running back, Ronnie Brown, who was then able to read the defense and either pass or keep the ball himself."} {"text":"In recent years, the spread offense has become a popular term in the context of the high school game, thanks to the offense's innovative ways of making the game faster and higher scoring.
While it has changed the game, and teams that successfully run it are scoring more points, there is debate whether the offensive system is as effective as it seems."} {"text":"Some coaches have taken to packaging their offensive systems and marketing them to programs around the country. One example is Tony Franklin, who served as an assistant coach at the University of Kentucky under Hal Mumme, where he developed his offense based on Mumme's "Air Raid" system. Another is Manny Matsakis, inventor of the Triple Shoot offense, a spread system with forms run from the shotgun, the pistol, and under center. Matsakis was an assistant coach under both Mike Leach at Texas Tech and Bill Snyder at Kansas State, and is currently the head coach of Enka High School in Asheville, North Carolina."} {"text":"As a reaction to the success of the spread offense at high-profile colleges such as the University of Florida, innovative high school coaches began retooling the system to work on high school teams. Now the system has been widely adopted, with numerous schools achieving success. Defenses are left with the challenge of defending more of the field than ever before, and the offense gains the advantage of numerous running and passing lanes created by the defense being so spread out."} {"text":"In American and Canadian football, a single-wing formation was a precursor to the modern spread or shotgun formation. The term usually connotes formations in which the snap is tossed rather than handed—formations with one wingback and a handed snap are commonly called "wing T" or "winged T"."} {"text":"Created by Glenn "Pop" Warner, the single wing was superior to the T formation in its ability to get an extra eligible receiver down field."} {"text":"Among coaches, single-wing football denotes a formation using a long snap from center as well as a deceptive scheme that evolved from Glenn "Pop" Warner's offensive style.
Traditionally, the single-wing was an offensive formation that featured a core of four backs: a tailback, a fullback, a quarterback (blocking back), and a wingback. Linemen were set "unbalanced", with two on one side of the center and four on the other. This was done by moving the off-side guard or tackle to the strong side. The single-wing was one of the first formations attempting to trick the defense instead of over-powering it."} {"text":"For much of the history of the single-wing formation, players were expected to play on both sides of the ball. Consequently, offensive players often turned around to play a corresponding location on defense. The offensive backs played defensive back, just as the offensive linemen played defensive line. Unlike teams of today, single-wing teams had few specialists who only played on certain downs."} {"text":"College football playbooks prior to the 1950s were dominated by permutations of the traditional single-wing envisioned by Warner. Two-time All-American Jack Crain's handwritten playbook clearly shows how the University of Texas ran its version of the single-wing circa 1939–1940. University of Texas Coach Dana X. Bible ran a balanced line, which means that there were the same number of linemen on each side of the center. Also, the ends were slightly split."} {"text":"The advent of the T formation in the 1940s led to a decline in the use of single-wing formations. For example, the single-wing coach Dana X. Bible, upon his retirement in 1946, saw his replacement, Blair Cherry, quickly install the T formation like many other college coaches of the day. Wallace Wade said he was "not convinced that the single wing is not a more potent formation than the T. The single wing we used caused the defense to spread.
It called for more intensive coaching on individual assignments.""} {"text":"However, from 1949 to 1957 Henry "Red" Sanders elevated a seldom-distinguished UCLA football program to an elite level with his precision single-wing system, winning a National Championship at UCLA in 1954."} {"text":"The Sutherland single-wing was a variation of the single-wing used with great success by Coach Jock Sutherland in the 1930s and 1940s. Coach Sutherland mastered many forms of the single-wing, but the formation described here is the one he invented, which was named for him."} {"text":"The Sutherland single-wing differs from the traditional single-wing in that the wingback is brought into the backfield as a halfback, flanking the fullback on the opposite side from the tailback. This allows a more flexible running attack to the weak side. Both the tailback and halfback are triple threats in this offense. The formation's weaknesses are that it has less power than the traditional single-wing and that it requires very talented backs to play tailback and halfback effectively."} {"text":"The double-wing is an offensive formation which should not be confused with the Double Wing offense. The double-wing formation is used in many offenses from the youth level through college. The formation was first introduced by Glenn "Pop" Warner around 1912. Among the offenses that use the formation are the double wing, flexbone and wing T offenses. It was the primary formation used by Ara Parseghian when he ran the wing T at Notre Dame, winning National Championships in 1966 and 1973.
In the wing T, "double wing" refers to the Red, Blue and Loose Red formations."} {"text":"The double-wing formation in American football usually includes one wide receiver, two wingbacks, one fullback, and one tight end."} {"text":"The direct snap or toss from the center usually went to the tailback or fullback; however, the quarterback could also take the ball. The tailback was very important to the success of the offense because he had to run, pass, block, and even punt. Unlike today, the quarterback usually blocked at the point of attack. As with his modern-day counterpart, a single-wing quarterback might also act as a field general by calling plays. The fullback was chosen for his larger size so that he could "buck" the line, meaning that he would block or carry the ball between the defensive tackles. The wingback could double-team block with an offensive lineman at scrimmage or even run a pass route."} {"text":"The single-wing formation was designed to place double-team blocks at the point of attack. Gaining this extra blocker was achieved in several ways. First, the unbalanced line placed an extra guard or tackle on one side of the center. Second, a wingback stationed outside the end could quickly move to a crucial blocking position. Third, the fullback and especially the quarterback could lead the ball carrier, producing interference. Finally, linemen, usually guards, would pull at the snap and block at the specified hole. Line splits were always close except for the ends, who might move out from the tackle."} {"text":"The single-wing formation depended on a center who was skilled both at blocking and at tossing the ball from between his legs to the receiving back. The center had to direct the ball to any of several moving backs, with extreme accuracy, as the play started. Single-wing plays would not work efficiently if the back had to wait on the snap, because quick defensive penetration would overrun the play.
The center was taught to direct the ball to give the tailback or fullback receiver a running start in the direction that the play was designed to go."} {"text":"The single-wing was a deceptive formation, with spectators, referees, and defensive players often losing sight of the ball. A backfield player, called a "spinner", might turn 360 degrees while faking handoffs to the other backs, or even keep the ball or pass it. Defensive players were often fooled as to which back was carrying the ball."} {"text":"Single-wing teams used both a standard punting formation and a quick punt, often kicking on second or third downs. The quick punt, or quick kick, saw the tailback-punter quickly backing up five yards as the ball was in the air from the center, to distance himself from rushers. The strategy was to keep defensive halfbacks, expecting a possession play, from dropping back to return the ball. The standard punt formation was often used for punting as well as for running or passing the ball. Most teams had a litany of plays that they might run from a punt formation."} {"text":"Prior to 1930 the football was a rounder, more pronounced oval, a shape called a prolate spheroid. Due to the shape of the ball, single-wing backs handled the ball more like a basketball, with short tosses and underhand lobs. Gradually, balls were allowed to be elongated enough to produce streamlined passes with a spiral. The spiraled ball could be thrown farther with more accuracy, thus increasing the potential for offenses to use the forward pass more frequently."} {"text":"The single-wing quarterback played a different role than modern-day quarterbacks. While the quarterback may have called the snap count due to his position close to the center of the formation, he may not have called the actual play in the huddle. For much of the history of football, coaches were not allowed to call plays from the sideline. This responsibility may have gone to the team captain.
The quarterback was expected to be an excellent blocker at the point of attack. Some playbooks referred to this player as the blocking back. The quarterback also had to handle the ball by faking, handing off, or optioning to other backs."} {"text":"Although the single-wing has lost much of its popularity since World War II, its characteristic features are still prevalent in all levels of modern football. They include pulling guards, double teams, play action passes, laterals, wedge blocking, trap blocking, the sweep, the reverse and the quick kick. Many current offenses, such as the spread option, use single-wing tendencies for running plays, while using wide receivers instead of wingbacks."} {"text":"The current incarnation of the Wildcat offense, which has been adopted by many college, NFL, and high school teams, uses many elements of the single-wing formation."} {"text":"Colton High School of Colton, California, has been a consistently successful single-wing team, reaching the state playoffs in six consecutive seasons."} {"text":"In 1998 the Menominee Maroons won the Michigan high school Class BB football championship, and in 2006 and 2007 they won the Michigan High School Class B football championship, winning 28 consecutive games over those two seasons and reaching the state playoffs in each of the previous 11 years."} {"text":"In 1971 the Corning High School Cardinals of Corning, California had a 9–0 undefeated season utilizing a balanced single-wing offense under coach Tag McFadden. They were the number-one-rated Division 4 school in the state, and McFadden was named coach of the year by Cal-Hi Sports."} {"text":"In 1974 and 1975, St. Mark's School (MA) compiled a 13–1 record running the Princeton Single Wing."} {"text":"In 1980 Coach Ted Hern brought the single-wing to Moriarty High School; the "Fighting" Pintos made three state championship appearances, winning two state titles and recording one undefeated season while suffering only three losses in four seasons.
Coach Frank Ortiz was an assistant coach in the later seasons."} {"text":"Since 1985, Santa Rosa High School has used the single-wing formation under Coach Frank Ortiz. The Lions have made the playoffs every year except three, won their district title 17 times, won the New Mexico AA State Championship in 1993, 1996, 1998, 2007, 2010, 2011, and 2012, and made a total of 13 State Finals appearances."} {"text":"In 2005 St. Mary's of Lynn in Massachusetts won the D4A Eastern Mass title following two consecutive division titles with Ed Melanson running the Single Wing. Prior to Coach Melanson installing the Single Wing there in 2002, St. Mary's had not had a winning season since 1977."} {"text":"In Kansas, Mark Bliss installed the Single Wing offense at Conway Springs High School in 1997, coaching the team to Kansas Class 3A state championships in 1998, 2001, 2002, and 2003. During his seven seasons at Conway Springs, his teams compiled a record of 81\u20134, including a 62-game winning streak. Conway Springs continues to run the Single Wing offense, added state titles in 2004, 2008, and again in 2011, and remains a perennial playoff contender under Coach Matt Biehler."} {"text":"In Kansas, Ed Buller created a football dynasty centered on the Single Wing offense. In his 40 years of coaching, which ended in 1984, Buller's only losing season was his first. Buller compiled a record of 335\u201378\u20137 and coached the Clyde Bluejays to 10 undefeated seasons along with 39 consecutive winning seasons."} {"text":"In Nebraska, Dave Cisar's Screaming Eagle youth football teams have been running the Single Wing offense for 8 seasons. During that period those teams have gone 78\u20135, averaged over 35 points per game, and won two state titles. He did this with six different teams in four different leagues across various age groups. His teams even used the famous \"fullback full spinner series\" along with the other traditional Single Wing plays. 
Coach Cisar published the book \"Winning Youth Football a Step by Step Plan\" in 2006 to help youth coaches install this \"old school\" offense."} {"text":"In Connecticut, Anthony Sagnella runs the single wing with his North Haven High School team, which reached the 2015 Class L state championship game and was defeated by New Canaan, 42\u201335. Tailback Mike Montano was an All-State selection with over 1,800 rushing yards and 30 touchdowns."} {"text":"On September 21, 2008, following the formation's successful adoption at the college and high school levels, the Miami Dolphins used a version of the single-wing offense (specifically the Wildcat offense) against the New England Patriots on six plays, which produced five touchdowns (four rushing and one passing) in a 38\u201313 upset victory."} {"text":"A pro-style offense in American football is any offensive scheme that resembles those predominantly used at the professional level of play in the National Football League (NFL), in contrast to those typically used at the collegiate or high school level. Pro-style offenses are fairly common at top-quality colleges but much less common at the high school level. The term should not be confused with a pro set, which is a specific formation used by some offenses at the professional level."} {"text":"Generally, pro-style offenses are more complex than typical college or high school offenses. They are balanced, requiring offensive lines that are adept at both pass and run blocking, quarterbacks (QBs) with good decision-making abilities, and running backs (RBs) who are capable of running between the tackles. Offenses that fall under the pro-style category include the West Coast offense, the Air Coryell offense, and the Erhardt-Perkins offensive system."} {"text":"Often, pro-style offenses use certain formations much more frequently than do air raid, run and shoot, flexbone, spread, pistol, or option offenses. 
Pro-style offenses typically use fullbacks (FBs) and tight ends (TEs) much more commonly than offenses used at the collegiate or high school levels."} {"text":"Part of the complexity of the offense is that teams at the professional level often employ multiple formations and are willing to use them at any point during a game. One example might be a team using a Strong I formation run (with the FB lined up on the side where the TE is located on the line of scrimmage) on first down, followed by a running play out of the Ace formation on second down, before attempting a pass on third down out of a two-WR shotgun formation."} {"text":"Another aspect of the complexity is that the running game is primarily built on zone blocking or a power run scheme. Both require an offensive line that is very athletic: on one play the linemen may be zone blocking a linebacker, and on the next power blocking a defensive lineman. Most blocking schemes involve a series of rules, or a system within which the blocks operate. As a result, the passing game often employs play-action, typically with the QB dropping back from under center, as a means of passing the ball while building on the running game."} {"text":"Coaches who make the transition from the NFL to the NCAA as head coaches often bring with them their pro-style offenses. Examples include Charlie Weis (former HC at Kansas), Dave Wannstedt (former HC at Pittsburgh), and Bill O'Brien (former HC at Penn State). One positive aspect of employing a pro-style offense is that it can help players transition from the college level to the professional level more quickly, as a result of their familiarity with the system's complexity."} {"text":"The triple option is an American football play used to offer several ways to move the football forward on the field of play. 
The triple option is based on the option run, but uses three players who might run with the ball instead of the two used in a standard option run."} {"text":"There are three basic forms of triple option: the wishbone triple option, the veer triple option, and the I formation triple option. These differ in terms of the personnel on the field and their positioning prior to the start of the play."} {"text":"The wishbone triple option can use several formations including the flexbone or Maryland I. The wishbone triple option is a running play where either the fullback, the quarterback, or one of the halfbacks (also called \"running backs\" [RBs] or \"tailbacks\") runs the ball."} {"text":"If run properly it can be extremely effective, as nearly all defensive players are accounted for by blockers. Once the quarterback or tailback gets beyond the line of scrimmage there should be nobody in front of him, because the tackle, guard, tailback, and wide receiver are all downfield picking up the first threat."} {"text":"The play is called the triple option because the fullback dive is the first option, the quarterback keeping the ball is the second option, and the quarterback pitching to the halfback is the third option."} {"text":"A slight variation of this formation is the \"flexbone\", where the running backs move to just outside the tackles, but still behind the line of scrimmage. The running back used for the third option motions in, and the ball is snapped while he is in motion. The triple option, in this case, is still run mostly the same as the wishbone."} {"text":"The veer triple option uses two halfbacks and a tight end (TE). The \"inside veer\" play is similar to the wishbone triple option, but the dive option is performed by the halfback on the side of the play, and the other halfback becomes the pitch man. The veer is more challenging to run to the weak side (the side without the tight end) because there is no lead blocker for the pitch man. 
The \"outside veer\" moves the halfback dive option outside the offensive tackle, forcing the outside linebacker to stop the halfback dive and forcing the defensive backs to play the pitch option."} {"text":"The triple option can be run out of the I formation as well. With two running backs, it is sometimes called the \"I-veer\", as the play is similar to the two-back veer offense. Three-running-back I formations such as the Maryland I and the stack I are more similar to the wishbone play."} {"text":"From 1980 to 2003, Nebraska deployed an I formation triple option, winning three national titles with it in 1994, 1995, and 1997."} {"text":"In recent years, as spread and zone read offenses have become popular, many teams have begun to run variations of the triple option with the quarterback in the shotgun. This has been greatly popularized by the success of coaches such as Rich Rodriguez, Mark Helfrich, and Urban Meyer. The more traditional version of the triple option uses a quarterback under center and is advocated by the service academy coaches, including Fisher DeBerry, formerly of Air Force, and Paul Johnson, formerly head coach of Navy and Georgia Tech (who installed this offense at Hawai'i and Georgia Southern, the latter school winning several Division I Football Championship Subdivision titles using it)."} {"text":"Paul Johnson, along with former assistant and current Navy head coach Ken Niumatalolo, has had the most success with the triple option\/veer in recent years. The triple option can be used in the spread offense. Teams like Ohio State, Oregon, and Arizona have used an inside zone triple option from the spread. The quarterback reads the defensive end for \"give\" or \"keep\". If the defensive end squeezes down to take the dive, the quarterback will pull the ball and take his reading progression to the outside linebacker or defensive back. 
If the linebacker\/defensive back takes the quarterback, the quarterback will pitch the ball to his running back, who is running in formation with the quarterback."} {"text":"The rule change that resulted in the widespread use of run-pass options (RPOs) by college offenses was controversial. By \"destroy[ing] the ages-old division between passing plays and running plays\", the RPO changes offensive, defensive, and officiating roles. \"The Wall Street Journal\" highlighted the option in the lead-up to the 2017 playoff between Alabama and Clemson, in which both teams \"will [try to] use [it] to win\"."} {"text":"The RPO has also been utilized in the NFL despite rules disallowing linemen from blocking more than one yard downfield on passing plays; NFL QBs must make quicker reads to avoid a penalty if they decide to throw a forward pass."} {"text":"University of Nevada head coach Chris Ault popularized the single-back alignment (and renamed it the \"Pistol\") in 2005. While the pistol offense has been experimented with by dozens of college football teams such as LSU, Syracuse, Indiana, and Missouri, Ault's Nevada Wolf Pack is most strongly associated with the formation. Using the Pistol offense during the 2009 season, Nevada led the nation in rushing at 345 yards a game and was second in total offense at 506 yards per game. The Wolf Pack also became the first team in college football history with three 1,000-yard rushers in the same season: quarterback Colin Kaepernick and running backs Luke Lippincott and Vai Taua."} {"text":"Football Championship Subdivision team James Madison University used \"The Pistol\" to help beat #13 ranked Virginia Tech on September 11, 2010. 
The pistol has also made the transition to the NFL, used mainly by the Carolina Panthers with Cam Newton, the Washington Redskins with Robert Griffin III, and the San Francisco 49ers with the aforementioned Colin Kaepernick, who set the all-time single-game rushing record for a quarterback with 181 yards in an NFL playoff game against the Green Bay Packers. Along with the wildcat, the pistol has added more of a college \"playmaker\" aspect to the professional game."} {"text":"On December 5, 2010, the Pittsburgh Steelers used the Pistol offense so quarterback Ben Roethlisberger could play with a bad foot."} {"text":"In American football, a smashmouth offense is an offensive system that relies on a strong running game, where most of the plays run by the offense are handoffs to the fullback or tailback. It is a more traditional style of offense that often produces a higher time of possession through heavy running. So-called \"smash-mouth football\" is often run out of the I-formation or wishbone formation, with tight ends and receivers used as blockers. Though the offense is run-oriented, pass opportunities can develop as defenses play close to the line. Play-action can be very effective for a run-oriented team."} {"text":"\"Three Yards and a Cloud of Dust\"."} {"text":"This term describes run-heavy offenses such as those used by coach Woody Hayes of Ohio State University in the 1950s and 1960s. A grind-it-out, ball-control offense, it relies on time of possession, utilizing a high percentage of inside running plays off handoffs by the quarterback to advance the ball down the field. Hayes relied primarily on the fullback off-tackle play. A quarterback under Hayes would often throw fewer than 10 passes a game. 
Hayes is credited with saying, \"Three things can happen when you pass the ball, and two of them are bad\"."} {"text":"Pro Football Focus (also written as ProFootballFocus, and often referred to by its initials, PFF) is a website that focuses on thorough analysis of the National Football League (NFL) and NCAA Division-I football in the United States. PFF produces 0-100 Player Grades and a range of advanced statistics for teams and players by watching, charting, and grading every player on every play in every game at both the NFL and FBS levels."} {"text":"PFF was founded by Neil Hornsby in the United Kingdom. Dissatisfied with some limitations of standard statistics, Hornsby began grading players in 2004. The staff gradually expanded over the next few years, and the site was launched in 2007. The 2006 NFL season is the first season for which PFF has complete data. For the 2011 season, PFF provided customized data to three NFL teams, agents, media, and NFL players. In 2014, sports commentator and former NFL player Cris Collinsworth bought a majority interest in the service, which moved its operations to Cincinnati, near where Collinsworth lives in Ft. Thomas, Kentucky. PFF began collecting data for every NCAA Division-I college football game in 2014."} {"text":"As of 2019, PFF provides customized data to all 32 NFL teams, 74 NCAA FBS teams, 4 CFL teams, national\/regional media (e.g., The Washington Post, The Athletic, ESPN), and sports agencies\/agents."} {"text":"PFF grades every NFL player on every play on a scale of -2 to +2 in half-point increments. The grades are based on context as well as performance. A four-yard run that gains a first down after two broken tackles will receive a better grade than a four-yard run on 3rd & 5, where the ball carrier does nothing more than expected. 
A quarterback who makes a good pass that a receiver tips into the arms of a defender will not negatively affect the quarterback's grade on that play, despite the overall negative result for the team."} {"text":"Furthermore, grades are separated by play type. Beyond just an overall grade, an offensive lineman receives one grade for pass-blocking and one for run-blocking. The average grade is meant to be zero, and raw grades are normalized."} {"text":"In watching every game, PFF is also able to record information and create data that is typically unavailable. One example is how frequently individual offensive linemen yield pressure."} {"text":"PFF covers every player on every play of every game at the NFL and major college football levels and creates advanced stats based on the information gleaned from this."} {"text":"PFF has been criticized by the analytics community regarding the accuracy and veracity of its ratings. In contrast to the purely quantitative ratings released by sources like Football Outsiders, TeamRankings, and numberFire, PFF uses qualitative, opinion-based grading, rather than its advanced statistics, as the root of its 0-100 Player Grades. As such, the 0-100 Player Grades are not truly quantitative and may be prone to bias, small sample sizes, or other issues."} {"text":"The hurry-up offense is nearly as old as football itself. John Heisman's 1899 Auburn Tigers ran an early version of the hurry-up. Michigan coach Fielding Yost was known as \"Hurry Up\" Yost; he had Bennie Owen call signals for the next play even while still lying beneath the tackle pile from the previous snap."} {"text":"The first team to employ a version of the no-huddle approach as its normal offensive play strategy was the 1988 Cincinnati Bengals under Sam Wyche, with Boomer Esiason as the quarterback. This approach, called the \"attack offense\", involved a number of strategies, including shortened huddles and huddling much closer to the line of scrimmage than usual. 
The no-huddle approach had been used by many teams before, but only in specific situations and for limited stretches. The strategy proved very effective in limiting substitutions, fatiguing the opposing defense, creating play-calling problems for the defense, and conferring various other advantages. The Bengals' regular employment of this offense was extremely effective, propelling them to their second Super Bowl appearance."} {"text":"In recent times Peyton Manning, formerly with the Indianapolis Colts and later the Denver Broncos, has been best known for this technique, frequently changing the play at the line of scrimmage depending on the coverage he sees from the opposing defense."} {"text":"In 2013, Chip Kelly became head coach of the Philadelphia Eagles and adapted the hurry-up offense that he had used effectively at Oregon to the NFL. During the 2014 season, the Eagles averaged around 22 seconds per play, the fastest pace of any NFL team since the statistic has been kept."} {"text":"Differences between the NFL and college approaches."} {"text":"The two-minute drill is a high-pressure, fast-paced situational strategy in which a team focuses on clock management, maximizing the number of plays available for a scoring attempt before a half (or the game) expires. The tactics employed during this time involve managing players, substitutions, time-outs, and clock-stopping plays to get in as many plays as possible. In the first half, either team may employ the two-minute drill; near the end of the game, however, only a team tied or losing employs the strategy. Most famously, the two-minute drill refers to end-of-game drives by a team tied or trailing by one possession."} {"text":"The two-minute drill is named for the point in the game, frequently after the two-minute warning, when it is employed. 
If significantly more time remains, a team's standard strategies are still viable; if significantly less, a team has little option beyond a Hail Mary pass or the hook and lateral."} {"text":"Finally, as the offense gets closer to scoring, its clock management stance may shift towards running out the clock in an effort to deny the opponent their own opportunity for a two-minute drill."} {"text":"In American football, the West Coast offense is an offense that places a greater emphasis on passing than on running."} {"text":"There are two similar but distinct offensive strategic systems that are commonly referred to as \"West Coast offenses\". Originally, the term referred to the Air Coryell system popularized by Don Coryell. Following a journalistic error, however, it now more commonly refers to the offensive system devised by Bill Walsh while he was the offensive coordinator of the Cincinnati Bengals. The offense is characterized by short, horizontal passing routes in lieu of running plays to \"stretch out\" defenses, opening up the potential for long runs or long passes. It was popularized when Walsh was the head coach of the San Francisco 49ers."} {"text":"Initially, Walsh resisted having the term misapplied to his own distinct system and was especially incensed by the use of the word \"finesse\" in reference to his sophisticated offensive schemes. Sportswriter Paul Zimmerman notes that an article of his so misapplying the term provoked a phone call from an upset Walsh: \"He called me up... (saying) that wasn't his offense\". Still, the moniker stuck. Now the term is commonly used to refer to a range of pass-oriented offenses that may not be closely related to either the Air Coryell system or Walsh's passing strategy."} {"text":"The origins of the offensive system devised by Walsh go back to Paul Brown, coach of the Cleveland Browns and later the Cincinnati Bengals. 
Under Brown's tenure, Walsh was tasked with devising an offensive plan suited to Bengals quarterback Virgil Carter, who had an accurate but relatively weak arm. In response, Walsh created a system based on short, high-percentage passes, favoring straight and direct ten- to fifteen-yard strikes over forty- to fifty-yard \"bombs\". This system compensated for any weakness in the quarterback's arm, as it allowed the ball to be thrown on short and intermediate routes where receivers with running ability could make up yardage after the catch."} {"text":"Bernie Kosar used the term to describe the offense formalized by Sid Gillman with the AFL Chargers in the 1960s and later by Don Coryell's St. Louis Cardinals and Chargers in the 1970s and 1980s. Al Davis, an assistant under Gillman, also carried his version to the Oakland Raiders, where his successors John Rauch, John Madden, and Tom Flores continued to employ and expand upon its basic principles. This is the \"West Coast offense\" as Kosar originally used the term. It is now commonly referred to as the \"Air Coryell\" timed system, however, and the term West Coast offense is instead usually used to describe Walsh's system."} {"text":"The offense uses a specific naming system, with the routes for wide receivers and tight ends receiving three-digit numbers, and routes for backs having unique names. For example, a pass play in three-digit form might be \"Split Right 787 check swing, check V\". This provides an efficient way to communicate many different plays with minimal memorization. 
Conversely, the Walsh \"West Coast offense\" could in theory allow more freedom, since route combinations are not limited to the digits 0-9, but at the price of requiring much more memorization from the players."} {"text":"Walsh formulated what has become popularly known as the West Coast offense during his tenure as assistant coach for the Cincinnati Bengals from 1968 to 1975, while working under the tutelage of mentor Paul Brown. Bengals quarterback Virgil Carter would be the first player to successfully implement Walsh's system, leading the NFL in pass completion percentage in 1971. Ken Anderson later replaced Carter as Cincinnati's starting QB, and was even more successful. In his 16-year career in the NFL, Anderson made four trips to the Pro Bowl, won four passing titles, was named NFL MVP in 1981 (and also appeared in Super Bowl XVI that year), and set what was then the record for completion percentage in a single season in 1982 (70.66%)."} {"text":"Several members of Bill Walsh's coaching tree went on to successfully implement his West Coast offense system."} {"text":"George Seifert succeeded Walsh as San Francisco's head coach in 1989, and won two Super Bowls with the 49ers: one with Joe Montana at quarterback in 1989, and another with fellow Hall of Famer Steve Young in 1994."} {"text":"Paul Hackett was another former assistant coach who once served under Walsh. He served as a 49ers assistant from 1983 to 1985, coaching quarterbacks and wide receivers. During this time, Hackett helped San Francisco win Super Bowl XIX. He next served as offensive coordinator for the Dallas Cowboys under Tom Landry from 1986 to 1988. Hackett would later teach his version of Walsh's offense to several coaches, including former Green Bay Packers head coach Mike McCarthy. 
McCarthy, the Packers head coach from 2006 until December 2018, went on to win a Super Bowl of his own with the West Coast offense in 2010, with the help of star quarterback Aaron Rodgers."} {"text":"One of Holmgren's former assistants, Jon Gruden, has had considerable success running the West Coast offense in his own right. He started his head coaching career with the Oakland Raiders, leading them from 1998 to 2001, and turned the Raiders into a strong playoff contender. Gruden then went on to become head coach of the Tampa Bay Buccaneers, winning Super Bowl XXXVII after the 2002 season. Gruden coached the Buccaneers from 2002 to 2008. After several years as a color commentator on ESPN's Monday Night Football, he signed a deal to return to the Raiders as head coach for the 2018 NFL season."} {"text":"Shanahan also served as head coach of the Washington Redskins from 2010 to 2013, but his time in Washington was significantly less successful than his tenure with the Broncos. Despite guiding the Redskins to the NFC East division title in 2012, along with a trip to the NFL playoffs, he compiled only a 24\u201340 record over four seasons, with an 0\u20131 playoff mark. In all, Shanahan accumulated a record of 178\u2013144: 170\u2013138 in the regular season and 8\u20136 in the postseason, including two Super Bowl victories."} {"text":"Gary Kubiak has had a distinguished career as an NFL head coach in his own right. Kubiak served as the head coach of the Houston Texans from 2006 to 2013. After serving as the Baltimore Ravens' offensive coordinator in 2014, he became head coach of the Broncos in the 2015 season, and won Super Bowl 50."} {"text":"LaVell Edwards and Dewey Warren created an offensive system similar to the West Coast offense at Brigham Young University (BYU) in 1973."} {"text":"One reason for the success of this version of the offense was its simplicity. 
Norm Chow said the offense had around 12 basic pass plays and 5 basic run plays that were run from a variety of formations, with only some plays tagged for extra versatility, so that the players knew the offense by the second day of practice."} {"text":"The high point of the BYU offense was an NCAA Division I-A national football championship in 1984 and a Heisman Trophy for Ty Detmer in 1990. BYU broke over 100 NCAA records for passing and total offense during Edwards' tenure. Several coaches and players associated with BYU's football program had success with this offense at BYU and elsewhere, including Virgil Carter, Mike Holmgren, Andy Reid, Brian Billick, Ted Tollner, Doug Scovil, Norm Chow, Jim McMahon, Steve Young, Ty Detmer, and Steve Sarkisian."} {"text":"The University of Washington Huskies were among the first Pac-10 teams to adopt the offense; in 1970, under coach Jim Owens and quarterback Sonny Sixkiller, they used the \"Sixkiller\" variation of Coryell's West Coast offense with great success. Years later in 2002, under coach Keith Gilbertson and quarterback Cody Pickett, the Huskies ran a variation of Walsh's West Coast offense to a conference championship and a top-four passing attack averaging 352.4 yards per game. Today, the West Coast offense no longer resides only on the West Coast, but can be found in schools across the nation, including Boise State and Vanderbilt. Former Pittsburgh and Stanford head coach Walt Harris also used a variation of the West Coast offense during his stint at Pittsburgh."} {"text":"The popular term \"West Coast Offense\" is more of a philosophy and an approach to the game than it is a set of plays or formations. 
Traditional offensive thinking argues that a team must establish its running game first, which will draw the defense in and open up vertical passing lanes downfield, i.e., passing lanes that run perpendicular to the line of scrimmage."} {"text":"Bill Walsh's West Coast offense differs from traditional offense by emphasizing a short, horizontal passing attack to help stretch out the defense, thus opening up options for longer running plays and longer passes that can achieve greater gains. The West Coast offense as implemented under Walsh features precisely run pass patterns by the receivers, which make up about 65% to 80% of the offensive scheme. With the defense stretched out, the offense is then free to focus the remaining plays on longer throws of more than 14 yards and mid- to long-yardage rushes."} {"text":"Walsh's West Coast offense attempts to open up running and passing lanes for the backs and receivers to exploit by causing the defense to concentrate on short passes."} {"text":"Since most down-and-distance situations can be attacked with a pass or a run, the intent is to make offensive play calling unpredictable and thus keep the defense \"honest\", forcing defenders to be prepared for a multitude of possible offensive plays rather than focusing aggressively on one likely play from the offense."} {"text":"Another key part of the Walsh implementation was \"pass first, run later\": Walsh intended to gain an early lead by passing the ball, then run the ball against a tired defense late in the game, wearing it down further and running down the clock. The San Francisco 49ers, under both Bill Walsh and George Seifert, often executed this strategy very effectively."} {"text":"The majority of West Coast routes occur within 15 yards of the line of scrimmage. 3-step and 5-step drops by the quarterback take the place of the run and force the opposing defense to focus solely on those intermediate routes. 
Contrary to popular belief, the offense also uses the 7-step drop for shallow crosses, deep ins, and comebacks. For instance, past Michigan Wolverines offenses utilized the 5- and 7-step drops about 85% of the time with West Coast pass schemes implemented by then-quarterbacks coach Scot Loeffler. Because of the speed of modern defenses, utilizing only the 3- and 5-step pass game would be ineffective, since the defense could squat and break hard on short-to-intermediate throws with no fear of a downfield pass."} {"text":"The original West Coast offense of Sid Gillman uses some of the same principles (pass to establish the run, quarterback throws to timed spots), but offensive formations are generally less complicated, with more wideouts and motion. The timed spots are often farther downfield than in the Walsh-style offense, and the system requires a greater reliance on traditional pocket passing."} {"text":"Another aspect that makes the West Coast offense one of the most difficult to master is that it requires a deeper connection between quarterback and receiver, and an ability to communicate mid-play. On any given route, a receiver has as many as three options: a hitch, a slant, and a fly, depending on what the defense is showing. The quarterback is responsible for recognizing the defense and the receiver's reaction to it, and adjusting the route if needed. This explains the communication mistakes that commonly occur on West Coast offensive plays, where the quarterback throws to a spot that the receiver is running away from."} {"text":"The West Coast offense requires a quarterback who throws extremely accurately, and often blindly, very close to opposing players. In addition, it requires the quarterback to be able to quickly pick the best of five receivers to throw to, much more quickly than in previously used systems. 
Often, the quarterback has no time to think about the play and must act almost automatically, executing the play exactly as instructed by the offensive coordinator, who calls the plays for him."} {"text":"Another aspect of the West Coast offense is the use of fast, running quarterbacks. In blitz or short-yardage situations, many of the West Coast offense's strengths are negated by defenses clogging running and passing lanes. A running quarterback can compensate by acting as a runner himself, paralyzing an overly aggressive defense. Quarterbacks such as Randall Cunningham and Michael Vick have been successful runners in this offense, as have other notable scrambling quarterbacks such as Jake Plummer, Donovan McNabb, Aaron Rodgers, Russell Wilson, and Tyrod Taylor. The West Coast offense also utilizes play-action passes to fool the defense and get receivers open, which is usually especially effective with running quarterbacks."} {"text":"Although not related to the West Coast offense, the similar \"dink-and-dunk\" offense has also helped quarterbacks who are more adept at older systems. Kurt Warner (a disciple of a variation of Air Coryell) and Ben Roethlisberger (a traditional gunslinger) are notable examples of non-West Coast quarterbacks who found success in the \"dink-and-dunk\" system."} {"text":"In American football, a play is a \"plan of action\" or \"strategy\" used to move the ball down the field. A play begins either at the snap from the center or at the kickoff. Most commonly, plays occur at the snap during a down. These plays range from basic to very intricate. Football players keep a record of these plays in their playbook."} {"text":"A play begins in one of two ways:"} {"text":"Once the play begins, it will continue until one of the following events happens:"} {"text":"When the play ends, the ball is set for the next play. For the first three instances above, the ball is set at the point of its \"maximum forward progress\". 
That means that if a runner is driven back in the process of a tackle OR is ruled down by lack of forward progress, the ball is placed as close to his opponent's goal line as he had gotten before being driven back. If he runs backwards of his own volition, the ball is marked where he goes down. In the case of an incomplete pass, the ball is placed at the previous line of scrimmage."} {"text":"The offensive team must have seven players on the line of scrimmage at the start of a play. Those players may be positioned at any place along the line of scrimmage (which extends all of the way across the playing field)."} {"text":"The defensive team may position as many as 11 players on the line of scrimmage. Usually, there are from 3 to 8 defensive players on the line of scrimmage."} {"text":"In a running play, the ball is advanced beyond the line of scrimmage by a player who receives it from behind the line of scrimmage. The player advancing the ball can be:"} {"text":"Also called \"dive\", \"plunge\", \"buck\", \"guts\", \"slam\" or numerous other names. The most basic run play is a run up the middle. In this case, the ball is handed off from the quarterback to a running back. The back then aims for a predetermined hole between his offensive linemen. This hole can be either between center and guard or between guard and tackle. The offensive line will run block, pushing defenders away from the chosen hole. Often, the fullback will lead block through the hole first to clear a path for the half back or running back."} {"text":"W T G C G T E"} {"text":"The 'bread-and-butter' of a run-oriented offense, this is typically the most common run play. Rather than aiming for a hole in the line, the running back aims for the spot just \"outside\" the tackle. 
This type of play allows for more improvisation by the running back once he is past the line, since there is often more open field in this area than in any run up the middle."} {"text":"In a toss play, the RB \"curves out\" toward the sideline on either side and the QB pitches (\"tosses\") the ball to the RB."} {"text":"In a sweep play, the running back begins by running towards the sideline before heading forward. This motion allows for some of the offensive linemen, often one or both guards, to \"pull\" from their normal positions and establish a \"lane\" for the running back to run through. A lead blocking fullback often leads him through the lane. This play, known as the Packers sweep, was the central play in Vince Lombardi's \"run-to-daylight\" offense that was so successful for the Green Bay Packers of the 1960s."} {"text":"In a trap, a guard on the \"back\" side of the play (away from the direction the fullback or running back is heading) will pull and lead block for the running back (most of the time, the guard will blindside an unblocked down lineman, and kick him out of the play). Often, the fullback will take the place of the guard, and block the opening allowed by this."} {"text":"W T G C G | T E"} {"text":"Also called a \"misdirection\". In this play, the runner begins by taking a step or two \"away\" from his intended path, then doubling back and heading in the opposite direction. Often defenders are keying on the first move of the running back. The defenders commit to the first step, but the play moves in the opposite direction."} {"text":"Counter plays are often (but not always) coupled with \"influence\" blocking, where the offensive line blocks the defense towards (rather than away from) the intended direction of the play. This gambit often causes the defenders to think the play is going in the opposite direction, and they react as such."} {"text":"Also called a \"delay\". 
In a draw play, the offensive line drops into pass blocking positions, and the quarterback takes a drop as though he were going to pass. He then hands the ball off to his running back (or keeps it himself) and runs forward past the rushing defenders. The idea is that the defenders will be tricked into advancing on the quarterback as though it were a pass play, and this will vacate the area just beyond the line of scrimmage for the runner to take advantage of."} {"text":"The quarterback fakes a handoff to the running back and continues running with the ball opposite from the direction the running back was headed. The bootleg can have blockers similar to a \"sweep\" (and in such cases it is often called a \"quarterback sweep\") or it can be run \"naked\", that is, without any blockers at all. A naked bootleg relies on the defense buying the fake handoff and moving to tackle the running back rather than the quarterback."} {"text":"The quarterback takes the snap and immediately dives to one side of the center or the other. This is often a short yardage play designed when only a yard or so is needed for a first down or a touchdown. Often the only players on either side of the ball who know the play is coming are the quarterback and the center (hence the \"sneak\" aspect of it), as the play is often decided by the quarterback upon seeing the defense. The play is often called by a silent signal between quarterback and center (a pinch or a tap in the direction the sneak is headed)."} {"text":"The wide receiver takes a handoff directly from the quarterback. The receiver then may proceed to do one of two things: he either runs the ball towards the line of scrimmage in order to gain yardage, or, more rarely, he attempts to pass to another eligible pass receiver."} {"text":"This play resembles a sweep, but before the running back crosses the line of scrimmage, he hands the ball off to a wide receiver going in the reverse (opposite) direction of where the running back was going. 
If the defense was drawn to the side of the field the running back was going towards, the receiver can outrun the defense to the other side of the field and make a big gain."} {"text":"An option play is a play in which the quarterback holds the ball and runs to either side of the offensive line, waiting for an opportunity to run upfield and advance the ball. At the same time, the running back follows, allowing the quarterback the 'option' of pitching the ball just before he is tackled. This tactic forces defensive players to commit to either preventing the pitch or tackling the quarterback, allowing the offensive team to choose the best result."} {"text":"The option play requires a very fast and mobile quarterback to execute it, and it carries a considerable degree of risk: if the pitch is mishandled, it is a live ball that can be recovered by the defense, and the quarterback is exposed to injury."} {"text":"The option is rarely seen outside of college football, as high school teams lack the skill to execute it properly, and defensive players on professional teams are quick enough to disrupt the play to the point that it does not merit the risk involved, although read-option and run-pass option (RPO) offenses have seen increased use in the NFL since the 2010s with the growing number of dual-threat quarterbacks."} {"text":"College football teams such as West Virginia, Air Force, and Florida often employed this style of play in the 2000s."} {"text":"A common form of the option executed on the high school, collegiate, and occasionally professional levels is the veer."} {"text":"A route is a path or pattern that a receiver in American football and Canadian football runs to get open for a forward pass."} {"text":"A \"go\" or \"fly route\" is a deep route used typically when the receiver has a speed advantage over the defensive back. 
In the route, the receiver will run as fast as possible in a straight line parallel to the sideline, in an attempt to outrun the defender who is covering him."} {"text":"A post is a deep play where wide receivers run straight down the field a short distance (10-15 yards), and then angle in toward the center of the field (toward the goal 'posts') where the ball is caught at high speed. When this play was originally designed, the goal posts were on the \"zero\" yard line, at the front of the end zone; thus, a cornerback in man coverage would be led into the post."} {"text":"In a skinny post, the route is shorter and quicker than a deep post, which may cover 30 or 40 yards. This may also be referred to as a \"glance in\" or a \"bang eight.\""} {"text":"A \"flag\" or \"corner route\" is a deep play where wide receivers run straight down the field a long distance (40\u201350 yards), and then angle out towards the end zone and sideline. It takes its name from the flags that marked the ends of the goal and end lines before the introduction of flexible pylons."} {"text":"An out route will usually feature the receiver running 7 to 10 yards downfield and then making a 90 degree turn towards the sideline."} {"text":"The In or Drag route is the opposite of the Out route. 
As its name suggests, the route will usually feature the receiver running 7 to 10 yards downfield and then making a 90 degree turn towards the center of the field."} {"text":"A receiver takes two steps or more downfield then cuts diagonally across the field behind the linebackers and in front of the safeties."} {"text":"An eligible receiver runs a predetermined number of steps or yards upfield before stopping and turning back in slightly to face the quarterback, in the hopes that the defender cannot react and disrupt the pass before positive yardage is made."} {"text":"Particularly in the highest levels of competition (professional and major college), a play may call for the receiver to 'read' the defensive coverage against him, and run a second route if the first option would be ineffectual. As an example, the receiver may be instructed to begin with a slant route, but if the defender has that covered, switch to an out route. For this to work correctly, the passer must make the same read as the receiver."} {"text":"A screen pass is a pass that is normally thrown to a receiver or running back behind the line of scrimmage. It is thrown behind the line of scrimmage so that the pulling linemen can get their blocks established. There is another screen called a bubble screen where there are 3 receivers bunched together to one side, and after the snap the ball is almost instantly thrown to the one farthest behind the line of scrimmage."} {"text":"The quarterback takes the snap and drops back to fake a handoff to the running back. The quarterback then rapidly pulls the ball back from the faked handoff, trying to hide it from the defense. The running back continues to move upfield as if he has the ball in his hands. 
The offensive line starts to run block, but then quickly goes into pass protection."} {"text":"The receivers appear to block at first, then go into their routes."} {"text":"On a play-action pass, which is essentially the opposite of the draw play, the quarterback hopes to fake the defenders into thinking the offense is going to run the ball. The effect of this play is to slow down the defense's pass rush and to force the defensive backs to make a decision between covering a receiver or coming up to help stop the run."} {"text":"Stunts are a special means of rushing the quarterback, done to confuse the opposing team's offensive line. Properly executing a stunt requires two or more defensive linemen working together. One defensive lineman will take an angled path towards an offensive lineman whom he is not lined up across from. This will usually cause the offensive lineman he is lined up across from to follow him while also occupying the offensive lineman he angled towards. In turn, the defensive lineman who would have been blocked by that second offensive lineman loops behind his teammate and rushes through the gap created when the first offensive lineman followed the angling defender."} {"text":"E T | G C G T W"} {"text":"A blitz occurs when the defense sends non-defensive-line personnel (either linebackers or defensive backs) to rush the quarterback. A blitz is an expansion upon the effective concept of the aforementioned pass rush."} {"text":"In attempting to halt the advance of the football by the offensive team, the defensive team has many options. There are various formations that are commonly employed to defend against a passing attack."} {"text":"Man-to-man coverage is when every receiver is covered by a defensive back or linebacker. It is a coverage often used while blitzing because there are not enough players available to effectively execute zone coverage. 
Man-to-man coverage may be used while not blitzing by teams who have superior defensive backs or against teams with inferior receivers."} {"text":"Zone defense is when defensive players (typically defensive backs and linebackers) are responsible for a specific area on the field during pass coverage. Zones are usually more effective against long passes. When playing in a zone defense, a defensive player is able to observe what the quarterback is attempting to do, anticipate where a pass may be thrown, and perhaps intercept the pass. Zone defenses tend to produce interceptions of passes or hard hits on receivers after they have made pass receptions."} {"text":"Strategy forms a major part of American football. Both teams plan many aspects of their plays (offense) and response to plays (defense), such as what formations they take, who they put on the field, and the roles and instructions each player is given. Throughout a game, each team adapts to the other's apparent strengths and weaknesses, trying various approaches to outmaneuver or overpower their opponent in order to win the game."} {"text":"The goal of the offense is, most generally, to score points. In order to accomplish this goal, coaches and players plan and execute plays based on a variety of factors: the players involved, the opponent's defensive strategy, the amount of time remaining before halftime or the end of the game, and the number of points needed to win the game. Strategically, the offense can prolong their possession of the ball to prevent the opponent from scoring. Offensive scoring chances, or drives, end when they cannot move the ball 10 yards or the ball is turned over via fumble or interception."} {"text":"On offense, there are three types of players: linemen, backs, and receivers. 
These players' positions and duties on the field vary from one offensive scheme to another."} {"text":"The position names (as well as the abbreviations recognized by coaches, players, and fans) vary from one team's playbook to another, but what follow are among the most commonly used:"} {"text":"Backs are so named because they line up behind (in back of) the line of scrimmage at the start of the play."} {"text":"Before the ball is snapped the offensive team lines up in a formation. The type of formation used is determined by the game situation. Teams often have \"special formations\" that they only use in obvious passing situations, short yardage, goal line situations, or formations they have developed for that particular game just to confuse the defense."} {"text":"There are a nearly unlimited number of possible formations \u2013 a few of the more common ones are:"} {"text":"When the team is in formation and the quarterback gives a signal, either by calling out instructions or giving a non-verbal cue (a so-called \"silent count\"), the center snaps the ball to the quarterback and a play begins."} {"text":"A running play occurs when the quarterback hands the ball to another player, who then attempts to carry the ball past the line of scrimmage and gain yards, or the quarterback keeps the ball himself and runs beyond the line of scrimmage. In both cases, the offensive line's main job is to run block, preventing the defensive players from tackling the ball carrier."} {"text":"The choice of running play depends on the strengths of an offensive team, the weaknesses of the defense they are opposing, and the distance needed to score a touchdown or gain a first down. There are many kinds of running plays, including:"} {"text":"When a passing play occurs, the backs and receivers run specific patterns, or routes, and the quarterback throws the ball to one of the players. 
On these plays, the offensive line's main job is to prevent defensive players from tackling the quarterback before he throws the ball (a \"sack\") or disrupting the quarterback in any other way during the play."} {"text":"When successful, passing plays tend to cover more ground than running plays, so they are often used when the offensive team needs to gain a large number of yards. Even when a long gain is not needed, running on every play would let the defense predict the offense's calls. Run plays are also used to tire the defensive linemen in between passing plays in order to protect the QB from sacks."} {"text":"One general rule teams must take into account when creating their passing strategy is that only certain players are allowed to catch forward passes. If a player who is not an eligible receiver catches a thrown pass, the team could be penalized. However, if prior to a play the team reports to the referee that a normally ineligible receiver will act as an eligible receiver for one play, that player is allowed to catch passes. Teams will use this strategy from time to time to confuse the defense or force them to devote more attention to possible pass catchers."} {"text":"Using a combination of passing plays and running plays, the offense tries to gain the yards needed for a first down, touchdown, or field goal. Over the years several football coaches and offensive coordinators have developed some well-known and widely used offensive strategies:"} {"text":"Distinct from the offensive strategies or philosophies (which govern how a team moves the ball down the field, whether by downfield passes, short passes, inside runs, etc.) are the ways in which plays are called. These play calling systems often developed alongside certain offensive strategies, though the systems themselves can work with any strategy. The differences between the systems focus on the specific language used to communicate plays to players. 
In the NFL, three basic systems predominate:"} {"text":"The goal of defensive strategy is to prevent the opposing offense from gaining yards and scoring points, either by preventing the offense from advancing the ball beyond the line of scrimmage or by the defense taking the ball away from the offense (referred to as a turnover) and scoring points themselves."} {"text":"On defense, there are three types of players: linemen, linebackers, and defensive backs (also called secondary players). These players' specific positions on the field and duties during the game vary depending on the type of defense being used as well as the kind of offense the defense is facing."} {"text":"The defensive line lines up in front of the offensive line. The defensive lineman's responsibility is to prevent the offensive line from opening up running lanes for the running back or to sack the quarterback, depending on whether the play is a running or passing play. Most of the time, defensive linemen attack the offensive line, but in some plays they drop back in pass coverage to confuse the opposing team."} {"text":"Linebackers stand behind the defensive linemen or set themselves up on the line of scrimmage. Depending on the type of defensive strategy being used, a linebacker's responsibilities can include helping to stop the run, rushing the quarterback, or dropping back into pass coverage."} {"text":"Defensive backs stand behind the linebackers. 
Their primary responsibility is pass coverage, although they can also be involved in stopping the run or rushing the quarterback."} {"text":"By far the most common alignments are four down linemen and three linebackers (a \"4\u20133\" defense), or three down linemen and four linebackers (\"3\u20134\"), but other formations such as five linemen and two linebackers (\"5\u20132\"), or three linemen, three linebackers, and five defensive backs (\"3\u20133\u20135\") are also used by a number of teams."} {"text":"As with offensive formations, there are many combinations that can be used to set up a defense. Unusual defensive alignments are constantly used in an effort to neutralize a given offense's strengths. In winning Super Bowl XXV, the New York Giants played with two down linemen, four linebackers and five defensive backs, a strategy that prevented their opponents, the Buffalo Bills, a team with a strong passing game, from completing long passes. In a 2004 game, the New England Patriots used no down linemen and seven linebackers for two plays against the Miami Dolphins."} {"text":"Some of the more familiar defensive formations include:"} {"text":"The defense must wait until the ball is snapped by the opposing center before they can move across the line of scrimmage or otherwise engage any of the offensive players. Once an opposing offense has broken their huddle and lined up in their formation, defensive players often call out instructions to each other to make last-second adjustments to the defense."} {"text":"To prevent the opposing offense from gaining yards on the ground, a defense might put more emphasis on their run defense. This generally involves placing more players close to the line of scrimmage to get to the ball carrier more quickly. 
This strategy is often used when the opposing offense only needs to gain a few yards to make a first down or score a touchdown."} {"text":"When the defense believes the opposing offense will pass the ball, they go into pass defense. There are two general schemes for defending against the pass:"} {"text":"There are times when a defense believes that the best way to stop the offense is to rush the quarterback, which involves sending several players charging at the line of scrimmage in an attempt to tackle the quarterback before he can throw the ball or hand it to another player. Any player on the defense is allowed to rush the quarterback, and many schemes have been developed over 50 years that involve complicated or unusual blitz \"packages\"."} {"text":"Defensive strategies differ somewhat from offensive strategies in that, unlike offenses that have very specific, detailed plans and assignments for each player, defenses are more reactive, with each player's general goal being to \"stop the offense\" by tackling the ball carrier, breaking up passing plays, taking the ball away from the offense, or sacking the quarterback. Whereas precision and timing are among the most important parts of offensive strategy, defensive strategies often emphasize aggressiveness and the ability to react to plays as they develop."} {"text":"Nevertheless, there are many defensive strategies that have been developed over the years that coaches use as a framework for their general defense, making specific adjustments depending on the capabilities of their players and the opponent they are facing."} {"text":"Some of the most commonly known and used defensive strategies include:"} {"text":"A special team is the group of players who take the field during kickoffs, free kicks, punts, and field goal attempts. 
Most football teams' special teams include one or more kickers, a long snapper (who specializes in accurate snaps over long distances), kick returners who catch and carry the ball after it is kicked by the opposing team, and blockers who defend during kicks and returns."} {"text":"Most special teams are made up of players who act as backups or substitutes on the team's offensive and defensive units. Because of the risk of injury, it is uncommon for a starting offensive or defensive player to also play on a special teams unit."} {"text":"A variety of strategic plays can be attempted during kickoffs, punts, and field goals\u2014to surprise the opposition and score points, gain yardage or first downs, or recover possession of the kicked ball."} {"text":"A kickoff occurs at the beginning of each half, overtime period (not in college), and following each touchdown, successful field goal, or safety. Strategically, the coach of the kicking team may choose to have his players kick the ball in one of several ways:"} {"text":"The \"no punting\" strategy is one that forsakes the practice of punting and instead attempts to make fourth down conversions on as many plays as possible. It has been implemented at Pulaski Academy, a top-ranked prep school, and has been advocated by Gregg Easterbrook in his \"Tuesday Morning Quarterback\" column and by author Jon Wertheim. 
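The case for going for it on fourth down is usually framed as an expected-value comparison: weigh the points a team can expect from attempting the conversion against the points it can expect after punting. A minimal sketch follows; every number in it (the conversion probability and the expected-point values) is invented purely for illustration and is not an estimate from any published analysis.

```python
def go_for_it_ev(p_convert: float, ev_if_convert: float, ev_if_fail: float) -> float:
    """Expected points of attempting a fourth-down conversion."""
    return p_convert * ev_if_convert + (1 - p_convert) * ev_if_fail

# Hypothetical 4th-and-2 near midfield (all numbers invented for
# illustration): a 55% conversion chance worth +2.0 expected points if
# converted and -1.5 if turned over on downs, versus an assumed +0.5
# expected points after punting.
ev_go = go_for_it_ev(0.55, 2.0, -1.5)
ev_punt = 0.5
print("go for it" if ev_go > ev_punt else "punt")  # → punt
```

With these made-up inputs the comparison happens to favor punting (0.425 expected points versus 0.5); the no-punting argument is that with realistic numbers the comparison favors the conversion attempt far more often than coaches' behavior suggests.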
Fourth down decisions to punt have been analyzed mathematically by David Romer."} {"text":"Kicks through the uprights are worth one point when attempted as an extra point after a touchdown, or three points when attempted as a field goal, which a team tries in the event that it does not score a touchdown but feels it is positioned close enough for the kicker to make the attempt."} {"text":"Thus it is strategically important for kicking teams to get as close to the ball as possible after a punt, so that they may quickly tackle a returner, down the ball as close to the opposing team's end zone as possible, and (if possible) recover the ball after a fumble and regain possession of the ball."} {"text":"The Hidden Game of Football is an influential book on American football statistics published in 1988 and written by Bob Carroll, John Thorn, and Pete Palmer. It was the first systematic statistical approach to analyzing American football in a book and is still considered the seminal work on the topic."} {"text":"Football Outsiders (FO) is a website started in July 2003 which focuses on advanced statistical analysis of the NFL. The site is run by a staff of regular writers, who produce a series of weekly columns using both the site's in-house statistics and their personal analyses of NFL games."} {"text":"In 2005 and 2006, the site partnered with FOXSports.com to cross-publish many of the Outsiders' regular features, including power rankings based on a \"weighted\" version of the DVOA (Defense-adjusted Value Over Average) statistic. In 2007, Football Outsiders content appeared on FOXSports.com (in a reduced capacity) along with AOL Sports and ESPN.com. Since 2008, the site has partnered exclusively with ESPN and provides mostly ESPN Insider content. In 2009, Football Outsiders began analyzing college football using similar statistical principles."} {"text":"Football Outsiders was launched in August 2003 by Aaron Schatz, with two regular columns, one of which used an early version of the proprietary DVOA statistic. 
The original purpose of the site was to disprove a statement by \"Boston Globe\" reporter Ron Borges that the 2002 New England Patriots failed to make the postseason because they could not establish the run. Over the course of time, the site added more writers, and hosted Gregg Easterbrook for part of 2003."} {"text":"Between 2004 and 2005, the site introduced new statistics such as Defense-adjusted Points Above Replacement (DPAR, later Defense-adjusted \"Yards\" Above Replacement, DYAR) and Adjusted Line Yards (ALY). In 2005, the site began to cross-publish many of its columns on FOXsports.com. In 2005, Football Outsiders also took over publication of \"Pro Football Prospectus\", a book giving a preview of the upcoming NFL season. In 2009, the annual was renamed \"Football Outsiders Almanac\"."} {"text":"Currently, the site has incorporated the 1983-2020 NFL seasons into their statistics."} {"text":"Football Outsiders has devised a series of proprietary formulas to calculate different advanced metrics."} {"text":"DVOA (Defense-adjusted Value Over Average) calculates a team's success based on the down-and-distance of each play during the season, then calculates how much more or less successful each team is compared to the league average. According to Football Outsiders, DVOA \"breaks down every single play of the NFL season to see how much success offensive players achieved in each specific situation compared to the league average in that situation, adjusted for the strength of the opponent. ... Football has one objective -- to get to the end zone -- and two ways to achieve that, by gaining yards and getting first downs. 
These two goals need to be balanced to determine a player's value or a team's performance.\""} {"text":"There is a separate DVOA measurement for special teams, which \"compare[s] each kick or punt to the league average based on the point value of field position at the position of each kick, catch, and return.\""} {"text":"DYAR (Defense-adjusted Yards Above Replacement) calculates each player's cumulative value above or below a \"replacement-level\" alternative. DYAR differs from DVOA in calculating a player's total value through the course of a year, and not on a play-for-play rate. As Football Outsiders states, \"DVOA, by virtue of being a percentage or rate statistic, doesn\u2019t take into account the cumulative value of having a player producing at a league-median level over the course of an above-average number of plays. By definition, a median level of performance is better than that provided by half of the league and the ability to maintain that level of performance while carrying a heavy work load is very valuable indeed.\""} {"text":"Adjusted Line Yards (ALY) \"differentiate[s] between the contribution of the running back and the contribution of the offensive line.\" ALY attempts to \"separate the effect that the running back has on a particular play from the effect of the offensive line (and other offensive blockers) and the effect of the defense. ... Yardage ends up falling into roughly the following combinations: Losses, 0-4 yards, 5-10 yards, and 11+ yards. In general, the offensive line is 20% more responsible for lost yardage than it is for yardage gained up to four yards, but 50% less responsible for yardage gained from 5-10 yards, and not responsible for yardage past that. 
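The weighting just quoted can be read as a simple per-carry credit scheme: losses count at 120%, yards 0 through 4 count in full, yards 5 through 10 count at half, and yardage beyond 10 yards is not credited to the line at all. The sketch below implements only that reading; the actual ALY metric additionally adjusts for down, distance, situation, and opponent, which is not modeled here.

```python
def line_yards(carry: int) -> float:
    """Credit the offensive line for one carry using the weights quoted
    above: losses at 120%, yards 0-4 in full, yards 5-10 at half,
    and no credit for yardage beyond 10 yards."""
    if carry < 0:
        return 1.2 * carry
    credited = min(carry, 4)                      # first 4 yards: full credit
    credited += 0.5 * max(0, min(carry, 10) - 4)  # yards 5-10: half credit
    return credited                               # yards 11+: no credit

# Losses are weighted up; the long end of a breakaway run is credited
# to the back, not the line.
print([round(line_yards(c), 1) for c in [-3, 2, 7, 15]])  # → [-3.6, 2.0, 5.5, 7.0]
```

Note how a 15-yard carry earns the line the same 7.0 credited yards as any run of 10 or more, reflecting the premise that the line's contribution ends once the back is into the open field.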
Thus, the creation of Adjusted Line Yards.\""} {"text":"Drive Stats calculate a team's average success rate on a possession-by-possession basis: \"[E]ach team's total number of drives as well as average yards per drive, points per drive, touchdowns per drive, punts per drive, and turnovers per drive, interceptions per drive, and fumbles lost per drive. LOS\/Drive represents average starting field position (line of scrimmage) per drive from the offensive point of view. Drive stats are given for offense and defense, with NET representing simply offense minus defense.\""} {"text":"Another metric Football Outsiders uses is the Pythagorean projection, which estimates wins in a season by a formula originally conceived by baseball analyst Bill James: the square of team points scored, divided by the sum of the squares of team points scored and allowed."} {"text":"The 2011 edition of \"Football Outsiders Almanac\" states, \"From 1988 through 2004, 11 of 16 Super Bowls were won by the team that led the NFL in Pythagorean wins, while only seven were won by the team with the most actual victories. Super Bowl champions that led the league in Pythagorean wins but not actual wins include the 2004 Patriots, 2000 Ravens, 1999 Rams and 1997 Broncos.\""} {"text":"Although Football Outsiders Almanac acknowledges that the formula had been less successful in picking Super Bowl participants from 2005-2008, it reasserted itself in 2009 and 2010."} {"text":"Furthermore, \"[t]he Pythagorean projection is also still a valuable predictor of year-to-year improvement. Teams that win a minimum of one full game more than their Pythagorean projection tend to regress the following year; teams that win a minimum of one full game less than their Pythagorean projection tend to improve the following year, particularly if they were at or above .500 despite their underachieving. 
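The Bill James-style formula described above can be written out directly. The exponent of 2 below follows the "squares of points" description in the text; published football variants of the formula often use other tuned exponents, so treat this as an illustrative sketch rather than Football Outsiders' exact implementation.

```python
def pythagorean_wins(points_for: int, points_against: int, games: int = 16) -> float:
    """Estimate season wins from points scored and allowed using the
    exponent-2 form described in the text:
    wins = games * PF^2 / (PF^2 + PA^2)."""
    ratio = points_for ** 2 / (points_for ** 2 + points_against ** 2)
    return games * ratio

# A team that outscores opponents 400-300 over a 16-game season
# projects to roughly 10.2 wins.
print(round(pythagorean_wins(400, 300), 1))  # → 10.2
```

The 2008 Saints example quoted in the text fits this pattern: a point differential strong enough to project 9.5 wins, against only 8 actual wins, flagged them as a likely improver.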
For example, the 2008 New Orleans Saints went 8-8 despite 9.5 Pythagorean wins, hinting at the improvement that came with the next year's championship season.\""} {"text":"Each year, Football Outsiders calculates the best and worst teams, per play, with the DVOA metric (see above). Below is a list of the highest- and lowest-rated teams in the league in each year from 1985-2019."} {"text":"Pro Football Prospectus and Football Outsiders Almanac."} {"text":"From 2005 through 2008, Football Outsiders published the \"Pro Football Prospectus\" book each year before the football season began. It included an essay for each team analyzing the previous season, evaluating off-season moves, and projecting future performance."} {"text":"In 2009, Football Outsiders did not publish a \"Pro Football Prospectus\" volume, but instead produced the self-published \"Football Outsiders Almanac 2009\". The reason for this is explained in the book:"} {"text":"So why the name change, and why aren\u2019t we in bookstores?"} {"text":"For those who don\u2019t know, our first four books were published through an agreement with Prospectus Entertainment Ventures, the company that owns Baseball Prospectus (as well as the expansion projects Basketball Prospectus and Puck Prospectus). It was PEV that had the publishing contract (first with Workman, then Plume). This year, for various reasons, Plume decided they no longer wanted to publish books related to other sports besides baseball. Other publishers were interested in doing our book, but by the time Plume made their decision, it was too late to get on the publication schedule for 2009."} {"text":"Bump and run coverage is a strategy formerly widely used by defensive backs in American professional football in which a defender lined up directly in front of a wide receiver and tried to impede him with arms, hands, or entire body and disrupt his intended route. 
This originated in the American Football League in the 1960s, one of whose earliest experts was Willie Brown of the Oakland Raiders. Mel Blount of the Pittsburgh Steelers specialized in this coverage to such an extent that it caused numerous rule changes (see below) strictly limiting when and where a defender may make contact with a potential receiver, in order to make it easier for receivers to run their routes and increase scoring."} {"text":"In contrast, under NCAA rules, contact is allowed anywhere on the field as long as the contact is in front of the defender and a pass is not in the air."} {"text":"This play works well against routes that require the receiver to be in a certain spot at a certain time. The disadvantage, however, is that the receiver can shed contact and get behind the cornerback for a big play. This varies from the more traditional defensive formation in which a defensive player will give the receiver a \"cushion\" of about 5 yards to prevent the receiver from getting behind him. In the NFL, a defensive back is allowed any sort of contact within the 5-yard bump zone except for holding the receiver; otherwise, the defensive back can be called for an illegal contact penalty, costing 5 yards and an automatic first down. The rule, enforced since 1978, is known colloquially as the Mel Blount Rule."} {"text":"In the sports of American football or Canadian football, icing the kicker or freezing the kicker is the act of calling a timeout immediately prior to the snap in order to disrupt the process of kicking a field goal. The intent is to throw the kicker off their routine and force them to feel pressure for a longer period of time. The tactic is used at the collegiate and professional levels, although its effectiveness has not been proven."} {"text":"In order to ice a kicker, either a player or a coach on the defending team will call a timeout just as the kicker is about to attempt a game-tying or game-winning field goal. 
This is intended either to stop the kick just as the kicker is mentally prepared, or to allow the kicker to kick immediately after the timeout so that the initial kick does not count, in an attempt to mentally disrupt the kicker before the actual kick. If the tactic is successful, the kicker will miss the kick due to choking. Should the kicker make the subsequent kick, the attempt to ice the kicker is considered unsuccessful."} {"text":"One variant of this tactic, attributed to former Denver Broncos coach Mike Shanahan, is to call time out from the sidelines just before the ball is snapped. This prevents the kicking team from realizing the kick will not count until after the play is over. However, this has the potential to backfire: the invalid first kick could miss or be blocked, only to be followed by a successful second kick."} {"text":"A similar tactic, known as icing the shooter, is also common in basketball. A team may call a time out just before the opposing team's free-throw shooter is given the ball on the final free throw, in an attempt to disrupt the shooter, typically when a missed free throw would either give the calling team a chance to win the game with a successful field goal or allow it to preserve a lead."} {"text":"In American football, Air Coryell is the offensive scheme and philosophy developed by former San Diego Chargers coach Don Coryell. The offensive philosophy has also been called the \"Coryell offense\" or the \"vertical offense\"."} {"text":"With Dan Fouts as quarterback, the San Diego Chargers' offense was among the greatest passing offenses in National Football League history. The Chargers led the league in passing yards an NFL-record six consecutive years from 1978 to 1983 and again in 1985. They also led the league in total offensive yards from 1978 to 1983 and in 1985. 
Dan Fouts, Charlie Joiner, and Kellen Winslow would all be inducted into the Pro Football Hall of Fame from those Charger teams."} {"text":"The pro set was the default NFL scheme prior to Don Coryell. It was generally a running offense that used play action fakes to set up deep passing attempts when defenses stacked up vs the running game. On pass plays, it provided one or even two backs to help protect the quarterback."} {"text":"The pro set features a tight end, two wide receivers, a halfback and a fullback, often split behind the quarterback. While QBs can take snaps from under center or from the shotgun position, QBs generally take snaps from under center in the pro set to allow for more effective use of the play action pass. Offenses tended to be ball-control, grind-it-out style offenses. In 1978, the contact from defenders on receivers was minimized with the passing of the Mel Blount Rule."} {"text":"Coryell opens up passing in the NFL."} {"text":"Today most NFL offenses' passing games are at least partially based on Coryell conventions."} {"text":"Former coach of the St. Louis Rams, Mike Martz, says \"Don is the father of the modern passing game. People talk about the West Coast offense, but Don started the 'West Coast' decades ago and kept updating it. You look around the NFL now, and so many teams are running a version of the Coryell offense. Coaches have added their own touches, but it's still Coryell's offense. He has disciples all over the league. He changed the game.\"."} {"text":"The offense did not have any set formations, as receivers could line up anywhere on any given pass play. Passes were thrown to a spot before the receiver even got there, allowing defenders no hint where the pass was being targeted. Each receiver had two or three different route options they could adjust depending on the coverage during the play. Throwing a deep pass was the first option on each play. 
Coryell's offense had more progressions than Gillman's, with backup options for screen passes and underneath routes."} {"text":"The Coryell offense is a combination of deep and mid range passing and power running. The offense relies on getting all five receivers out into patterns that combined stretched the field, setting up defensive backs with route technique, and the quarterback throwing to a spot on time where the receiver can catch and turn upfield. Pass protection is critical to success because at least two of the five receivers will run a deep in, skinny post, comeback, speed out, or shallow cross."} {"text":"Overall, the goal of the Coryell offense is to have at least two downfield, fast wide receivers who adjust to the deep pass very well, combined with a sturdy pocket quarterback with a strong arm. The Coryell offense uses three key weapons. The first is a strong inside running game, the second is its ability to strike deep with two or more receivers on any play, and the third is to not only use those two attacks in cooperation with each other, but to include a great deal of mid-range passing to a TE, WR, or back."} {"text":"After the Chargers in 1980 acquired running back Chuck Muncie, the offense started using a single set back featuring Muncie as the lone running back and adding a second tight end into the game. When defenses countered with extra defensive backs, the offense would run the ball. Joe Gibbs, the Chargers offensive coordinator at the time, said that marked \"the evolution of the one-back offense.\""} {"text":"Originally it was known as the West Coast offense until an article about San Francisco 49ers Head Coach Bill Walsh in \"Sports Illustrated\" in the early 80s incorrectly called Walsh's offense \"the West Coast offense,\" and this mis-labelling stuck. 
Subsequently, Coryell's offensive scheme was referred to as \"Air Coryell\"\u2014the name announcers had assigned to his high-powered Charger offenses in San Diego, featuring 3 Hall of Famers in QB Dan Fouts, WR Charlie Joiner, and TE Kellen Winslow, as well as Pro Bowl WR Wes Chandler and HB Chuck Muncie. Today it is also known as the \"Coryell offense\", although the \"vertical offense\" is another accepted name."} {"text":"In NFL coaching circles, the most famous and successful advocates of the Air Coryell system are Norv Turner, Mike Martz and Al Saunders."} {"text":"The Mike Martz variant is a more robust offense with a more complex playbook. It is a much more aggressive passing offense, frequently deploying pre-snap motion and shifts, with the running game often an afterthought. There is much less focus on play action. The Martz variant favors an elusive feature back who can catch the ball, such as Hall of Famer Marshall Faulk, over the power runners the Turner scheme favors. Martz credits Don Coryell for both his variation of the offensive system and his overall coaching philosophy. Martz learned the three-digit play-calling system for which the offense is famous from Turner when they were both in Washington."} {"text":"This may have been especially true when the Rams surprisingly lost Super Bowl XXXVI to the New England Patriots. In that contest, the Patriots' defense successfully contained Marshall Faulk, holding him to only 76 yards rushing and 54 yards receiving. The Rams offense gained more yards than the Patriots offense, 427-267, but the New England defense forced 3 St. Louis turnovers, off which the Patriots scored 17 points. Kurt Warner's rhythm was also disrupted by Patriots head coach Bill Belichick's defensive game plan. Warner went 28-of-44 on his passing attempts, throwing for 365 yards and scoring 2 touchdowns (1 running, 1 passing). 
However, Warner also threw two costly interceptions which proved to eventually help the Patriots win the Super Bowl."} {"text":"The Coryell offense attacked vertically through seams, while the West Coast offense moved laterally as much as vertically through angles on curl and slant routes. The Coryell offense had lower completion percentages than the West Coast offense, but the returns were greater on a successful play. \"The Coryell offense required more talented players, a passer who could get the ball there, and men who can really run\u2014a lot of them,\" said Walsh. He said the West Coast offense was developed out of necessity to operate with less talented players. He noted, \"[Coryell] already had the talent and used it brilliantly.\""} {"text":"In American football, a two-level defense is a defensive formation with only two layers of defense instead of the customary three layers."} {"text":"The two-level was invented to combat the run and shoot offense in the 1980s, but has stayed in use due to its adaptability in combating all types of offenses. The defense of spread formations remains a strong suit of this model."} {"text":"A play calling system in American football is the specific language and methods used to call offensive plays."} {"text":"It is distinct from the play calling philosophy, which is concerned with overall strategy: whether a team favors passing or running, whether a team seeks to speed up or slow down play, what part of the field passes should target, and so on. The play calling system comprises tactics for making calls for individual plays and communicating those decisions to the players."} {"text":"In any football play, each of the team's eleven players on offense has a specific, scripted task. Success requires that players' tasks mesh into an effective play. A team maximizes the difficulty for the opposition by having a wide variety of plays, which means that players' tasks vary on different plays. 
A play calling system informs each player of his task in the current play."} {"text":"There are constraints in designing a play calling system. The 40-second play clock means a team has 30 seconds or less from the end of one play to prepare for the next play. A complicated play calling system that lets a team tailor a play more precisely is harder for players to memorize and communicate. Crowd noise in the stadium, sometimes raised deliberately by opposing fans, can interfere with communication. To the extent the opposition can intercept and understand the call, it can prepare for it better."} {"text":"The design of a play calling system answers the following questions:"} {"text":"Three general approaches to play calling dominate the National Football League:"} {"text":"In the West Coast system, all plays have code names. They indicate the specific formation and tell players where to line up. This code name is followed by modifiers that communicate variations on the play. For running plays, the modifier specifies the blocking scheme and the path that the primary ball carrier takes during the run, usually indicating which of nine numbered gaps, or holes, between offensive-line players he aims for in his run. For passing plays, the modifier indicates what pass route each player is supposed to take."} {"text":"Here are some plays from one specific West Coast playbook, and what the names mean:"} {"text":"The West Coast system has its roots in the system devised by Paul Brown as the head coach of the Cleveland Browns and Cincinnati Bengals. It became known as the West Coast system when Brown's protege Bill Walsh used a similar scheme as head coach of the San Francisco 49ers during their success of the 1980s and 1990s. 
The West Coast system was designed alongside the West Coast offense, though it is not confined to that offense."} {"text":"The heart of the system devised by Don Coryell is a three-digit number that gives assignments to each of three pass receivers; for instance, the split end, the tight end, and the flanker, in that order; or the leftmost receiver, middle receiver, and right receiver, in that order. Each digit is a code for one of nine passing routes the receiver is to run, based on a \"route tree\". Some routes include a change of direction with which to throw off the defender covering the receiver. Through the route tree, the quarterback knows where each receiver will be and can quickly scan to see who is most open."} {"text":"The nine numbered passing routes tell a receiver to run as follows when the ball is snapped:"} {"text":"The Coryell system is primarily concerned with efficiently devising pass plays, an important factor in the Air Coryell offense. It allows quick and unambiguous communication with each receiver on a passing play. However, if there are more than three receivers or more than 9 pass routes, or to assign a route to additional players, the system must be modified, as done in the West Coast system, reducing the efficiency advantage. In such a modified system, the quarterback might call, \"896 H-Shallow F-Curl\", assigning numbered routes to the three receivers (the split end, the tight end, and the flanker), while \"H-Shallow\" and \"F-Curl\" refer to routes run by the halfback and fullback."} {"text":"A typical Erhardt\u2013Perkins concept assigns each player a task based on his initial location. For example, \"Ghost\" is a three-receiver concept: the outside receiver runs a vertical or fly route, the middle receiver runs an 8-yard out route, and the inside receiver runs a flat route. 
\"Ghost\" works in any personnel package or formation; it can be run with a five wide receiver set in a spread formation, or \"base personnel\" in the I formation where the fullback motions into the slot position."} {"text":"The Erhardt\u2013Perkins system is more flexible than the other two systems. The play call is simple and brief. The team can use the remaining time on the play clock not to assign instructions but to study the defense and adapt its plan. The Erhardt\u2013Perkins system works well with the no-huddle offense. The offense can run at a faster pace, getting more offensive plays in per game, conserving the time on the game clock, and keeping the defense on its heels."} {"text":"However, the Erhardt\u2013Perkins system requires versatile and intelligent players. The same player may line up as a running back, tight end, or wide receiver on any given play, so players need adequate skills to play several positions. Erhardt\u2013Perkins requires that players memorize the entire playbook. Each player must know every route in every concept, and be able to run each route depending on which position in the formation he occupies. Players who are successful under other play calling systems can become lost in the complexities of Erhardt\u2013Perkins. In 2015, 14-year NFL veteran wide receiver Reggie Wayne asked to be released from the New England Patriots after only 2 pre-season games. It was reported that Wayne thought that the playbook was too complicated to learn."} {"text":"The Erhardt\u2013Perkins system was developed by Ron Erhardt and Ray Perkins, two assistant coaches who worked under Chuck Fairbanks for the Patriots during the 1970s. The system was later implemented by the New York Giants in 1982 when Perkins was hired as their head coach, and Erhardt as his offensive coordinator. A third coach who followed Perkins and Erhardt from the Patriots to the Giants was defensive assistant Bill Parcells, who succeeded Perkins as head coach. 
Being primarily a defensive coach, Parcells retained Erhardt as his offensive coordinator and let him continue to use the Erhardt\u2013Perkins offense and its play calling system. The system was disseminated through the league by various members of the Parcells coaching tree, and is used effectively by Patriots head coach Bill Belichick."} {"text":"The New England Patriots generally run a modified Erhardt-Perkins offensive system and a Fairbanks-Bullough 3\u20134 defensive system, though they have also used a 4\u20133 defense and increased their use of the nickel defense."} {"text":"The Patriots run a modified \"Ron Erhardt-Ray Perkins\" offensive system first installed by Charlie Weis under Bill Belichick. Both Ron Erhardt and Ray Perkins served as offensive assistant coaches under the defensive-minded Chuck Fairbanks while he was head coach of the Patriots in the 1970s. This system is known for its multiple formation and personnel grouping variations on a core number of base plays. Under this system, each formation and each play are separately numbered. Additional word descriptions further modify each play."} {"text":"The Erhardt-Perkins system traditionally had a reputation as a smash-mouth offense that maximizes a team's time of possession and does not frequently call upon its running backs to serve as receivers. Erhardt often said, \"throw to score, run to win.\" This may have been especially true during the years Bill Parcells ran this system as the head coach of the New York Giants."} {"text":"An example of a running play under this system is \"Zero, Ride Thirty-six\". Zero sets the formation. Thirty indicates who will carry the ball. Six indicates which hole between the offensive linemen the ball carrier will attempt to run through (see Offensive Nomenclature)."} {"text":"Parcells ran the Erhardt-Perkins offensive system during his pro coaching years, which is where Weis originally learned it. 
Many teams coached by members of the Parcells-Belichick coaching tree currently use this system, such as Notre Dame during Weis' tenure. The Pittsburgh Steelers also continued to run this system during the Bill Cowher years, dating from when Ron Erhardt was their offensive coordinator. The Carolina Panthers ran this system as well, under Jeff Davidson, a former Belichick assistant."} {"text":"Comparison to \"West Coast\" and \"Air Coryell\" offenses."} {"text":"In the view of some experts, there are only approximately five or six major offensive systems run in the NFL today."} {"text":"The nomenclature of the Erhardt-Perkins system is very different from that of the Bill Walsh West Coast offense. Formations under the West Coast offense are commonly named after colors (e.g., Green Right). The West Coast offense commonly uses high-percentage, short slanting passes and running backs as receivers. It prefers to have mobile quarterbacks (since its running backs may not be available to block) and large receivers who are able to gain additional yards after the catch."} {"text":"The nomenclature of the Erhardt-Perkins system is also very different from that of the Ernie Zampese-Don Coryell \"Air Coryell\" timed system. Route patterns of the receivers are numbered instead of named in the Air Coryell system (thereby making memorization easier). For example, an Air Coryell play such as \"924 F stop swing\" indicates that the primary wide receiver (X) should run a 9 pattern (a go), the tight end (Y) should run a 2 pattern (a slant), the secondary wide receiver (Z) should run a 4 pattern (a curl) and the F-back should go out for a swing pass (see Offensive nomenclature). Timing and precision are extremely important under the Air Coryell system, as the routes are intended to run like clockwork in succession in order to be successful."} {"text":"Around 2011, Bill Belichick increasingly adopted an up-tempo, no-huddle offense for his team. 
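The digit-per-receiver scheme behind a call like \"924 F stop swing\" can be sketched in a few lines. Only the three digit-route pairs decoded above (9 = go, 2 = slant, 4 = curl) are included; a real route tree assigns all nine digits, so this partial mapping and the parsing are illustrative assumptions, not an actual playbook:

```python
# Partial Air Coryell-style route tree: only the digits decoded in the text.
ROUTE_TREE = {"9": "go", "2": "slant", "4": "curl"}
RECEIVERS = ["X (primary wide receiver)", "Y (tight end)", "Z (secondary wide receiver)"]

def decode_call(digits: str) -> list[str]:
    """Map each digit of a three-digit call to its receiver's route."""
    return [f"{recv}: {ROUTE_TREE.get(d, 'route ' + d)}"
            for recv, d in zip(RECEIVERS, digits)]

for assignment in decode_call("924"):
    print(assignment)
# X (primary wide receiver): go
# Y (tight end): slant
# Z (secondary wide receiver): curl
```

The compactness of the call is the point: three digits fully specify three routes, which is why the text notes the system's efficiency drops once extra receivers need separately worded tags such as \"F stop swing\".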
The idea behind this strategy is for the offense to call plays rapidly without pause and without a huddle. The intention was to tire out the defense more quickly, prevent it from changing its personnel on the field, and limit the complexity of its plays."} {"text":"The \"Fairbanks-Bullough\" 3\u20134 system is known as a two-gap system, because each defensive lineman is required to cover the gaps on both sides of the offensive lineman who tries to block him. Defensive linemen in this system tend to be stouter, as they need to be able to hold their place without being overwhelmed in order to allow the linebackers behind them to make plays. This is the reason that defensive linemen such as Richard Seymour and Vince Wilfork do not always rack up sack and tackle statistics despite their critical importance to the team."} {"text":"The system is at times more conservative than certain other defenses currently in vogue in the league, despite the constant threat of its potent linebacker blitz. The Patriots defensive system generally places an emphasis on physicality and discipline over mobility and risk taking and is sometimes characterized as a \"bend but do not break\" defense. The Patriots are also known for putting a great deal of emphasis on the front seven (defensive line and linebackers) but less so on the secondary."} {"text":"The 3\u20134 defense was originally devised by Bud Wilkinson at the University of Oklahoma in the late 1940s. Former Patriots and Oklahoma coach Chuck Fairbanks is credited with being a major figure in first bringing the 3\u20134 defense to the NFL in 1974. It is unclear whether the Patriots under Fairbanks or the Houston Oilers under Bum Phillips were the first team to bring the 3\u20134 defense to the NFL."} {"text":"Patriots defensive coordinator Hank Bullough made significant further innovations to the system. 
Parcells was linebackers coach under Ron Erhardt, then head coach of the Patriots, in 1980 (after Fairbanks left for Colorado in 1978 and Bullough lost out on the head coaching position). When Parcells returned to the Giants as defensive coordinator under Ray Perkins in 1981, he brought the 3\u20134 defense with him."} {"text":"Bill Belichick was initially exposed to the 3\u20134 defense while working as an assistant under Red Miller, head coach of the Denver Broncos and a former Patriots offensive coordinator under Fairbanks. Joe Collier was the defensive coordinator under Red Miller at the time, and his Orange Crush Defense was very successful at stifling opposing offenses. The Broncos had decided to adopt the 3\u20134 in 1977. Bill Belichick subsequently refined his understanding of the 3\u20134 as a linebackers coach and defensive coordinator under Parcells with the Giants. Belichick brought the 3\u20134 defense back to New England when he became coach of the team in 2000. Romeo Crennel subsequently became defensive coordinator for the team."} {"text":"Bill Parcells ran the Fairbanks-Bullough 3\u20134 defensive system during his coaching years. He served as an NFL head coach for 19 seasons, coaching the New York Giants (1983\u20131990), New England Patriots (1993\u20131996), New York Jets (1997\u20131999) and Dallas Cowboys (2003\u20132006). Parcells, who won 2 Super Bowls with the Giants in 1986 and 1990, earned a reputation for turning teams that were in a period of decline into postseason contenders. He is the only coach in NFL history to take 4 different teams to the NFL playoffs and 3 different NFL teams to a conference championship game. Parcells enjoyed more successful seasons when Bill Belichick served as his defensive coordinator. 
In 2013, Bill Parcells was inducted into the Pro Football Hall of Fame."} {"text":"Many teams coached by members of the Parcells-Belichick coaching tree currently run similar defensive systems, such as the University of Alabama under Nick Saban and the Cleveland Browns under Eric Mangini from 2009\u20132010."} {"text":"The 3\u20134 zone blitz defense was developed by Dick LeBeau as defensive coordinator of the Cincinnati Bengals. Prior to becoming defensive coordinator of the Bengals, LeBeau was tutored by Bengals defensive coordinator Hank Bullough. LeBeau's system commonly calls upon linemen to be mobile enough to drop back into zone coverage in place of blitzing linebackers. Elements of the 3\u20134 zone blitz defense have been incorporated over time into the modern Phillips 3\u20134."} {"text":"Changes to New England's defensive scheme over time."} {"text":"Over time, New England has also used a 4\u20133 defense and increased its usage of nickel defense. Belichick believes that teaching the techniques and fundamentals of his defense is more important than what alignment his defenses use, noting that he used a 4\u20133 defense when he coached the Cleveland Browns."} {"text":"The New England Patriots are noted for the following characteristics:"} {"text":"For example, in Super Bowl XXXVI, the Patriots' defense used an aggressive bump and run nickel and dime package instead of their base 3\u20134 to disrupt the timing of the highly touted Air Coryell system employed by the Rams under Mike Martz (also known as \"The Greatest Show on Turf\"). This modifiable aspect of the Patriots system is in stark contrast to simpler systems like the Tampa 2 defense, in which the same scheme is often run repeatedly with the emphasis being on execution rather than on flexibility."} {"text":"In his book \"How Football Explains America\", Sal Paolantonio noted the many parallels between the Patriots' philosophy and military training taught at West Point. 
This is likely the result of Bill Parcells' having coached at West Point for four years and Bill Belichick's close ties with the Naval Academy."} {"text":"In American football, the air raid offense refers to an offensive scheme popularized by such coaches as Mike Leach, Hal Mumme, Sonny Dykes, and Tony Franklin during their tenures at Iowa Wesleyan University, Valdosta State, Kentucky, Oklahoma, Texas Tech, Louisiana Tech, and Washington State."} {"text":"The system is designed out of a shotgun formation with four wide receivers and one running back. The formations are a variation of the run and shoot offense with two outside receivers and two inside slot receivers. The offense also uses trips formations featuring three wide receivers on one side of the field and a lone receiver on the other side."} {"text":"The offense owes much to the influence of BYU head coach LaVell Edwards, who used the splits and several key passing concepts during the 1970s, 1980s, and 1990s while coaching players such as Jim McMahon, Steve Young, Robbie Bosco, and Ty Detmer. Mike Leach has said that he and Hal Mumme incorporated much of the BYU passing attack into what is now known as the air raid offense. Some of the concepts, such as the shallow cross route, were also incorporated into offenses such as the West Coast offense during the early 1990s, prominently under Mike Shanahan while he was the head coach of the Denver Broncos."} {"text":"The scheme is notable for its focus on passing. As many as 65\u201375% of the calls during a season result in a passing play. The quarterback has the freedom to audible to any play based on what the defense is showing him at the line of scrimmage. In at least one instance, as a result of the quarterback's ability to audible, as many as 90% of the run plays called in a season were chosen by audible at the line of scrimmage."} {"text":"An important element in this offense is the inclusion of the no huddle. 
The quarterback and the offense race up to the line of scrimmage, diagnose what the defense is showing, and then snap the ball based on the quarterback's play call. This not only allows a team to come back if they are many points down, as seen in the 2006 Insight Bowl, but also allows them to tire out the defense, allowing for bigger runs and longer pass completions. The fast pace limits the defense's ability to substitute players and adjust their scheme. The hurried pace can cause defensive mental mistakes such as missed assignments, being out of position, or too many men on the field."} {"text":"Another important aspect of the air raid offense is the split of the offensive linemen. In a conventional offense, the linemen are bunched together fairly tightly, but in an air raid offense, linemen are often split apart about a half to a full yard from one another. While in theory this opens easier blitz lanes, it forces the defensive ends and defensive tackles to run further to reach the quarterback for a sack. The quick, short passes offset any blitz that may come. Another advantage is that by forcing the defensive line to widen, it opens up wide passing lanes for the quarterback to throw the ball through with less chance of having his pass knocked down or intercepted."} {"text":"Fundamental air raid play concepts include Mesh, Stick and Corner, All Curls, 4 Verts, and Fast Screens. These plays are designed to get the ball out of the quarterback's hand quickly, stretch the defense horizontally and vertically, and allow the quarterback to key on one defensive player who will be forced to make a decision on which receiver to cover in his assigned area. 
While air raid plays are commonly designed to beat zone coverages, they also work well against man-to-man schemes, since air raid offenses often employ receivers with above-average speed, giving them an advantage in man-to-man coverage."} {"text":"The mesh concept is the bread and butter of the air raid offense and stretches the defense vertically with an outside receiver running a deep route, typically a post route, the running back sliding out into the flat after checking for blocking assignments, and the two remaining receivers running shallow crossing routes that set up a natural pick, or coverage rub."} {"text":"In gridiron football, clock management is the manipulation of a game clock and play clock to achieve a desired result, typically near the end of a game. It is analogous to \"running out the clock\" (and associated counter-tactics) seen in many sports, and the act of trying to hasten the game's end is often referred to by this term. Clock management strategies are a significant part of American football, where an elaborate set of rules dictates when the game clock stops between downs, and when it continues to run."} {"text":"Upon kickoff, the clock is started when a member of the receiving team touches the ball, or, if the member of the receiving team touches the ball in their end zone, carries the ball out of the end zone. The clock is stopped when that player goes out of bounds. (The clock never starts if the receiving team downs the ball in their own end zone for a touchback.) The clock is then restarted when the offense snaps the ball for their first play and continues to run unless one of the following occurs, in which case the clock is stopped at the end of the play and restarts at the next snap unless otherwise provided:"} {"text":"If the clock runs out during a play, the current play is allowed to continue to its conclusion. 
If the clock runs out between downs, the period ends in American football, but in Canadian football the offense is allowed one last down."} {"text":"Each team is given three timeouts per half, which they can use to stop the clock from running after a play. In the NFL, teams get two timeouts in a preseason or regular season overtime period, or three in a postseason overtime half."} {"text":"On a fair-catch punt, the clock starts at the snap and stops at the end of the play."} {"text":"A team on offense that has the higher score seeks to use as much time as possible. A drive may therefore benefit the team, even if it scores no points, by taking time off the clock. The team may:"} {"text":"The team may use counterintuitive game plans, such as declining to score or allowing the opponents to score, to accelerate the end of the game."} {"text":"A team on offense that has the lower score seeks to conserve time. The team may:"} {"text":"A team that is tied or trailing by one or two points but is within the red zone (and thus in easy field goal range) seeks to burn a specific amount of time so that they can stop the clock with five or fewer seconds remaining, allowing their placekicker to kick a field goal with no time left and win the game."} {"text":"One exceptionally rare strategy that a team in possession of the ball near the end of the game can use is the fair catch kick. 
For the fair catch kick to be a viable option, several conditions must be met: the opposing team must have punted the ball in play and the receiving team must have used a fair catch to secure the ball, the punt must have been exceptionally short, so that the spot of the fair catch is within field goal range, the team using the fair catch kick must be either tied or within three points, and the game must not be played under NCAA rules (the NCAA has no fair catch kick rule)."} {"text":"Various rules ensure that the defense cannot deliberately commit fouls to manipulate the game clock, and in the most extreme such cases, an unfair act can be declared and the game forfeited to the offense. (Likewise, if the offense commits fouls to burn off time and get extra downs, the clock is reset and unsportsmanlike conduct is called on them.)"} {"text":"Several of the strategies discussed above for American football can be used in the Canadian code; however, rule differences make running out the clock much more difficult:"} {"text":"These differences make for radically different endgames if the team with the lead has the ball. In the NFL, a team can run 120 seconds (2 minutes) off the clock without gaining a first down, and slightly more in the NCAA, by taking three snaps at the end of the 40-second play clock (assuming that the defensive team is out of timeouts). In the Canadian game, just over 40 seconds can be run off."} {"text":"Advanced Football Analytics (formerly Advanced NFL Stats) was a website dedicated to the analysis of the National Football League (NFL) using mathematical and statistical methods. The site's lead author was noted football researcher and analyst Brian Burke. Burke is a regular contributor to \"The New York Times\" NFL coverage and \"The Washington Post\"'s Redskins coverage, and supplies research for other notable publications and writers."} {"text":"Advanced Football Analytics features a variety of analytical techniques and applications. 
The site predicts game outcomes and rates teams using a logistic regression model based on team efficiency statistics. It also features a live in-game win probability model that estimates each team's chances of winning a game in progress. Advanced Football Analytics uses its win probability model to analyze strategic coaching decisions, such as whether to kick or attempt a first-down conversion."} {"text":"Research topics include game theory applications, luck and randomness, play calling, home field advantage, run-pass balance, and the relative importance of various facets of performance (offensive passing, offensive rushing, defensive passing, etc.). Also featured is research on weather factors, team payroll, and the NFL Draft."} {"text":"The site has pioneered other analytical concepts such as Air Yards, the distance forward of the line of scrimmage that a pass travels; the measure removes the contribution of Yards After Catch (YAC) run by a receiver."} {"text":"Advanced Football Analytics also features a catalog of unique individual player stats. Each player's contribution toward his team's wins, known as Win Probability Added (WPA), is available for each season since 2000. Expected Points Added (EPA), success rate (SR), and many other innovative metrics are also available."} {"text":"During the NFL off-season, Burke has posted original research related to other North American professional sports leagues."} {"text":"Strategic thinking is defined as a mental or thinking process applied by an individual in the context of achieving a goal or set of goals in a game or other endeavor. As a cognitive activity, it produces thought."} {"text":"When applied in an organizational strategic management process, strategic thinking involves the generation and application of unique business insights and opportunities intended to create competitive advantage for a firm or organization. 
It can be done individually, as well as collaboratively among key people who can positively alter an organization's future. Group strategic thinking may create more value by enabling a proactive and creative dialogue in which individuals gain other people's perspectives on critical and complex issues. This is regarded as a benefit in highly competitive and fast-changing business landscapes."} {"text":"Strategic thinking includes finding and developing a strategic foresight capacity for an organization by exploring all possible organizational futures and challenging conventional thinking to foster decision making today. Recent strategic thought points ever more clearly towards the conclusion that the critical strategic question is not the conventional \"What?\", but \"Why?\" or \"How?\". The work of Henry Mintzberg and other authors further supports this conclusion and draws a clear distinction between strategic thinking and strategic planning, another important strategic management thought process."} {"text":"General Andre Beaufre wrote in 1963 that strategic thinking \"is a mental process, at once abstract and rational, which must be capable of synthesizing both psychological and material data. The strategist must have a great capacity for both analysis and synthesis; analysis is necessary to assemble the data on which he makes his diagnosis, synthesis in order to produce from these data the diagnosis itself\u2014and the diagnosis in fact amounts to a choice between alternative courses of action.\""} {"text":"There are many tools and techniques to promote and discipline strategic thinking. The flowchart to the right provides a process for classifying a phenomenon as a scenario in the intuitive logics tradition, and shows how this differs from a number of other planning approaches."} {"text":"In the view of F. 
Graetz, strategic thinking and planning are \u201cdistinct, but interrelated and complementary thought processes\u201d that must sustain and support one another for effective strategic management. Graetz's model holds that the role of strategic thinking is \"to seek innovation and imagine new and very different futures that may lead the company to redefine its core strategies and even its industry\". Strategic planning's role is \"to realise and to support strategies developed through the strategic thinking process and to integrate these back into the business\"."} {"text":"According to Jeanne Liedtka, strategic thinking differs from strategic planning along the following dimensions of strategic management:"} {"text":"Liedtka observed five \u201cmajor attributes of strategic thinking in practice\u201d that resemble competencies:"} {"text":"Negging (derived from the verb \"neg\", meaning \"negative feedback\") is an act of emotional manipulation whereby a person makes a deliberate backhanded compliment or otherwise flirtatious remark to another person in order to undermine their confidence and increase their need for the manipulator's approval. The term was coined and prescribed by pickup artists."} {"text":"Negging is often viewed as a straightforward insult rather than a pick-up line, even though proponents of the technique traditionally stress that it is not an insult. Erik von Markovik, who is usually credited with popularising the term, explains the difference thus: \"A neg is not an insult but a negative social value judgment that is telegraphed. It's the same as if you pulled out a tissue and blew your nose. There's nothing insulting about blowing your nose. You haven't explicitly rejected her. But at the same time, she will feel that you aren't even trying to impress her. 
This makes her curious as to why and makes you a challenge.\""} {"text":"Neil Strauss, in his book \"Rules of the Game\", also stresses that the primary point of the technique is not to put women down but for a man to disqualify himself as a potential suitor. On this account he refers to negs as \"disqualifiers\", although the technique described in the book is recognisably the same as von Markovik's. Strauss is equally clear that negs should not be used as insults: \"a disqualifier should never be hostile, critical, judgmental, or condescending. There's a line between flirting and hurting. And disqualification is never intended to be mean and insulting.\""} {"text":"The term has been popularized in social media and mainstream media. The opposite is \"pozzing\", whereby one pays a person a compliment in order to gain their affection."} {"text":"The chain-linked model or Kline model of innovation was introduced by mechanical engineer Stephen J. Kline in 1985, and further described by Kline and economist Nathan Rosenberg in 1986. The chain-linked model is an attempt to describe complexities in the innovation process. The model is regarded as Kline's most significant contribution."} {"text":"In the chain-linked model, new knowledge is not necessarily the driver for innovation. Instead, the process begins with the identification of an unfilled market need. This drives research and design, then redesign and production, and finally marketing, with complex feedback loops between all the stages. 
There are also important feedback loops with the organization's and the world's stored base of knowledge, with new basic research conducted or commissioned as necessary to fill in gaps."} {"text":"It is often contrasted with the so-called linear model of innovation, in which basic research leads to applied development, then engineering, then manufacturing, and finally marketing and distribution."} {"text":"The Kline model was conceived primarily with commercial industrial settings in mind, but has found broad applicability in other settings, for example in military technology development. Variations and extensions of the model have been described by a number of investigators."} {"text":"In backgammon, there are a number of strategies that are specific to match play as opposed to money play. These differences are most apparent when a player is within a few points of winning the match."} {"text":"Backgammon matches are played to a set number of points, ranging from 3 for informal matches to 25 or more for high-level tournaments. Traditionally, matches are played to an odd number of points; however, there is no theoretical reason why a match should not be played to an even number of points."} {"text":"As with money play, the doubling cube is used. At the start of each game, the doubling cube is placed on the bar with the number 64 showing; the cube is then said to be \"centered, on 1\". When the cube is centered, the player about to roll may propose that the game be played for twice the current stakes. Their opponent must either accept (\"take\") the doubled stakes or resign (\"drop\" or \"pass\") the game immediately."} {"text":"When both players are several points away from the target score, doubling strategy is broadly similar to that of money play. The theoretical point for accepting a double is when a player's winning chances are 25% or higher. Suppose a player were offered the same double in the same position 4 times. 
If the player dropped all 4 doubles, they would have a net loss of 4 points. If instead they accepted the doubles at stakes of 2, lost 3 games and won 1, the net loss would still be 4 points, i.e. 2 * (3 - 1) = 4; at exactly 25% winning chances, taking and dropping are therefore equivalent."} {"text":"In fact, a player can accept a double at slightly worse odds than 25%, due to the value of owning the cube, which gives them the exclusive right to redouble. The corollary of this is that a player should be wary of \"giving the cube away\" too readily; generally an advantage corresponding to 70% winning chances or more is needed before a double becomes correct. The \"doubling window\" within which both the double and the take are correct is approximately 70%-78%."} {"text":"Players generally attempt to double at the \"top of the market\", i.e. as close as possible to the opponent's theoretical take point. If a player offers a double and the opponent correctly drops, the player is said to have \"lost his market\" or \"doubled someone out\". If both the double and the take are correct, the player has \"kept his market\" or \"doubled someone in\". While it is preferable to double someone in, due to the volatile nature of the game this is not always possible. For example, if a player throws a \"joker\" that radically changes the assessment of the position (such as a 6-6 in a racing situation), a position that was not a correct double on the previous roll may now be a \"drop\"."} {"text":"A complicating factor is the possibility of gammons (or, more rarely, backgammons). When a player has a reasonable chance of winning a gammon, a position may be \"too good\" to double, i.e. it may be correct to attempt to score 2 points by winning an undoubled gammon rather than \"cash\" a certain 1 point by doubling the opponent out. 
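The 25% break-even arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not a full cube-equity model: it deliberately ignores gammons, backgammons and the recube value of cube ownership mentioned above, and the probabilities passed in are hypothetical.

```python
def take_or_drop(win_prob: float) -> str:
    """Compare expected points from dropping vs taking a double.

    Dropping always concedes 1 point. Taking plays on at doubled
    stakes, winning or losing 2 points. Gammons and the value of
    owning the cube are ignored in this sketch.
    """
    drop_ev = -1.0
    take_ev = 2.0 * win_prob - 2.0 * (1.0 - win_prob)
    return "take" if take_ev >= drop_ev else "drop"

# Below 25% winning chances the take costs more than the drop;
# at exactly 25% the two choices have equal expectation (-1 point).
for p in (0.20, 0.25, 0.30):
    print(p, take_or_drop(p))
```

A fuller model would credit the taker with cube ownership, which is why, as noted above, doubles can in practice be accepted at slightly worse odds than 25%.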
Additionally, the threat of a gammon can sometimes make it correct to double even with less than 65% winning chances, or to drop a double with more than 25% winning chances."} {"text":"Complicating things still further, the specific match score can have a significant effect on correct doubling and checker play strategy."} {"text":"To facilitate discussion of match play strategy, scores are \"normalised\", i.e. referred to in terms of the number of points each player is away from victory. For example, if a player is leading 3-2 in a 5 point match, this is referred to as \"2-away, 3-away\" or \"-2, -3\"; likewise, a 13-12 lead in a 15 point match is also \"2-away, 3-away\"."} {"text":"\"Double match point\" (or DMP) refers to any situation where the match depends on the result of a single game, with gammons, backgammons and cube actions being irrelevant. Common situations where double match point strategy comes into effect include a score of 1-away, 1-away, a post-Crawford game at 1-away, 2-away in which the leader accepts the inevitable early double, and a doubled game at 2-away, 2-away."} {"text":"In double match point games, a \"blitz\", in which a player aggressively pursues a gammon by continually hitting in his own board at the risk of overextending his position, becomes a poor strategy. On the other hand, since gammons don't matter, back games, in which a player maintains two or more anchors in the opponent's home board with a view to hitting later in the game, become a more attractive option. However, double match point games are most commonly decided by a simple racing strategy, when one player has an opportunity to break contact with the opponent while ahead in the race."} {"text":"At 2-away, 2-away, between competent players this will almost always be the final game of the match; one of the players will double early, the other will take, and play will take on a double match point character. 
The explanation for this is as follows: at this score the first double ends all cube action, since winning a doubled game wins the match; neither gammons nor redoubles have any further value, so a player with even a slight advantage should double immediately, and the opponent will normally take."} {"text":"1-away, 2-away (Crawford game) - \"gammon go\" and \"gammon save\"."} {"text":"Since the cube cannot come into play, there are two possible ways for the trailing player to win the match: he can win a gammon in the next game, or he can win the next game and then win the decider at DMP. The combined odds for the trailing player to win the match can be calculated at approximately 30%, assuming that he wins half of all games and that 20% of his wins are gammons:"} {"text":"0.10 (odds of winning a gammon in the next game) + 0.40 (odds of winning a single game) * 0.50 (odds of winning the following game) = 0.30."} {"text":"Since gammon wins are very favorable to the trailer and gammon losses are very costly to the leader, this score is referred to as \"gammon go\" (GG) for the trailer and \"gammon save\" (GS) for the leader. The trailer should play more aggressively in pursuit of a gammon, i.e. try to steer the game into a blitz, a back game (for either side) or a prime-versus-prime battle. The leader will try to avoid losing a gammon by attempting to establish an advanced anchor in the opponent's board or else try for a simple running game."} {"text":"1-away, 2-away (Post-Crawford game) - the \"free drop\"."} {"text":"The trailer should double at the first opportunity, thereby converting the game into a double match point situation. However, the leader has the option of the \"free drop\". If the leader is at a disadvantage, however slight, he should drop the double and start a new game at 1-away, 1-away. Whether the leader takes or drops, the next game will be the decider at DMP, so it is preferable to start a new game at 50-50 than to continue the present game at 49.5-50.5."} {"text":"The free drop is only a minor advantage to the leader, so to all intents and purposes a score of 1-away, 2-away post-Crawford is equivalent to 1-away, 1-away. The free drop also comes into consideration in any post-Crawford game in which the trailer is an even number of points away from victory. 
For example, 1-away, 6-away post-Crawford is equivalent to 1-away, 5-away save for the leader's free drop. In this case the leader may elect to \"save\" his free drop if he is at only a minimal disadvantage (e.g. having a sound position but losing the opening roll)."} {"text":"1-away, 3-away (Post-Crawford game) - \"the trick\"."} {"text":"While it is technically correct for the trailer to cube at the first opportunity, the leader's takepoint is less than 10% at this score, so it can sometimes be beneficial for the trailer to wait until there is a larger advantage. Since the 1-away, 2-away and 1-away, 1-away scores are almost equivalent, the leader gives away almost nothing by accepting the cube."} {"text":"By waiting, the trailer gives the leader the chance to mistakenly drop, allowing the trailer to \"steal\" a point. However, while the leader's takepoint is very low based on single-point losses, gammons are very costly with the cube on 2, so the leader should not take positions with significant gammon chances."} {"text":"A concept-driven strategy is a process for formulating strategy that draws on the explanation of how humans inquire provided by linguistic pragmatic philosophy. This holds that thinking starts by selecting (explicitly or implicitly) a set of concepts (frames, patterns, lenses, principles, etc.) gained from our past experiences. These are used to reflect on whatever happens, or is done, in the future."} {"text":"Concept-driven strategy therefore starts from agreeing and enacting a set of strategic concepts (organizing principles) that \"work best\" for an organisation. For example, a hospital might set its strategy as intending to be Caring, World Class, Local, Evidence Based, and Team Based. A university might set its strategy as intending to be Ranked, Problem Solving, Online, Equis, and Offering Pathways. A commercial corporation might set its strategy as intending to be Innovative, Global, Have Visible Supply Chains, Agile and Market Share Dominant. 
These strategic concepts make up its \"Statement of Intent\" (or Purpose)."} {"text":"The Statement of Purpose, Statement of Intent or concept-driven approach to strategy formulation therefore focuses on setting and enacting a set of strategic concepts. If a participatory approach is being used, these concepts will be acquired through a process of collaboration with stakeholders. Once agreed, the strategic concepts can be used to coordinate activities and act as a set of decision-making criteria. The set of concepts that makes up the Statement of Intent is then used to make sense of an unpredictable future across an organisation in a co-ordinated manner."} {"text":"Linguistic pragmatism argues that our prior conceptions interpret our perceptions (sensory inputs). These conceptions are represented by concepts like running, smiling, justice, reasoning and agility. They are patterns of activity, experienced in our past and remembered. They can be named by those with language and so shared."} {"text":"Baggini explains pragmatic concepts using the classic example of whether the earth is flat or round."} {"text":"Another example would be that we can think of the war in Iraq differently by reflecting on the concepts of oil security, imperialism, aggressive capitalism, liberation or democracy."} {"text":"The concept-driven approach to strategy formulation involves setting and using a set of linguistic pragmatic concepts."}