The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory.
The auditory dorsal stream also has non-language-related functions, such as sound localization and guidance of eye movements. Recent studies also indicate a role of the ADS in the localization of family/tribe members: a study that recorded from the cortex of an epileptic patient reported that the pSTG, but not the aSTG, is selective for the presence of new speakers. An fMRI study of fetuses in their third trimester also demonstrated that area Spt is more selective to female speech than to pure tones, and that a sub-section of Spt is selective to the speech of the fetus's mother in contrast to unfamiliar female voices.
Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 discrete sign languages around the world, making use of different accents formed in separate areas of a country.
By resorting to lesion analyses and neuroimaging, neuroscientists have discovered that, whether spoken or signed, language is processed by the brain in a broadly similar manner with respect to which areas are used. Lesion analyses are used to examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores the regions engaged in the processing of language.
It was previously hypothesized that damage to Broca's area or Wernicke's area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces results in sign language similar to those in spoken language, with sign errors present and/or repeated. In both types of language, deficits result from damage to the left hemisphere of the brain rather than the right, which usually deals with the arts.
There are clear patterns in how language is used and processed: in sign language, Broca's area is activated during production, while processing engages Wernicke's area, just as in spoken language.
There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall communication of a language globally, whereas the left hemisphere would be dominant in generating the language locally. Through research on aphasias, signers with right-hemisphere damage (RHD) were found to have problems maintaining the spatial portion of their signs, confusing similar signs made at different locations, which is necessary for communicating with another person properly. Signers with left-hemisphere damage (LHD), on the other hand, showed results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, though further research is needed to reach a conclusion.
There is a comparatively small body of research on the neurology of reading and writing. Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language. English orthography is less transparent than that of other languages using a Latin script. Another difficulty is that some studies focus on the spelling of English words and omit the few logographic characters found in the script.
In terms of spelling, English words can be divided into three categories – regular, irregular, and “novel words” or “nonwords.” Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia.
An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process. Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules.
The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words. However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types. Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords.
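To make the contrast between the two routes concrete, here is a minimal sketch of a dual-route speller, assuming an invented toy lexicon and an invented phoneme-to-grapheme rule table (none of these names or mappings come from a published implementation): irregular and high-frequency words are retrieved whole from lexical memory, while regular words and nonwords are assembled by sub-lexical rules.

```python
# A minimal dual-route spelling sketch. Lexicon and rules are toy data.

# Lexical route: whole-word memory mapping a phoneme sequence to a stored
# spelling (used for irregular and high-frequency words).
LEXICON = {
    ("y", "o", "t"): "yacht",   # irregular: rules alone would fail here
    ("dh", "ah"): "the",        # high-frequency regular word
}

# Sub-lexical route: phoneme-to-grapheme correspondence rules
# (used for low-frequency regular words and nonwords).
RULES = {"k": "c", "a": "a", "t": "t", "g": "g"}

def spell(phonemes):
    """Try lexical retrieval first; otherwise assemble a spelling by rule."""
    key = tuple(phonemes)
    if key in LEXICON:                                   # lexical route
        return LEXICON[key]
    return "".join(RULES.get(p, p) for p in phonemes)    # sub-lexical route

print(spell(["y", "o", "t"]))   # -> 'yacht' (retrieved whole)
print(spell(["k", "a", "t"]))   # -> 'cat'   (regular word, assembled by rule)
print(spell(["g", "a", "t"]))   # -> 'gat'   (nonword, assembled by rule)
```

On this sketch, a single-route model would amount to putting every word into the lexicon; the dual-route claim is precisely that the rule table handles items the lexicon has never stored.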
Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has a morphological and a phonological component, either of which can be recorded by a writing system. Scripts recording words and morphemes are considered logographic, while those recording phonological segments, such as syllabaries and alphabets, are phonographic. Most systems combine the two and have both logographic and phonographic characters.
In terms of complexity, writing systems can be characterized as “transparent” or “opaque” and as “shallow” or “deep.” A “transparent” system exhibits an obvious correspondence between grapheme and sound, while in an “opaque” system this relationship is less obvious. The terms “shallow” and “deep” refer to the extent to which a system's orthography represents morphemes as opposed to phonological segments. Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries, put greater demand on the memory of users. It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with transparent or shallow orthography.
A Growth Point is a technical term in cognitive linguistics and gesture research. It refers to the earliest beginnings of a spoken utterance in the mind of a speaker, combining the beginnings of a mimetic gesture with the preliminary verbal expression of the person's thought.
An alternate theory of how young children derive the meaning of newly learned words during language acquisition stems from John Locke's "associative proposal theory". In contrast to the "intentional proposal theory", associative proposal theory refers to the deduction of meaning by comparing the novel object to environmental stimuli. A study conducted by Yu & Ballard (2007) introduced cross-situational learning, a method based on Locke's theory. Cross-situational learning is a mechanism by which the child learns the meaning of words over multiple exposures in varying contexts, attempting to eliminate uncertainty about each word's true meaning on an exposure-by-exposure basis.
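As a rough illustration of how cross-situational learning can reduce uncertainty over exposures, the following sketch accumulates word-object co-occurrence counts across a few invented scenes; the words, objects, and scenes are hypothetical, not taken from Yu & Ballard's materials.

```python
from collections import Counter
from itertools import product

# Each exposure pairs the words heard with the objects present in the scene.
exposures = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

# Accumulate word-object co-occurrence counts across situations.
cooccurrence = Counter()
for words, objects in exposures:
    for word, obj in product(words, objects):
        cooccurrence[(word, obj)] += 1

# Any single exposure is ambiguous, but across situations each word's most
# frequent co-occurring object emerges as its likely referent.
for word in ("ball", "dog", "cup"):
    best = max((o for w, o in cooccurrence if w == word),
               key=lambda o: cooccurrence[(word, o)])
    print(word, "->", best)   # ball -> BALL, dog -> DOG, cup -> CUP
```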
Some researchers are concerned that experiments testing for fast mapping are conducted in artificial settings. They feel that fast mapping does not occur as often in real-life, natural situations. They believe that testing for fast mapping should focus more on the actual understanding of a word instead of just its reproduction. For some, testing whether the child can use the new word in a different situation constitutes true knowledge of a word, rather than simply identifying the new word.
Variables affecting an individual's fast mapping ability.
When learning novel words, it is believed that early exposure to multiple linguistic systems facilitates the acquisition of new words later in life. This effect was referred to by Kaushanskaya and Marian (2009) as the bilingual advantage. That being said, a bilingual individual's ability to fast map can vary greatly throughout their life.
During the language acquisition process, a bilingual child may require more time to determine a correct referent than a monolingual child. By the time a bilingual child is of school age, they perform equally to monolingual children on naming tasks. By adulthood, bilingual individuals have acquired word-learning strategies believed to assist in fast mapping tasks. One example is speech practice, a strategy in which the participant listens to and reproduces the word in order to aid remembering and decrease the likelihood of forgetting.
Bilingualism can increase an individual's cognitive abilities and contribute to their success in fast mapping words, even when they are using a nonnative language.
Children growing up in a low-socioeconomic-status environment receive less attention than those in high-socioeconomic-status environments. As a result, these children may be exposed to fewer words, and their language development may suffer. On norm-referenced vocabulary tests, children from low-socioeconomic homes tend to score lower than same-age children from a high-socioeconomic environment. However, when their fast mapping abilities were examined, no significant differences were observed in their ability to learn and remember novel words. Children from low-SES families were able to use multiple sources of information in order to fast map novel words. When working with children from low-SES homes, providing a context for the word that attributes meaning is a linguistic strategy that can benefit the child's word knowledge development.
Three learning supports that have been shown to help with the fast mapping of words are saliency, repetition, and generation of information. The amount of face-to-face interaction a child has with their parent affects his or her ability to fast map novel words. Interaction with a parent leads to greater exposure to words in different contexts, which in turn promotes language acquisition. Face-to-face interaction cannot be replaced by educational shows: although repetition is used, children do not receive the same level of correction or trial and error from simply watching. When a child is asked to generate the word, this promotes the transition to long-term memory to a larger extent.
Evidence of fast mapping in other animals.
It appears that fast mapping is not limited to humans but can occur in dogs as well.
The first example of fast mapping in dogs was published in 2004. In it, a dog named Rico was able to learn the labels of over 200 various items. He was also able to identify novel objects simply by exclusion learning. Exclusion learning occurs when one learns the name of a novel object because one is already familiar with the names of other objects belonging to the same group. The researchers who conducted the experiment mention the possibility that fast mapping is not controlled by a language acquisition device specific to humans; they believe that fast mapping may instead be directed by simple memory mechanisms.
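A minimal sketch of learning by exclusion in this spirit follows; the labels and objects are invented, and the dictionary-based memory is only a stand-in for whatever simple memory mechanism the researchers had in mind.

```python
# Known label -> object pairings already in memory (toy data).
known_labels = {"ball": "ball_toy", "rope": "rope_toy", "bone": "bone_toy"}

def fetch_by_exclusion(label, objects_present):
    """Return the known referent if the label is familiar; otherwise map
    the novel label onto the single object whose name is not yet known."""
    if label in known_labels:
        return known_labels[label]
    unnamed = [o for o in objects_present if o not in known_labels.values()]
    if len(unnamed) == 1:                  # exclusion succeeds
        known_labels[label] = unnamed[0]   # fast-map the new pairing
        return unnamed[0]
    return None                            # ambiguous: exclusion fails

print(fetch_by_exclusion("sirikid", ["ball_toy", "rope_toy", "new_toy"]))
# -> 'new_toy': the novel word is mapped to the only unfamiliar object
```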
In 2010, a second example was published. This time, a dog named Chaser demonstrated, in a controlled research environment, that she had learned over 1000 object names. She also demonstrated that she could attribute these objects to named categories through fast mapping and inferential reasoning. It is important to note that, at the time of publication, Chaser was still learning object names at the same pace as before. Thus, her 1000 words should not be regarded as an upper limit but as a benchmark. While there are many components of language that were not demonstrated in this study, the 1000-word benchmark is remarkable because many studies on language learning correlate a 1000-word vocabulary with roughly 75% spoken language comprehension.
Another study on Chaser was published in 2013. In this study, Chaser demonstrated a flexible understanding of simple sentences. In these sentences, syntax was altered in various contexts to show that she had not simply memorized full phrases or inferred what was expected from her evaluators' gestures. Discovering this skill in a dog is noteworthy on its own, but it also suggests that verb meaning can be fast mapped through syntax. This raises questions about which parts of speech dogs can infer, as previous studies had focused on nouns. These findings raise further questions about the fast mapping abilities of dogs when viewed in light of a study published in Science in 2016, which showed that dogs process lexical and intonational cues separately; that is, they respond to both tone and word meaning.
However, excitement about the fast-mapping skills of dogs should be tempered. Research in humans has found fast-mapping abilities and vocabulary size are not correlated in unenriched environments. Research has determined that language exposure alone is not enough to develop vocabulary through fast-mapping. Instead, the learner needs to be an active participant in communications to convert fast-mapping abilities into vocabulary.
It is not commonplace to communicate with dogs, nor with any non-primate animal, in a productive fashion, as they are non-verbal. As such, Chaser's vocabulary and sentence comprehension are attributed to Dr. Pilley's rigorous methodology.
An experiment was performed to assess fast mapping in adults with typical language abilities, adults with disorders of spoken/written language (hDSWL), and adults with hDSWL and ADHD.
The conclusion drawn from the experiment was that adults with ADHD were the least accurate at "mapping semantic features and slower to respond to lexical labels."
The article reasoned that the task of fast mapping requires high attentional demand, and so "a lapse in attention could lead to diminished encoding of the new information."
Research in artificial intelligence and machine learning seeks to reproduce this ability computationally, under the name one-shot learning. This is pursued to reduce the learning curve, as other approaches, such as reinforcement learning, need thousands of exposures to a situation in order to learn it.
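One common framing of one-shot learning is nearest-neighbour classification in a learned embedding space: a single labelled example per class suffices to classify new inputs. The sketch below uses hypothetical two-dimensional vectors as stand-ins for learned feature embeddings; it illustrates the idea rather than any particular published system.

```python
import numpy as np

# One labelled example ("shot") per class, as toy embedding vectors.
support = {
    "cat": np.array([0.9, 0.1]),
    "dog": np.array([0.1, 0.9]),
}

def classify(query):
    """Assign the query to the class of the single closest example."""
    return min(support, key=lambda label: np.linalg.norm(query - support[label]))

print(classify(np.array([0.8, 0.2])))   # -> 'cat', after one example per class
```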
Autoclitics are verbal responses that modify the effect on the listener of the primary operants that comprise B.F. Skinner's classification of Verbal Behavior.
An autoclitic is a verbal behavior that modifies the functions of other verbal behaviors. For example, "I think it is raining" possesses the autoclitic "I think," which moderates the strength of the statement "it is raining." Research that involves autoclitics includes Lodhi & Greer (1989).
Skinner describes grammatical manipulations, such as the order or grouping of responses, as autoclitic. The ordering of patterns may be a function of relative strength, temporal ordering, or other factors. Skinner speaks to the use of predication and the use of tags, contrasting Latin, which uses tags, with English, which uses grouping and ordering. Skinner proposes the relational autoclitic as a descriptor for these kinds of relationships.
Composition represents a special class of autoclitic responding, because the responding is itself a response to previously existing verbal responses. The autoclitic is controlled not only by its effects on the listener but also by its effects upon the speaker as a listener to their own responses. Skinner notes that "emotional and imaginal" behavior has little to do with grammar and syntax; obscene words and poetry are likely to be effective even when emitted non-grammatically.
Self-editing as a compositional process follows the autoclitic process of manipulating responses. After the responses are changed with autoclitics they are examined for their effects and then "rejected or released." Conditions may prevent self-editing, such as a very high response strength.
The physical topography of the rejection of verbal behavior in the process of editing varies from the partial emission of a written word to the apparent non-emission of a vocal response. It may include ensuring that responses simply do not reach a listener, as in not delivering a manuscript or letter. Manipulative autoclitics can revoke words by striking them out, as in a court of law. Similar effects may arise from expressions like "Forget it."
A speaker may fail to react as a listener to their own speech under conditions where the emission of verbal responses is very quick. The speed may be a function of strength or of differential reinforcement. Physical interruption may arise as in the case of those who are hearing impaired, or under conditions of mechanical impairment such as ambient noise. Skinner argues the Ouija board may operate to mask feedback and so produce unedited verbal behavior.
The main use of language is to transfer thoughts from one mind to another. The bits of linguistic information that enter one person's mind from another cause people to entertain new thoughts, with profound effects on their world knowledge, inferencing, and subsequent behavior. Language neither creates nor distorts conceptual life. Thought comes first, while language is its expression. There are certain limitations of language, and humans cannot express all that they think.
Language of thought theories rely on the belief that mental representation has linguistic structure. Thoughts are "sentences in the head", meaning they take place within a mental language. Two theories work in support of the language of thought theory. The causal syntactic theory of mental processes hypothesizes that mental processes are causal processes defined over the syntax of mental representations. The representational theory of mind hypothesizes that propositional attitudes are relations between subjects and mental representations. In tandem, these theories explain how the brain can produce rational thought and behavior. All three of these theories were inspired by the development of modern logical inference. They were also inspired by Alan Turing's work on causal processes that require formal procedures within physical machines.
LOTH hinges on the belief that the mind works like a computer, always engaged in computational processes. The theory holds that mental representations have both a combinatorial syntax and a compositional semantics; that is, mental representations are sentences in a mental language. These beliefs were modeled on Alan Turing's work on physical machines' implementation of causal processes that require formal procedures.
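The combinatorial-syntax and compositional-semantics claim can be illustrated with a toy interpreter, in which the meaning of a complex representation is computed from the meanings of its parts. The "mentalese" atoms and connectives below are invented purely for illustration; LOTH itself makes no commitment to this notation.

```python
# Toy atomic representations and their meanings (invented examples).
ATOMS = {"RAIN": "it is raining", "COLD": "it is cold"}

def meaning(expr):
    """Interpret a structured representation compositionally: the meaning
    of a complex expression is a function of the meanings of its parts."""
    if isinstance(expr, str):          # atomic symbol
        return ATOMS[expr]
    op, *args = expr                   # complex expression: (operator, parts...)
    if op == "NOT":
        return "it is not the case that " + meaning(args[0])
    if op == "AND":
        return meaning(args[0]) + " and " + meaning(args[1])
    raise ValueError(f"unknown operator: {op}")

print(meaning(("AND", "RAIN", ("NOT", "COLD"))))
# -> 'it is raining and it is not the case that it is cold'
```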
Another prominent linguist, Steven Pinker, developed this idea of a mental language in his book "The Language Instinct" (1994). Pinker refers to this mental language as "mentalese". In the glossary of his book, Pinker defines mentalese as a hypothetical language used specifically for thought. This hypothetical language houses mental representations of concepts such as the meaning of words and sentences.
Different cultures use numbers in different ways. The Munduruku culture, for example, has number words only up to five. In addition, they refer to the number 5 as "a hand" and the number 10 as "two hands". Numbers above 10 are usually referred to as "many".
Language may influence color processing. Having more names for different colors, or different shades of colors, makes it easier for both children and adults to recognize them. Research has found that all languages have names for black and white, and that the colors each language defines follow a certain pattern (i.e. a language with three color terms also defines red; one with four defines green or yellow; one with six defines blue, then brown, then other colors).
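The implicational pattern can be sketched as a simple ordered hierarchy in which the number of basic color terms predicts which colors are named. The linear staging below follows the pattern described above and is a simplification (for instance, green and yellow may be added in either order).

```python
# A simplified implicational hierarchy of basic color terms.
HIERARCHY = ["black", "white", "red", "green", "yellow", "blue", "brown"]

def named_colors(n_terms):
    """Colors a language with n basic color terms is predicted to name."""
    return HIERARCHY[:n_terms]

print(named_colors(3))   # -> ['black', 'white', 'red']
print(named_colors(6))   # adds blue, but not yet brown
```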
The Sapir–Whorf hypothesis is the premise of the 2016 science fiction film "Arrival". The protagonist explains that "the Sapir–Whorf hypothesis is the theory that the language you speak determines how you think".
A psycholinguist is a social scientist who studies psycholinguistics, which connects psychology and linguistics. Psycholinguistics is interdisciplinary in nature and is studied by people in a variety of fields, such as psychology, cognitive science, linguistics, neuroscience, and many more. The main aim of psycholinguistics is to outline and describe the process of producing and comprehending communication.
More specifically, a psycholinguist studies language, speech production, and comprehension by using behavioral and neurological methods traditionally developed in the field of psychology, but other methods such as corpus analysis are also widely used. Psycholinguists typically receive undergraduate degrees in linguistics or psychology and then seek a higher degree. Psycholinguistics is not usually a degree of its own; graduate degrees range from scientific studies to criminal justice. The majority of students who become psycholinguists receive a master's degree or a Ph.D.; however, there are also some opportunities available for those who choose not to attend graduate school.
Psycholinguists currently represent a widely diverse field. Many psycholinguists are also considered to be neurolinguists, cognitive linguists, or neurocognitive linguists, or are associated with those who are. There are subtle differences between the titles, though they all address different facets of similar issues. Psycholinguists are sometimes categorized into separate groups by the models and theories they endorse. The two main groups, interactive and autonomous, are based on ideas of language processing. Psycholinguists who support the interactive side believe that our levels of processing for language work side by side and share information as words are received. The other argument comes from the autonomous side, which holds that the levels of processing for language occur independently of one another.
When conducting research, psycholinguists use a variety of techniques that can involve qualitative and/or quantitative data. Typical methods of research include: observation (language recording), experimentation (issuing language tests), and self-reports (participants report what they are experiencing). The research tends to result in either theoretical evidence or a realistic application.
There are many associations worldwide that include professionals in the field of psycholinguistics.
In psycholinguistics, the collaborative model (or conversational model) is a theory for explaining how speaking and understanding work in conversation, specifically how people in conversation coordinate to determine definite references.
The model was initially proposed in 1986 by psycholinguists Herb Clark and Deanna Wilkes-Gibbs. It asserts that conversation partners must act collaboratively to reach a mutual understanding – i.e. the speaker must tailor their utterances to better suit the listener, and the listener must indicate to the speaker that they have understood.
In this ongoing process, both conversation partners must work together in order to establish what a given noun phrase is referring to. The referential process can be initiated by the speaker using one of at least six types of noun phrases: the elementary noun phrase, the episodic noun phrase, the installment noun phrase, the provisional noun phrase, the dummy noun phrase, and/or the proxy noun phrase.
Once this presentation is made, the listener must accept it either through presupposing acceptance (i.e. letting the speaker continue uninterrupted) or asserting acceptance (i.e. through a continuer such as "yes", "okay", or a head nod). The speaker must then acknowledge this signal of acceptance. In this process, presentation and acceptance go back and forth, and some utterances can simultaneously be both presentations and acceptances. This model also posits that conversationalists strive for minimum collaborative effort, making references based more on permanent properties than temporary properties and refining perspectives on referents through simplification and narrowing.
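The presentation-acceptance cycle can be sketched as a simple loop in which the speaker refashions a noun phrase until the listener signals acceptance; the dialogue and the acceptance predicate below are invented for illustration and abstract away the model's richer repair options.

```python
def converse(presentations, accepts):
    """Run the presentation-acceptance cycle: accepts(phrase) returns True
    to assert acceptance, False to request a repair from the speaker."""
    for phrase in presentations:      # speaker refashions until accepted
        if accepts(phrase):
            print(f'A: "{phrase}"  B: "okay"  A: "right"')   # acknowledgment
            return phrase
        print(f'A: "{phrase}"  B: "which one?"')             # repair request
    return None                       # no mutually accepted reference

# The listener only recognises the refashioned, simpler reference.
converse(["the one like an ice skater", "the skater"],
         lambda p: p == "the skater")
```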
The collaborative model finds its roots in Grice's cooperative principle and four Gricean maxims, theories which prominently established the idea that conversation is a collaborative process between speaker and listener.
However, until the Clark & Wilkes-Gibbs study, the prevailing theory was the literary model (or autonomous model or traditional model). This model likened the process of a speaker establishing reference to an author writing a book to distant readers. In the literary model, the speaker is the one who retains complete control and responsibility over the course of referent determination. The listener, in this theory, simply hears and understands the definite description as if they were reading it and, if successful, figures out the identity of the referent on their own.
This autonomous view of reference establishment was not challenged until a paper by D.R. Olson was published in 1970. It was then suggested that there could very well be a collaborative element in the process of establishing reference. Olson, while still holding to the literary model, suggested that speakers select the words they do based on context and on what they believe the listener will understand.
Clark and Wilkes-Gibbs criticized the literary model in their 1986 paper; they asserted that the model failed to account for the dynamic nature of verbal conversations.
In the same paper, they proposed the Collaborative Model as an alternative. They believed this model was more able to explain the aforementioned features of conversation. They had conducted an experiment to support this theory and also to further determine how the acceptance process worked.
The experiment consisted of two participants seated at tables separated by an opaque screen. On the tables in front of each participant were a series of Tangram figures arranged in different orders. One participant, called the director, was tasked with getting the other participant, called the matcher, to accurately match his configuration of figures through conversation alone. This process was to be repeated 5 additional times by the same individuals, playing the same roles.
The collaborative model they proposed allowed them to make several predictions about what would happen. They predicted that it would require many more words to establish reference the first time, as the participants would need to use non-standard noun phrases which would make it difficult to determine which figures were being talked about. However, they hypothesized that later references to the same figures would take fewer words and a shorter amount of time, because by this point definite reference would have been mutually established, and also because the subjects would be able to rely on established standard noun phrases.
The results of the study confirmed many of their beliefs, and outlined some of the processes of collaborative reference, including establishing the types of noun phrases used in presentation, and their frequency.
The following actions were observed in participants working towards mutual acceptance of a reference:
Grounding is the final stage in the collaborative process. The concept was proposed by Herbert H. Clark and Susan E. Brennan in 1991. It comprises the collection of "mutual knowledge, mutual beliefs, and mutual assumptions" that is essential for communication between two people. Successful grounding in communication requires parties "to coordinate both the content and process".
The parties engaging in grounding exchange information about what they do or do not understand over the course of a communication, and they will continue to clarify concepts until they have agreed on a grounding criterion. There are generally two phases in grounding: presenting an utterance and accepting it.
Subsequent studies affirmed many of Clark and Wilkes-Gibbs' theories. These included a study by Clark and Michael Schober in 1989 that dealt with overhearers, contrasting how well they understand a conversation compared to direct addressees. In the literary model, overhearers would be expected to understand as well as addressees, while in the collaborative model, overhearers would be expected to do worse, since they are not part of the collaborative process and the speaker is not concerned with making sure anyone but the addressee understands.
The study conducted by the pair mimicked the Clark/Wilkes-Gibbs study but included a silent overhearer as part of the process. The speaker and addressee were allowed to converse while the overhearer attempted to arrange his figures according to what the speaker was saying. In one version of this study, overhearers had access to a tape recording of the speaker's directions, while in another they all simply sat in the same room.
The study found that overhearers had significantly more difficulty than addressees in both experiments, which, according to the researchers, lends credence to the collaborative model.
The literary model described above still stands as a directly opposing viewpoint to the collaborative model. Subsequent studies also sought to point out weaknesses in the theory. One study, by Brown and Dell, took issue with the aspect of the theory suggesting that speakers have particular listeners in mind when determining reference. Instead, they suggested, speakers have generic listeners in mind. This egocentric theory proposed that people's estimates of another's knowledge are biased towards their own, and that early syntactic choices may be made without regard to the addressees' needs, while beliefs about the addressees' knowledge do not affect utterance choices until later on, usually in the form of repairs.
Another study, by Barr and Keysar in 2002, also criticized the particular-listener view and partner-specific reference. In the experiment, addressees and speakers established definite references for a series of objects on a wall. Then another speaker entered, using the same references. The theory was that, if the partner-specific view of establishing reference were correct, the addressee would be slower to identify objects (as measured by eye movement) out of confusion, because the reference used had been established with another speaker. They found this not to be the case; in fact, reaction times were similar.
In neuroscience and psychology, the term language center refers collectively to the areas of the brain which serve a particular function for speech processing and production. Language is a core system that gives humans the capacity to solve difficult problems and provides them with a unique type of social interaction. Language allows individuals to attribute symbols (e.g. words or signs) to specific concepts and to display them through sentences and phrases that follow proper grammatical rules. Moreover, speech is the mechanism by which language is orally expressed.
Information is exchanged within a larger system of language-related regions. These regions are connected by white matter fiber tracts that make possible the transmission of information between regions. These fiber bundles were recognized as important for language production once it was suggested that they connect the multiple language centers. The three classical language areas involved in language production and processing are Broca's and Wernicke's areas and the angular gyrus.
Broca's Area was first suggested to play a role in speech function by the French neurologist and anthropologist Paul Broca in 1861. The basis for this discovery was the analysis of speech problems resulting from injuries to this region of the brain, located in the inferior frontal gyrus. Paul Broca had a patient called Leborgne who could only pronounce the word “tan” when speaking. Paul Broca, after working with another patient with similar impairment, concluded that damage in the inferior frontal gyrus affected articulate language.
Broca's area is well known as the syntactic processing “center”, ever since Paul Broca associated speech production with an area in the posterior inferior frontal gyrus, which he called “Broca's area”. Although this area is in charge of speech production, its precise role in the language system is unknown. However, it is involved in phonological, semantic, and syntactic processing and in working memory. The anterior region of Broca's area is involved in semantic processing, while the posterior region is involved in phonological processing (Bohsali, 2015). Moreover, the whole of Broca's area has been shown to have higher activation during reading tasks than during other types of tasks.
In a simple account of speech production, this area receives a phonological word representation, divides it chronologically into syllable segments, and sends these to motor areas where they are converted into a phonetic code. The study of how this area produces speech has used paradigms with both single and complex words.
Broca's area is correlated with phonological segmentation, unification, and syntactic processing, all of which are connected to linguistic information. Although it synchronizes the transformation of information within the cortical systems involved in spoken word production, this area does not contribute to the production of single words; the inferior frontal lobe is in charge of word production.
Furthermore, Broca's area is structurally connected to the thalamus, and both are engaged in language processing. This connectivity runs through two thalamic nuclei, the pulvinar and the ventral nucleus, which are involved in language processing and in linguistic functions similar to those of BA 44 and 45 in Broca's area. The pulvinar is connected to many regions of the frontal cortex, and the ventral nucleus is involved in speech production. The frontal speech regions of the brain have also been shown to participate in the perception of speech sounds.
Broca's Area is today still considered an important language center, playing a central role in processing syntax, grammar, and sentence structure.
Wernicke's area was named for the German doctor Carl Wernicke, who discovered it in 1874 in the course of his research into aphasias (loss of the ability to speak). This area of the brain is involved in language comprehension; that is, Wernicke's area serves the understanding of oral language. Besides Wernicke's area, the left posterior superior temporal gyrus (pSTG), middle temporal gyrus (MTG), inferior temporal gyrus (ITG), supramarginal gyrus (SMG), and angular gyrus (AG) participate in language comprehension. Language comprehension is therefore not located in one specific area; rather, it involves large regions of the inferior parietal lobe and the left temporal lobe.
While the end product of speech production is a sequence of muscle movements, the activation of knowledge about the sequence of phonemes (consonant and vowel speech sounds) that makes up a word is known as phonological retrieval. Wernicke's area contributes to phonological retrieval. All speech production tasks (e.g. word retrieval, repetition, and reading aloud) require phonological retrieval. In speech repetition, phonological retrieval is fed by the auditory phoneme perception system, while in reading aloud it is served by the visual letter perception system. Communicative speech production entails a phase preceding phonological retrieval, and speech comprehension involves mapping sequences of phonemes onto word meaning.
The angular gyrus is an important element in the processing of concrete and abstract concepts. It also plays a role in verbal working memory during the retrieval of verbal information, and in visual memory when turning written language into spoken language. The left AG is activated in semantic processing that requires concept retrieval and conceptual integration. Moreover, the left AG is activated during multiplication and addition problems requiring the retrieval of arithmetic facts from verbal memory; it is therefore involved in the verbal coding of numbers.
The insula is implicated in speech and language, taking part in functional and structural connections with motor, linguistic, sensory, and limbic brain areas. Knowledge about the function of the insula in speech production comes from studies of patients who suffered from apraxia of speech. These studies have identified the involvement of different parts of the insula: the left anterior insula, which is related to speech production, and the bilateral anterior insula, which is involved in the comprehension of misleading speech.
Many different sources state that the study of the brain, and therefore of language disorders, originated in the 19th century, and that linguistic analysis of those disorders began over the course of the 20th century. Studying language impairments after brain injuries helps us comprehend how the brain works and how it changes after an injury. Such an impairment of language is referred to as “aphasia”. Lesions to Broca's area resulted primarily in disruptions to speech production; damage to Wernicke's area, located in the posterior part of the superior temporal gyrus, led mainly to disruptions in speech reception.
There are numerous distinctive ways in which language can be affected. Phonemic paraphasia, an attribute of conduction aphasia and Wernicke's aphasia, is not an impairment of speech comprehension but of speech production: the desired phonemes are selected erroneously or in an incorrect sequence. Therefore, although Wernicke's aphasia, a combined impairment of the phonological retrieval and semantic systems, affects speech comprehension, it also involves damaged speech production. Phonemic paraphasia and anomia (impaired word retrieval) result from impaired phonological retrieval.
Another lesion that impairs language production and processing is “apraxia of speech”, a difficulty synchronizing the articulators essential for speech production. This lesion is located in the superior precentral gyrus of the insula and is more likely to occur in patients with Broca's aphasia. Lesions of the dominant ventral anterior (VA) nucleus of the thalamus result in word-finding difficulties and semantic paraphasias in language processing. Moreover, individuals with thalamic lesions experience difficulties linking semantic concepts with the correct phonological representations in word production.
Dyslexia is a language processing disorder. It involves learning difficulties in areas such as reading, writing, word recognition, phonological recoding, numeracy, and spelling. Even with access to appropriate intervention during childhood, these difficulties continue throughout the lifespan. Children are diagnosed with dyslexia when more than one factor affecting learning, such as reading, becomes visible. The assumption that children diagnosed with dyslexia have difficulties in specific areas of cognitive functioning, known as the assumption of specificity, helps in diagnosing dyslexia.
Some characteristics that distinguish dyslexics are weak phonological processing abilities, causing misreading of unfamiliar words and affecting comprehension; inadequate working memory, affecting speaking, reading, and writing; errors in oral reading; difficulties with oral skills such as expressing oneself; and problems with writing skills, such as errors in expression and spelling. Dyslexics experience not only learning difficulties but also secondary characteristics, such as difficulties with organizing, planning, social interactions, motor skills, visual perception, and short-term memory. These characteristics affect personal and academic life.
Dysarthria is a motor speech disorder caused by damage to the central and/or peripheral nervous system, and it is related to degenerative neurological diseases such as Parkinson's disease, cerebrovascular accident (CVA), and traumatic brain injury (TBI). Dysarthria is caused by a mechanical difficulty in the vocal cords or by a neurological disease producing abnormal articulation of phonemes, such as a “p” instead of a “b”. A type of dyspraxia based on distortions of words is called apraxic dysarthria. This type is related to facial apraxia, and to motor aphasia if Broca's area is involved.
Improvements in computer technology in the late 20th century have allowed a better understanding of the correlation between brain and language, and of the disorders this relationship entails. These improvements have permitted visualization of brain structure in high-resolution three-dimensional images, and have made it possible to observe brain activity through blood flow (Dronkers, Ivanova, & Baldo, 2017).
In the past, research was primarily based on observations of loss of ability resulting from damage to the cerebral cortex. Medical imaging has represented a radical step forward for research on speech processing. Since its advent, a whole series of relatively large areas of the brain have been found to be involved in speech processing. In more recent research, subcortical regions (those lying below the cerebral cortex, such as the putamen and the caudate nucleus), as well as the pre-motor areas (BA 6), have received increased attention. It is now generally assumed that the following structures of the cerebral cortex, near the primary and secondary auditory cortices, play a fundamental role in speech processing:
· Superior temporal gyrus (STG): morphosyntactic processing (anterior section), integration of syntactic and semantic information (posterior section)
· Inferior frontal gyrus (IFG, Brodmann area (BA) 45/47): syntactic processing, working memory
· Inferior frontal gyrus (IFG, BA 44): syntactic processing, working memory
· Middle temporal gyrus (MTG): lexical semantic processing
· Angular gyrus (AG): semantic processes (posterior temporal cortex)
The left hemisphere is usually dominant in right-handed people, although bilateral activations are not uncommon in syntactic processing. It is now accepted that the right hemisphere plays an important role in the processing of suprasegmental acoustic features like prosody, “the rhythmic and melodic variations in speech”. There are two types of prosodic information: emotional prosody (right hemisphere), the emotional tone that the speaker gives to the speech, and linguistic prosody (left hemisphere), the syntactic and thematic structure of the speech.
Most areas of speech processing develop in the second year of life in the dominant half (hemisphere) of the brain, which often (though not necessarily) is opposite the dominant hand. 98% of right-handed people are left-hemisphere dominant, and the majority of left-handed people are as well.
Computerized tomography (CT) scanning, another technique dating from the 1970s, produces images of low spatial resolution but provides the location of an injury in vivo. Moreover, voxel-based lesion-symptom mapping (VLSM) and voxel-based morphometry (VBM) have contributed to the understanding that specific brain regions play different roles in supporting speech processing. VLSM has been used to observe complex language functions sustained by different regions, and VBM is a helpful technique for analyzing language impairments related to neurodegenerative disease.