However, the term "psycholinguistics" only came into widespread usage in 1946 when Kantor's student Nicholas Pronko published an article entitled "Psycholinguistics: A Review". Pronko's desire was to unify myriad related theoretical approaches under a single name. Psycholinguistics was used for the first time to talk about an interdisciplinary science "that could be coherent", as well as being the title of "Psycholinguistics: A Survey of Theory and Research Problems", a 1954 book by Charles E. Osgood and Thomas A. Sebeok. |
Though there is still much debate, there are two primary theories on childhood language acquisition: |
The fields of linguistics and psycholinguistics have since been defined by pro-and-con reactions to Chomsky. The view in favor of Chomsky still holds that the human ability to use language (specifically the ability to use recursion) is qualitatively different from any sort of animal ability. This ability may have resulted from a favorable mutation or from an adaptation of skills that originally evolved for other purposes. |
The structures and uses of language are related to the formation of ontological insights. Some see this system as "structured cooperation between language-users" who use conceptual and semantic deference in order to exchange meaning and knowledge, as well as give meaning to language, thereby examining and describing "semantic processes bound by a 'stopping' constraint which are not cases of ordinary deferring." Deferring is normally done for a reason, and a rational person is always disposed to defer if there is good reason. |
The theory of the "semantic differential" supposes universal distinctions, such as: |
One question in the realm of language comprehension is how people understand sentences as they read (i.e., sentence processing). Experimental research has spawned several theories about the architecture and mechanisms of sentence comprehension. These theories are typically concerned with the types of information contained in the sentence that the reader can use to build meaning, and with the point in reading at which that information becomes available to the reader. Issues such as "modular" versus "interactive" processing have been theoretical divides in the field. |
In contrast to the modular view, an interactive theory of sentence processing, such as a constraint-based lexical approach assumes that all available information contained within a sentence can be processed at any time. Under an interactive view, the semantics of a sentence (such as plausibility) can come into play early on to help determine the structure of a sentence. Hence, in the sentence above, the reader would be able to make use of plausibility information in order to assume that "the evidence" is being examined instead of doing the examining. There are data to support both modular and interactive views; which view is correct is debatable. |
When reading, saccades can cause the mind to skip over words that it does not register as important to the sentence; the mind either omits them entirely or supplies a different word in their stead. This can be seen in "Paris in the the Spring". This is a common psychological test, in which the mind will often skip the second "the", especially when there is a line break between the two. |
Language production refers to how people produce language, either in written or spoken form, in a way that conveys meanings comprehensible to others. One of the most effective ways to explain how people represent meanings using rule-governed languages is by observing and analyzing instances of speech errors. These include speech disfluencies like false starts, repetition, reformulation and pauses between words or sentences, as well as slips of the tongue, such as blendings, substitutions, exchanges (e.g., spoonerisms), and various pronunciation errors. |
These speech errors have significant implications for understanding how language is produced, in that they reflect that: |
It is useful to differentiate between three separate phases of language production: |
Psycholinguistic research has largely concerned itself with the study of formulation because the conceptualization phase remains largely elusive and mysterious. |
Many of the experiments conducted in psycholinguistics, especially early on, are behavioral in nature. In these types of studies, subjects are presented with linguistic stimuli and asked to respond. For example, they may be asked to make a judgment about a word (lexical decision), reproduce the stimulus, or say a visually presented word aloud. Reaction times to respond to the stimuli (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance in behavioral tasks. Such experiments often take advantage of priming effects, whereby a "priming" word or phrase appearing in the experiment can speed up the lexical decision for a related "target" word later. |
As an example of how behavioral methods can be used in psycholinguistics research, Fischler (1977) investigated word encoding, using a lexical-decision task. He asked participants to make decisions about whether two strings of letters were English words. Sometimes the strings would be actual English words requiring a "yes" response, and other times they would be non-words requiring a "no" response. A subset of the licit words were related semantically (e.g., cat–dog) while others were unrelated (e.g., bread–stem). Fischler found that related word pairs were responded to faster, compared to unrelated word pairs, which suggests that semantic relatedness can facilitate word encoding. |
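A semantic priming effect of this kind is typically quantified as the difference in mean reaction time between conditions. The sketch below illustrates that computation with fabricated reaction times chosen only for illustration; these are not Fischler's published data.

```python
from statistics import mean

# Hypothetical reaction times in milliseconds, for illustration only
# (NOT Fischler's actual data).
related_rts = [512, 498, 530, 505]    # targets preceded by a related word (cat-dog)
unrelated_rts = [575, 560, 590, 548]  # targets preceded by an unrelated word (bread-stem)

def priming_effect(related, unrelated):
    """Semantic priming effect: how many milliseconds faster, on average,
    related targets are judged compared with unrelated targets."""
    return mean(unrelated) - mean(related)

print(priming_effect(related_rts, unrelated_rts))  # prints 57.0
```

A positive difference indicates facilitation: semantically related primes speed up the lexical decision, which is the pattern Fischler reported.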
Recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978), the importance of understanding eye-movements during reading was established. Later, Tanenhaus et al. (1995) used a visual-world paradigm to study the cognitive processes related to spoken language. Assuming that eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is listening to spoken language. |
The analysis of systematic errors in speech, as well as the writing and typing of language, can provide evidence of the process that has generated it. Errors of speech, in particular, grant insight into how the mind produces language while a speaker is mid-utterance. Speech errors tend to occur in the lexical, morpheme, and phoneme encoding steps of language production, as seen by the ways errors can manifest themselves. |
The types of speech errors, with some examples, include: |
Speech errors usually occur in the stages that involve lexical, morpheme, or phoneme encoding, and usually not in the first step, semantic encoding. This can be attributed to the fact that, at the semantic stage, the speaker is still forming the idea of what to say; unless the speaker changes their mind, the idea itself cannot be mistaken for what they intended to say. |
Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to discover how the brain processes language. For example, severing the corpus callosum (the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain surgery necessary, language researchers had an opportunity to pursue their research. |
Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET); functional magnetic resonance imaging (fMRI); event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography (MEG); and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per voxel, while ERP has millisecond accuracy). Each methodology has advantages and disadvantages for the study of psycholinguistics. |
Computational modelling, such as the DRC model of reading and word recognition proposed by Max Coltheart and colleagues, is another methodology, which refers to the practice of setting up cognitive models in the form of executable computer programs. Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex that discursive analysis is unreliable. Other examples of computational modelling are McClelland and Elman's TRACE model of speech perception and Franklin Chang's Dual-Path model of sentence production. |
Psycholinguistics is concerned with the nature of the processes that the brain undergoes in order to comprehend and produce language. For example, the cohort model seeks to describe how words are retrieved from the mental lexicon when an individual hears or sees linguistic input. Using new non-invasive imaging techniques, recent research seeks to shed light on the areas of the brain involved in language processing. |
Another unanswered question in psycholinguistics is whether the human ability to use syntax originates from innate mental structures or social interaction, and whether or not some animals can be taught the syntax of human language. |
Two other major subfields of psycholinguistics investigate first language acquisition, the process by which infants acquire language, and second language acquisition. It is much more difficult for adults to acquire second languages than it is for infants to learn their first language (infants are able to learn more than one native language easily). Thus, sensitive periods may exist during which language can be learned readily. A great deal of research in psycholinguistics focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one knows, the easier it is to learn more. |
The field of aphasiology deals with language deficits that arise because of brain damage. Studies in aphasiology can offer both advances in therapy for individuals suffering from aphasia and further insight into how the brain processes language. |
A short list of books that deal with psycholinguistics, written in language accessible to the non-expert, includes: |
International Association for the Study of Child Language |
The International Association for the Study of Child Language (IASCL) is an academic society for first language acquisition researchers. |
IASCL was founded in 1970 by a group of prominent language acquisition researchers to promote international and interdisciplinary cooperation in the study of child language. Its major activity is the sponsorship of the triennial International Congress for the Study of Child Language, for which it publishes proceedings. It also publishes the "Child Language Bulletin" approximately twice a year. |
A mora (plural "morae" or "moras"; often symbolized μ) is a unit in phonology that describes syllable weight, which in some languages determines stress or timing. In many analyses, each mora takes roughly the same amount of time to pronounce. The term comes from the Latin word for "linger, delay", which was also used to translate the Greek word "chronos" (time) in its metrical sense. |
Monomoraic syllables have one mora, bimoraic syllables have two, and trimoraic syllables have three, although this last type is relatively rare. |
In general, morae are formed as follows: |
In general, monomoraic syllables are called "light syllables", bimoraic syllables are called "heavy syllables", and trimoraic syllables (in languages that have them) are called "superheavy syllables". Some languages, such as Old English and present-day English, can have syllables with up to four morae. |
A prosodic stress system in which moraically heavy syllables are assigned stress is said to have the property of quantity sensitivity. |
For the purpose of determining accent in Ancient Greek, short vowels have one mora, and long vowels and diphthongs have two morae. Thus long "ē" (eta) can be understood as a sequence of two short vowels: "ee". |
Ancient Greek pitch accent is placed on only one mora in a word. An acute accent represents high pitch on the only mora of a short vowel or the last mora of a long vowel ("é", "eé"). A circumflex represents high pitch on the first mora of a long vowel ("ée"). |
In Old English, short diphthongs and monophthongs were monomoraic, long diphthongs and monophthongs were bimoraic, each syllable-final consonant counted as a mora, and geminate consonants added a mora to the preceding syllable. In Modern English, the rules are similar, except that all diphthongs are bimoraic. In English, and probably also in Old English, syllables cannot have more than four morae, with loss of sounds occurring if a syllable would otherwise exceed four. From the Old English period through to today, all content words must be at least two morae long. |
Gilbertese, an Austronesian language spoken mainly in Kiribati, is a trimoraic language. The typical foot in Gilbertese contains three morae. These trimoraic constituents are units of stress in Gilbertese. These "ternary metrical constituents of the sort found in Gilbertese are quite rare cross-linguistically, and as far as we know, Gilbertese is the only language in the world reported to have a ternary constraint on prosodic word size." |
In Hawaiian, both syllables and morae are important. Stress falls on the penultimate mora, though in words long enough to have two stresses, only the final stress is predictable. However, although a diphthong, such as "oi," consists of two morae, stress may fall only on the first, a restriction not found with other vowel sequences such as "io." That is, there is a distinction between "oi," a bimoraic syllable, and "io," which is two syllables. |
Most dialects of Japanese, including the standard, use morae, known in Japanese as "haku" or "mōra", rather than syllables, as the basis of the sound system. Writing Japanese in kana (hiragana and katakana) is said by those scholars who use the term "mora" to demonstrate a moraic system of writing. For example, in the two-syllable word "mōra", the "ō" is a long vowel and counts as two morae. The word is written with three kana symbols, corresponding here to "mo-o-ra", each containing one mora. Therefore, scholars argue that the 5/7/5 pattern of the "haiku" in modern Japanese is of morae rather than syllables. |
The Japanese syllable-final "n" is also said to be moraic, as is the first part of a geminate consonant. For example, the Japanese name for "Japan" has two different pronunciations, one with three morae ("Nihon") and one with four ("Nippon"). In the hiragana spelling, the three morae of "Ni-ho-n" are represented by three characters, and the four morae of "Ni-p-po-n" need four characters to be written out. |
Similarly, the names "Tōkyō" ("To-u-kyo-u"), "Ōsaka" ("O-o-sa-ka"), and "Nagasaki" ("Na-ga-sa-ki") all have four morae, even though, on this analysis, they can be said to have two, three and four syllables, respectively. The number of morae in a word is not always equal to the number of graphemes when written in kana; for example, even though it has four morae, the Japanese name for "Tōkyō" is written with five kana graphemes, because one of them represents a "yōon", a feature of the Japanese writing system that indicates that the preceding consonant is palatalized. |
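The counting rules above can be sketched as a toy mora counter for romanized Japanese. This is an illustrative simplification, not a full phonological analysis: it assumes long vowels are written as doubled vowels (as in "toukyou" for Tōkyō), treats a doubled consonant as a geminate, and treats "y" as part of a palatalized onset rather than a mora-bearing segment.

```python
VOWELS = set("aiueo")

def mora_count(word):
    """Count morae in a romanized Japanese word (simplified rules):
    each vowel is one mora, the moraic nasal (syllable-final "n") is one
    mora, and the first half of a geminate (doubled) consonant is one mora."""
    word = word.lower()
    count = 0
    for i, c in enumerate(word):
        nxt = word[i + 1] if i + 1 < len(word) else None
        if c in VOWELS:
            count += 1      # every vowel carries a mora (long vowels are doubled)
        elif c == "n" and (nxt is None or (nxt not in VOWELS and nxt != "y")):
            count += 1      # moraic nasal: n at word end or before a consonant
        elif nxt == c:
            count += 1      # first half of a geminate consonant (e.g. "pp")
    return count

print(mora_count("nihon"), mora_count("nippon"))  # prints 3 4
```

Under these assumptions the counter reproduces the examples in the text: "toukyou", "oosaka", and "nagasaki" all come out to four morae.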
In Luganda, a short vowel constitutes one mora while a long vowel constitutes two morae. A simple consonant has no morae, and a doubled or prenasalised consonant has one. No syllable may contain more than three morae. The tone system in Luganda is based on morae. See Luganda tones. |
In Sanskrit, the mora is expressed as the "mātrā". For example, the short vowel "a" (pronounced like a schwa) is assigned a value of one "mātrā", the long vowel "ā" is assigned a value of two "mātrā"s, and the compound vowel (diphthong) "ai" (which has either two simple short vowels, "a"+"i", or one long and one short vowel, "ā"+"i") is assigned a value of two "mātrā"s. In addition, there is "plutham" (trimoraic) and "dīrgha plutham" ("long "plutham"" = quadrimoraic). |
Sanskrit prosody and metrics have a deep history of taking into account moraic weight rather than straight syllables, with feet divided into "laghu" ("light") and "dīrgha"/"guru" ("heavy") based on how many morae can be isolated in each word. Thus, for example, the word "kartṛ", meaning "agent" or "doer", does not contain simply two syllabic units, but contains rather, in order, a "dīrgha"/"guru" foot and a "laghu" foot. The reason is that the conjoined consonants "rt" render the normally light "ka" syllable heavy. |
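The weight rule described here, under which a syllable is heavy if its vowel is long or if a consonant cluster follows, can be sketched as follows. The vowel inventory and the two-consonant threshold are simplifications for illustration only.

```python
# Morae carried by the vowel nucleus (simplified inventory; "e", "o",
# "ai", "au" are inherently long in Sanskrit).
VOWEL_MORAE = {"a": 1, "i": 1, "u": 1, "ā": 2, "ī": 2, "ū": 2,
               "e": 2, "o": 2, "ai": 2, "au": 2}

def syllable_weight(nucleus, following_consonants=""):
    """Classify a Sanskrit-style syllable as laghu (light) or guru (heavy).
    A syllable is guru if its vowel is long (two morae) or if two or more
    consonants follow it, making it heavy by position."""
    if VOWEL_MORAE[nucleus] >= 2 or len(following_consonants) >= 2:
        return "guru"
    return "laghu"

# "kartṛ": the cluster "rt" makes the normally light "ka" heavy.
print(syllable_weight("a", "rt"), syllable_weight("a"))  # prints guru laghu
```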
Semantic satiation is a psychological phenomenon in which repetition causes a word or phrase to temporarily lose meaning for the listener, who then perceives the speech as repeated meaningless sounds. Extended inspection or analysis (staring at the word or phrase for a lengthy period of time) in place of repetition also produces the same effect. |
Leon Jakobovits James coined the phrase "semantic satiation" in his 1962 doctoral dissertation at McGill University. It was demonstrated as a stable phenomenon that is possibly similar to a cognitive form of reactive inhibition. Prior to that, the expression "verbal satiation" had been used along with terms that express the idea of mental fatigue. The dissertation listed many of the names others had used for the phenomenon: |
James presented several experiments that demonstrated the operation of the semantic satiation effect in various cognitive tasks such as rating words and figures that are presented repeatedly in a short time, verbally repeating words then grouping them into concepts, adding numbers after repeating them out loud, and bilingual translations of words repeated in one of the two languages. In each case, the subjects would repeat a word or number for several seconds, then perform the cognitive task using that word. It was demonstrated that repeating a word prior to its use in a task made the task somewhat more difficult. |
An explanation for the phenomenon is that, in the cortex, verbal repetition repeatedly arouses a specific neural pattern that corresponds to the meaning of the word. Rapid repetition makes both the peripheral sensorimotor activity and central neural activation fire repeatedly. This is known to cause reactive inhibition, hence a reduction in the intensity of the activity with each repetition. Jakobovits James (1962) calls this conclusion the beginning of "experimental neurosemantics". |
Studies that further explored semantic satiation include the work of Pilotti, Antrobus, and Duff (1997), which suggested that the true locus of the phenomenon may be presemantic rather than semantic adaptation. There is also the experiment conducted by Kounios et al. (2000), which revealed that semantic satiation is not necessarily a byproduct of "impoverishment of perceptual inputs." |
Jakobovits cited several possible semantic satiation applications and these include its integration in the treatment of phobias through systematic desensitization. He argued that "in principle, semantic satiation as an applied tool ought to work wherever some specifiable cognitive activity mediates some behavior that one wishes to alter." An application has also been developed to reduce speech anxiety by stutterers by creating semantic satiation through repetition, thus reducing the intensity of negative emotions triggered during speech. |
There are also studies linking semantic satiation to education. For instance, the work of Tian and Huber (2010) explored the impact of this phenomenon on word learning and effective reading. The authors claimed that this process can serve as a unique approach to test for discounting through loss of association, since it allows the separation of the "lexical level from semantic level effects in a meaning-based task that involves repetitions of words." Semantic satiation has also been used as a tool to gain more understanding of language acquisition, for example in studies investigating the nature of multilingualism. |
Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech immediately after hearing it, at a short delay from the onset of the phrase. The delay between hearing the speech and responding indicates how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates the intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task also tend to imitate the parlance of the shadowed speech. |
The reaction time between perceiving speech and then producing speech has been recorded at 250 ms for a standardised test. However, for people with left-hemisphere-dominant brains, the reaction time has been recorded at 150 ms. Functional imaging finds that the shadowing of speech occurs through the dorsal stream. This stream links auditory and motor representations of speech through a pathway that starts in the superior temporal cortex, extends to the inferior parietal cortex, and ends in the posterior and inferior frontal cortices, specifically in Broca's area. |
The speech shadowing technique was created as a research technique by the Leningrad Group led by Ludmilla Chistovich and Valerij Kozhevnikov in the late 1950s. In the same decade, the motor theory of speech perception was also being developed by Alvin Liberman and Franklin S. Cooper. Speech shadowing has been used for research on stuttering and divided attention, with a focus on the distraction of conversational audio while driving. It also has applications for language learning, as an interpretation method, and in singing. |
Ludmilla Chistovich and Valerij Kozhevnikov focused their research on the mental processes that drive the perception and production of speech in communication. Speech perception had previously been treated in linguistics as a linear, chronological process operating on steadily paced, similar-sounding words, but Chistovich and Kozhevnikov found it to be a staggered, non-linear integration of syllables. This non-linearity reflects the diversity of tones and syllables in speech, which is perceived without any conscious detection of delay and forgotten within the limited capacity of working memory. This observation steered psycholinguistic research towards the speech shadowing technique. |
Shadowing was used to measure the reaction time taken to repeat consonant-vowel syllables. Alveolar consonants were measured when the tongue first touched an artificial palate and labial consonants were measured by the contact of metal pieces when the upper and lower lips pressed together. The participant would begin to mimic the consonant as the speaker finished the utterance of the consonant. This consistent rapid response shifted research focus towards close speech shadowing. |
Close speech shadowing is a version of the technique that requires immediate repetition, at the fastest pace a person is able to achieve. It does not allow people to hear the entire phrase beforehand or to understand the words vocalised until the end of a sentence. It was found that close speech shadowing occurs at a delay as short as 250 ms, and at a minimum delay of about 150 ms in left-hemisphere-dominant brains. The left hemisphere is associated with enhanced linguistic skill and information processing; it engages with analytic patterns of thought and handles the speech shadowing task with relative ease. |
The short delay of response occurs as the motor regions of the brain have recorded cues that are related to consonants. The brain would then estimate the adjacent vowel syllable before it is heard. When the vowel is registered through the auditory system, it would confirm the action to produce speech based on the estimate. If the vowel estimate is denied, a short delay in response occurs as the motor region configures an alternate vowel. |
Research has developed a biological model of how the meaning of speech can be perceived instantaneously even though the sentence has never been heard before. An understanding of syntactic, lexical and phonemic characteristics is first required for this to occur. Speech perception also requires the physical components of the auditory system to recognise similarities in sounds. Within the basilar membrane, energy is transferred, and specific frequencies can be detected and activate auditory hair cells. The hair cells can be stimulated to sharpened activity when a tonal emission is held for 100 ms. This length of time indicates that speech shadowing ability can be enhanced by a moderately paced phrase. |
Shadowing is more complex than the use of the auditory system alone. A shadow response can reduce the delay by analysing the temporal difference between the pronunciations of phonemes within a syllable. During a shadowing task, the processes of perceiving speech and subsequently producing speech do not occur separately; they partially overlap. The auditory system shifts between a translation stage of perceiving phonemes and a choice phase of anticipating the following phonemes to create an immediate response. This period of overlap lasts 20–90 ms, depending on the combination of vowels with consonants. |
Speech perception also has links to phonological processing skills. These include recognition of all phonemes in a language and how they can combine to form common syllables. A low grasp of phonological norms can negatively affect performance in a speech shadowing task, as measured through the inclusion of real and nonsense words in the task. Participants with high phonological processing skills produced shorter reaction times, while participants with low phonological processing skills experienced uncertainty and slower responses. |
The speech shadowing technique is part of a set of research methods that examine the mechanics of stuttering and identify practical improvement strategies. A primary characteristic of stuttering is a repeated movement, characterised by the repetition of a syllable. In this activity, stutterers are made to shadow a repeated movement that is internally or externally sourced. It reduces the likelihood of stuttering as the linguistic mental block is overturned and conditioned to provide an opening for fluid speech. Mirror neurones of the frontal lobe are active during this exercise and act to link speech perception and production. This process, combined with cortical priming, is engaged to produce the visible response. |
Another primary characteristic of stuttering is a fixed posture, involving the prolongation of sounds. Speech shadowing research involving fixed postures produces no benefit in improving speech flow. The elongation of words in this stuttering characteristic does not align with the auditory system, which functions efficiently with moderately paced speech. |
Speech shadowing has also been used in research into pseudo-stuttering, a voluntary speech impediment. Pseudo-stuttering involves identifying primary stuttering characteristics and shadowing them realistically. It is used as an activity when studying fluency disorders, allowing students to experience how psychological and social outcomes are affected by stuttering with strangers. Participants in this activity reported feelings of anxiety, frustration and embarrassment, which aligned with the reported emotional states of natural stutterers. The participants also reported lowered expectations towards sufferers in public situations. |
The speech shadowing technique is used in dichotic listening tests, introduced by E. Colin Cherry in 1953. During dichotic listening tests, subjects are presented with two different messages, one in the right ear and one in the left ear. The participants are instructed to focus on one of the two messages and to shadow the attended message out loud. The perceptual ability of the participant is measured as subjects attend to the instructed message while the alternate message behaves as a distraction. Various stimuli are then presented to the other ear, and subjects are afterwards queried on what they can recall from these messages despite the instruction to ignore them. Speech shadowing has thus been adapted as an experimental technique to study and test divided attention. |
Research into the effect of audio stimuli resulting from mobile phone use while driving has used the speech shadowing technique in its methodology. Speech shadowing tasks that have combined a conversational stimulus with a visual stimulus while driving are reported by participants as a distraction that directs focus away from the road and visual periphery. The study concludes that the combination of audio and visual stimuli has little effect on a driver's ability to manoeuvre a vehicle, but it does impair spatial and temporal judgement, which is not detected by the driver. This includes a driver's judgement of their speed, their distance from a parallel vehicle, and a delayed reaction to a sudden brake from a driver ahead. |
The speech shadowing technique has also been used to research whether it is the action of producing speech or concentration on the semantics of speech that distracts drivers. The task of simple speech shadowing had no effect on driving ability, but the combination of simple speech shadowing with a content-associated follow-up activity showed impairment in reaction time. The high attentional demand required for this alternate task shifts concentration away from the primary task of driving. This impairment is problematic, as fast reaction times are required when driving to respond to general traffic signals and signage, as well as to unpredictable events, in order to maintain safety. |
When learning a foreign language, shadowing can be used as a technique to practice speech and to acquire knowledge. It follows an interactionist perspective of language development. The method of speech shadowing in a learning setting involves providing shadowing tasks of incremental semantic and pronunciation difficulty and rating the accuracy of the shadowed response. It was previously difficult to create a standardised scoring system, as learners would slur and skip words when uncertain in order to keep up with the pace of the phrases to be shadowed. Automatic scoring using alignment-based and clustering-based techniques has since been designed and implemented to improve the experience of learning a foreign language through speech shadowing. |
Remote learning of language can occur without the presence of a real-time speaker through text-to-speech applications, using the principle of speech shadowing. As part of the process of perceiving sound, the auditory system distinguishes formant frequencies. The first formant perceived in the cochlea is the most prominent cue, as there is an attentional shift towards this signal. The formant characteristics of synthetically produced speech currently differ from those of speech produced by the human vocal tract. The information received affects the pronunciation of speech produced in a shadowing activity. Applications for learning languages focus on developing greater accuracy in pronunciation and pitch, since these features are also replicated when shadowing speech. |
Speech shadowing can be used in the alternate form of vocal shadowing. It also requires the process of perception and production but with inverted energy distributions of a low input and a large output. Vocal shadowing perceives pure tones and focuses on the manipulation of the vocal tract to produce a shadowed response. Singers in comparison to non-singers are able to produce a shadowed response phrase that includes more accuracy in achieving the target frequencies and rapid movement between the frequencies. Research associates this ability with greater control and awareness of the vocal-fold breadth. The glottal stop is a technique manipulated by singers during shadowing to enhance frequency change. |
Letter frequency effect - the frequency with which a letter is encountered influences how quickly that letter is recognized. Letters of high frequency show a significant advantage over letters of low frequency in letter naming, same-different matching, and visual search: they are recognized faster. In their re-analysis of studies on the letter frequency effect, Appelman and Mayzner (1981) found that in 3 of 6 studies using reaction times (RTs) as the dependent variable, letter frequency correlated significantly with RTs.
The majority of studies on the letter frequency effect, however, failed to find a significant effect. These studies used the same-different matching task, in which participants see two letters and respond whether they are the same or different. The absence of a letter frequency effect in these studies may therefore be due to participants matching the letters on their visual form rather than on their abstract letter identity.
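The correlational analysis described above can be sketched as follows. The data here are invented for illustration only (they are not Appelman and Mayzner's figures): letter occurrence counts falling from very frequent to very rare, paired with naming reaction times that rise as frequency falls, yielding the negative frequency-RT correlation the effect predicts.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (invented) data: occurrences per 1000 letters of text,
# and mean letter-naming reaction times in milliseconds.
freq = [127, 91, 82, 75, 70, 28, 24, 20, 2, 1]
rt = [430, 438, 441, 446, 449, 470, 473, 478, 510, 515]
print(round(pearson_r(freq, rt), 2))  # strongly negative: frequent letters are named faster
```

A significant negative r in a naming or visual-search study is evidence for the effect; the null results in same-different matching suggest that task taps visual form rather than letter identity.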
The hypothesis of linguistic relativity, also known as the Sapir–Whorf hypothesis, the Whorf hypothesis, or Whorfianism, is a principle suggesting that the structure of a language affects its speakers' worldview or cognition, and thus people's perceptions are relative to their spoken language.
Linguistic relativity has been understood in many different, often contradictory ways throughout its history. The idea is often stated in two forms: the "strong hypothesis", now referred to as linguistic determinism, was held by some early linguists before World War II, while the "weak hypothesis" is mostly held by some modern linguists.
The term "Sapir–Whorf hypothesis" is considered a misnomer by linguists for several reasons: Sapir and Whorf never co-authored any works, and never stated their ideas in terms of a hypothesis. The distinction between a weak and a strong version of this hypothesis is also a later invention; Sapir and Whorf never set up such a dichotomy, although often their writings and their views of this relativity principle are phrased in stronger or weaker terms. |
The principle of linguistic relativity and the relation between language and thought has also received attention in varying academic fields from philosophy to psychology and anthropology, and it has also inspired and colored works of fiction and the invention of constructed languages. |
From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for non-deterministic versions of the hypothesis in experimental contexts. Some effects of linguistic relativity have been shown in several semantic domains, although they are generally weak. Currently, a balanced view of linguistic relativity is espoused by most linguists holding that language influences certain kinds of cognitive processes in non-trivial ways, but that other processes are better seen as arising from connectionist factors. Research is focused on exploring the ways and extent to which language influences thought. |
In the late 18th and early 19th centuries, the idea of the existence of different national characters, or "Volksgeister", of different ethnic groups was the moving force behind the German Romantic school and the nascent ideologies of ethnic nationalism.
Swedish philosopher Emanuel Swedenborg inspired several of the German Romantics. As early as 1749, he alludes to something along the lines of linguistic relativity in commenting on a passage in the table of nations in the book of Genesis: |
In 1771 he spelled this out more explicitly: |
Johann Georg Hamann is often suggested to be the first among the actual German Romantics to speak of the concept of "the genius of a language." In his "Essay Concerning an Academic Question", Hamann suggests that a people's language affects their worldview: |
In 1820, Wilhelm von Humboldt connected the study of language to the national romanticist program by proposing the view that language is the fabric of thought. Thoughts are produced as a kind of internal dialog using the same grammar as the thinker's native language. This view was part of a larger picture in which the world view of an ethnic nation, their "Weltanschauung", was seen as being faithfully reflected in the grammar of their language. Von Humboldt argued that languages with an inflectional morphological type, such as German, English and the other Indo-European languages, were the most perfect languages and that accordingly this explained the dominance of their speakers over the speakers of less perfect languages. Wilhelm von Humboldt declared in 1820: |
In Humboldt's humanistic understanding of linguistics, each language creates the individual's worldview in its particular way through its lexical and grammatical categories, conceptual organization, and syntactic models. |
Herder worked alongside Hamann to address the question of whether language had a human, rational origin or a divine one. Herder added an emotional component to the hypothesis, and Humboldt then took this information and applied it to various languages to expand on the hypothesis.
Boas' student Edward Sapir reached back to the Humboldtian idea that languages contained the key to understanding the world views of peoples. He espoused the viewpoint that because of the differences in the grammatical systems of languages no two languages were similar enough to allow for perfect cross-translation. Sapir also thought because language represented reality differently, it followed that the speakers of different languages would perceive reality differently. |
On the other hand, Sapir explicitly rejected strong linguistic determinism by stating, "It would be naïve to imagine that any analysis of experience is dependent on pattern expressed in language." |
Sapir was explicit that the connections between language and culture were neither thoroughgoing nor particularly deep, if they existed at all: |
Sapir offered similar observations about speakers of so-called "world" or "modern" languages, noting, "possession of a common language is still and will continue to be a smoother of the way to a mutual understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, physical, and economic determinants of the culture are no longer the same throughout the area."
While Sapir never made a point of studying directly how languages affected thought, some notion of (probably "weak") linguistic relativity underlay his basic understanding of language, and would be taken up by Whorf. |
More than any other linguist, Benjamin Lee Whorf has become associated with what he called the "linguistic relativity principle". Studying Native American languages, he attempted to account for the ways in which grammatical systems and language-use differences affected perception. Whorf's opinions regarding the nature of the relation between language and thought remain under contention. Critics such as Lenneberg, Black, and Pinker attribute to Whorf a strong linguistic determinism, while Lucy, Silverstein and Levinson point to Whorf's explicit rejections of determinism and to his contention that translation and commensuration are possible.
Detractors such as Lenneberg, Chomsky and Pinker criticized him for insufficient clarity in his description of how language influences thought, and for not proving his conjectures. Most of his arguments were in the form of anecdotes and speculations that served as attempts to show how 'exotic' grammatical traits were connected to what were apparently equally exotic worlds of thought. In Whorf's words: |
Among Whorf's best-known examples of linguistic relativity are instances where an indigenous language has several terms for a concept that is only described with one word in European languages (Whorf used the acronym SAE "Standard Average European" to allude to the rather similar grammatical structures of the well-studied European languages in contrast to the greater diversity of less-studied languages). |
One of Whorf's examples was the supposedly large number of words for 'snow' in the Inuit language, an example which later was contested as a misrepresentation. |
Another is the Hopi language's words for water, one indicating drinking water in a container and another indicating a natural body of water. These examples of polysemy served the double purpose of showing that indigenous languages sometimes made more fine grained semantic distinctions than European languages and that direct translation between two languages, even of seemingly basic concepts such as snow or water, is not always possible. |
Whorf's argument about Hopi speakers' conceptualization of time is an example of the structure-centered approach to research into linguistic relativity, which Lucy identified as one of three main strands of research in the field. The "structure-centered" approach starts with a language's structural peculiarity and examines its possible ramifications for thought and behavior. The defining example is Whorf's observation of discrepancies between the grammar of time expressions in Hopi and English. More recent research in this vein is Lucy's research describing how usage of the categories of grammatical number and of numeral classifiers in the Mayan language Yucatec results in Mayan speakers classifying objects according to material rather than to shape, as preferred by English speakers.
Whorf died in 1941 at age 44, leaving multiple unpublished papers. His line of thought was continued by linguists and anthropologists such as Hoijer and Lee who both continued investigations into the effect of language on habitual thought, and Trager, who prepared a number of Whorf's papers for posthumous publishing. The most important event for the dissemination of Whorf's ideas to a larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a single volume titled "Language, Thought and Reality". |
In 1953, Eric Lenneberg criticized Whorf's examples from an objectivist view of language, holding that languages are principally meant to represent events in the real world and that even though languages express these ideas in various ways, the meanings of such expressions and therefore the thoughts of the speaker are equivalent. He argued that Whorf's English descriptions of a Hopi speaker's view of time were in fact translations of the Hopi concept into English, therefore disproving linguistic relativity. However, Whorf was concerned with how the habitual "use" of language influences habitual behavior, rather than translatability. Whorf's point was that while English speakers may be able to "understand" how a Hopi speaker thinks, they do not "think" in that way.
Lenneberg's main criticism of Whorf's works was that he never showed the connection between a linguistic phenomenon and a mental phenomenon. With Brown, Lenneberg proposed that proving such a connection required directly matching linguistic phenomena with behavior. They assessed linguistic relativity experimentally and published their findings in 1954. |
Since neither Sapir nor Whorf had ever stated a formal hypothesis, Brown and Lenneberg formulated their own. Their two tenets were (i) "the world is differently experienced and conceived in different linguistic communities" and (ii) "language causes a particular cognitive structure". Brown later developed them into the so-called "weak" and "strong" formulation: |